What problem does this solve?
In long-running implementation tasks, a strong development agent can gradually substitute for the target system's intended responsibilities.
This usually happens when the agent is trying to complete the current task efficiently:
- it performs implicit inference that should have become explicit schema/state/policy/interface design
- it introduces helper/service logic that quietly absorbs responsibilities intended for the target agent or target subsystem
- it makes the current implementation "work" by relying on the development agent's strength, instead of forcing the product/system boundary to become explicit
The result is a subtle but important failure mode:
- the project appears to make progress
- the implementation may even pass local checks
- but some of the system's intended capabilities are not actually modeled in the system
- they were temporarily substituted by the development agent during development
This is not the same as planning/execution boundary drift.
That boundary is about who specifies implementation details vs who carries them out.
This issue is about a different boundary:
the development agent vs the target product/system/agent being built.
Without an explicit guard, Superpowers can accidentally encourage "capability substitution":
the development agent silently does work that should instead be made explicit in the design of the target system.
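A minimal, hypothetical sketch of the substitution (all names are invented for illustration): the development agent infers a value at implementation time instead of modeling it as explicit state in the target system.

```python
from dataclasses import dataclass

# Capability substitution (the failure mode): the development agent infers
# task status from incidental log text so the current task "works", and
# status is never modeled anywhere in the target system.
def infer_status(log_lines: list[str]) -> str:
    return "done" if any("finished" in line for line in log_lines) else "pending"

# Explicit modeling (the intended design): status is first-class state owned
# by the target system, with transitions the target system itself defines.
@dataclass
class Task:
    name: str
    status: str = "pending"

    def complete(self) -> None:
        self.status = "done"

# Both paths report "done", but only the second leaves the responsibility
# modeled in the system rather than carried implicitly by the dev agent.
assert infer_status(["step 1", "finished ok"]) == "done"
task = Task("ship feature")
task.complete()
assert task.status == "done"
```

The two snippets produce the same observable result, which is exactly why the substitution is easy to miss in review.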
Proposed solution
Add an explicit framework-level reminder/check in the implementation workflow.
Possible wording:

> You are the development agent, not the target product agent/system.
> Do not silently substitute for capabilities that should be explicitly modeled in the target system.
> If a behavior belongs in schema, state, policy, interface contracts, or the target agent's own responsibilities, prefer explicit modeling over implementation-time inference or hidden helper logic.
I think this could be added in one or more of these places:
- `using-superpowers`: as a global boundary rule
- `writing-plans`: a short check like "Are we explicitly modeling target-system responsibilities, or relying on the development agent to carry them implicitly?"
- `executing-plans`: before implementing new inference/derivation/helper logic, require a check: "Is this implementing the target system, or compensating for missing target-system structure?"
- reviewer/checklist prompts: include a review question such as "Does this change make the system more explicit, or does it hide target responsibilities inside development-time convenience logic?"
What alternatives did you consider?
1. Keep this as a personal `AGENTS.md` / local-skill rule only.
   - This works for individual users, but the failure mode seems general enough to deserve core awareness.
2. Rely on existing planning/execution separation.
   - Helpful, but insufficient: a plan can still be clean while the implementation quietly substitutes for target-system responsibilities.
3. Solve this through stronger reviews only.
   - Also helpful, but reactive. I think Superpowers should make the risk explicit earlier, during planning and execution, not only at review time.
Is this appropriate for core Superpowers?
Yes.
I think this is a framework-level concern because it is:
- domain-agnostic
- stack-agnostic
- especially relevant in long-running, multi-step tasks
- about preserving correct abstraction and responsibility boundaries
This feels similar in spirit to other core boundary concerns already discussed in Superpowers, except this one is specifically about preventing the development agent from swallowing the responsibilities of the system/agent being built.
Context
Example failure pattern:
- goal: build a target agent or subsystem with explicit responsibilities
- intended design: those responsibilities should become schema/state/policy/interface structure
- actual implementation: the development agent adds implicit inference or helper logic so the current task succeeds
- outcome: the repository contains a working patch, but some product responsibilities were never truly modeled
That makes the implementation look more complete than the system actually is.
I hit this failure mode while building agent-like systems and schema-sensitive workflows, but I think the problem generalizes beyond agent projects.