4.1. Rule 8: Monitor Progress and Know When to Restart#

It’s tempting to just walk away and let the model work for a long time, but the model will often end up going down the wrong path, wasting time and tokens. Actively monitor what the AI is doing: Is it changing things you didn’t want changed? Is it ignoring the changes you actually requested? Is it introducing new problems while trying to fix old ones? When you notice the AI heading in the wrong direction, stop it immediately rather than letting it continue down an unproductive path.

Sometimes the most efficient approach is recognizing when a conversation has become too convoluted with failed attempts and conflicting corrections. When this happens, review your prompt history to identify what went wrong: Were requirements unclear? Did you add conflicting constraints? Did you forget to specify critical details upfront? Starting fresh with these lessons learned often produces better results than continuing to debug within a polluted context. Add the missing details to your memory files, then clear the context and restart from them so the same problem doesn’t recur. This also highlights the need for good version control: if you commit the code before undertaking a major change, it’s easy to revert to the previous commit and start over if the model goes astray. Fortunately, coding agents are generally very good at writing detailed commit messages, making a commit as easy as prompting “commit this to git”.
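The commit-then-restore safety net can be sketched in a few git commands. This is a minimal, self-contained illustration (the sandbox repo, file name, and commit message are all made up for the demo):

```shell
#!/bin/sh
# Sketch of the commit-before-change safety net.
# The sandbox repo and file names are illustrative.
set -e
cd "$(mktemp -d)"                 # throwaway repo so the sketch is self-contained
git init -q
git config user.email demo@example.com
git config user.name Demo

echo "working version" > app.py
git add -A
git commit -qm "Working state before major change"   # snapshot first

echo "agent's broken rewrite" > app.py               # the model goes astray

git checkout -- app.py           # uncommitted mess: restore the snapshot
cat app.py                       # prints "working version"
```

In practice you would run only the `git add`/`git commit` pair before handing the task to the agent, and `git checkout -- <file>` (or `git reset --hard`) if its changes need to be thrown away.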

4.1.1. What separates positive from flawed examples#

Flawed examples let the AI run unsupervised, assuming it will eventually figure things out. You come back to find it’s been making changes you didn’t want, introducing new bugs while fixing old ones, or heading down a completely wrong architectural path. By the time you notice, you’ve wasted significant time and tokens on code you can’t use. The conversation is now polluted with failed approaches that will confuse future attempts.

Positive examples involve active monitoring and decisive intervention. You watch what the AI is doing and stop it when it goes off track. You recognize when a conversation has gotten too tangled and needs a fresh start. You review what went wrong, update your specifications or memory files, and begin again with better context. You use version control to make restarts cheap and safe.


4.1.1.1. Example 1: Unsupervised AI Goes Off Track#

The user gives vague instructions and walks away. The AI makes assumptions about what changes are needed. By the time the user checks back, the AI has modified core functionality that shouldn’t have been touched, introduced new dependencies without asking, changed the API in breaking ways, and created bugs in previously working code. The user now has to figure out what changed, what broke, and how to salvage any useful parts. Much of the work is unusable and the conversation is polluted with bad approaches.


4.1.1.2. Example 2: Active Monitoring with Early Intervention#

The user stays engaged and watches what the AI is doing in real time. When the AI starts modifying the wrong function, the user stops it immediately. They clarify what should and shouldn’t be changed. When the AI proposes a solution that doesn’t match the architecture, the user catches it before implementation. This prevents wasted work and keeps the conversation focused on productive changes.


4.1.1.3. Example 3: Recognizing When to Restart#

After several attempts to fix a complex bug, the conversation has become tangled with conflicting constraints and failed approaches. Rather than continuing to patch, the user recognizes this is a restart situation. They commit the current (working) code, review what went wrong, update memory files with lessons learned, and start a fresh conversation with better specifications. The new attempt succeeds quickly because the requirements are now clear and the context isn’t polluted.
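The “update memory files with lessons learned” step can be as simple as appending a note before clearing the context. A minimal sketch, assuming the agent reads a `CLAUDE.md` memory file (the file name and lesson text are illustrative; use whatever memory file your agent supports):

```shell
#!/bin/sh
# Hypothetical memory-file update before a restart.
set -e
cd "$(mktemp -d)"                  # sandbox so the sketch is self-contained
touch CLAUDE.md                    # stand-in for the project memory file

# Record what the failed session taught you, so the next attempt starts smarter.
cat >> CLAUDE.md <<'EOF'

## Lessons from the failed debugging session
- The bug is in the retry logic; do not modify the cache layer.
- Public API signatures are frozen; change internals only.
EOF
```

With the lessons written down, the fresh conversation begins with the constraints that the old, polluted context only discovered through failure.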


4.1.1.4. Example 4: Using Version Control for Safe Restarts#

The user commits working code before attempting major changes. When the AI’s approach turns out to be wrong, reverting is trivial. This makes restarts essentially free. The detailed commit messages (which coding agents excel at writing) make it easy to track what was attempted and why. Version control transforms restarts from costly failures into cheap experiments.
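When the bad attempt has already been committed, history can still be unwound cleanly. A small sketch (sandbox repo, file, and commit messages are invented for the demo):

```shell
#!/bin/sh
# Sketch: undoing a committed-but-wrong attempt while keeping history.
set -e
cd "$(mktemp -d)"                  # throwaway repo for the demo
git init -q
git config user.email demo@example.com
git config user.name Demo

echo "good" > main.py
git add -A && git commit -qm "Working baseline"

echo "broken" > main.py            # the agent's attempt turns out wrong
git add -A && git commit -qm "Agent attempt: rewrite data loader"

git revert --no-edit HEAD          # new commit that restores the baseline
cat main.py                        # prints "good"
```

`git revert` preserves the failed attempt in history, so the detailed commit message still documents what was tried and why; `git reset --hard HEAD~1` would discard it entirely instead.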