4.3. Rule 10: Refine Code Incrementally with Focused Objectives#

Once you have working, tested code, resist the temptation to ask AI to “improve my codebase.” Instead, approach refinement incrementally with clear, focused objectives. Be explicit about what aspect you want to improve: performance optimization, code readability, error handling, modularity, or adherence to specific design patterns. When you recognize that refinement is needed but can’t articulate the specific approach (for instance, you know certain logic should be extracted into a separate function but aren’t sure how), use AI to help you formulate concrete objectives before implementing changes. Describe what you are trying to achieve and ask the AI to suggest specific refactoring strategies or design patterns that would accomplish your goal, applying the same mindsets delineated in Rules 1-9 to help you along the way.

AI excels at identifying opportunities for refactoring and abstraction, such as recognizing repeated code that should be extracted into reusable functions or methods, and detecting poor programming patterns like deeply nested conditionals, overly long functions, tight coupling between components, and sloppy or inconsistent variable naming. When requesting refinements, specify the goal (e.g., “extract the data validation logic into a separate function” rather than “make this better”) and verify each change against your tests (while expanding your test suite as you iterate to reflect the latest updates and improvements) before moving to the next improvement. This focused approach prevents the AI from making changes that, while technically sound, don’t align with your project’s architectural decisions. Note that AI can inadvertently break previously working code or degrade performance while making stylistic improvements. Always test thoroughly after each incremental change, and revert if the “improvement” introduces problems or doesn’t provide clear benefits.
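As a sketch of what such a focused objective can look like in practice, the fragment below extracts inline validation into its own function. All names here (`validate_record`, `process_record`, the `id` and `amount` fields) are hypothetical, invented for illustration:

```python
# Hypothetical sketch: one focused refactoring objective, extracting inline
# validation into a reusable function. All names are illustrative.

def validate_record(record: dict) -> None:
    """Raise ValueError for malformed records (logic extracted from process_record)."""
    if "id" not in record:
        raise ValueError("record is missing 'id'")
    if not isinstance(record.get("amount"), (int, float)):
        raise ValueError("'amount' must be numeric")

def process_record(record: dict) -> float:
    validate_record(record)        # the only change: validation now lives in one place
    return record["amount"] * 1.1  # business logic untouched by the refactor
```

Because the change is this narrow, your existing tests still apply unchanged, which is exactly what makes the result easy to verify.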

4.3.1. What separates positive from flawed examples#

Flawed examples ask AI to “improve” or “clean up” code without specific objectives. The AI makes sweeping changes across multiple concerns simultaneously: renaming variables, restructuring logic, changing algorithms, adding abstractions. You can’t evaluate which changes are beneficial because everything changed at once. Tests start failing, but you don’t know which modification caused the problem. The AI might introduce technically correct patterns that don’t match your project’s conventions. You waste time untangling good changes from bad ones, or worse, accept problematic changes because you can’t isolate their effects.

Positive examples approach refinement systematically. You either identify specific issues yourself or ask AI to diagnose problems first, then evaluate its suggestions based on your project context. You tackle one focused objective at a time. You test after each change (improving your testing suite as you go) and revert immediately if something breaks. You recognize that not all AI suggestions are appropriate for your codebase; even good practices can be wrong if they conflict with the established conventions or methodologies of your project or field. This incremental approach lets you understand each change, verify its benefit, and maintain a working codebase throughout refinement.


4.3.1.1. Example 1: Vague “Improve This” Request#

The user asks AI to generically improve code without specific objectives. The AI makes sweeping changes across multiple dimensions. When tests fail, the user tries to salvage the situation by listing what’s wrong, but the AI’s attempts to fix the problems make things worse. The conversation becomes increasingly polluted with failed attempts, conflicting constraints, and mounting confusion. What started as “improve the code” turns into a debugging nightmare where it’s impossible to tell what’s broken, why, or how to fix it. The user eventually gives up and has to revert everything.


4.3.1.2. Example 2: Ask AI for Diagnostic Feedback Before Refactoring#

The user has working code but suspects it could be improved. Instead of asking AI to fix everything, they request analysis first. The AI identifies specific improvement opportunities. The user evaluates each suggestion in the context of their project; some are valuable, others don’t fit their codebase conventions. They choose one high-priority issue to address first and give AI a focused, specific refactoring objective. This diagnostic approach works even for less experienced developers who might not recognize suboptimal coding patterns themselves.


4.3.1.3. Example 3: Incremental Refactoring with Testing#

Following the diagnostic approach from Example 2, the user now implements one focused refactoring objective. They work incrementally: make one change, run tests, verify behavior unchanged, commit. When a change breaks something, they catch it immediately and can revert or fix because only one thing changed. This disciplined approach maintains a working codebase throughout the refactoring process. The example shows multiple small steps with verification at each point.
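One such small step might look like the following hypothetical sketch, where a single refactor is immediately checked against tests recorded before the change (the `slug` function is invented for illustration):

```python
# Illustrative sketch: one refactoring step, verified immediately afterward.

def slug(title: str) -> str:
    # The single change this step: a manual character-by-character loop was
    # replaced with str.lower()/str.split(); observable behavior must not change.
    return "-".join(title.lower().split())

# Characterization tests captured before the change; if any fail, revert.
assert slug("Hello World") == "hello-world"
assert slug("  Multiple   Spaces ") == "multiple-spaces"
```

Only after these checks pass is the step committed and the next objective considered.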


4.3.1.4. Example 4: Performance Optimization with Baseline Metrics#

The user identifies a performance bottleneck through profiling. Before optimizing, they establish baseline metrics. They request one specific optimization from AI, verify it actually improves performance, and confirm correctness is maintained. When an “optimization” actually makes things worse, they catch it immediately by comparing to the baseline and revert. This metrics-driven approach prevents premature optimization and ensures changes provide real benefits.
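A minimal sketch of that workflow, assuming a pure-Python hot spot, might use the standard-library `timeit` module to capture a baseline before and after the change (both functions are hypothetical stand-ins for real profiled code):

```python
# Hypothetical sketch: establish a baseline, optimize one thing, then compare.
import timeit

def total_squares(n: int) -> int:
    # Profiled bottleneck: sums 0^2 + 1^2 + ... + (n-1)^2 by iteration.
    return sum(i * i for i in range(n))

baseline = timeit.timeit(lambda: total_squares(10_000), number=200)

def total_squares_fast(n: int) -> int:
    # Proposed optimization: closed-form sum of squares for 0..n-1.
    return (n - 1) * n * (2 * n - 1) // 6

# Correctness first: if results diverge, the "optimization" gets reverted.
assert total_squares_fast(10_000) == total_squares(10_000)
optimized = timeit.timeit(lambda: total_squares_fast(10_000), number=200)
# Keep the change only if `optimized` is measurably below `baseline`.
```

Comparing against a recorded baseline on the same machine is what distinguishes a real improvement from noise or a regression.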

4.3.1.5. Example 5: AI Breaks Code During Refactoring#

The user requests a specific refactoring. The AI implements it but subtly changes behavior in the process. Because the user tests after each change (following the incremental approach), they catch the breakage immediately. They can identify exactly what went wrong, provide corrected requirements, and try again. If they’d made multiple changes at once, they wouldn’t know which change caused the problem.
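As a hypothetical illustration of how per-change testing surfaces this kind of subtle drift, consider a refactor that silently switches truncation to rounding (both functions below are invented for this sketch):

```python
# Illustrative: a regression test catching a subtle behavior change.

def cents_original(amount: float) -> int:
    # Original behavior: truncates toward zero.
    return int(amount * 100)

def cents_refactored(amount: float) -> int:
    # "Refactored" version: rounds to nearest, silently changing behavior.
    return round(amount * 100)

# Running the existing tests right after the change exposes the divergence:
assert cents_original(19.999) == 1999
assert cents_refactored(19.999) == 2000   # the test suite flags this immediately
```

Because only this one change was made, the failing comparison pinpoints exactly where behavior drifted, so the user can state the corrected requirement (“preserve truncation”) and retry.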