1.2. Rule 2: Distinguish Problem Framing from Coding#

Framing a problem in a programmatic way and coding are not the same thing [1]. Programmatic problem framing is problem-solving: understanding the domain, decomposing complex problems, finding the right levels of abstraction, designing algorithms, and making architectural decisions. Coding is the mechanical translation of these concepts into executable syntax in a programming language. Using AI coding tools effectively requires that you deeply understand, from a programmatic perspective, the problem you are trying to solve; in most cases this understanding transcends both the particular programming language and the actual code implementation. AI tools excel at coding tasks, generating syntactically correct implementations from well-specified requirements, but they currently require human guidance for problem-framing decisions that involve domain expertise, methodological choices, and scientific reasoning. You can’t effectively guide or review what you don’t understand, so establish fluency in at least one programming language and its fundamental concepts before leveraging AI assistance. This foundation allows you to spot when generated code deviates from best practices or introduces subtle bugs. Without it, you’re essentially flying blind, unable to distinguish between elegant solutions and convoluted workarounds.

1.2.1. What separates positive from flawed examples#

Flawed examples make vague requests without understanding the problem structure. You get code that might run, but you have no idea if it’s solving the problem correctly, efficiently, or in a way that makes scientific sense. You can’t debug it when it breaks, can’t explain what it’s doing, and can’t verify its correctness.

Positive examples demonstrate clear understanding of the problem at a conceptual level. You specify inputs, outputs, constraints, and expected behavior. You can articulate what success looks like and why. You provide enough detail that you could pseudocode the solution yourself, even if the syntax would be messy. This gives the AI a clear target and gives you the ability to evaluate whether what comes back is reasonable.


1.2.1.1. Example 1: Vibe Coding Without Understanding#

The user has no idea what problem they’re actually trying to solve. They can’t tell whether correlation is even the right measure (they might need partial correlation, mutual information, or something else entirely). There is no specification of what “analyze” means scientifically, and no way to evaluate whether 0.7 is a meaningful threshold for anything. The loop implementation is inefficient, but the user has no way to know that. The code includes the diagonal (self-correlation equals 1) in the summary statistics, which is almost certainly wrong. The user can’t debug this when it inevitably breaks and can’t explain what it’s doing to collaborators.
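The original example code is not reproduced here, but the flaws described above can be illustrated with a hypothetical sketch of the kind of code a vague “analyze my data” prompt tends to produce (the function name and threshold are illustrative, not from the original):

```python
import numpy as np

def analyze(data):
    # Slow pairwise Python loop where a single vectorized
    # np.corrcoef(data, rowvar=False) call would do.
    n = data.shape[1]
    corr = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            corr[i, j] = np.corrcoef(data[:, i], data[:, j])[0, 1]
    # The flaw described in the text: the diagonal (self-correlation
    # equals 1) is counted in the summary statistics, inflating both
    # the mean and the count of "strong" correlations.
    strong = int((corr > 0.7).sum())
    return corr, corr.mean(), strong

rng = np.random.default_rng(0)
corr, mean_corr, strong = analyze(rng.normal(size=(100, 5)))
```

With 5 uncorrelated columns, `strong` is at least 5 purely from the diagonal, which a user without conceptual understanding would have no way to notice.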


1.2.1.2. Example 2: Problem-First Specification#

The user clearly articulates the problem structure: inputs, outputs, constraints. They specify what “connectivity” means in this context (Pearson correlation, not something else). They provide concrete requirements the AI can actually implement. They list validation criteria so they can verify correctness themselves. The NaN handling requirement is stated explicitly. Now the user can actually check if this solution makes sense. They can debug issues because they understand what should happen. They can explain to collaborators what this code does and why it does it that way.
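A minimal sketch of what such a problem-first specification might yield, assuming a spec along the lines of “Pearson correlation connectivity from a timepoints-by-regions array, with explicit NaN handling and checkable validation criteria” (the function name and the drop-incomplete-timepoints policy are illustrative assumptions):

```python
import numpy as np

def connectivity_matrix(timeseries):
    """Pearson correlation connectivity.

    Input:  array of shape (n_timepoints, n_regions), may contain NaNs.
    Output: (n_regions, n_regions) symmetric matrix with unit diagonal.
    """
    ts = np.asarray(timeseries, dtype=float)
    # Explicit, documented NaN policy: drop any timepoint with a
    # missing value (one defensible choice among several).
    ts = ts[~np.isnan(ts).any(axis=1)]
    if ts.shape[0] < 3:
        raise ValueError("too few complete timepoints for correlation")
    return np.corrcoef(ts, rowvar=False)

# Validation criteria the user can check themselves:
rng = np.random.default_rng(1)
data = rng.normal(size=(200, 4))
data[5, 2] = np.nan                          # inject a missing value
C = connectivity_matrix(data)
assert C.shape == (4, 4)
assert np.allclose(C, C.T)                   # symmetric
assert np.allclose(np.diag(C), 1.0)          # self-correlation == 1
assert (np.abs(C) <= 1 + 1e-9).all()         # valid correlation range
```

Because the validation criteria came from the specification rather than from the generated code, they let the user verify correctness independently of the implementation.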


1.2.1.3. Example 3: Algorithmic Understanding Guides Implementation#

The user understands the algorithm at a conceptual level before asking for any code. They specify the modularity formula explicitly rather than hoping the AI gets it right. They provide concrete stopping criteria and expected behavior. They can verify each component works correctly (modularity calculation, gain computation). They know what “reasonable” output looks like for their domain. They can debug by checking intermediate modularity values match what they expect. They have a comparison point (networkx implementation) for validation.
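To illustrate what “verify each component works correctly” can look like for the modularity calculation, here is a hedged sketch that implements Newman modularity, Q = (1/2m) Σᵢⱼ [Aᵢⱼ − kᵢkⱼ/2m] δ(cᵢ, cⱼ), directly from the formula and checks it against a partition whose modularity is known in advance (the function name is illustrative; the text's networkx comparison is not reproduced here):

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity of a partition of an undirected graph.

    A: symmetric adjacency matrix; labels: community id per node.
    """
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)                  # node degrees
    two_m = k.sum()                    # 2m: each edge counted twice
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]   # delta(c_i, c_j)
    return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)

# Known-answer check: two disconnected triangles, partitioned into
# the two triangles, have modularity exactly 0.5.
tri = np.ones((3, 3)) - np.eye(3)
A = np.block([[tri, np.zeros((3, 3))],
              [np.zeros((3, 3)), tri]])
assert abs(modularity(A, [0, 0, 0, 1, 1, 1]) - 0.5) < 1e-12
# Degenerate partition (everything in one community) gives Q = 0.
assert abs(modularity(A, [0] * 6)) < 1e-12
```

Testing components against hand-computable cases like this is exactly the kind of verification that algorithmic understanding makes possible and that a vague prompt does not.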


1.2.2. References#

[1] Robert C. Martin. Clean Code: A Handbook of Agile Software Craftsmanship. Prentice Hall, Upper Saddle River, NJ, 2008. ISBN 9780132350884.