1.1. Rule 1: Gather Domain Knowledge Before Implementation#

Know your problem space before writing code. Understand data shapes, missing-data patterns, field-specific libraries, and existing implementations that could serve as models. You don’t need to be an expert at the outset; use AI to research domain standards, available datasets, common approaches, and implementation patterns before diving into coding. This reconnaissance phase keeps you from reinventing the wheel or violating field conventions. Share your current level of understanding with the AI, then iteratively build context through targeted questions about tools, data structures, and best practices, asking for specific references and paper summaries. This upfront investment ensures your code aligns with community standards and handles real-world data appropriately.

1.1.1. What separates positive from flawed examples#

Flawed examples jump straight to implementation without understanding the domain. You end up with generic code that ignores field-specific conventions, misses critical preprocessing steps, and uses inappropriate methods. Worse, you can’t even tell if what you got back makes sense because you don’t know what the right approach looks like.

Positive examples start by explicitly stating what you do and don’t know, and systematically build understanding through targeted questions. You ask for specific references, methodological papers, standard tools, and data formats. You iteratively refine your understanding before writing a single line of code. This way, when you finally ask for implementation, you can actually evaluate what the AI produces.


1.1.1.1. Example 1: Insufficient Domain Research#

Jumping to implementation without domain understanding leaves you with code you can’t evaluate. You don’t know whether brain networks should be weighted or binary. Critical preprocessing steps like thresholding strategies and normalization approaches are completely missing. The code uses metrics without domain justification (average path length, for example, is undefined for disconnected graphs, which are common in brain networks). Field-standard tools like the Brain Connectivity Toolbox get ignored entirely. You have no way to verify whether this approach aligns with neuroscience conventions, and you’re stuck with code you fundamentally can’t assess.
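The path-length pitfall is easy to demonstrate. The sketch below is a minimal, hand-rolled illustration of proportional thresholding and a connectedness check — not the Brain Connectivity Toolbox itself, and the matrix values and the `threshold_proportional`/`is_connected` helpers are hypothetical examples:

```python
import numpy as np
from collections import deque

def threshold_proportional(corr, density):
    """Keep only the strongest `density` fraction of off-diagonal edges.

    Hypothetical stand-in for the proportional-thresholding routines
    that field-standard tools provide.
    """
    w = corr.astype(float).copy()
    np.fill_diagonal(w, 0)
    upper = w[np.triu_indices_from(w, k=1)]
    k = int(round(density * upper.size))
    if k == 0:
        return np.zeros_like(w)
    cutoff = np.sort(upper)[-k]          # weight of the k-th strongest edge
    return np.where(w >= cutoff, w, 0.0)

def is_connected(adj):
    """Breadth-first search: is every node reachable from node 0?"""
    n = adj.shape[0]
    seen, queue = {0}, deque([0])
    while queue:
        u = queue.popleft()
        for v in np.nonzero(adj[u])[0]:
            if int(v) not in seen:
                seen.add(int(v))
                queue.append(int(v))
    return len(seen) == n

# Toy 4-node correlation matrix: two tightly coupled pairs, weak cross-links.
corr = np.array([[1.0, 0.9, 0.1, 0.1],
                 [0.9, 1.0, 0.2, 0.1],
                 [0.1, 0.2, 1.0, 0.8],
                 [0.1, 0.1, 0.8, 1.0]])

sparse = threshold_proportional(corr, 1 / 3)  # keep the 2 strongest edges
dense = threshold_proportional(corr, 0.5)     # keep the 3 strongest edges

print(is_connected(sparse))  # → False: aggressive thresholding splits the graph
print(is_connected(dense))   # → True
```

When the graph splits into components, average path length is infinite (undefined), which is exactly the kind of edge case domain research warns you about before it silently corrupts your results.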


1.1.1.2. Example 2: Thorough Domain Exploration#

Starting with explicit acknowledgment of your knowledge level (“understand basic graph theory but new to brain connectivity”) sets up productive learning. Asking for specific references before any coding gets you the foundational papers you need. Building understanding iteratively through targeted questions teaches you about standard tools (bctpy), typical data formats, and methodological considerations. Getting concrete papers lets you verify information independently. Most importantly, confirming understanding before moving to implementation means you’re now equipped to evaluate whether the AI’s eventual code actually makes sense.
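One concrete payoff of that research: once you know bctpy is the standard Python port of the Brain Connectivity Toolbox, you can sanity-check it against a hand computation. The sketch below hand-rolls node strength (the weighted analogue of degree), a metric the toolbox also provides (`strengths_und` in the original BCT); the matrix is a made-up example:

```python
import numpy as np

def node_strengths(W):
    """Node strength: sum of each node's connection weights.

    Hand-rolled equivalent of the Brain Connectivity Toolbox's
    strengths_und; assumes W is symmetric with a zero diagonal.
    """
    return W.sum(axis=0)

# Made-up 3-node weighted connectivity matrix.
W = np.array([[0.0, 0.5, 0.2],
              [0.5, 0.0, 0.3],
              [0.2, 0.3, 0.0]])

print(node_strengths(W))  # → [0.7 0.8 0.5]
```

Matching your own three-line computation against the library's output is a cheap way to confirm you understand both the metric and the data format it expects.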


1.1.1.3. Example 3: Learning Community Standards#

Asking about standards and existing implementations before writing any code orients you in what the field actually uses. You learn that MNE-Python is the neuroscience standard, not just a general-purpose tool. You understand preprocessing requirements specific to EEG (highpass filtering, artifact rejection, reference choice). You learn the reporting conventions publications expect (frequency band definitions, the dB scale, normalization approaches). Now you have concrete documentation to reference and can make informed decisions about using existing tools versus implementing custom solutions.
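To make those reporting conventions concrete, here is a numpy-only sketch of band power on the dB scale, using a synthetic 10 Hz (alpha-band) signal. A real pipeline would use MNE-Python's filtering and PSD routines instead; the linear detrending below is only a crude stand-in for proper highpass filtering, and the band edges follow the standard alpha (8–13 Hz) and beta (13–30 Hz) definitions:

```python
import numpy as np

fs = 250.0                        # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)      # 10 s of synthetic "EEG"
rng = np.random.default_rng(0)
# 10 Hz alpha oscillation + slow drift + noise
x = np.sin(2 * np.pi * 10 * t) + 0.05 * t + 0.1 * rng.standard_normal(t.size)

# Crude drift removal (stand-in for the highpass filtering MNE would apply).
slope, intercept = np.polyfit(t, x, 1)
x = x - (slope * t + intercept)

# Power spectrum via FFT.
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * t.size)

def band_power_db(lo, hi):
    """Mean power in [lo, hi) Hz, reported on the dB scale."""
    mask = (freqs >= lo) & (freqs < hi)
    return 10 * np.log10(psd[mask].mean())

alpha = band_power_db(8, 13)
beta = band_power_db(13, 30)
print(alpha > beta)  # the 10 Hz component dominates, so alpha power is higher
```

Knowing the conventions first — which bands, which scale, which preprocessing — is what lets you judge whether numbers like these are reportable or artifacts of a skipped step.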