Context Is Everything — But Not All Context Is Useful
When people talk about "giving the model context" they often mean "dump everything I have into the session." That rarely helps. The real art is choosing the right context — the minimal, high-signal information that lets the model act correctly and quickly. This article describes how to pick that context, how to structure the flow of work (from research to synthesis to code changes), and how teams can make context reproducible and safe.
You'll get practical heuristics, a short flow you can follow, and team patterns that make AI-assisted work reliable — without turning this into a prompt cookbook. The goal is a repeatable process and a set of small habits that preserve clarity and speed.
Why "more" is often worse
Large language models are pattern-matchers. When you give them a lot of noisy or conflicting documents, they try to reconcile everything and often latch onto the wrong signals. The result is:
- long, unfocused outputs,
- hallucinated facts or invented sources,
- suggested changes that depend on assumptions you didn't intend.
Contrast that with a compact, targeted context: one file, one failing test output, one short decision criterion. The model focuses on the right thing and gives a concise, usable result.
Key takeaway: prefer relevant, accurate, and constrained context over volume.
What counts as relevant context (and what doesn't)
Think of context as the lens you hand the model. Choose a clean lens.
Relevant context categories:
- Primary facts and sources: authoritative links, test-report rows, API docs — when freshness matters.
- Local artifacts: specific files, small code snippets, stack traces, failing test output.
- Decision criteria and constraints: what metric matters, what to exclude, performance or safety limits.
- Examples of desired output: a one-line success criterion or a short sample of the format you want back.
Irrelevant or harmful context:
- Entire repositories or long logs with no pointers.
- Old, contradictory notes that haven't been reconciled.
- Ambiguous business rules without clear priority.
- Large textual dumps that contain both relevant and irrelevant details.
If you're unsure whether to include something, ask: "Will this fact materially change the single action I'm asking for?" If the answer is no, leave it out.
A short flow for building surgical context
Use this flow for a single atomic task. It keeps sessions focused and results fast.
1. One-sentence goal (15–30s)
   - Example: "Fix failing signup test in `RegisterForm.tsx` — failing on email validation for `@company` addresses."
   - This sentence anchors the model and the human reader.
2. Attach the minimal artifact (30–60s)
   - Paste the exact failing test output (3 lines), the 10 lines of code around the suspected bug, or the authoritative source link (e.g., an ADAC table row).
   - Provide file path(s) and line ranges with backticks around names, e.g. `frontend/src/components/RegisterForm.tsx`.
3. State constraints and success criteria (15–30s)
   - Constraints: environment, versions, exclusions ("do not change UI styling", "exclude marketplace sellers").
   - Success criterion: a single test or behavior that proves the fix works.
4. Request a strict return format (10–20s)
   - Ask for a one-sentence root cause, a short checklist, and a suggested file diff (or a short patch). This forces minimal, actionable output.
5. Evaluate quickly (1–2 min)
   - If the output is ≥80% right, iterate inline with a narrowly scoped correction.
   - If it misses major assumptions or repeats bad behavior, start a fresh session with the distilled context.
This flow enforces the "1 file / 1 behavior / 1 success criterion" discipline that makes AI interactions reliable.
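As an illustration, the flow above can be captured in a tiny helper that assembles the four pieces into one compact preamble. The class and field names here are invented for this sketch, not a real library:

```python
from dataclasses import dataclass, field

@dataclass
class SurgicalContext:
    """Container for the pieces of a surgical context (illustrative names)."""
    goal: str                          # one-sentence goal
    artifact: str                      # failing test output, snippet, or link
    constraints: list[str] = field(default_factory=list)
    success_criterion: str = ""
    return_format: str = "one-sentence root cause, short checklist, suggested diff"

    def render(self) -> str:
        """Assemble the pieces into one compact prompt preamble."""
        lines = [f"Goal: {self.goal}", f"Artifact:\n{self.artifact}"]
        if self.constraints:
            lines.append("Constraints: " + "; ".join(self.constraints))
        if self.success_criterion:
            lines.append(f"Success: {self.success_criterion}")
        lines.append(f"Return format: {self.return_format}")
        return "\n".join(lines)

ctx = SurgicalContext(
    goal="Fix failing signup test in RegisterForm.tsx",
    artifact="AssertionError: expected isValidEmail('a@company') to be true",
    constraints=["do not change UI styling"],
    success_criterion="signup.test.tsx passes",
)
print(ctx.render())
```

The value is less in the code than in the forcing function: if you cannot fill the four fields, the task is not yet atomic.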
Heuristics you can adopt immediately
- The 1–2 minute test: if getting a useful first pass takes >1–2 minutes of fiddling, atomize the task.
- One file, one behavior, one success metric: smaller prompts => higher first-pass quality.
- Explicit assumptions: always state versions, environment, and exclusions that could change outcomes.
- Strict output formats: require JSON, numbered steps, or a short checklist to make the result actionable.
- Verify primary sources externally: when freshness matters, look up the source (e.g., via a web search tool) and paste the authoritative link into the session.
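To make the "strict output formats" heuristic concrete, here is one way to enforce a JSON contract on a model reply. The field names (`root_cause`, `checklist`, `patch`) and the 5-item cap are assumptions for the sketch; pick whatever your team standardizes on:

```python
import json

# The shape we *ask* the model to emit (hypothetical contract).
REQUIRED_KEYS = {"root_cause": str, "checklist": list, "patch": str}

def validate_reply(raw: str) -> dict:
    """Parse a model reply and reject anything that drifts from the contract."""
    reply = json.loads(raw)
    for key, expected_type in REQUIRED_KEYS.items():
        if not isinstance(reply.get(key), expected_type):
            raise ValueError(f"missing or mistyped field: {key}")
    if len(reply["checklist"]) > 5:
        raise ValueError("checklist too long; ask for a tighter answer")
    return reply

sample = (
    '{"root_cause": "regex rejects short TLDs",'
    ' "checklist": ["update regex", "add test"],'
    ' "patch": "(diff goes here)"}'
)
result = validate_reply(sample)
```

Rejecting malformed replies up front is cheaper than manually untangling a free-form answer.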
Team patterns: make context reproducible and safe
For teams, reproducibility beats ad-hoc experimentation. Create small, shared practices that encapsulate the surgical-context approach.
1. Issue/PR templates with "surgical context" fields
   - Fields: Goal (one line), Artifacts (file path(s) or link(s)), Constraints (1–3 lines), Return format.
   - Filling these fields forces the author to think like the model's teacher.
2. Session hygiene and summaries
   - Habit: every 8–12 interactions, ask the model for a one-paragraph "SUMMARY" and a 3-item TODO. Save the summary as the seed for fresh sessions to avoid context drift.
3. Canonical snippets and artifacts
   - Keep an "AI context" folder with standard file headers, environment lines, example failing test outputs, and commonly used constraints. Link to these from issues.
4. Shared decision criteria
   - Maintain a short shared doc that ranks decision criteria for common domains (e.g., "safety > compatibility > price"). Use it in research tasks and product decisions.
5. Safe defaults
   - Include simple safety rules in templates (e.g., "do not include user PII", "exclude marketplace-only sellers") to avoid common pitfalls.
6. Training and onboarding
   - Teach new teammates the "one-sentence goal + minimal artifact + constraint" workflow. Run short exercises: give them a failing test and ask them to prepare a surgical context.
Flow examples (high-level, no prompts)
- Debugging: goal → failing test output → file path → constraints → request a concise patch + test update.
- Product research: decision criteria → authoritative sources gathered separately → paste canonical data into the session → ask for tradeoffs and ranked options.
- Content drafting: audience + tone → 1-paragraph example → list of must-have facts → ask for outline and an intro.
These are patterns, not recipes. Adapt them to your tools and your team's rhythm.
Governance and safety considerations
Context selection is also a risk control mechanism. Bad context can expose sensitive data or produce unsafe recommendations.
- Redact PII before pasting logs or user data.
- Use "exclude" constraints for regulated sources (e.g., "do not recommend medical advice; always cite a licensed source").
- Keep an auditable record of the canonical sources you pasted into the model for decisions that affect customers.
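Redaction can be partially automated before anything is pasted into a session. A minimal sketch, assuming email addresses and US-style phone numbers are the PII shapes you care about; a real ruleset needs to be broader and reviewed:

```python
import re

# Illustrative patterns only; real PII redaction needs a reviewed, tested ruleset.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "<PHONE>"),
]

def redact(log_text: str) -> str:
    """Replace common PII shapes with placeholders before sharing the text."""
    for pattern, placeholder in PII_PATTERNS:
        log_text = pattern.sub(placeholder, log_text)
    return log_text

print(redact("user jane.doe@example.com called from 555-123-4567"))
```

Regex-based redaction catches the obvious cases; treat it as a seatbelt, not a substitute for reviewing what you paste.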
Quick checklist (copy into a README or issue template)
- Goal: one sentence.
- Artifact: file path, 3-line snippet, or authoritative link.
- Constraints: 1–3 lines (exclusions, versions).
- Success: one test or behavior that proves the task is done.
- Timebox: expect a useful first pass in ~1–2 minutes.
- Summary: create a one-paragraph SUMMARY every 8–12 interactions.
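As one possible rendering, the checklist above maps directly onto a GitHub issue template; the section names are suggestions, not a standard:

```markdown
## Goal
<!-- one sentence -->

## Artifact
<!-- file path, 3-line snippet, or authoritative link -->

## Constraints
<!-- 1–3 lines: exclusions, versions -->

## Success criterion
<!-- one test or behavior that proves the task is done -->

## Return format
<!-- e.g. one-sentence root cause + short checklist + suggested diff -->
```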
Closing
"More context" is not a universal virtue. Precision, relevance, and flow are. Teach your team to think like a trainer: focus attention, remove noise, and measure success with one clear test. With that discipline, AI becomes a reliable collaborator rather than a noisy generator.