What Good Context Looks Like — Practical Patterns
Using AI well isn’t about accumulating prompts or copy‑pasting recipes. It’s about feeding the model the minimal, relevant, and structured context it needs to complete one small, clear job. This article explains what to include, what to avoid, and how to keep multi‑step work clean and reliable — without turning the page into a prompt cookbook.
What "good context" actually is
Good context is a focused lens, not a data dump. Include only items that materially affect the single action you're asking the model to perform:
- A one‑sentence goal that states what success looks like. (If you don't yet know the exact goal, describe the direction in a few sentences — the point is to show the AI where to aim.)
- The minimal artifact(s): a short failing test output, a 10‑line code snippet, or one authoritative source link.
- Explicit constraints and priorities (what to prefer, what to exclude).
- A clear expected output format (for example: "one‑sentence root cause + 3 bullet fixes").
If a piece of information does not change the action's decision, leave it out. Excess information is noise.
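The four items above can be assembled mechanically. Here is a minimal sketch in Python — the helper name, the example failing test, and the chat API it would feed are all hypothetical; the point is the shape of the context, not the tooling:

```python
def build_context(goal: str, artifact: str, constraints: list[str], output_format: str) -> str:
    """Assemble a compact, structured prompt from the four context parts."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Goal: {goal}\n\n"
        f"Artifact:\n{artifact}\n\n"
        f"Constraints:\n{constraint_lines}\n\n"
        f"Expected output: {output_format}\n"
    )

# Example values are invented for illustration.
prompt = build_context(
    goal="Find why test_parse_date fails on ISO week dates.",
    artifact="FAILED test_parse_date - ValueError: Invalid isoformat string: '2023-W05-1'",
    constraints=["Python 3.10, stdlib only", "Do not change the public API"],
    output_format="one-sentence root cause + 3 bullet fixes",
)
print(prompt)
```

Everything that does not change the model's decision is left out — the prompt stays short enough to audit at a glance.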
Practical patterns
Atomize tasks
- Break work into atomic units: one file, one behavior, one success criterion.
- Smaller tasks let you attach tight context and get high‑quality first‑pass results.
Attach the smallest useful artifact
- Prefer a reproducible snippet (failing test + error lines) or a small code region with a file path and line numbers.
- Large files are acceptable only when they genuinely provide a necessary "big picture" view — otherwise they add noise.
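As a sketch of what "smallest useful artifact" means in practice — the log, test names, and filter function below are all invented — the idea is to extract only the failing lines from a long run before pasting anything:

```python
# Hypothetical CI log: most of it is passing tests the model does not need.
FULL_LOG = """\
collected 214 items
tests/test_auth.py ............ [ 40%]
tests/test_slug.py F [ 41%]
FAILED tests/test_slug.py::test_unicode_slug - AssertionError
tests/test_api.py .......... [100%]
"""

def minimal_artifact(log: str, keyword: str) -> str:
    """Keep only the lines that mention the failing test."""
    return "\n".join(line for line in log.splitlines() if keyword in line)

artifact = minimal_artifact(FULL_LOG, "test_slug")
print(artifact)
```

Two lines with a file path and a test name beat two hundred lines of green dots.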
Make assumptions explicit
- State environment, versions, and business rules that matter. Treat these as part of the context, not optional notes.
Choose tools by purpose
- Use web search tools (Google, Perplexity) for freshness and source discovery.
- Use code‑aware assistants (IDE integrations) when the model needs to read or modify repo files.
- Use general LLMs for synthesis, writing, and structure.
How I use Perplexity in the chain
Perplexity is my first stop for exploratory research. I ask high‑level questions, get a concise summary and links, then paste only the canonical excerpts or links into the next tool that will synthesize or act.
Typical flow:
- Research in Perplexity → collect 2–5 authoritative links and short excerpts.
- Validate publication dates and the quality of the primary sources.
- Paste the small excerpts or links into the synthesis model and ask for tradeoffs, ranking, or a code‑oriented action.
Perplexity is useful because it summarizes and surfaces sources quickly, but it’s not the final arbiter — always verify the original source before trusting its conclusions.
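The handoff from research to synthesis can be sketched as a small data structure — the `Source` type and helper below are hypothetical, but they capture the rule that only short excerpts and links cross the tool boundary:

```python
from dataclasses import dataclass

@dataclass
class Source:
    url: str
    excerpt: str    # a short canonical quote, not the full page
    published: str  # checked during the validation step

def seed_synthesis(sources: list[Source], question: str) -> str:
    """Build a synthesis prompt from a small set of validated sources."""
    assert 2 <= len(sources) <= 5, "keep the source set small and curated"
    body = "\n".join(f"- {s.url} ({s.published}): {s.excerpt}" for s in sources)
    return f"{question}\n\nSources:\n{body}\n\nRank the options and note tradeoffs."

# Invented example sources for illustration only.
seed = seed_synthesis(
    [Source("https://example.com/a", "Feature X landed in v2.1", "2025-01-10"),
     Source("https://example.com/b", "v2.1 deprecates the old API", "2025-02-02")],
    "Should we migrate to v2.1 now?",
)
```

Note the guard on the number of sources: if you need more than five, the research step is not finished yet.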
Context‑bleeding: a subtle but common hazard
When a task goes through many iterations, session state accumulates rejected ideas, partial fixes, and old constraints. Models can "remember" rejected content and reintroduce it later. Symptoms include:
- A paragraph or code fragment you discarded reappears.
- An exclusion you set earlier (e.g., "exclude marketplace sellers") is ignored in a later output.
- The model repeats an earlier mistaken assumption as if it were true.
How to avoid bleeding:
- Periodically produce a compact SUMMARY (one paragraph + 3 TODOs) and use that to seed a fresh session.
- If the model repeatedly ignores a constraint, start a fresh session and restate the constraint as a single‑line rule.
- Copy only canonical sources (short excerpts or links) into synthesis prompts; avoid pasting entire noisy histories.
- Keep rejected fragments in a separate, auditable "rejected" note — do not paste them back into active context.
Treat conversation state like a cache: if it gets noisy, clear it and seed from the distilled truth.
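The "clear the cache and seed from distilled truth" step can be made mechanical. This sketch (function and field names are invented) produces the compact text that starts a clean session — constraints are restated, rejected fragments are deliberately absent:

```python
def make_seed(summary: str, todos: list[str], constraints: list[str]) -> str:
    """Produce the compact seed text for a fresh session:
    one-paragraph summary, numbered TODOs, restated hard constraints."""
    todo_lines = "\n".join(f"{i + 1}. {t}" for i, t in enumerate(todos))
    rules = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Summary: {summary}\n\n"
        f"Open TODOs:\n{todo_lines}\n\n"
        f"Hard constraints (restate, do not relax):\n{rules}\n"
    )

# Invented project state for illustration.
seed = make_seed(
    "Parser now handles ISO dates; week-date support is still failing.",
    ["Fix week-date parsing", "Add a regression test", "Update the changelog"],
    ["Python 3.10, stdlib only", "Do not change the public API"],
)
print(seed)
```

Because rejected ideas never enter the seed, the new session cannot "remember" them — the bleed is cut at the source.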
Team practices that make context reproducible
Reproducibility beats ad‑hoc experimentation. Small, shared conventions prevent context dumps and make AI‑assisted work predictable across the team.
- Short issue/PR templates: Goal (one line), Artifacts (path or link), Constraints (1–3 lines), Desired output.
- Canonical snippets folder: file headers, environment lines, and example failing tests that teammates can reference.
- Session hygiene habit: store summaries or important parts of a conversation as files so you can reuse them in new sessions. If your tooling supports it, remove unnecessary details before saving.
- Onboarding exercises: teach teammates the "one‑sentence goal + minimal artifact" habit with a short hands‑on task.
These simple rituals reduce noise, speed up debugging, and make outputs auditable.
What to stop doing
- Stop dumping whole repositories or long logs unless you truly need them.
- Stop treating the model’s memory as authoritative — the model mirrors whatever you provide.
- Stop chasing "perfect prompts" — instead focus on the right context and a tight flow.
- Avoid repeatedly pasting rejected fragments back into the conversation; that causes the model to recycle mistakes.
Further reading
- Andrej Karpathy — "context engineering" discussion: https://x.com/karpathy/status/1937902205765607626
- Phil Schmid — "The New Skill in AI is Not Prompting, It's Context Engineering": https://www.philschmid.de/context-engineering
- LangChain — "The rise of context engineering": https://blog.langchain.com/the-rise-of-context-engineering/
- LlamaIndex — "Context Engineering: What it is, and techniques to consider": https://www.llamaindex.ai/blog/context-engineering-what-it-is-and-techniques-to-consider
- Simon Willison — practical notes on context engineering and real‑world pitfalls: https://simonwillison.net/2025/Jun/27/context-engineering/
Good context is about precision and flow, not volume. Focus on tiny, testable tasks, keep your session state clean, and treat each AI interaction like a short, repeatable experiment. Models become reliable collaborators when you teach them with clarity and discipline.