Chapter 2: What Good Context Looks Like

In the previous chapter, we saw why prompts fail without context. Now let's look at exactly what "good context" looks like—the practical patterns that get results on the first try.

In this chapter, you'll learn:

  • The five components every effective context needs
  • A universal task template that works for any AI request
  • Real examples showing bad versus good context
  • The simple test to know if your context is good enough

2.1 The Five Components of Context

Every effective AI context includes five components. Miss one, and your results suffer.

1. The Task

What you want AI to do—clearly and specifically.

Not just "write code" but "write a function that validates email addresses."

2. The Constraints

What AI must NOT do—the boundaries and limitations.

"Don't use regex. Must handle international emails. Maximum 50 lines. No external libraries."

3. The Background

Why you need this—the purpose and context.

"Users are signing up with invalid emails. This function will run on every form submit in our registration flow."

4. The Examples

What good output looks like—references and samples.

"Here's a similar function we use for phone validation: [code]. Match this style."

5. The Success Criteria

How you'll judge the result—what "done" means.

"Should handle edge cases like: test+filter@gmail.com, name@subdomain.company.co.uk. Must pass our existing test suite."

When you provide all five, AI has everything it needs. When you skip components, AI has to guess—and guesses are often wrong.
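To make this concrete, here's what a function satisfying the constraints from component 2 might look like: no regex, no external libraries, international addresses allowed, under 50 lines. This is my own illustrative sketch—the function name and the specific checks are invented, not code from any real codebase.

```typescript
// Illustrative only: a minimal email check without regex or external libraries.
// It deliberately accepts Unicode (international) local parts and domains.
function isValidEmail(email: string): boolean {
  const at = email.lastIndexOf("@");
  // Need a non-empty local part before '@' and a non-empty domain after it.
  if (at < 1 || at === email.length - 1) return false;
  const local = email.slice(0, at);
  const domain = email.slice(at + 1);
  // Rough RFC-inspired length limits.
  if (local.length > 64 || domain.length > 255) return false;
  // Domain must have at least two non-empty dot-separated labels.
  const labels = domain.split(".");
  return labels.length >= 2 && labels.every((label) => label.length > 0);
}
```

Notice how the success criteria from component 5 translate directly into test cases: `test+filter@gmail.com` and `name@subdomain.company.co.uk` should both pass.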


2.2 The Universal Task Template

Here's a template that works for any AI task. I use it daily:

## Problem
[What needs to be done or what broke]

## Context
- Files: [specific files and lines if applicable]
- History: [relevant previous changes or attempts]
- Constraints: [what must not change]

## Goal
[Clear success criterion—when is it done]

## Possible Solutions
1. [First option to explore]
2. [Second option]
3. [Third option]

## Tests/Verification
[How we verify it works]

Real Example: Bug Fix

Here's how this template looks for a real debugging task:

## Problem
User ID: 12345 can't edit their profile via the UI

## Context
- Support verified: user has 'admin' role in database
- Works: API endpoint PUT /api/profile/:id (tested manually)
- Doesn't work: "Edit" button in ProfileView.tsx
- Console error: "Permission denied at ProfileView.tsx:156"
- File: src/components/ProfileView.tsx, lines 150-160

## Goal
Edit button must work for users with 'admin' role

## Possible Solutions
1. Check how permissions are validated in ProfileView
2. Verify if user role loads correctly from state
3. Compare API permission check vs UI permission check

## Tests
- User with admin role can click Edit and see the form
- Existing admin permissions aren't broken
- Create test to catch this before users see it

Notice what's included:

  • Verified facts from support, not assumptions
  • A specific location in code (file + line numbers)
  • What works vs. what doesn't (API yes, UI no)
  • A clear goal, not a vague "fix it"
  • Directions to explore, not commands to execute

With this context, AI usually finds the exact problem on the first try. In this case: the UI was checking for 'editor' role instead of 'admin'. Two-minute fix.
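The fix itself can be sketched like this—the names are invented for illustration, since the actual ProfileView code doesn't appear in this book:

```typescript
// Hypothetical reconstruction of the bug and the fix.
type User = { id: number; role: string };

// Before: the UI hard-coded a check for 'editor', so admins were denied.
const canEditBuggy = (user: User): boolean => user.role === "editor";

// After: accept every role that should be allowed to edit.
const EDIT_ROLES = new Set(["admin", "editor"]);
const canEdit = (user: User): boolean => EDIT_ROLES.has(user.role);
```

One hedged design takeaway: the API and UI checks drifted apart because each hard-coded its own rule. Centralizing the allowed roles in one place, as `EDIT_ROLES` does here, is one way to keep that drift from recurring.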


2.3 Before and After: Real Examples

Let's see this pattern across different task types.

Example: Article Writing

Without Context: "Write me an article about AI"

Result: Generic essay nobody wants to read.

With Context: "I need a 1000-word article for LinkedIn about Context Engineering.

Audience: technical managers who use AI occasionally
Tone: direct, practical, no hype
Style: here are my last 2 articles [attached]
Main point: context matters more than prompts
Include: one concrete example, Karpathy quote
Exclude: generic AI history, predictions"

Result: Article that sounds like me and makes my actual point.

Example: Code Refactoring

Without Context: "Refactor this function to be cleaner"

Result: AI randomly splits function, breaks business logic, changes things that shouldn't change.

With Context: "Refactor processOrder function (attached).

Context:

  • TypeScript 4.9, Express + TypeORM
  • All existing tests pass (attached)
  • Problem: 800 lines, nobody wants to touch it

Goals:

  • Split into smaller functions (max 50 lines each)
  • Keep all functionality identical
  • Add TypeScript types where missing

Don't change:

  • Business logic
  • Database table names
  • API response format
  • Error messages (backwards compatibility)

Process:

  1. Identify independent parts
  2. Propose split (don't code yet)
  3. Wait for my approval
  4. Implement with tests"

Result: Clean refactoring I can deploy with confidence.
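The "split into smaller functions, keep behavior identical" pattern looks roughly like this in miniature. Since processOrder itself doesn't appear in the book, this is a toy stand-in with invented names:

```typescript
// Toy illustration of the refactoring pattern: extract small, independently
// testable helpers while the public function keeps its signature and behavior.
interface Order {
  items: { price: number; qty: number }[];
  coupon?: string;
}

// Helper 1: sum of line items.
function subtotal(order: Order): number {
  return order.items.reduce((sum, item) => sum + item.price * item.qty, 0);
}

// Helper 2: coupon logic isolated in one place.
function discount(order: Order, base: number): number {
  return order.coupon === "SAVE10" ? base * 0.1 : 0;
}

// The public function: same inputs, same outputs as before the split.
function orderTotal(order: Order): number {
  const base = subtotal(order);
  return base - discount(order, base);
}
```

Each helper stays well under the 50-line limit and can be unit-tested on its own, while `orderTotal` keeps its signature—so callers, response formats, and business logic are untouched, exactly what the "Don't change" list demands.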

Example: Technology Selection

Without Context: "What's the best framework for admin panel?"

Result: Generic list of frameworks from 2023.

With Context: "Need framework for internal admin app.

Project:

  • 20 internal users
  • Mainly CRUD + reports
  • Team: 2 developers with React experience
  • Timeline: MVP in 1 month
  • Integration: existing NestJS REST API
  • Budget: prefer open source

Requirements:

  • Fast development (out-of-the-box components)
  • TypeScript support
  • Good documentation
  • Active community (2024+)

Exclude:

  • Paid solutions (Retool, Forest Admin)
  • PHP frameworks
  • Anything requiring learning a new language

Output I need: Top 3 options with time-to-MVP estimate and starter template links"

Result: React Admin, Refine, and Ant Design Pro—specific comparison with exactly what I asked for.


2.4 The Junior Developer Test

Before every AI task, ask yourself one question:

"Could a junior developer who started yesterday complete this task with just this information?"

If the answer is no—you're missing context. If the answer is yes—AI can handle it too.

This test works because:

  • Junior developers need explicit instructions (AI does too)
  • Junior developers can't read minds (AI can't either)
  • Junior developers need examples (AI learns from them too)
  • Junior developers ask clarifying questions (AI makes assumptions instead)

The difference: AI won't ask you for clarification. It will just make assumptions and proceed. Bad assumptions lead to bad output. Good context prevents bad assumptions.

The 80% Test

Here's a more precise benchmark: if AI's first response is at least 80% correct, your context is good enough.

If you have to iterate more than 2-3 times, the problem isn't AI—it's your task description. Stop, improve your context, start fresh.


Chapter Summary

Key Takeaways:

  1. Five components: Task, Constraints, Background, Examples, Success Criteria—miss one and AI guesses
  2. Use the template: Problem → Context → Goal → Solutions → Tests works for any task
  3. Junior developer test: If a new colleague couldn't complete it, neither can AI

Try This: Take a task you need to do this week. Before going to AI, fill out the universal template. All five components. Then give it to AI. Notice how much better the first response is compared to your usual "just ask and see" approach.


Next: Now that you know what good context looks like, let's explore the most common mistakes people make—and how to avoid them.

Published: November 28, 2025
