Topic Analysis: Chapter 2 - What Good Context Looks Like
Metadata
- Syllabus Reference: Part 1, Chapter 2
- Primary Sources: Article 2 (What Good Context Looks Like), Article 7 (Teams)
- Secondary Sources: Interview Q3-4, Presentation
- Analysis Date: 2025-11-28
- Status: Complete
1. Source Materials
1.1 Primary Sources
From Article 2: "What Good Context Looks Like"
Perplexity workflow:
"My process looks like this: 1. Give Perplexity a short question with context (5-10 seconds), 2. Perplexity returns summary + links, 3. Take only relevant parts (not everything!), 4. Insert as context into the next tool"
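The four-step workflow quoted above can be sketched as a tiny pipeline. This is only an illustration: `ask_perplexity` and `send_to_tool` are hypothetical stand-ins for whatever research and working tools are in play, and the keyword filter is a simplified version of "take only relevant parts."

```python
# Sketch of the research -> context -> AI workflow described above.
# ask_perplexity and send_to_tool are hypothetical callables, not real APIs.

def extract_relevant(results, keywords):
    """Step 3: keep only the snippets that mention the task's keywords - not everything!"""
    return [r for r in results if any(k.lower() in r.lower() for k in keywords)]

def research_then_work(question, keywords, ask_perplexity, send_to_tool):
    results = ask_perplexity(question)              # steps 1-2: short question -> summary + links
    context = extract_relevant(results, keywords)   # step 3: filter to the relevant parts
    prompt = f"{question}\n\nContext (found on Perplexity):\n" + "\n".join(context)
    return send_to_tool(prompt)                     # step 4: insert as context into the next tool
```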
Real task template example:
## User Report
User can't open the edit form for their record in the table even though they're an admin
## Support Verification
The user is indeed the admin of that record; we verified their profile, and they should have access
## Context
The table the user mentions is in <table code file>, specifically lines 100-200
## Goal
If the user is an administrator, they must have access to edit records from the table, not just from the detail page
## Possible Solutions
1. Check if the edit button for the record is functional
2. Check if it properly validates permissions
3. Create a test that would catch this bug before reaching users
Key insight on context size:
"If I can't hold the information about the problem in my head, I break it into smaller steps."
Iteration approach:
"Don't send a correction message ('sorry, I meant just the first form'). Edit the original prompt to be more specific and continue working with the new requirements."
Context contamination warning:
"Starting a new session/chat is often better - it avoids context contamination where old, wrong information misleads AI even after corrections."
Good context test:
"If you can explain an AI task so that you get at least an 80% correct result on the first try, you have good context."
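The 80% benchmark and the 2-3-iteration ceiling can be read together as a simple decision rule. The thresholds come straight from the quotes; the function itself is only an illustrative sketch, not something from the source articles.

```python
def diagnose_context(first_try_correct: float, iterations: int) -> str:
    """Apply the 80%-first-try benchmark as a decision rule."""
    if first_try_correct >= 0.8:
        return "good context"
    if iterations > 2:  # "more than 2-3 times" - the task description is the problem
        return "fix the task description, not the AI"
    return "iterate once more, then reassess"
```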
From Article 7: "Context Engineering for Teams"
Team task template:
## Problem
[What broke / what needs to be done]
## Context
- Files: [specific files and lines]
- History: [relevant previous changes]
- Constraints: [what must not change]
## Goal
[Clear success criterion - when is it done]
## Possible Solutions
1. [First option]
2. [Second option]
3. [Third option]
## Tests
[How we verify it works]
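The team template's headings map directly onto a small helper that assembles the prompt text. A minimal sketch, assuming Python; the field names simply mirror the headings above, and the helper itself is not from the articles:

```python
def build_task(problem, files, history, constraints, goal, solutions, tests):
    """Assemble a task prompt in the Problem/Context/Goal/Solutions/Tests format."""
    options = "\n".join(f"{i}. {s}" for i, s in enumerate(solutions, 1))
    return (
        f"## Problem\n{problem}\n\n"
        f"## Context\n- Files: {files}\n- History: {history}\n- Constraints: {constraints}\n\n"
        f"## Goal\n{goal}\n\n"
        f"## Possible Solutions\n{options}\n\n"
        f"## Tests\n{tests}"
    )
```

Filled with the bug-fix example from section 1.1, this produces a prompt very close to the Article 2 template.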
1.2 Secondary Sources
From Interview Q3 (Context Transfer)
- Always write the prompt yourself and note the source ("this is from the internet" or "found on Perplexity")
- Sometimes copy entire response with links
- Sometimes just relevant parts
- Edit to remove unnecessary sources
From Interview Q4 (Task Example)
Real example of a good task:
- User report from support
- Support verification
- Specific file and line numbers
- Clear goal
- Possible solutions to investigate
1.3 External Citations
None specific to this chapter - focus is on practical patterns.
2. Content Extraction
2.1 Key Concepts
- Five Components of Good Context
  - Task: What you want AI to do
  - Constraints: What AI must not do
  - Background: Why you need this
  - Examples: What good output looks like
  - Success Criteria: How you'll judge the result
  - Source: Derived from Article 2 and Article 7 templates
- Research → Context → AI Workflow
  - Definition: Use Perplexity/research first, then feed the results into the working tool
  - Example: 5-10 second question → summary + links → extract the relevant parts → insert into Claude/ChatGPT
  - Source: Article 2, Interview Q1-3
- The Task Template
  - Definition: Structured format for AI tasks
  - Components: Problem, Context, Goal, Solutions, Tests
  - Source: Article 2, Article 7
- Context Contamination
  - Definition: Old, wrong information misleading AI even after corrections
  - Solution: Start a fresh session with clean context
  - Source: Article 2, Article 3
- The 80% First-Try Test
  - Definition: Good context = an 80%+ correct result on the first try
  - If 2-3+ iterations are needed, the problem is the context, not the AI
  - Source: Article 2
2.2 Key Examples
- Bug Fix Task Example
  - Context: Admin user can't edit records
  - Before: "User can't edit profile, fix it"
  - After: Full template with user report, verification, file lines, goal, and solutions
  - Source: Article 2, Interview Q4
- Perplexity Workflow Example
  - Context: Research phase for any task
  - Process: Short question → summary with sources → extract the relevant parts → feed to the next tool
  - Source: Article 2
- Zed + Claude Example
  - Context: AI finding files based on keywords
  - Works well: When specifying a function/module name
  - Works poorly: Similar file names in different directories
  - Source: Article 2
- Iteration Example - Form Types
  - Context: AI shows 2 implementations; the change relates to only one
  - Solution: Edit the original prompt to specify which form; don't send a correction message
  - Source: Article 2
2.3 Key Quotes
- "Always write the reason - 'I need this for XYZ' helps AI understand context" - Article 2
  - Use for: Practical tip
- "If you can explain an AI task so that you get at least an 80% correct result on the first try, you have good context. If you have to iterate more than 2-3 times, the problem isn't AI but your task description." - Article 2
  - Use for: Quality benchmark
- "Less is sometimes more - better precise context for a small task than lots of information for a big one" - Article 2
  - Use for: Counter-intuitive insight
- "When something doesn't work, start fresh - sometimes it's simpler to open a new session than to fix an old one" - Article 2
  - Use for: Practical decision point
2.4 Data/Statistics
- 90% Perplexity vs 10% Google for research (Interview Q12)
- 80% correct on first try = good context benchmark
- 5-10 seconds for initial Perplexity query
- 2-3 iterations max before reassessing context
3. Gap Analysis
3.1 Content Gaps
- [x] Task template provided
- [x] Research workflow explained
- [x] Bad vs good examples included
- [ ] Could expand on non-technical task templates
3.2 Clarity Issues
- None - concepts very practical
3.3 Depth Assessment
- Strong on technical examples
- Could use more non-technical templates
- Iteration concept well covered
4. Structure Proposal
4.1 Chapter Outline
Chapter 2: What Good Context Looks Like
Section 2.1: The Five Components of Context
- Main point: Every effective AI context includes five elements
- Content from: Article 2, Article 7
- Include: Task, Constraints, Background, Examples, Success Criteria
Section 2.2: The Universal Task Template
- Main point: Structured format that works for any task
- Content from: Article 2, Article 7, Interview Q4
- Include: Problem, Context, Goal, Solutions, Tests template
Section 2.3: Before and After - Real Examples
- Main point: See the difference good context makes
- Content from: Article 2, Article 8
- Include: Bug fix example, article writing example
Section 2.4: The Junior Developer Test
- Main point: If a junior developer can't complete it, AI can't either
- Content from: Article 3, Interview Q24
- Include: "Could a new colleague complete this?" question
4.2 Opening Hook
"In the previous chapter, we saw why prompts fail. Now let's look at exactly what 'good context' looks like - the practical patterns that get results on the first try."
4.3 Key Takeaways
- Good context has 5 components: Task, Constraints, Background, Examples, Success Criteria
- The task template works universally: Problem → Context → Goal → Solutions → Tests
- If you need more than 2-3 iterations, the problem is your context, not AI
4.4 Transition
"Now that you know what good context looks like, let's explore the most common mistakes people make - and how to avoid them."
5. Writing Notes
5.1 Tone/Voice
- Practical, pattern-focused
- Show don't tell with templates
- Emphasize repeatability
5.2 Audience Considerations
- Include team task template for developers
- Include simpler template for general audience
- Show how same principles apply across domains
5.3 Potential Visuals
- Five Components Diagram
  - Visual breakdown of Task, Constraints, Background, Examples, Criteria
- Task Template
  - Fillable template format
- Before/After Comparison Table
  - Side-by-side bad vs. good context
- Research Workflow Diagram
  - Perplexity → Extract → Claude/ChatGPT flow
6. Prepared Citations
Internal
- [A2] Article "What Good Context Looks Like"
- [A7] Article "Context Engineering for Teams"
- [I3] Interview Q&A, Question 3 (Context transfer)
- [I4] Interview Q&A, Question 4 (Task example)
- [I24] Interview Q&A, Question 24 (Junior test)
External
- None specific to this chapter
7. Open Questions
- Should the team template go here or in Chapter 7 (Teams)?
  - Decision: Introduce here, expand in Chapter 7
- How detailed should the task template be?
  - Decision: Full template with all components
- Include Zed-specific details or keep it tool-agnostic?
  - Decision: Keep principles tool-agnostic; specific tools in Chapter 5