Topic Analysis: Chapter 6 - Practical Examples
Metadata
- Syllabus Reference: Part 3, Chapter 6
- Primary Sources: Article 8 (Practical Examples)
- Secondary Sources: Article 5, Interview Q15
- Analysis Date: 2025-11-28
- Status: Complete
1. Source Materials
1.1 Primary Sources
From Article 8: "Practical Context Engineering Examples"
Example 1: Debugging Production Bug
Bad task:
"User can't edit profile, fix it" Result: AI generates 200 lines of generic checks, none solving the problem.
Good task:
## Problem
User ID: 12345 can't edit profile via UI
## Context
- Support verified: has 'admin' role in DB
- Works: API endpoint PUT /api/profile/:id
- Doesn't work: "Edit" button in ProfileView.tsx
- Console error: "Permission denied at ProfileView.tsx:156"
- File: src/components/ProfileView.tsx, lines 150-160
## Goal
Edit button must work for users with 'admin' role
## Check
1. How permissions are validated in ProfileView
2. If user role loads correctly
3. Difference between API and UI permission check
Result: AI finds the exact problem - the UI checks the 'editor' role instead of 'admin'. Fixed in 2 minutes.
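The mismatch the AI found boils down to a few lines. Below is a hypothetical Python sketch of this bug class (the article's actual code is TypeScript in ProfileView.tsx); the role names follow the example, everything else is invented for illustration.

```python
# Hypothetical sketch: the API layer and the UI layer check different roles,
# so the endpoint works but the button doesn't. Names are illustrative.
API_EDIT_ROLE = "admin"
UI_EDIT_ROLE = "editor"  # the bug: should be "admin" to match the API check

def api_allows_edit(user_roles: set) -> bool:
    return API_EDIT_ROLE in user_roles

def ui_shows_edit_button(user_roles: set) -> bool:
    return UI_EDIT_ROLE in user_roles

roles = {"admin"}  # user 12345's verified roles
print(api_allows_edit(roles))       # True: PUT /api/profile/:id works
print(ui_shows_edit_button(roles))  # False: Edit button denied
```

The detailed context (verified role, working API, failing UI, exact file and line) is what lets the AI narrow the search to exactly this kind of divergence.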
Example 2: Technology Selection
Bad task:
"What's the best framework for admin panel?" Result: Generic list of frameworks with pros/cons from 2023.
Good task:
## Project Context
- Internal admin app for 20 users
- Mainly CRUD operations + reports
- Team: 2 developers (React experience)
- Timeline: MVP in 1 month
- Integration: Existing REST API (NestJS)
- Budget: Minimal (prefer open source)
## Requirements
- Fast development (components out of the box)
- TypeScript support
- Good documentation
- Active community (2024+)
## Exclude
- Paid solutions (Retool, Forest Admin)
- PHP frameworks
- Anything requiring the team to learn a new language
## Expected Output
Top 3 options with:
- Time to MVP
- Specific CRUD components
- Link to starter template
Result: Specific recommendation: React Admin, Refine, or Ant Design Pro with exact comparison.
Example 3: SQL Optimization
Bad task:
"Optimize this SQL query: SELECT * FROM orders WHERE status = 'pending'" Result: Adds index on status. Still slow.
Good task with full context:
## Query
SELECT o.*, u.name, u.email, p.name as product
FROM orders o
JOIN users u ON o.user_id = u.id
JOIN products p ON o.product_id = p.id
WHERE o.status = 'pending'
AND o.created_at > NOW() - INTERVAL '30 days'
## Context
- PostgreSQL 14
- orders: 2M rows
- users: 100k rows
- products: 5k rows
- Indexes: orders(status), orders(created_at)
- EXPLAIN ANALYZE: [insert output]
- Runs every 10 seconds (dashboard refresh)
## Constraints
- Can't change schema (production)
- Can't add materialized view (policies)
- Max 2 seconds response time
## Goal
Query under 2 seconds with minimal changes
Result: A composite index on (status, created_at), WHERE conditions in the right order, and SELECT limited to the needed columns. Query time drops from 8s to 0.3s.
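The index part of the fix can be sanity-checked locally. A minimal sketch, using SQLite's EXPLAIN QUERY PLAN as a stand-in for PostgreSQL's EXPLAIN ANALYZE (the planners differ, but the composite-index effect is the same); the table shape and index name are illustrative.

```python
import sqlite3

# Verify that one composite index on (status, created_at) can serve both
# WHERE conditions at once. SQLite stands in for PostgreSQL 14 here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT, created_at TEXT)")
conn.execute(
    "CREATE INDEX idx_orders_status_created ON orders (status, created_at)"
)

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id FROM orders "
    "WHERE status = 'pending' AND created_at > '2025-01-01'"
).fetchall()
for row in plan:
    print(row[3])  # SEARCH ... USING INDEX idx_orders_status_created (...)
```

On the real 2M-row table, the equality column (status) must come first in the index so the range condition on created_at can use the remaining index order.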
Example 4: Documentation Generation
Bad task:
"Write API documentation for user endpoint" Result: Generic documentation that doesn't match our style.
Good task with style reference:
## Endpoint
POST /api/v2/users/bulk-import
## Implementation
[Insert endpoint code]
## Context
- Format: OpenAPI 3.0
- Style: Like existing docs [insert example]
- Audience: External developers
- Auth: Bearer token (already documented)
## Specifics
- Max 1000 users per request
- Rate limit: 10 requests/minute
- Async processing (returns job_id)
- Validation rules: [insert from code]
## Generate
1. OpenAPI spec
2. Request/response example
3. Error codes table
4. curl example
Result: Documentation ready to copy-paste into Swagger.
Example 5: Legacy Code Refactoring
Bad task:
"Refactor this function to be cleaner" Result: AI randomly splits function, breaks business logic.
Good task with preservation rules:
## Function
[Insert processOrder function]
## Context
- Language: TypeScript 4.9
- Framework: Express + TypeORM
- Works correctly (all tests pass)
- Problem: Unmaintainable, 800 lines
## Tests
[Insert existing tests]
## Refactoring Goal
1. Split into smaller functions (max 50 lines)
2. Keep all functionality
3. Add TypeScript types where missing
4. Extract magic numbers to constants
## Don't Change
- Business logic
- DB table names
- API response format
- Error messages (backwards compatibility)
## Step by Step
1. Generate comprehensive tests FIRST (if needed)
2. Identify independent parts
3. Propose split (don't code yet)
4. Wait for my approval
5. Implement with tests
Result: Clean, tested refactoring deployable with confidence.
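Two of the refactoring goals (extract magic numbers, split into small functions) can be illustrated on a made-up snippet. Shown in Python for brevity; the article's actual code is TypeScript, and this pricing logic is invented, not the processOrder function.

```python
# Illustrative only. Before:
#   total = price * 1.2 * (0.95 if qty > 100 else 1.0)

# Goal 4: extract magic numbers to named constants
VAT_RATE = 1.20
BULK_DISCOUNT = 0.95
BULK_THRESHOLD = 100

# Goal 1: split into small, single-purpose functions
def apply_vat(price: float) -> float:
    return price * VAT_RATE

def apply_bulk_discount(price: float, qty: int) -> float:
    return price * BULK_DISCOUNT if qty > BULK_THRESHOLD else price

def order_total(price: float, qty: int) -> float:
    """Goal 2: behaves exactly like the original one-liner."""
    return apply_bulk_discount(apply_vat(price), qty)

print(round(order_total(10.0, 150), 2))  # 11.4
```

The "Don't Change" list in the task is what keeps a refactoring like this behavior-preserving: the new functions must produce the same totals the one-liner did, which the existing tests verify.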
Example 6: Personal Project - Consumption Analysis
Bad task:
"Should I buy solar panels?" Result: Generic pros/cons of solar panels.
Good task with data:
## My Consumption Data
[CSV export from utility company - 12 months by hours]
## Context
- Location: Slovakia, western
- Roof: South-facing, 45°, 80m² usable
- Tariff: D2 dual, day 0.15€, night 0.08€
- Annual consumption: 4500 kWh
- Budget: flexible, show me options
## Questions
- "Will it pay off? How many years?"
- "Compare with and without battery"
- "Different kWp sizes"
## Calculate for Each Scenario
- Total investment cost
- Annual savings
- When will it pay off (years)
- % self-sufficiency
## Output
- Comparison table
- Monthly consumption vs production chart
- Recommendation with reasoning
Result: Precise analysis showing 5kWp + 5kWh battery has 7-year ROI.
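The core of that ROI analysis is a simple payback calculation per scenario. A minimal sketch; all euro figures below are illustrative placeholders, not the article's data.

```python
# Naive payback-period sketch for the solar scenarios.
# Placeholder figures only - ignores panel degradation, price inflation,
# and financing costs, which a real analysis would model per scenario.
def payback_years(investment_eur: float, annual_savings_eur: float) -> float:
    return investment_eur / annual_savings_eur

scenarios = {
    "3 kWp, no battery": (5_000, 600),
    "5 kWp, no battery": (7_000, 900),
    "5 kWp + 5 kWh battery": (10_500, 1_500),
}
for name, (cost, savings) in scenarios.items():
    print(f"{name}: payback in {payback_years(cost, savings):.1f} years")
```

Feeding the AI real hourly consumption data (the CSV export) is what turns this from a generic formula into scenario-specific savings estimates.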
1.2 Secondary Sources
From Article 5: "AI Won't Steal Your Job"
Garden example:
"We worked on garden things - planting trees, flowers, vegetables. We're new to it. AI advised us: How to properly plant what in what soil, At what depth, When and how much to water, Which plants go well together. Result? Everything grows, nothing died."
Electricity analysis:
"Claude analyzed our home electricity consumption based on data I downloaded from the utility company. It created a web page with charts and functions so we could see: How much solar panels would cost us, With how many panels, With or without battery, ROI for different scenarios."
From Interview Q15 (Wow Moments)
"Practical home uses: garden planting advice (what in what soil, at what depth, watering), electricity consumption analysis with interactive charts and ROI calculations."
1.3 External Citations
None specific - examples are from personal experience.
2. Content Extraction
2.1 Key Concepts
- Specificity Principle
  - Definition: More precise context = better result
  - Example: Generic bug report vs detailed context with files/lines
  - Source: All examples in Article 8
- Constraints Principle
  - Definition: Always state what MUST NOT change
  - Example: "Don't change business logic, DB tables, error messages"
  - Source: Example 5
- Examples Principle
  - Definition: Show AI what you expect (format, style)
  - Example: "Style: Like existing docs [insert example]"
  - Source: Example 4
- Step-by-Step for Complex Tasks
  - Definition: Request approval at key points
  - Example: "Propose split (don't code yet), Wait for approval"
  - Source: Example 5
- Verifiability
  - Definition: Define how success will be measured
  - Example: "Query under 2 seconds"
  - Source: Example 3
2.2 Key Examples
All 6 examples from Article 8 are key examples, each demonstrating different context engineering principles:
- Debugging - File/line specificity, support verification
- Tech Selection - Project constraints, team context, exclusions
- SQL Optimization - Data volume, performance requirements, constraints
- Documentation - Style matching, format specification
- Legacy Refactoring - Preservation rules, step-by-step approval
- Personal Project - Real data input, scenario comparison
2.3 Key Quotes
- "Before every task, ask yourself: 'Could a junior developer who started yesterday fulfill this?' If not, add context." - Article 8
  - Use for: Summary principle
- Key principles from examples (Source: Article 8):
  - "Specificity - The more precise the context, the better the result"
  - "Constraints - Always state what MUST NOT change"
  - "Examples - Show AI what you expect"
  - "Step-by-step - For complex tasks, request steps"
  - "Verifiability - Define how you'll verify success"
2.4 Data/Statistics
- SQL optimization: 8s → 0.3s
- Legacy function: 800 lines → manageable pieces
- Solar ROI: 7 years with 5kWp + 5kWh battery
- Database sizes: 2M orders, 100k users, 5k products
3. Gap Analysis
3.1 Content Gaps
- [x] Technical examples (debugging, SQL, refactoring)
- [x] Documentation example
- [x] Tech selection example
- [x] Personal project example
- [x] Non-technical example (garden)
- [ ] Could add marketing/content example
3.2 Clarity Issues
- None - examples very concrete
3.3 Depth Assessment
- Excellent depth with full task templates
- Both bad and good versions shown
- Results quantified where possible
4. Structure Proposal
4.1 Chapter Outline
Chapter 6: Practical Examples
Section 6.1: Debugging Production Bug
- Full bad/good comparison
- Key: Verified facts, specific files/lines
- Source: Article 8
Section 6.2: Technology Selection
- Full bad/good comparison
- Key: Project context, exclusions, expected output format
- Source: Article 8
Section 6.3: SQL Optimization
- Full bad/good comparison
- Key: Data volumes, constraints, performance target
- Source: Article 8
Section 6.4: Documentation Generation
- Full bad/good comparison
- Key: Style matching, format specification
- Source: Article 8
Section 6.5: Legacy Code Refactoring
- Full bad/good comparison
- Key: Preservation rules, step-by-step approval
- Source: Article 8
Section 6.6: Personal Projects
- Garden advice, solar panel analysis
- Key: Real data input, practical questions
- Source: Article 5, Article 8
4.2 Opening Hook
"Enough theory. Let me show you exactly how context engineering looks in practice - six real examples with before and after comparisons."
4.3 Key Takeaways
- Specificity - The more precise the context, the better the result
- Constraints - Always state what MUST NOT change
- Examples - Show AI what you expect (format, style)
- Step-by-step - For complex tasks, request approval at key points
- Verifiability - Define how you'll verify success
- Junior test - If a new colleague couldn't do it, neither can AI
4.4 Transition
"These examples show individual context engineering. Now let's scale up - how do you bring these practices to an entire team?"
5. Writing Notes
5.1 Tone/Voice
- Minimal commentary, let examples speak
- Clear bad/good structure
- Practical, copy-paste ready
5.2 Audience Considerations
- Developers: Debugging, SQL, refactoring
- Product: Tech selection
- Writers: Documentation
- Everyone: Personal projects
5.3 Potential Visuals
- Before/After Task Format
  - Side-by-side comparison for each example
- Results Summary Table
  - Columns: Example, Bad Result, Good Result, Improvement
- Principles Checklist
  - Visual summary of the 6 key principles
6. Prepared Citations
Internal
- [A5] Article "AI Won't Steal Your Job"
- [A8] Article "Practical Context Engineering Examples"
- [I15] Interview Q&A, Question 15 (Wow moments)
External
- None - examples are from personal experience
7. Open Questions
- Include full task templates or abbreviated?
  - Decision: Full templates for key examples, abbreviated for others
- How many examples total?
  - Decision: 6 main examples + brief mention of garden/electricity
- Add marketing/content example?
  - Decision: Could add if needed, but current coverage is good