Part 3: Real-World Application


Chapter 6: Practical Examples

Enough theory. Let me show you exactly what context engineering looks like in practice—six real examples with before-and-after comparisons.

In this chapter, you'll see:

  • Debugging a production bug (2 minutes vs 2 hours)
  • Selecting technology for a project
  • Optimizing SQL from 8 seconds to 0.3 seconds
  • Generating documentation that matches your style
  • Refactoring legacy code safely
  • Personal projects: garden advice and solar panel analysis

6.1 Debugging a Production Bug

The Bad Way

Task: "User can't edit profile, fix it"

Result: AI generates 200 lines of generic checks—authentication, permissions, database connections, form validation. None of them solve the actual problem. You spend 2 hours reviewing irrelevant code.

The Good Way

Task:

## Problem
User ID: 12345 can't edit profile via UI

## Context
- Support verified: user has 'admin' role in database
- Works: API endpoint PUT /api/profile/:id (tested manually)
- Doesn't work: "Edit" button in ProfileView.tsx
- Console error: "Permission denied at ProfileView.tsx:156"
- File: src/components/ProfileView.tsx, lines 150-160

## Goal
Edit button must work for users with 'admin' role

## Possible Causes to Check
1. How permissions are validated in ProfileView
2. If user role loads correctly from state
3. Difference between API permission check vs UI permission check

Result: AI finds the exact problem in 2 minutes—the UI was checking for the 'editor' role instead of 'admin'. Single-line fix.
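The fix the AI pointed to might look like this. A hypothetical sketch: `canEditProfile` and the role strings are illustrative, not the actual ProfileView.tsx code—only the 'editor' vs 'admin' mismatch comes from the example above.

```typescript
// Hypothetical permission check around ProfileView.tsx:156.
// The UI gated the Edit button on the 'editor' role while the API
// checked 'admin' -- hence "works via API, fails via UI".
type User = { id: number; role: string };

// Before: only 'editor' passes, so admins see "Permission denied".
function canEditProfileBefore(user: User): boolean {
  return user.role === "editor";
}

// After: the single-line fix aligns the UI check with the API check.
function canEditProfileAfter(user: User): boolean {
  return user.role === "admin";
}

const admin: User = { id: 12345, role: "admin" };
console.log(canEditProfileBefore(admin)); // false -- the bug
console.log(canEditProfileAfter(admin));  // true  -- the fix
```

Note that nothing in the vague "fix it" prompt would have led the AI here; the "works via API, not via UI" detail is what narrowed it to the UI-side check.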

What Made the Difference

  • Verified facts from support, not assumptions
  • Specific file and line numbers where error occurs
  • What works vs doesn't (API yes, UI no) narrows scope
  • Directions to explore rather than vague "fix it"

6.2 Technology Selection

The Bad Way

Task: "What's the best framework for admin panel?"

Result: Generic list of frameworks with pros/cons from 2023. Half are paid, some require PHP, none match your actual constraints.

The Good Way

Task:

## Project Context
- Internal admin app for 20 users
- Mainly CRUD operations + reports
- Team: 2 developers (React experience)
- Timeline: MVP in 1 month
- Integration: Existing REST API (NestJS)
- Budget: Minimal (prefer open source)

## Requirements
- Fast development (components out of the box)
- TypeScript support
- Good documentation
- Active community (2024+)

## Exclude
- Paid solutions (Retool, Forest Admin)
- PHP frameworks
- Anything requiring new language

## Expected Output
Top 3 options with:
- Time to MVP estimate
- Specific CRUD components available
- Link to starter template

Result: Specific comparison of React Admin, Refine, and Ant Design Pro—exactly what the team can use, with starter template links and realistic MVP timelines.

What Made the Difference

  • Team context (React experience, 2 developers)
  • Real constraints (timeline, budget, existing API)
  • Explicit exclusions (no paid, no PHP, no new languages)
  • Specified output format (what you actually need)

6.3 SQL Optimization

The Bad Way

Task: "Optimize this SQL query: SELECT * FROM orders WHERE status = 'pending'"

Result: AI suggests adding an index on status. You add it. Query is still slow. AI doesn't know why because it doesn't know your data.

The Good Way

Task:

## Query
SELECT o.*, u.name, u.email, p.name as product
FROM orders o
JOIN users u ON o.user_id = u.id
JOIN products p ON o.product_id = p.id
WHERE o.status = 'pending'
AND o.created_at > NOW() - INTERVAL '30 days'

## Environment
- PostgreSQL 14
- orders: 2M rows
- users: 100k rows
- products: 5k rows

## Current Indexes
- orders(status)
- orders(created_at)

## EXPLAIN ANALYZE Output
[paste actual output]

## Constraints
- Can't change schema (production database)
- Can't add materialized view (policy restriction)
- Query runs every 10 seconds (dashboard refresh)

## Goal
Query under 2 seconds (currently 8 seconds)

Result: AI suggests composite index on (status, created_at), reorders WHERE conditions, and removes unnecessary SELECT columns. Query drops from 8 seconds to 0.3 seconds.
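The suggested changes might look like this—a sketch, not the exact statements the AI would produce; the index name and the trimmed column list are illustrative:

```sql
-- Composite index matching the WHERE clause: the equality column first,
-- then the range column, so Postgres can scan one contiguous index range.
-- CONCURRENTLY avoids locking writes on the production table.
CREATE INDEX CONCURRENTLY idx_orders_status_created_at
  ON orders (status, created_at);

-- Rewritten query: only the columns the dashboard needs, instead of o.*
SELECT o.id, o.status, o.created_at,
       u.name, u.email,
       p.name AS product
FROM orders o
JOIN users u ON o.user_id = u.id
JOIN products p ON o.product_id = p.id
WHERE o.status = 'pending'
  AND o.created_at > NOW() - INTERVAL '30 days';
```

The single-column indexes on orders(status) and orders(created_at) each filter on one condition; the composite index satisfies both at once, which is why it helps where the existing indexes didn't.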

What Made the Difference

  • Actual data volumes (2M rows changes everything)
  • Existing indexes (no point suggesting what's already there)
  • EXPLAIN ANALYZE output (AI sees actual query plan)
  • Hard constraints (can't change schema)
  • Specific target (under 2 seconds, not "faster")

6.4 Documentation Generation

The Bad Way

Task: "Write API documentation for user endpoint"

Result: Generic documentation in whatever format AI chooses. Doesn't match your existing docs. Wrong style, wrong structure, missing details you need.

The Good Way

Task:

## Endpoint
POST /api/v2/users/bulk-import

## Implementation
[paste endpoint code]

## Format
- OpenAPI 3.0 specification
- Style: Like existing docs [paste example from another endpoint]

## Audience
External developers integrating with our API

## Auth
Bearer token (already documented elsewhere)

## Specifics to Include
- Max 1000 users per request
- Rate limit: 10 requests/minute
- Async processing (returns job_id)
- Validation rules: [paste from code]

## Generate
1. OpenAPI spec
2. Request/response example
3. Error codes table
4. curl example

Result: Documentation ready to paste into Swagger. Matches existing style. Includes all edge cases and errors.
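A fragment of the generated spec might look like this. This is a hypothetical sketch assuming the details in the prompt above—`job_id`, the 1000-user cap, and the rate limit come from the prompt; the schema names and everything else are illustrative:

```yaml
paths:
  /api/v2/users/bulk-import:
    post:
      summary: Bulk-import up to 1000 users (asynchronous)
      security:
        - bearerAuth: []
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                users:
                  type: array
                  maxItems: 1000
                  items:
                    $ref: '#/components/schemas/UserImport'
      responses:
        '202':
          description: Accepted; processing is asynchronous
          content:
            application/json:
              schema:
                type: object
                properties:
                  job_id:
                    type: string
        '429':
          description: Rate limit exceeded (10 requests/minute)
```

Because the prompt named OpenAPI 3.0 and pasted an existing endpoint as a style reference, the output slots into the existing Swagger docs instead of needing a manual rewrite.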

What Made the Difference

  • Format specification (OpenAPI 3.0, not random format)
  • Style example (AI sees what yours looks like)
  • Specific details (rate limits, validation rules)
  • Structured output request (exactly what you need)

6.5 Legacy Code Refactoring

The Bad Way

Task: "Refactor this function to be cleaner"

Result: AI randomly splits the function, renames variables, and changes the structure. It breaks business logic you didn't know was embedded in there. Tests fail. You spend hours figuring out what changed.

The Good Way

Task:

## Function
[paste processOrder function - 800 lines]

## Environment
- TypeScript 4.9
- Express + TypeORM
- All existing tests pass

## Problem
Function is 800 lines. Nobody wants to touch it. We need to add a feature but can't understand the code.

## Existing Tests
[paste test file]

## Refactoring Goals
1. Split into smaller functions (max 50 lines each)
2. Keep all functionality identical
3. Add TypeScript types where missing
4. Extract magic numbers to constants

## DO NOT Change
- Business logic
- Database table names
- API response format
- Error messages (backwards compatibility)

## Process
1. First: Identify independent parts of the function
2. Then: Propose split structure (don't code yet)
3. Wait for my approval before implementing
4. Implement with tests proving behavior unchanged

Result: AI identifies 6 logical sections, proposes clean split, waits for approval, then implements. All original tests pass. New structure is readable and maintainable.
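Two of the refactoring goals, sketched on a hypothetical fragment—the names and values are illustrative, not the real processOrder code:

```typescript
// Goal 4: magic numbers extracted to named constants.
const FREE_SHIPPING_THRESHOLD = 50;   // EUR subtotal for free shipping
const EXPRESS_SURCHARGE = 9.99;       // EUR flat fee for express
const STANDARD_SHIPPING = 4.99;       // EUR flat fee for standard

// Goal 1: one self-contained piece of the 800-line function, extracted
// into a small, typed, testable unit. Behavior is identical; only the
// structure changes, which the existing tests can confirm.
function shippingCost(subtotal: number, express: boolean): number {
  if (subtotal >= FREE_SHIPPING_THRESHOLD && !express) return 0;
  return express ? EXPRESS_SURCHARGE : STANDARD_SHIPPING;
}

console.log(shippingCost(60, false)); // 0 -- free shipping applies
console.log(shippingCost(60, true));  // 9.99 -- express is never free
```

The "propose first, implement after approval" gate matters here: you see the planned split as a list of function names before any of the 800 lines move.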

What Made the Difference

  • Existing tests included (AI knows what must keep working)
  • Explicit preservation rules (what must NOT change)
  • Step-by-step with approval gates (not one giant change)
  • Specific goals (50 lines max, not just "cleaner")

6.6 Personal Projects

Context engineering isn't just for work. Here are two examples from home.

Garden Advice

We were new to gardening. Instead of generic "how to garden" questions:

Task:

## Situation
New house, first garden. Zone 6b, clay-heavy soil.

## Want to Plant
- Tomatoes (beefsteak variety)
- Peppers (bell)
- Herbs (basil, mint, rosemary)

## Questions
1. How deep to plant each?
2. Spacing between plants?
3. How often to water in first month?
4. Which plants should NOT be near each other?
5. When to expect first harvest?

## Format
Table with each plant and specific instructions

Result: Detailed planting guide specific to our zone and soil type. Everything grew. Nothing died.

Solar Panel Analysis

Instead of "Should I buy solar panels?":

Task:

## My Consumption Data
[CSV export from utility company - 12 months, hourly readings]

## House Details
- Location: Western Slovakia
- Roof: South-facing, 45° angle, 80m² usable
- Current tariff: D2 dual rate (0.15€ day / 0.08€ night)
- Annual consumption: 4500 kWh

## Questions
- Will it pay off? In how many years?
- Compare: with vs without battery storage
- Compare: 3kWp vs 5kWp vs 7kWp systems

## Calculate for Each Scenario
- Total investment cost (local installers)
- Annual savings
- Payback period (years)
- Self-sufficiency percentage

## Output
- Comparison table of all scenarios
- Monthly production vs consumption chart
- Clear recommendation with reasoning

Result: AI analyzed the actual consumption patterns, showed that the 5kWp system with a 5kWh battery had a 7-year payback, and created interactive charts to compare the scenarios. Made the decision easy.
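The core of that payback calculation is simple division, sketched here with hypothetical figures—the €8,400 investment, €1,200/year savings, and 2,700 kWh self-consumption are illustrative placeholders; only the 4,500 kWh annual consumption and the 7-year result come from the example above:

```typescript
// Hypothetical inputs for the 5kWp + 5kWh battery scenario.
const investment = 8400;        // EUR, total installed cost (illustrative)
const annualSavings = 1200;     // EUR/year saved on the bill (illustrative)
const annualConsumption = 4500; // kWh/year, from the utility CSV export
const selfConsumed = 2700;      // kWh/year covered by the system (illustrative)

// Simple payback: years until cumulative savings equal the investment.
const paybackYears = investment / annualSavings;
// Self-sufficiency: share of consumption the system covers directly.
const selfSufficiencyPct = (selfConsumed / annualConsumption) * 100;

console.log(paybackYears);      // 7
console.log(selfSufficiencyPct); // 60
```

The value of the detailed prompt is that the AI runs this per scenario (3kWp vs 5kWp vs 7kWp, with and without battery) against the hourly consumption data, rather than quoting generic national averages.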

What Made These Work

  • Real data (actual consumption, specific location)
  • Specific constraints (budget, roof size, zone)
  • Clear questions (not "should I..." but specific comparisons)
  • Requested format (tables, charts, comparison)

The Pattern Across All Examples

Every successful example shares these elements:

| Principle | What It Means | Example |
|-----------|---------------|---------|
| Specificity | The more precise, the better | File:line, data volumes, exact constraints |
| Constraints | What MUST NOT change | Business logic, table names, error messages |
| Examples | Show what you expect | Style reference, format template |
| Step-by-step | For complex tasks, request approval | Propose first, implement after confirmation |
| Verifiability | How you'll know it worked | Under 2 seconds, tests pass, ROI calculation |

Before every task, ask: "Could a junior developer who started yesterday complete this with just this information?"

If the answer is no, add context until the answer is yes.


Chapter Summary

Key Takeaways:

  1. Specificity wins—file names, line numbers, data volumes, constraints
  2. State what must NOT change—business logic, formats, backwards compatibility
  3. Show, don't describe—style examples, format templates, existing code
  4. Complex tasks need approval gates—propose before implementing
  5. Define verifiable success—specific numbers, passing tests, clear criteria
  6. This works everywhere—debugging, documentation, refactoring, and your garden

Try This: Take a task you're struggling with. Apply the debugging example format: Problem, Context, Goal, Possible Causes. Even if it's not a bug, the structure forces you to provide the information AI needs.


Next: These examples show individual context engineering. Now let's scale up—how do you bring these practices to an entire team?

