Practical Context Engineering Examples - From Theory to Practice

Enough theory. Here are concrete examples of how I use Context Engineering in real situations. Each example pairs a bad task description with a good one so you can see the difference.

Example 1: Debugging a Production Bug

Situation

A user reported they can't edit their profile. Support verified they have the correct permissions.

❌ Bad Task

"User can't edit profile, fix it"

Result: AI generates 200 lines of generic checks, none of which solves the problem.

✅ Good Task

## Problem
User ID: 12345 can't edit profile via UI

## Context
- Support verified: has 'admin' role in DB
- Works: API endpoint PUT /api/profile/:id
- Doesn't work: "Edit" button in ProfileView.tsx
- Console error: "Permission denied at ProfileView.tsx:156"
- File: src/components/ProfileView.tsx, lines 150-160

## Goal
Edit button must work for users with 'admin' role

## Check
1. How permissions are validated in ProfileView
2. If user role loads correctly
3. Difference between API and UI permission check

Result: AI finds the exact problem - the UI checks for the 'editor' role instead of 'admin'. Fixed in 2 minutes.
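The kind of mismatch AI uncovered can be sketched like this. The article doesn't show the real ProfileView.tsx code, so every name below is hypothetical and purely illustrative:

```typescript
// Hypothetical sketch - names are illustrative, not the real ProfileView.tsx code.
interface User {
  id: number;
  roles: string[];
}

// Before: the UI checked for the 'editor' role only,
// so admins were denied even though the API allowed them.
function canEditProfileBuggy(user: User): boolean {
  return user.roles.includes('editor');
}

// After: accept any role that grants edit rights, matching the API's check.
const EDIT_ROLES = ['editor', 'admin'];

function canEditProfile(user: User): boolean {
  return user.roles.some((role) => EDIT_ROLES.includes(role));
}
```

The point of the good task description is that listing the console error, the file location, and the verified DB role lets AI zero in on exactly this kind of one-line discrepancy.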

Example 2: Technology Selection for a New Project

Situation

We need to choose a framework for a new admin application.

❌ Bad Task

"What's the best framework for admin panel?"

Result: A generic list of frameworks with pros and cons from 2023.

✅ Good Task

## Project Context
- Internal admin app for 20 users
- Mainly CRUD operations + reports
- Team: 2 developers (React experience)
- Timeline: MVP in 1 month
- Integration: Existing REST API (NestJS)
- Budget: Minimal (prefer open source)

## Requirements
- Fast development (out-of-the-box components)
- TypeScript support
- Good documentation
- Active community (2024+)

## Exclude
- Paid solutions (Retool, Forest Admin)
- PHP frameworks
- Anything that requires learning a new language

## Expected Output
Top 3 options with:
- Time to MVP
- Specific CRUD components
- Link to starter template

Reference: Here are similar projects we've done: [link to documentation]

Result: A specific recommendation - React Admin, Refine, or Ant Design Pro - with a concrete comparison and starter kits.

Example 3: Slow SQL Query Optimization

Situation

The dashboard takes 8 seconds to load.

❌ Bad Task

"Optimize this SQL query:
SELECT * FROM orders WHERE status = 'pending'"

Result: AI adds an index on status (which already exists). Still slow.

✅ Good Task

## Query
SELECT o.*, u.name, u.email, p.name as product
FROM orders o
JOIN users u ON o.user_id = u.id
JOIN products p ON o.product_id = p.id
WHERE o.status = 'pending'
AND o.created_at > NOW() - INTERVAL '30 days'

## Context
- PostgreSQL 14
- orders: 2M rows
- users: 100k rows
- products: 5k rows
- Indexes: orders(status), orders(created_at)
- EXPLAIN ANALYZE: [insert output]
- Runs every 10 seconds (dashboard refresh)

Note: This context can be gathered in different ways - manually, or via an MCP server that connects to the database. Even with MCP, it won't always work automatically. Security note: MCP/AI should NEVER have access to production data, so statistics like row counts must be provided manually.

## Constraints
- Can't change schema (production)
- Can't add materialized view (policies)
- Max 2 seconds response time

## Goal
Query under 2 seconds with minimal changes

Result: A composite index on (status, created_at), WHERE conditions in the right order, and SELECT limited to the needed columns. From 8s to 0.3s.
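The fix can be sketched in SQL. This is a sketch under the assumptions stated in the task; the exact column list depends on what the dashboard actually displays:

```sql
-- Composite index covering both filter conditions,
-- replacing reliance on the two single-column indexes.
CREATE INDEX idx_orders_status_created_at
  ON orders (status, created_at);

-- Select only the columns the dashboard needs instead of o.*
-- (the column list here is illustrative).
SELECT o.id, o.status, o.created_at,
       u.name, u.email,
       p.name AS product
FROM orders o
JOIN users u ON o.user_id = u.id
JOIN products p ON o.product_id = p.id
WHERE o.status = 'pending'
  AND o.created_at > NOW() - INTERVAL '30 days';
```

A composite index works here because the query always filters on status equality first and then on a created_at range, which matches the column order in the index.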

Example 4: Documentation Generation

Situation

A new API endpoint needs documentation.

❌ Bad Task

"Write API documentation for user endpoint"

Result: Generic documentation that doesn't match our style.

✅ Good Task

## Endpoint
POST /api/v2/users/bulk-import

## Implementation
[Insert endpoint code]

## Context
- Format: OpenAPI 3.0
- Style: Like existing docs [insert example]
- Audience: External developers
- Auth: Bearer token (already documented)

## Specifics
- Max 1000 users per request (this limit should be clearly documented!)
- Rate limit: 10 requests/minute (extract from code and document prominently)
- Async processing (returns job_id)
- Validation rules: [insert from code]
- Include any non-obvious behavior that's specific to your implementation
- Extract and document all limits from code

## Generate
1. OpenAPI spec
2. Request/response example
3. Error codes table
4. curl example

Reference: Use this style guide: [link to company API docs]

Result: Documentation ready to copy-paste into Swagger.
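What the generated spec could look like can be sketched in OpenAPI 3.0. The schema names and field shapes below are assumptions based on the task description (max 1000 users, async job_id, rate limit), not real code:

```yaml
# Hypothetical sketch - fields assumed from the task, not from real code.
paths:
  /api/v2/users/bulk-import:
    post:
      summary: Bulk-import up to 1000 users (asynchronous)
      security:
        - bearerAuth: []
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                users:
                  type: array
                  maxItems: 1000
                  items:
                    $ref: '#/components/schemas/UserImport'
      responses:
        '202':
          description: Import accepted; poll the returned job for status.
          content:
            application/json:
              schema:
                type: object
                properties:
                  job_id:
                    type: string
        '429':
          description: Rate limit exceeded (10 requests/minute).
```

Note how the limits called out in the task (maxItems, the 429 response) end up prominently in the spec - exactly what the "Specifics" section asked for.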

Example 5: Legacy Code Refactoring

Situation

An 800-line function nobody wants to touch.

❌ Bad Task

"Refactor this function to be cleaner"

Result: AI splits the function arbitrarily and breaks the business logic.

✅ Good Task

## Function
[Insert processOrder function]

## Context
- Language: TypeScript 4.9
- Framework: Express + TypeORM
- Works correctly (all tests pass)
- Problem: unmaintainable, 800 lines

## Tests
[Insert existing tests]

## Refactoring Goal
1. Split into smaller functions (max 50 lines)
2. Keep all functionality
3. Add TypeScript types where missing
4. Extract magic numbers to constants

## Don't Change
- Business logic
- DB table names
- API response format
- Error messages (backwards compatibility)

## Step by Step
1. If no full test coverage exists, generate comprehensive tests FIRST
2. Identify independent parts
3. Propose split (don't code yet)
4. Wait for my approval
5. Implement with tests

Result: Clean, tested refactoring I can deploy with confidence.

Note: If the code doesn't have full test coverage before refactoring, AI can generate comprehensive tests first. This follows TDD principles, which work excellently with AI and aren't time-consuming at all. With tests in place, AI can verify its own refactoring.
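The refactoring pattern from the goals above (small functions, constants instead of magic numbers, unchanged entry point) can be sketched like this. The article's real processOrder isn't shown, so the logic and names here are purely illustrative:

```typescript
// Illustrative sketch only - the real processOrder isn't shown in the article.

// Goal 4: extract magic numbers into named constants.
const FREE_SHIPPING_THRESHOLD = 100;
const STANDARD_SHIPPING_FEE = 7.9;

interface OrderItem {
  price: number;
  quantity: number;
}

// Goal 1: split one long function into small, testable helpers.
function calculateSubtotal(items: OrderItem[]): number {
  return items.reduce((sum, item) => sum + item.price * item.quantity, 0);
}

function calculateShipping(subtotal: number): number {
  // Was an inline `if` with a bare 100 buried deep in the long function.
  return subtotal >= FREE_SHIPPING_THRESHOLD ? 0 : STANDARD_SHIPPING_FEE;
}

// The entry point keeps its signature and behavior (goals 2 and "Don't Change").
function processOrderTotal(items: OrderItem[]): number {
  const subtotal = calculateSubtotal(items);
  return subtotal + calculateShipping(subtotal);
}
```

Each helper is small enough to test in isolation, which is what makes the "generate tests first, then refactor" workflow verifiable.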

Example 6: Consumption Analysis (Personal Project)

Situation

I want to know whether solar panels are worth it.

❌ Bad Task

"Should I buy solar panels?"

Result: Generic pros/cons of solar panels.

✅ Good Task

## My Consumption Data
[CSV export from utility company - 12 months by hours]

## Context
- Location: western Slovakia
- Roof: South-facing, 45°, 80m² usable
- Tariff: D2 dual, day 0.15€, night 0.08€
- Annual consumption: 4500 kWh
- Budget: flexible, show me options

## Questions (in plain language - no need for technical terms!)
- "Will it pay off? How many years until I get my money back?"
- "Show me different options with and without battery"
- "Compare smaller and larger systems so I can see how battery and kWp affect the budget"

## Scenarios to Analyze
Give me various options for both lower and higher consumption:
- Different kWp sizes (small, medium, large)
- With and without battery
- Show how each affects the budget

## Calculate for Each Scenario
- Total investment cost
- Annual savings
- When will it pay off (years)
- % self-sufficiency

## Output
- Comparison table
- Monthly consumption vs production chart
- Recommendation with reasoning

Reference: Use PVGIS data for Slovak solar irradiation: [link]

Result: A precise analysis showing that a 5 kWp system with a 5 kWh battery pays off in 7 years.
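The core payback math behind such an analysis can be sketched in a few lines. The numbers below are illustrative, not the article's actual dataset:

```typescript
// Minimal sketch of the payback calculation - figures are illustrative.
interface SolarScenario {
  name: string;
  investment: number;    // total cost in EUR
  annualSavings: number; // EUR saved per year on the electricity bill
}

// Simple payback: how many years until savings cover the investment.
function paybackYears(scenario: SolarScenario): number {
  return scenario.investment / scenario.annualSavings;
}

// Sort scenarios by fastest payback first, for the comparison table.
function compareScenarios(scenarios: SolarScenario[]): SolarScenario[] {
  return [...scenarios].sort((a, b) => paybackYears(a) - paybackYears(b));
}
```

In a real analysis the annualSavings term is itself derived from hourly consumption, production (e.g. from PVGIS), battery behavior, and the day/night tariff - which is exactly why the hourly CSV export belongs in the context.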

Key Principles from Examples

  1. Specificity - The more precise the context, the better the result
  2. Constraints - Always state what MUST NOT change
  3. Examples - Show AI what you expect (format, style)
  4. Step-by-step - For complex tasks, request steps
  5. Verifiability - Define how you'll verify success
  6. References - Include authoritative sources when accuracy matters

A Final Tip

Before every task, ask yourself: "Could a junior developer who started yesterday complete this task?"

If not, add context. If yes, AI can handle it.


Oliver Kriska solves real problems with AI daily. These examples are from his actual work (anonymized).

Published: September 11, 2025
