Appendix

Quick reference materials for daily use.


A: The 10 Key Tips

Before You Prompt (Tips 1-5)

| # | Tip | What It Means |
|---|-----|---------------|
| 1 | Give the WHY, not just WHAT | Explain your priorities and reasoning—AI can then optimize for what actually matters to you |
| 2 | Break into atomic parts | Smaller tasks = dramatically better results. If you can't explain it in 1 minute, it's too big |
| 3 | One example beats 1000 words | Show AI what you want: previous work, style references, format samples |
| 4 | Say what to EXCLUDE | AI includes everything unless told otherwise. Be explicit about what you don't want |
| 5 | Define what DONE looks like | Clear success criteria: "Query under 500ms," not "faster query" |

During and After (Tips 6-10)

| # | Tip | What It Means |
|---|-----|---------------|
| 6 | 2-minute rule | If AI takes longer than 1-2 minutes, something's wrong. Stop and adjust |
| 7 | Explain WHY it's wrong | "Wrong because X" teaches AI; "that's wrong" doesn't |
| 8 | Watch for context contamination | Bad output pollutes future responses. Sometimes starting fresh is faster |
| 9 | Ask for opinion, not validation | "What would you change?" beats "Is this good?" |
| 10 | Edit prompts, don't send corrections | Update the original request instead of adding "sorry, I meant..." |

Quick Test Before Every Task

"Could a junior developer who started yesterday complete this task with just this information?"

If no → add more context. If yes → AI can handle it.


B: Templates

Universal Task Template

## Problem
[What broke / what needs to be done]

## Context
- Files: [specific files and lines if applicable]
- History: [relevant previous changes or attempts]
- Constraints: [what must not change]

## Goal
[Clear success criterion—when is it done]

## Possible Solutions
1. [First option to explore]
2. [Second option]
3. [Third option]

## Tests/Verification
[How we verify it works]
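If you reuse this template often, it can be worth filling it programmatically. Below is a minimal Python sketch of that idea; the function name, field names, and sample values are illustrative, not part of the guide:

```python
# Minimal sketch: render the Universal Task Template from a dict of fields.
# All names and defaults here are illustrative; adapt the sections to your
# own template.

TEMPLATE = """## Problem
{problem}

## Context
- Files: {files}
- History: {history}
- Constraints: {constraints}

## Goal
{goal}

## Possible Solutions
{solutions}

## Tests/Verification
{verification}
"""

def render_task(fields: dict) -> str:
    """Fill the template, numbering the solution options automatically."""
    solutions = "\n".join(
        f"{i}. {s}" for i, s in enumerate(fields.get("solutions", []), start=1)
    )
    return TEMPLATE.format(
        problem=fields["problem"],
        files=fields.get("files", "n/a"),
        history=fields.get("history", "n/a"),
        constraints=fields.get("constraints", "none"),
        goal=fields["goal"],
        solutions=solutions or "1. [to explore with AI]",
        verification=fields.get("verification", "manual review"),
    )

prompt = render_task({
    "problem": "Login page returns 500 after password reset",
    "files": "auth/views.py lines 40-80",
    "goal": "Reset flow completes without server error",
    "solutions": ["Check token expiry handling", "Inspect session invalidation"],
})
print(prompt)
```

The point is not the code itself but the habit: a repeated task deserves a stable, fill-in-the-blanks structure rather than a prompt written from scratch each time.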

Bug Report Template

## Bug Description
[One sentence: what's wrong]

## Steps to Reproduce
1. [First step]
2. [Second step]
3. [Step where bug occurs]

## Expected Behavior
[What should happen]

## Actual Behavior
[What actually happens]

## Environment
- File: [specific file and line if known]
- Browser/OS: [if relevant]
- Error message: [exact text]

## What I've Tried
- [First attempt and result]
- [Second attempt and result]

## Success Criteria
[How we know it's fixed]

Feature Request Template

## Feature Summary
[One sentence description]

## User Story
As a [role], I want [capability], so that [benefit].

## Context
- Why now: [business reason]
- Who requested: [source]
- Priority: [high/medium/low]

## Requirements
- Must have: [essential features]
- Nice to have: [optional features]
- Out of scope: [explicitly excluded]

## Technical Context
- Affected files: [if known]
- Related features: [existing functionality]
- Constraints: [backwards compatibility, performance, etc.]

## Acceptance Criteria
- [ ] [First criterion]
- [ ] [Second criterion]
- [ ] [Third criterion]

## Test Scenarios
- Happy path: [expected usage]
- Edge case: [unusual but valid usage]
- Error case: [what should fail gracefully]

Code Refactoring Template

## Target Code
[File path and function/class name]

## Current State
- Lines: [count]
- Problem: [why it needs refactoring]
- Tests: [existing test coverage]

## Refactoring Goals
1. [First goal - be specific]
2. [Second goal]
3. [Third goal]

## DO NOT Change
- [First constraint]
- [Second constraint]
- [Business logic / API / etc.]

## Process
1. Analyze and propose structure (don't code yet)
2. Wait for approval
3. Implement with tests
4. Verify all original tests pass

C: Tool Comparison

When to Use Which Tool

| Task Type | Best Tool | Why |
|-----------|-----------|-----|
| Research & Information | Perplexity | Quick summaries with sources, easy to extract key parts |
| Programming & Technical | Claude (via Zed or API) | Better code understanding, context management, longer sessions |
| Writing & Non-Technical | ChatGPT | Conversational, great for brainstorming, home/life tasks |
| Complex Analysis | Claude | Handles nuance, technical depth, longer context |
| Quick Q&A | Any | For simple questions, any tool works |

Tool Combinations That Work

| Combination | Use Case |
|-------------|----------|
| Perplexity → ChatGPT | Research topic → Write article/summary |
| Perplexity → Claude | Find API docs → Implement feature |
| Claude → ChatGPT | Generate code → Explain to non-technical stakeholder |

What NOT to Combine

| Combination | Why It Doesn't Work |
|-------------|---------------------|
| ChatGPT + Claude for same task | Different strengths, creates confusion |
| Multiple tools in parallel on same problem | Context diverges, results conflict |

Tool Limitations

Perplexity:

  • Often lacks latest data (verify dates)
  • Sometimes "invents" sources (click through to verify)
  • Superficial on deep technical topics

Claude:

  • Struggles with similar file names in different directories
  • Performance drops with more than 3-4 files in context
  • May not know latest library versions

ChatGPT:

  • Less precise for code generation
  • Can be verbose
  • May not maintain consistency in long sessions

D: Citations and Resources

Key Quotes

Andrej Karpathy (OpenAI founding member, former Tesla Director of AI)

"+1 for 'context engineering' over 'prompt engineering'. People associate prompts with short task descriptions. In every industrial-strength LLM app, context engineering is the delicate art and science of filling the context window with just the right information for the next step."

— Twitter/X, 2025

George Arrowsmith (QA Perspective)

"QA is about to become a huge bottleneck in software development. AI lets us churn out HUGE amounts of code extremely fast, but you still need to make sure it works."

— LinkedIn, 2025

Milan Martiniak (Developer Time Allocation)

"Programmers spend less than 20% of their work time actually programming."

— Survey, 2025

Key Concepts Defined

| Term | Definition |
|------|------------|
| Context Engineering | The practice of providing AI with just the right information for the task—no more, no less |
| Prompt | The specific task or question you give AI |
| Context | Everything else: background, constraints, examples, success criteria |
| Vibe Coding | AI-assisted coding with minimal understanding or verification |
| Context Contamination | When bad AI output remains in the conversation and influences future responses |
| Atomization | Breaking large tasks into small, focused pieces |

The Five Components of Good Context

  1. Task — What you want AI to do
  2. Constraints — What AI must NOT do
  3. Background — Why you need this
  4. Examples — What good output looks like
  5. Success Criteria — How you'll judge the result
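As a quick self-check, the five components can be turned into a simple lint over a draft prompt. The Python sketch below flags components that appear to be missing; the keyword lists are illustrative assumptions, not a validated heuristic:

```python
# Sketch of a five-component prompt self-check.
# The keyword hints below are rough, illustrative guesses -- tune them
# to your own prompting style before relying on the result.

COMPONENT_HINTS = {
    "Task": ["fix", "write", "refactor", "implement", "summarize"],
    "Constraints": ["don't", "do not", "must not", "avoid", "keep"],
    "Background": ["because", "context", "we need", "currently"],
    "Examples": ["example", "like this", "sample", "e.g."],
    "Success Criteria": ["done when", "success", "should pass", "criteria"],
}

def missing_components(prompt: str) -> list[str]:
    """Return the components with no matching hint in the prompt text."""
    text = prompt.lower()
    return [
        name for name, hints in COMPONENT_HINTS.items()
        if not any(hint in text for hint in hints)
    ]

draft = "Refactor the parser. Don't change the public API. Done when all tests pass."
print(missing_components(draft))  # components the draft likely lacks
```

Running this on the sample draft flags Background and Examples, which matches the guide's point: those two components are the ones writers most often skip.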

E: Decision Trees

Fix or Start Fresh?

Is output at least 80% correct?
├─ YES → Fix it
│   └─ Does AI keep making same mistake after correction?
│       ├─ YES → Start fresh
│       └─ NO → Continue fixing
└─ NO → Start fresh
    └─ When starting fresh:
        ├─ Include what you learned from failed attempt
        ├─ Be more specific about constraints
        └─ Add examples of what you want

Should I Use AI for This Task?

Is the task well-defined?
├─ NO → Define it first, then reconsider
└─ YES → Continue

Do I have the context needed?
├─ NO → Gather context first
└─ YES → Continue

Is it a one-time task or repeated?
├─ ONE-TIME → Use AI directly
└─ REPEATED → Create template/automation

What's the cost of errors?
├─ HIGH (production, customer-facing) → Use AI with careful review
├─ MEDIUM (internal, non-critical) → Use AI with normal review
└─ LOW (prototype, experiment) → Vibe code away

F: Quick Start Checklist

For Your First Week with Context Engineering

  • [ ] Pick one tool to start (Perplexity, Claude, or ChatGPT)
  • [ ] Use the Universal Task Template for 5 different tasks
  • [ ] Apply the 10 Key Tips consciously
  • [ ] Notice when AI fails—what context was missing?
  • [ ] Try the "junior developer test" before each prompt

For Your First Month

  • [ ] Develop your personal task templates
  • [ ] Identify which tool works best for which task type
  • [ ] Practice the fix-or-restart decision
  • [ ] Share what works with a colleague
  • [ ] Track one metric: tasks completed first-try vs. needed iteration

For Team Implementation

  • [ ] Start with 2-3 person pilot, non-critical project
  • [ ] Document what works and what doesn't
  • [ ] Create team task templates
  • [ ] Define review process for AI output
  • [ ] Set expectations: cultural change takes time

G: Common Mistakes Quick Reference

| Mistake | Fix |
|---------|-----|
| Task too vague | Add specific files, constraints, success criteria |
| Too much context | Focus on relevant information only |
| No examples | Attach previous work or style references |
| Missing constraints | Explicitly state what NOT to change |
| Asking for validation | Ask for criticism instead |
| Sending corrections | Edit the original prompt |
| Waiting too long | Apply the 2-minute rule—stop and adjust |
| Fighting bad context | Start fresh with an improved prompt |
| No success criteria | Define a verifiable "done" state |
| Parallel dependent tasks | Run sequentially; parallelize only independent work |


End of Guide

Published: November 28, 2025
