Chapter 4: During and After - Iteration

AI gave you something wrong. Do you send "sorry, I meant..." and hope for the best? Or do you know when to fix, when to restart, and how to actually teach AI what you need?

In this chapter, you'll learn:

  • The 2-minute rule that signals when something's wrong
  • How to explain errors so AI actually learns
  • When context contamination means starting fresh is faster
  • The question patterns that get real feedback, not praise

4.1 The 2-Minute Rule

Here's a signal most people miss: if AI takes longer than 1-2 minutes to produce a usable first response, something is wrong.

This isn't about patience. It's about recognizing that long processing almost always produces bad results. When I see AI doing many operations, searching extensively, or taking forever to generate—I know the output will be off target.

Why does this happen? Usually one of three reasons:

  1. Task too vague: AI is trying everything because it doesn't know what you actually need
  2. Task too big: AI is attempting to solve too much at once
  3. Wrong approach: AI took a path that doesn't match your requirements

What to do when this happens:

Don't wait for the bad result. Stop and adjust.

Instead of: Waiting 5 minutes, getting 500 lines of wrong code, then fixing

Do this: Stop at 2 minutes, ask "What are you working on?" or cancel and clarify your request

The 2-minute rule isn't arbitrary. In my experience, good AI responses for well-defined tasks come quickly—usually in seconds. When they don't, the context is wrong.

Practical Application

  • Code generation: Should see relevant output within 30-60 seconds
  • Research tasks: Initial results should surface in 1-2 minutes
  • Writing tasks: First draft sections should appear quickly

If you're staring at a spinning indicator for 3+ minutes, you're not being patient—you're wasting time on a path that won't work.
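If you drive AI through an API rather than a chat window, the 2-minute rule can be enforced in code. Here's a minimal sketch, assuming a hypothetical `generate()` function standing in for your client's actual call (all names here are illustrative, not a real library API):

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def generate(prompt):
    """Stand-in for a real AI call; replace with your client's API."""
    time.sleep(0.1)  # simulate a fast, well-scoped response
    return f"response to: {prompt}"

def generate_with_budget(prompt, budget_seconds=120):
    """Return the response, or None if it blows the time budget."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(generate, prompt)
        try:
            return future.result(timeout=budget_seconds)
        except FutureTimeout:
            # Budget exceeded: signal the caller to stop and
            # clarify the request instead of waiting it out.
            return None

result = generate_with_budget("Add validation to the signup form",
                              budget_seconds=5)
```

A `None` result is your cue to rewrite the prompt, not to retry the same one with a longer timeout.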


4.2 Explain WHY It's Wrong

When AI makes a mistake, most people say "that's wrong" or "try again." This doesn't help. AI learns from context—including your feedback.

The pattern is simple: "This is wrong because X."

Example: JavaScript Instead of TypeScript

Unhelpful correction: "That's wrong. Use TypeScript."

Result: AI might switch to TypeScript but still miss your point.

Helpful correction: "This is wrong because we use TypeScript in this project. All our existing functions have typed parameters and return values. See the example I attached."

Result: AI understands the standard AND why it matters.

Example: Wrong Architecture Decision

Unhelpful: "Don't use Redux."

Result: AI switches to something else, might pick another wrong option.

Helpful: "Don't use Redux because our app is small and we're already using React Context. Adding Redux would be over-engineering for 3 components."

Result: AI understands the constraint AND the reasoning, can suggest appropriate alternatives.

Why This Works

When you explain the reason, that reasoning stays in context. AI can now:

  • Apply the same logic to similar decisions
  • Avoid repeating mistakes for the same reasons
  • Understand your constraints better

"Wrong" tells AI to change something. "Wrong because X" teaches AI to think differently.
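If you find yourself writing corrections often, it can help to standardize the "wrong because X" shape. A small sketch of a reusable template (the function and its wording are illustrative, not from any library):

```python
def correction(problem, reason, example=None):
    """Build a 'wrong because X' feedback message: what went wrong,
    why it's wrong, and optionally what right looks like."""
    msg = f"{problem} This is wrong because {reason}."
    if example:
        msg += f"\nHere is an example of what we expect:\n{example}"
    return msg

feedback = correction(
    "You used plain JavaScript.",
    "we use TypeScript in this project and all existing functions "
    "have typed parameters and return values",
)
```

The `example` slot matters most: a concrete snippet of the right pattern anchors the correction better than any description.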


4.3 Context Contamination

This is one of the most important and least understood aspects of working with AI.

The problem: When AI generates bad output, that output stays in the context. Even after you say "that's wrong," the bad content is still there. AI can—and often does—reference it later.

Here's what this looks like in practice:

  1. You ask for an article. AI writes something off-tone.
  2. You say "That's too formal, be more conversational."
  3. AI rewrites, but some formal phrases keep sneaking back.
  4. Why? Because the original formal text is still in context, acting as an invisible reference.

The same happens with code:

  1. You ask for a function. AI writes 200 lines.
  2. You say "Too long, simplify."
  3. AI shortens it, but keeps patterns from the bloated version.
  4. You iterate 5 times. The original bad approach contaminates every attempt.

When to Start Fresh

The fix-or-restart decision comes down to one question: Does the bad context outweigh the good?

Fix when:

  • Output is at least 80% correct
  • AI understood the task, just made minor mistakes
  • Errors are easy to identify and explain
  • Dealing with non-technical tasks where iteration is natural

Start fresh when:

  • Output is completely off target
  • AI keeps repeating the same mistakes after corrections
  • You said "exclude X" but X keeps appearing
  • Previous wrong steps are influencing current attempts
  • You've iterated 3+ times without significant improvement

How to Start Fresh Properly

Don't just start a new session and repeat your original request. That'll produce the same bad result.

Instead:

"I'm working on [task]. Here's what I have so far [paste current good content]. I want to continue, but with these changes: [what you learned from failed attempts]."

Or even better:

"Here's my article draft [paste]. It's unfinished. Continue writing, but: don't use passive voice, keep paragraphs under 4 sentences, match the tone of the first section."

You're giving AI a clean context with explicit guidance based on what went wrong before.
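For API-based workflows, starting fresh means literally building a new message list instead of appending to the old one. A sketch, assuming a hypothetical role/content message format (the function name and structure are illustrative):

```python
def fresh_context(task, good_content, lessons):
    """Build a clean restart prompt: carry forward the good artifact
    and the lessons from failed attempts, drop the contaminated history."""
    constraints = "\n".join(f"- {lesson}" for lesson in lessons)
    prompt = (
        f"I'm working on {task}. Here's what I have so far:\n\n"
        f"{good_content}\n\n"
        f"Continue, but with these changes:\n{constraints}"
    )
    return [{"role": "user", "content": prompt}]

messages = fresh_context(
    "an article draft",
    "First section text goes here...",
    ["don't use passive voice", "keep paragraphs under 4 sentences"],
)
```

Note what's absent: none of the failed attempts travel into the new context, only the constraints you extracted from them.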


4.4 Ask for Opinion, Not Validation

Here's an uncomfortable truth: current AI models are tuned to be agreeable. If you ask "Is my plan good?", you'll get praise—whether the plan deserves it or not.

This isn't AI being deceptive. It's AI being helpful in the way it was trained to be. The problem is you wanted honest feedback.

The Validation Trap

Validation question: "Is my database schema good?"

Typical response: "Your schema looks well-structured! The relationships are clear and..."

What you learned: Nothing useful.

Better Question Patterns

Opinion question: "What problems do you see with this database schema?"

Typical response: "I see a few potential issues: the Users table might benefit from an index on email for faster lookups, the many-to-many relationship could cause..."

What you learned: Actual problems to consider.

Role play technique: "You are a database architect with 15 years of experience. Review this schema and tell me what you'd change."

Typical response: Direct technical feedback from the perspective of an expert.

Question Patterns That Work

| Instead of... | Ask... |
|---------------|--------|
| Is this code good? | What would you change in this code? |
| Is my plan solid? | What are the weakest parts of this plan? |
| Did I cover everything? | What am I missing? |
| Is this approach correct? | What alternative approaches should I consider? |
| Do you like this design? | What would break if we shipped this? |

The pattern: Ask for criticism, not confirmation.

Role Play for Different Fields

The role play technique works across domains:

  • Code review: "You are a senior engineer doing a code review. What feedback would you give?"
  • Marketing: "You are a marketing director seeing this campaign for the first time. What concerns would you raise?"
  • Writing: "You are an editor with no patience for fluff. What would you cut from this article?"
  • Architecture: "You are a solutions architect who has to maintain this system for 5 years. What worries you?"

The role gives AI permission to be critical. Use it.
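If you use this technique repeatedly, the role-play framing is easy to template. A minimal sketch (the function and default wording are my own, purely illustrative):

```python
def critical_review_prompt(role, artifact, question="What would you change?"):
    """Frame a review request from an expert persona so the model
    is invited to criticize rather than validate."""
    return (
        f"You are a {role}. Review the following and answer honestly: "
        f"{question}\n\n{artifact}"
    )

prompt = critical_review_prompt(
    "senior engineer doing a code review",
    "def add(a, b): return a + b",
)
```

Swap the `role` and `question` per domain: "marketing director" with "What concerns would you raise?", "editor with no patience for fluff" with "What would you cut?", and so on.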


4.5 Editing Prompts Mid-Work

Here's a technique that changed how I work with AI: edit your original prompt instead of sending correction messages.

The Problem with Corrections

When you discover something mid-conversation, the natural instinct is to send a follow-up:

  • "Actually, I meant just the first form, not both."
  • "Sorry, I forgot to mention we need TypeScript."
  • "Wait, also exclude the deprecated options."

Each correction message adds noise to the context. AI now has:

  • Your original (incomplete) request
  • Its response based on that request
  • Your correction
  • More context to parse

This creates unnecessary complexity and intermediate steps.

The Better Approach

Edit your original prompt to include the new information. Then continue from the improved starting point.

Before (original prompt): "Add validation to the form."

AI reveals: There are actually 2 form implementations.

After (edited prompt): "Add validation to the registration form (the one in SignupPage.tsx, not the contact form)."

Now AI works with complete information from the start. No wasted intermediate steps.
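Seen as raw message history, the difference is stark. A sketch using a hypothetical role/content chat format:

```python
# Correction path: three turns, and the incomplete request plus the
# wrong response stay in context for AI to reconcile.
corrected = [
    {"role": "user", "content": "Add validation to the form."},
    {"role": "assistant", "content": "Added validation to both forms..."},
    {"role": "user",
     "content": "Actually, I meant just the registration form."},
]

# Edited-prompt path: one turn carrying the full requirement.
edited = [
    {"role": "user", "content":
        "Add validation to the registration form in SignupPage.tsx "
        "(not the contact form)."},
]
```

Everything AI needs is in `edited[0]`; in the corrected version it has to infer the real requirement across three turns, one of which is wrong.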

When This Works

Some tools support this better than others:

  • Zed + Claude: Excellent support for editing prompts mid-conversation
  • ChatGPT: You can edit messages, but it restarts the conversation from that point
  • Claude (web): Similar to ChatGPT—edit resets from that point
  • API-based tools: Depends on implementation

The principle applies everywhere: complete information upfront beats corrections later.

For Tools Without Edit Support

If you can't edit, start a new session with the improved prompt. It's often faster than trying to correct course mid-conversation.

New session: "Add validation to the registration form in SignupPage.tsx (not the contact form). Use the same validation pattern as our existing email validator."

Clean context, complete information, better results.


Long Session Management

One more practical technique for extended work sessions:

Every 8-12 interactions, ask for a summary:

"Summarize what we've accomplished and list the 3 most important remaining TODOs."

This does three things:

  1. Confirms AI still understands the goal
  2. Clears confusion from accumulated context
  3. Gives you a checkpoint to restart from if needed

It's like saving your game. If the conversation goes sideways, you have a clean summary to start fresh with.
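In a scripted workflow, the checkpoint cadence can even be automated. A tiny sketch (the interval and wording are just the suggestion above, wrapped in illustrative code):

```python
SUMMARY_PROMPT = (
    "Summarize what we've accomplished and list the "
    "3 most important remaining TODOs."
)

def checkpoint_prompt(interaction_count, every=10):
    """Return the summary request every `every` interactions, else None."""
    if interaction_count > 0 and interaction_count % every == 0:
        return SUMMARY_PROMPT
    return None
```

Save each returned summary; it is exactly the "clean summary to start fresh with" if the session later goes sideways.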


Chapter Summary

Key Takeaways:

  1. 2-minute rule: Long processing is a signal to stop and adjust, not wait patiently
  2. Explain WHY: "Wrong because X" teaches AI; "that's wrong" doesn't
  3. Context contamination: Bad output pollutes future responses—sometimes fresh is faster
  4. Ask for criticism: "What would you change?" beats "Is this good?"
  5. Edit, don't correct: Update original prompts instead of sending follow-up corrections

Try This: Next time AI gives you something wrong, try two approaches: (1) Simply say "try again" and (2) Explain exactly why it's wrong. Compare the responses. Notice how specific feedback produces specific improvements.


Next: Now that you know how to prepare context and iterate effectively, let's look at the specific tools that make this workflow practical.

Article Details

Published: November 28, 2025
