Iterating Prompts and Workflows — When to Fix and When to Restart

This article is the operational layer: short, repeatable rules I use to decide whether to correct in the same session or to reset and start fresh. It assumes the mindset from Article 1 (prompt vs. context) and the surgical context patterns from Article 2. Here you get compact decision heuristics, a few concrete practices from my workflow, and examples (no copy‑paste prompts).


The quick decision heuristic

When you receive a model output, check these three things — fast:

  1. Coverage — did the output address the single success criterion?
  2. Correctness — are the facts accurate, and is any code syntactically and semantically plausible?
  3. Format — is the output in the shape you requested (checklist, diff, JSON)?

Rule of thumb:

  • If Coverage ≥ ~80% and correctness is acceptable → fix inline with a tiny, deterministic edit.
  • If Coverage < ~80% or the model persistently ignores constraints → restart with a distilled seed.
  • If the change touches privacy/security/production → escalate to a human reviewer.

This keeps loops short and avoids prolonged fiddling on fundamentally wrong outputs.
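The rule of thumb above can be sketched as a tiny triage function. This is a minimal illustration, not tooling from my workflow; the 0.8 threshold is the article's rough cut-off, and restarting when coverage is high but correctness is unacceptable is my assumption — tune both for your own loops.

```python
from enum import Enum

class Action(Enum):
    FIX_INLINE = "fix inline"
    RESTART = "restart"
    ESCALATE = "escalate"

def triage(coverage: float, correctness_ok: bool, touches_sensitive: bool,
           ignores_constraints: bool = False) -> Action:
    """Apply the coverage/correctness heuristic to one model output."""
    if touches_sensitive:
        # Privacy/security/production changes go to a human reviewer.
        return Action.ESCALATE
    if coverage < 0.8 or ignores_constraints:
        # Don't polish a fundamentally wrong output; reseed instead.
        return Action.RESTART
    if correctness_ok:
        # One tiny, deterministic inline edit is enough.
        return Action.FIX_INLINE
    # High coverage but bad facts/code: assumed here to warrant a restart.
    return Action.RESTART
```

Forcing yourself to name one of three actions per output is the point; the exact cut-offs matter less than not lingering in the undecided middle.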


Deterministic inline moves (use these, not long explanations)

When you continue in the same session, prefer one small, clear action:

  • Correction + reason: point to the exact mistake and request the single correction.
  • Narrow the scope: force the model to do one atomic job (e.g., “produce one-line root cause”).
  • Enforce format: require an exact output shape so you can parse and verify automatically.
  • Example injection: show a 1–3 line sample of the expected answer structure.

These minimal interventions constrain the model and dramatically increase the chance of a useful next result.


When to restart — how to do it effectively

Restarting is not failure — it’s a reset. Restart when:

  • The session “bleeds” previous mistakes into new answers.
  • Conflicting constraints or rejected fragments have accreted.
  • Coverage is low and retries don’t converge.
  • A stated exclusion keeps getting ignored.

Effective restart recipe:

  1. Distill the prior conversation into a one‑sentence goal and a 3‑line summary.
  2. Attach one minimal artifact (failing test lines, a 10‑line code snippet, or one authoritative link).
  3. State 1–2 explicit constraints (single‑line rules).
  4. Ask for the exact return format.

Treat that distilled seed as canonical for the fresh session — don’t paste noisy history back in.
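The four-step restart recipe is easy to encode so the size limits are enforced rather than remembered. A sketch under my own conventions (the field names and layout are assumptions, not a standard):

```python
def build_seed(goal: str, summary_lines: list[str], artifact: str,
               constraints: list[str], return_format: str) -> str:
    """Assemble a distilled seed: one-sentence goal, <=3-line summary,
    one minimal artifact, 1-2 single-line constraints, exact return format."""
    assert len(summary_lines) <= 3, "keep the summary to three lines"
    assert 1 <= len(constraints) <= 2, "one or two single-line constraints"
    parts = [
        f"Goal: {goal}",
        "Summary:",
        *(f"- {line}" for line in summary_lines),
        f"Artifact:\n{artifact}",
        "Constraints:",
        *(f"- {c}" for c in constraints),
        f"Return format: {return_format}",
    ]
    return "\n".join(parts)
```

The assertions are the real feature: if your summary will not fit in three lines, you have not finished distilling yet.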


Practical examples from my workflow

Session memory and style signals

  • New sessions don’t inherit your prior workspace knowledge. If you want consistent style or nuance, paste examples: e.g., “Here are two of my prior articles and their traffic: Article A — 5k visits, Article B — 800 visits. Use the tone and length similar to Article A.” With sufficient examples the model can infer patterns without explicit templates.

Atomization in practice

  • I first ask the model to analyze the whole task and list steps (read-only). Then: “Do step #1 only” and I add notes for that step (e.g., use a new library function). If a step requires another file, I paste that file or its unique top lines. For big jobs I write the analysis to a markdown file and use that summary to seed subsequent sessions.

Hallucinations — repair workflow (example)

  • Detect: the model recommends products driven by marketplace reviews.
  • Diagnose: find the step where marketplace data entered the context.
  • Repair: extract the useful facts, start a new session, and instruct: “Use only safety scores from ADAC; exclude marketplace sellers.” Paste the verified ADAC rows if needed.
  • Result: the model now ranks by the correct metric and avoids the previous bias.

Manual file selection in code-aware editors (Zed + Claude)

  • When the IDE agent suggests files to add, I decide manually if:
    1. The search term is ambiguous (many similarly named files).
    2. The suggested file set is large and likely to add noise.
  • If either is true I add the specific file(s) manually (or paste the header) — this saves tokens and reduces wrong-context picks. When confident, I let the agent pick.

Keep experiments tiny and auditable

Track minimal experiment metadata:

  • Date, model/version
  • Distilled goal (one line)
  • Artifact(s) used (file path / link)
  • Action taken (inline-fix / restart / escalated)
  • Short outcome (pass/fail + why)

Small records surface recurring failure modes and make restarts cheap.
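The metadata list above maps directly onto a small record type. A sketch, assuming an append-only JSONL file (the field names mirror the list; the file name is arbitrary):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ExperimentRecord:
    """Minimal, auditable metadata for one iteration loop."""
    date: str        # ISO date of the attempt
    model: str       # model/version used
    goal: str        # distilled goal, one line
    artifacts: list  # file paths / links fed to the session
    action: str      # "inline-fix" | "restart" | "escalated"
    outcome: str     # short pass/fail + why

def log_experiment(record: ExperimentRecord,
                   path: str = "experiments.jsonl") -> None:
    """Append one record per line; the log stays diffable and greppable."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

One line per attempt is enough: `grep restart experiments.jsonl` already answers "which goals keep failing to converge?"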


Closing

Iteration is a judgment skill more than a technical one. Use the coverage/correctness/format checks, minimal inline moves, disciplined restarts, small experiment records, and explicit escalation rules. That turns AI interactions from noisy experiments into repeatable, auditable work.

For the mindset and how to build surgical context, see:

  • Article 1 — AI is Not Magic (mindset)
  • Article 2 — What Good Context Looks Like (practical patterns)


Article Details

Category: context engineering
Published: September 11, 2025
