Part 2: The Practice
Chapter 3: Before You Prompt - Preparation
You wouldn't hand a new colleague a sticky note saying "fix the app" and expect perfect results. Yet that's exactly how most people approach AI. The magic happens before you ever hit send.
In this chapter, you'll learn:
- Five preparation principles that transform AI results
- How to break tasks into atomic parts that AI handles easily
- Why showing beats telling every time
- The critical role of exclusions and success criteria
3.1 Give the WHY, Not Just WHAT
AI doesn't know your priorities. It can't distinguish between "nice to have" and "deal-breaker" unless you tell it.
Here's what I mean. I needed a car seat for my son. I could have asked:
❌ Without WHY: "Find the best car seat for a 120cm child"
Result: Generic list sorted by overall ratings—features I didn't care about mixed with safety I did.
Instead, I explained my reasoning:
✅ With WHY: "My son is 120cm tall. Get safety ratings from ADAC tests (not overall ratings!) and create a table. For me, safety is a higher priority than having to take the seat out of the car once a year. Exclude marketplace sellers (Amazon, etc.)"
Result: Table sorted by exactly what mattered to me. Perfect starting point.
The difference? AI understood that safety was my priority, not convenience features. It optimized its search accordingly.
This works everywhere:
- Code review: "Check for security issues" vs "Check for SQL injection specifically—we had an incident last month and need to ensure it's not happening elsewhere"
- Documentation: "Write API docs" vs "Write API docs for external partners who need to integrate quickly—assume no prior knowledge of our system"
- Analysis: "Analyze this data" vs "Analyze this data to find why conversion dropped 40% last Tuesday—we think it's related to the checkout flow"
When you explain WHY, AI can make intelligent trade-offs. Without WHY, it guesses—and guesses are often wrong.
3.2 Break Into Atomic Parts
One of the most counter-intuitive truths about working with AI: handing over one large task often ends up taking more time than breaking it into small steps.
Here's why. When I asked AI to "build an expense tracking application," I got 500 lines of generic code using a framework I didn't know, with features I didn't need. Completely unusable.
But when I broke it down:
✅ Atomic Approach:
- "Create HTML table with 3 columns: amount, category, date. Users can add new rows."
- "Add validation—amount must be a number, category from dropdown list."
- "Store data in localStorage, load on page refresh."
- "Add basic CSS styling for clean look."
- "Create delete button for each row."
Result: Each step produced exactly what I asked for. Clean, understandable code I could actually use.
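To make the atomic steps concrete, here's a minimal sketch of what step 2 ("add validation") might produce, assuming a vanilla-JS expense tracker. The category list and function names are illustrative, not from an actual AI response:

```javascript
// Categories that would populate the dropdown in step 2 (illustrative).
const CATEGORIES = ["food", "transport", "rent", "other"];

// Validate one expense entry: amount must be a number,
// category must come from the dropdown list.
function validateExpense(amount, category) {
  const errors = [];
  if (typeof amount !== "number" || Number.isNaN(amount)) {
    errors.push("amount must be a number");
  }
  if (!CATEGORIES.includes(category)) {
    errors.push("category must come from the dropdown list");
  }
  return { valid: errors.length === 0, errors };
}
```

Because the step is this small, you can read the whole result in seconds and verify it before moving to step 3.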
The atomization principle works because:
- Smaller scope = better focus. AI concentrates on one thing.
- Easier to provide context. Each step gets only the relevant files or information.
- Faster feedback loops. Catch errors at step 2 instead of discovering everything is wrong at the end.
- Higher success rate. When tasks are atomic, AI is far more likely to get the result right on the first iteration.
How to Atomize
Ask yourself: "Can I explain this task to someone in under a minute?"
If not, it's too big. Break it down further.
A practical workflow:
- First analysis (read-only): "Analyze this task and break it into steps. Don't write code yet."
- Then atomic tasks: "Do only step #1." Add context specific to that step.
- Record progress: Write analysis to a markdown file for reference.
- Checkpoint every 8-12 interactions: Ask for "SUMMARY + 3 TODO" to stay aligned.
This approach feels slower initially. But you'll rarely need to restart from scratch—something that happens constantly with large, vague requests.
3.3 One Example Beats 1000 Words
You can write paragraphs describing what you want. Or you can show AI an example in 30 seconds.
Examples win. Every time.
For Writing Tasks
❌ Without Example: "Write me a LinkedIn article. Make it professional but conversational, direct without being harsh, insightful but practical."
Result: Generic article that sounds like every other piece of AI-generated content.
✅ With Example: "Write me a LinkedIn article about context engineering. Here are my last two articles as style examples [attached]. Match this tone and structure."
Result: Article that sounds like me and follows my established patterns.
For Code Tasks
❌ Without Example: "Write a validation function that follows our coding standards."
Result: AI's guess at what "your standards" means.
✅ With Example: "Write a validation function for phone numbers. Here's our existing email validation [code attached]. Match this pattern—same error handling, same naming conventions."
Result: Consistent code that fits your codebase.
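Here's a sketch of what that exchange might look like in practice. The email validator plays the role of the attached "existing" code; both functions and their regexes are hypothetical, invented for illustration:

```javascript
// Hypothetical existing email validator — this is the attached example
// that establishes the house pattern: return { ok, error }, no exceptions.
function validateEmail(value) {
  if (typeof value !== "string" || !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value)) {
    return { ok: false, error: "invalid email" };
  }
  return { ok: true };
}

// New phone validator following the same pattern: same return shape,
// same error-message style, same naming convention.
function validatePhone(value) {
  if (typeof value !== "string" || !/^\+?[0-9][0-9 \-]{6,14}$/.test(value)) {
    return { ok: false, error: "invalid phone number" };
  }
  return { ok: true };
}
```

Because the AI saw `validateEmail`, it has no reason to invent exceptions, different return shapes, or a new naming scheme.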
For Design Tasks
❌ Without Example: "Create a dashboard that looks modern and clean."
Result: Generic interpretation of "modern and clean."
✅ With Example: "Create a dashboard for our analytics page. Here's our existing settings page [screenshot]. Match the component styles, spacing, and color scheme."
Result: Consistent design that fits your application.
What Makes Good Examples
- Specific to the task: Don't attach random previous work—attach work similar to what you need.
- Clear what to copy: If you want the AI to copy the structure but not the content, say so.
- Recently validated: Use examples that represent your current standards, not legacy code you're trying to move away from.
The principle is simple: show, don't tell. AI learns better from demonstration than description.
3.4 Say What to EXCLUDE
AI is eager to please. Without constraints, it includes everything it thinks might be helpful. This creates bloat, irrelevance, and confusion.
Explicit exclusions solve this.
Research Tasks
❌ Without Exclusions: "Find React admin panel frameworks."
Result: Mix of paid solutions, outdated frameworks, PHP alternatives, and enterprise options you can't afford.
✅ With Exclusions: "Find React admin panel frameworks. Exclude: paid solutions (Retool, Forest Admin), anything not updated in 2024, non-TypeScript options."
Result: Focused list of exactly what you can actually use.
Code Generation
❌ Without Exclusions: "Refactor this function to be cleaner."
Result: AI changes variable names, adds comments, restructures logic, "improves" things you didn't ask to change.
✅ With Exclusions: "Refactor this function into smaller functions. Don't change: business logic, error messages, API response format, variable names outside the refactored functions."
Result: Targeted refactoring that doesn't break existing behavior.
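A sketch of what such a constrained refactor might look like. The code is invented for illustration; the point is that the error message and business rule survive untouched while only the structure changes:

```javascript
// After the refactor: one long function split into small helpers.
// Business logic and error messages are preserved exactly, per the exclusions.

// Parse user input into a number — error message kept unchanged.
function parseAmount(raw) {
  const amount = Number(raw);
  if (Number.isNaN(amount)) throw new Error("Invalid amount");
  return amount;
}

// Apply the VIP discount — business rule kept unchanged.
function applyDiscount(amount, customer) {
  return customer.isVip ? amount - 10 : amount;
}

// Original entry point keeps its name and behavior.
function totalForCustomer(raw, customer) {
  return applyDiscount(parseAmount(raw), customer);
}
```

Existing callers and tests see no difference; only the internal structure improved.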
Common Exclusions to Consider
- Don't add comments (unless you want them)
- Don't change naming conventions (match existing)
- Don't add features (only what was requested)
- Don't use external libraries (if you want vanilla solutions)
- Don't explain basic concepts (if you're experienced)
- Don't include deprecated options (current only)
- Don't suggest alternatives (just answer the question)
The rule is simple: AI includes everything unless told otherwise. Be explicit about what you don't want.
3.5 Define What DONE Looks Like
Vague goals produce vague results. "Make it better" means nothing. "Make it pass our test suite" means everything.
Clear success criteria serve two purposes:
- Guide AI's work: It knows what to optimize for.
- Enable your verification: You know when to stop iterating.
What Good Success Criteria Look Like
❌ Vague: "Fix the performance issues."
✅ Specific: "Query must complete in under 500ms with 100k rows. Currently takes 8 seconds."
❌ Vague: "Write good tests."
✅ Specific: "Write tests that cover: happy path, invalid input (empty string, null), edge case (exactly 255 characters—the limit)."
❌ Vague: "Make the code cleaner."
✅ Specific: "Split into functions under 50 lines each. All existing tests must still pass."
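To see how the specific test request above translates into code, here's a sketch against a hypothetical `validateUsername` function with a 255-character limit. Both the function and the field are invented for illustration:

```javascript
// Hypothetical function under test: usernames must be non-empty
// strings of at most 255 characters.
function validateUsername(name) {
  if (typeof name !== "string" || name.length === 0) return false;
  return name.length <= 255;
}

// Happy path
console.assert(validateUsername("alice") === true);
// Invalid input: empty string and null
console.assert(validateUsername("") === false);
console.assert(validateUsername(null) === false);
// Edge case: exactly 255 characters (the limit) is still valid
console.assert(validateUsername("a".repeat(255)) === true);
console.assert(validateUsername("a".repeat(256)) === false);
```

Every case in the prompt maps to one assertion — and you can verify coverage at a glance.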
The Verifiability Test
Can you check if it's done without subjective judgment?
- "Better performance" → Subjective, not verifiable
- "Under 500ms" → Objective, verifiable
- "Good code" → Subjective, not verifiable
- "All tests pass" → Objective, verifiable
- "Nice documentation" → Subjective, not verifiable
- "Every public function has a docstring" → Objective, verifiable
If your success criterion requires "I'll know it when I see it," it's not specific enough.
Why This Matters
Without clear criteria, you end up in an endless loop of "not quite right" iterations. With clear criteria, both you and AI know exactly when the task is complete.
Done = test passes without errors. Done = loads in under 2 seconds. Done = handles all edge cases in the specification.
This isn't bureaucracy. It's efficiency.
The Preparation Checklist
Before you prompt, run through these five checks:
| Check | Question | If No... |
|-------|----------|----------|
| WHY | Did I explain my priorities? | AI will guess what matters most |
| Atomic | Can I explain this in 1 minute? | Break it down further |
| Example | Did I show what good looks like? | Attach reference work |
| Exclude | Did I say what NOT to do? | AI will include everything |
| Done | Can I verify success objectively? | Make criteria specific |
Five questions. Thirty seconds to answer. Results that actually work on the first try.
Chapter Summary
Key Takeaways:
- Always explain WHY—AI can optimize for your actual priority when it knows the reason
- Break big tasks into atomic steps—each step gets precise context and higher success rate
- Show examples instead of describing—one example beats 1000 words of explanation
- Explicitly exclude what you don't want—AI defaults to including everything
- Define verifiable success criteria—"done = X" enables clear evaluation
Try This: Take your next AI task. Before you send it, check all five boxes: WHY, Atomic, Example, Exclude, Done. Notice how the response changes compared to your usual approach.
Next: Now that you know how to prepare before prompting, let's look at what happens during and after—how to iterate effectively when AI doesn't get it right.