Topic Analysis: Chapter 1 - Why Your AI Prompts Fail
Metadata
- Syllabus Reference: Part 1, Chapter 1
- Primary Sources: Article 1 (Prompt Is Not Enough), Article 9 (Vibe Coding)
- Secondary Sources: Interview Q1, Q8-9, Q19-20
- Analysis Date: 2025-11-28
- Status: Complete
1. Source Materials
1.1 Primary Sources
From Article 1: "Prompt Is Not Enough"
Opening metaphor (coffee example):
"Imagine meeting a random colleague on the street and telling them: 'Dark Roasted Peru, Bean Lovers, sweet taste from South America'. They probably won't understand what you want from them."
AI definition:
"AI is like a person who knows nothing about your project, product, or problems - but can learn everything to extreme depth if you explain it correctly. It knows nothing about your product, but can mathematically express how atoms split."
Colleague comparison:
"The difference from a colleague? You have AI 24/7, it responds immediately, and never has a bad mood. A colleague needs to be available, needs time to think, study documentation, read 10 books and 100 articles to give you a similar answer. The trade-off is that AI lacks human experience, which is irreplaceable."
Prompt vs Context definition:
"Prompt = the task, question, instruction. Context = everything else that helps AI understand the task correctly (your data, examples, constraints, priorities)"
Key mistake pattern:
"When I started with AI, I was giving large tasks, wasn't specific, provided little context, and expected a lot. Just like many people who tell me 'the output from AI is unusable.'"
The 1-2 minute rule:
"If I see AI doing many operations (unless the prompt explicitly requires it) or it takes me more than 1-2 minutes to get a usable result, I know that: The task is too big, The context is bad or missing, I need to break it into smaller parts"
From Article 9: "Vibe Coding vs Context Engineering"
Karpathy quote:
"+1 for 'context engineering' over 'prompt engineering'. People associate prompts with short task descriptions. In every industrial-strength LLM app, context engineering is the delicate art and science of filling the context window with just the right information."
1.2 Secondary Sources
From Interview Q8-9 (AI as Colleague)
- AI is like meeting a random person on the street who knows nothing but can learn everything
- Trade-off: AI has all knowledge but lacks human experience/intuition
- You have AI 24/7, responds immediately, no bad mood
- Colleague has experience, can ask good questions, make suggestions
From Interview Q19 (Prompt vs Context)
- Prompt = the task/assignment (interview uses the term "zadanie")
- Context = everything else including examples of your style
- Output = anything AI does in response (text, image, code, file changes)
From Interview Q20 (Evolution)
"I was giving large tasks, wasn't specific, gave little and expected much"
1.3 External Citations
Andrej Karpathy (2025) - Twitter/X post:
"Context engineering is the delicate art and science of filling the context window with just the right information for the next step."
2. Content Extraction
2.1 Key Concepts
Prompt vs Context Division
- Definition: Prompt = task/instruction, Context = everything else
- Example: "Write article" vs "Write 1000-word article for LinkedIn about X for tech managers in direct tone"
- Source: Article 1, Interview Q19
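The prompt/context split above can be sketched in code. This is a minimal illustration only: the helper name `build_model_input` and the section labels are hypothetical, not something from the source articles.

```python
# Illustrative sketch: keep the prompt (the task) separate from the
# context (data, examples, constraints) and assemble both into one
# model input. Helper name and labels are hypothetical.

def build_model_input(prompt: str, context: dict) -> str:
    """Combine a short task with everything that helps the model
    understand it correctly: audience, constraints, priorities."""
    sections = [f"Task:\n{prompt}"]
    for label, content in context.items():
        sections.append(f"{label}:\n{content}")
    return "\n\n".join(sections)

message = build_model_input(
    prompt="Write a 1000-word article about X.",
    context={
        "Platform": "LinkedIn",
        "Audience": "Tech managers",
        "Tone": "Direct, practical",
    },
)
```

The point of the sketch is that the prompt stays one line while the context carries most of the input, mirroring the chapter's 10%/90% visual.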
AI as Knowledgeable Stranger
- Definition: AI knows nothing about your situation but can learn anything
- Example: Colleague on street analogy, coffee description example
- Source: Article 1, Interview Q8
"Giving Little, Expecting Much"
- Definition: Common mistake of minimal input expecting maximum output
- Example: "Build me an expense tracking app" → 500 lines of unusable generic code
- Source: Article 1, Interview Q20
The 1-2 Minute Rule
- Definition: If a usable result takes more than 1-2 minutes, the task is too big or the context is bad
- Example: AI performing many operations (unless the prompt requires it) signals the task is too big
- Source: Article 1
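The rule can be treated as a simple budget check on any AI-assisted step. The 120-second constant and the `check_budget` helper below are a hypothetical illustration of the heuristic, not code from the articles.

```python
import time

USABLE_RESULT_BUDGET = 120  # seconds: the article's 1-2 minute rule

def check_budget(task):
    """Run one AI-assisted step and flag whether it fit the budget.
    Exceeding it suggests the task is too big, the context is bad
    or missing, and the task should be broken into smaller parts."""
    start = time.monotonic()
    result = task()
    elapsed = time.monotonic() - start
    return result, elapsed <= USABLE_RESULT_BUDGET

result, within_budget = check_budget(lambda: "draft v1")
```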
AI vs Google vs Colleague
- Definition: AI is neither a search engine nor a colleague, but something in between
- Example: 24/7 availability, immediate response, no bad moods, but lacks human experience
- Source: Article 1, Interview Q1-2
2.2 Key Examples
Coffee Description Example
- Context: Two ways to describe what you want
- Before: "Dark Roasted Peru, Bean Lovers, sweet taste from South America"
- After: "I want to buy Dark Roasted Peru coffee from Bean Lovers brand. It should have sweet taste and is grown in South America."
- Source: Article 1, Interview Q19
Expense Tracker Example
- Context: User needs simple expense tracking
- Before: "Build me an expense tracking application" → 500 lines generic code
- After: Broken into 5 atomic steps with specific requirements
- Source: Article 1, Article 3
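The "after" version of the expense tracker can be sketched as a sequence of atomic prompts sent one at a time. The articles say the task was broken into 5 atomic steps but do not enumerate them, so the steps below are hypothetical examples only.

```python
# Hypothetical decomposition of "build me an expense tracking app"
# into atomic steps; the source mentions 5 steps but does not list them.

atomic_steps = [
    "Define the expense record: amount, category, date, note.",
    "Add a function that validates and stores one expense.",
    "Add a function that sums expenses per category.",
    "Add a function that filters expenses by month.",
    "Wire the functions into a minimal command-line loop.",
]

def next_prompt(completed_steps: int) -> str:
    """Return the next single task to send, instead of one big prompt."""
    if completed_steps >= len(atomic_steps):
        return "All steps done - review and integrate."
    return atomic_steps[completed_steps]
```

Each step is small enough that its output can be checked within the 1-2 minute budget before moving on.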
Car Seat Selection
- Context: Finding right car seat for child
- Before: "Find me the best car seat"
- After: "My son is 120cm tall. Get safety ratings from ADAC tests..."
- Source: Article 1, Article 3
Article Writing Example
- Context: Writing LinkedIn article
- Before: "Write me an article about AI"
- After: Full context with word count, platform, audience, tone, examples, sources
- Source: Article 1
2.3 Key Quotes
"AI is like a person who knows nothing about your project, product, or problems - but can learn everything to extreme depth if you explain it correctly." - Article 1
- Use for: Opening/hook
"The problem isn't AI. The problem is you're giving little and expecting much." - Article 1
- Use for: Core message
"Context engineering is the delicate art and science of filling the context window with just the right information for the next step." - Karpathy
- Use for: Authority validation
"Most people focus on the 'perfect prompt' and ignore context. That's like giving a junior developer a task title and expecting perfect production code." - Article 1
- Use for: Relatable comparison
2.4 Data/Statistics
- Interview mentions: 90% Perplexity vs 10% Google for searches
- 25+ questions in interview format to gather context
- Expense tracker: 500 lines of generic code vs 5 atomic steps
3. Gap Analysis
3.1 Content Gaps
- [x] Strong examples present
- [x] Clear definitions provided
- [x] External validation (Karpathy) included
- [ ] Could add more industry statistics on AI adoption failure
3.2 Clarity Issues
- None identified - concepts well explained
3.3 Depth Assessment
- Good depth on core concepts
- Examples are concrete and relatable
- Both technical (expense tracker) and non-technical (car seat) covered
4. Structure Proposal
4.1 Chapter Outline
Chapter 1: Why Your AI Prompts Fail
Section 1.1: The "Giving Little, Expecting Much" Trap
- Main point: Most people's mistake is minimal input expecting maximum output
- Content from: Article 1, Interview Q20
- Include: Coffee description example, personal evolution story
Section 1.2: Prompt vs Context - The Real Difference
- Main point: Prompt is task, Context is everything else
- Content from: Article 1, Interview Q19
- Include: Article writing example (bad vs good)
Section 1.3: AI as Your New Colleague
- Main point: AI knows nothing but can learn everything
- Content from: Article 1, Interview Q8-9
- Include: Colleague comparison, 24/7 availability, trade-offs
Section 1.4: Industry Validation
- Main point: This isn't just opinion - industry leaders agree
- Content from: Article 9
- Include: Karpathy quote, term "context engineering"
4.2 Opening Hook
"Imagine meeting a random colleague on the street and telling them: 'Dark Roasted Peru, Bean Lovers, sweet taste from South America.' They won't understand what you want. It works the same way with AI."
4.3 Key Takeaways
- Prompt = task, Context = everything else that makes the task completable
- AI is like a colleague who knows nothing but can learn everything - if you teach them
- If AI output is "unusable," the problem is usually your input, not AI
4.4 Transition
"Now that you understand why prompts fail, let's look at what good context actually looks like - the practical patterns that get results on the first try."
5. Writing Notes
5.1 Tone/Voice
- Direct, practical, no pathos
- Personal experience as evidence
- Honest about own mistakes ("I was giving little, expecting much")
5.2 Audience Considerations
- Include both technical and non-technical examples
- Coffee/car seat for general audience
- Expense tracker for developers
- Article writing for marketers/writers
5.3 Potential Visuals
Diagram: Prompt vs Context Split
- Visual showing input divided into prompt (10%) and context (90%)
Table: Bad vs Good Task Description
- Side-by-side comparison
Comparison: AI vs Google vs Colleague
- What each is good for
6. Prepared Citations
Internal
- [A1] Article "Prompt Is Not Enough"
- [A9] Article "Vibe Coding vs Context Engineering"
- [I1] Interview Q&A, Questions 1-2 (Perplexity workflow)
- [I8] Interview Q&A, Questions 8-9 (AI as colleague)
- [I19] Interview Q&A, Question 19 (Prompt vs Context)
- [I20] Interview Q&A, Question 20 (Personal evolution)
External
- [K1] Karpathy, A. (2025). Twitter post on context engineering. "Context engineering is the delicate art and science of filling the context window with just the right information for the next step."
7. Open Questions
Should we include the PostgreSQL blog article link as an external reference?
- Decision: Yes, as supplementary material
Include tools comparison here or save for Chapter 5?
- Decision: Brief mention here, full coverage in Chapter 5
How much technical detail in expense tracker example?
- Decision: Keep high-level, detailed example in Chapter 6