Context Engineering for Teams - How to Introduce It Without Chaos

"QA is about to become a huge bottleneck in software development," George Arrowsmith recently wrote on LinkedIn. He's right. AI allows us to generate huge amounts of code extremely fast, but you still need to verify it works.

The solution? Context Engineering at the team level: not as a toy for individuals, but as part of the workflow.

Why Teams Fail with AI

I've seen this multiple times:

  1. The company buys ChatGPT Teams licenses
  2. Runs a two-hour training on "how to prompt"
  3. Everyone tries it their own way
  4. After a month, nobody uses it
  5. "AI is hype, it doesn't work"

The problem? There's no systematic approach. AI isn't Excel: learning a few functions isn't enough. You need to change the way you work.

My Suggestion: Start with Documentation

Teams will have to focus on writing documentation. "Everybody knows how feature X should work" won't be acceptable anymore. It doesn't matter whether it's for humans or for AI: both need context.

What to document:

  • Task descriptions (not just titles)
  • Expected behavior (not just "it should work")
  • Decision context (why we did it this way)
  • Technical constraints (what must not change)

It's hard to say whether Project Managers, Product Owners, or Developers will own this. Probably everyone, a bit.

Template for Team Tasks

We use this in Linear and it works:

## Problem
[What broke / what needs to be done]

## Context
- Files: [specific files and lines]
- History: [relevant previous changes]
- Constraints: [what must not change]

## Goal
[Clear success criterion - when is it done]

## Possible Solutions
1. [First option]
2. [Second option]
3. [Third option]

## Tests
[How we verify it works]

With such a task description, AI (and juniors) can work effectively.
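A hypothetical filled-in example of the template (file names, PR number, and details are invented for illustration):

```markdown
## Problem
Login via SSO fails for users with expired sessions.

## Context
- Files: auth/sso.py (lines 40-80), middleware/session.py
- History: session handling was refactored in PR #214
- Constraints: the public /login API must not change

## Goal
SSO users with expired sessions are redirected to re-authentication; existing tests stay green.

## Possible Solutions
1. Refresh the session token transparently
2. Redirect to the SSO provider on expiry
3. Extend the session lifetime (rejected: security)

## Tests
Integration test with an expired session cookie; manual check against the staging SSO provider.
```

Note that even the rejected option is recorded: that is exactly the decision context a future reader (human or AI) will need.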

Practical Implementation in a Team

Weeks 1-2: Pilot with one project

  • Choose a small, non-critical project
  • 2-3 people, not the entire team
  • Document everything: what works, what doesn't

Weeks 3-4: Standardization

  • Create task templates
  • Define when to use AI and when not to
  • Set code review rules

Weeks 5-6: Scaling

  • Expand to other projects
  • Train the entire team
  • Introduce metrics

Who Does What in an AI-enabled Team

Product Owner:

  • Writes detailed user stories with context
  • Defines success criteria
  • Prioritizes what AI can do

Developer:

  • Breaks tasks into atomic parts
  • Provides technical context
  • Reviews AI outputs

QA/Tester:

  • Defines test scenarios
  • Verifies AI-generated code
  • Creates test cases for AI

Project Manager:

  • Coordinates documentation
  • Tracks productivity metrics
  • Identifies bottlenecks

Tools for Teams

For small teams (2-5 people):

  • Shared ChatGPT/Claude prompts
  • Git for prompt versioning
  • Slack/Discord for tip sharing

For medium teams (5-20 people):

  • Linear/Jira with custom fields for context
  • Central prompt repository
  • Code review process for AI code

For large teams (20+ people):

  • Dedicated Context Engineer role
  • Custom tooling/integrations
  • Automated quality checks
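A "central prompt repository" can start as nothing more than versioned text templates. A minimal Python sketch, assuming templates live as files in a shared Git repo (the template text, file path, and field names here are all invented for illustration):

```python
from string import Template

# A shared prompt template, as it might live in a Git-versioned
# repo (e.g. prompts/bugfix.md); everything here is hypothetical.
BUGFIX_PROMPT = Template(
    "## Problem\n$problem\n\n"
    "## Context\nFiles: $files\n\n"
    "## Goal\n$goal\n"
)

# Fill the placeholders for a concrete task
prompt = BUGFIX_PROMPT.substitute(
    problem="Login fails for SSO users",
    files="auth/sso.py, lines 40-80",
    goal="SSO login succeeds; existing tests stay green",
)
print(prompt.splitlines()[0])  # → ## Problem
```

Because the templates are plain text in Git, improvements to a prompt are reviewed and shared like any other code change.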

Real Examples from Practice

Example 1: Bug fixing

  • Before: developer spends 2 hours searching for the bug
  • Now: AI with good context finds it in 5 minutes; developer verifies and fixes it in 30 minutes

Example 2: New feature

  • Before: 3 days of development
  • Now: 1 day of AI generation + 1 day of review and adjustments

Example 3: Documentation

  • Before: nobody writes it
  • Now: AI generates a draft, a human edits it in 15 minutes

Common Problems and Solutions

"AI generates bad code" → Bad context. Add more detail to the task description.

"It takes longer than doing it manually" → Tasks are too big. Break them into smaller ones.

"Everyone does it differently" → Missing standards. Create templates, standards, an AGENTS.md, or other guidelines.

"We don't trust AI outputs" → Missing review process. Introduce code review.
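The "templates, standards, AGENTS.md" fix can start very small. A hypothetical minimal AGENTS.md (the conventions listed are invented for illustration):

```markdown
# AGENTS.md

## Code style
- TypeScript strict mode; no `any`
- Tests live next to source files (`*.test.ts`)

## AI usage rules
- Every task follows the team template (Problem / Context / Goal / Tests)
- AI-generated code always goes through human code review
- AI never touches migration files without explicit approval
```

A file like this gives both AI tools and new team members the same starting context, which is the whole point.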

A hint for non-technical teams: why not adopt the processes programmers have used for years? If you write articles or create content:

  • Use Markdown and store chapters as separate files
  • Use Git for version control (many UI tools make it accessible without technical knowledge)
  • Use a text editor with AI integration instead of generating in the ChatGPT/Claude UI
  • Edit text directly in place: AI can rewrite specific paragraphs without risking the rest of the content
  • For larger texts, split into multiple files (one per chapter) for easier management
  • Export everything with one prompt to Word or PDF, or generate summaries and promo content per chapter

This approach beats the common workflow of: ChatGPT → copy to Word → manual edits → losing track of versions.
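The chapter-per-file workflow above can be sketched in a few lines of Python; the directory layout, file names, and contents are invented for illustration:

```python
from pathlib import Path

# Hypothetical layout: one Markdown file per chapter,
# named so they sort in reading order
chapters = {
    "01-intro.md": "# Introduction\n\nWhy context matters.\n",
    "02-workflow.md": "# Workflow\n\nEdit chapters in place.\n",
}
root = Path("book")
root.mkdir(exist_ok=True)
for name, text in chapters.items():
    (root / name).write_text(text, encoding="utf-8")

# Merge chapters in filename order into a single file;
# a tool like pandoc can then export book.md to Word or PDF
merged = "\n".join(p.read_text(encoding="utf-8") for p in sorted(root.glob("*.md")))
Path("book.md").write_text(merged, encoding="utf-8")
print(merged.splitlines()[0])  # → # Introduction
```

Because each chapter is its own file, AI can rewrite one chapter without any risk to the others, and Git records every version of every file.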

Success Metrics

What to track:

  • Time from task to done (lead time)
  • Bugs per feature
  • Time spent on review
  • Team satisfaction

What NOT to track:

  • Number of AI uses
  • Amount of generated code
  • "Time saved" (hard to measure)
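Lead time is easy to compute from an issue-tracker export. A minimal sketch with invented timestamps (the record format is hypothetical, not Linear's or Jira's actual export schema):

```python
from datetime import datetime
from statistics import median

# Hypothetical (created, done) timestamps, e.g. from a tracker export
tasks = [
    ("2025-09-01T09:00", "2025-09-02T17:00"),
    ("2025-09-03T10:00", "2025-09-03T15:00"),
    ("2025-09-04T08:00", "2025-09-08T12:00"),
]

def lead_time_hours(created: str, done: str) -> float:
    """Hours between task creation and completion."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(done, fmt) - datetime.strptime(created, fmt)
    return delta.total_seconds() / 3600

hours = [lead_time_hours(c, d) for c, d in tasks]
print(f"median lead time: {median(hours):.1f} h")  # → median lead time: 32.0 h
```

The median is more robust than the mean here: one long-running task (100 hours in the sample data) would otherwise dominate the metric.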

Cultural Change

The biggest challenge isn't technical - it's cultural.

You need to change mindset from:

  • "AI is a threat" → "AI is a tool"
  • "I do everything myself" → "I collaborate with AI"
  • "Documentation is a waste of time" → "Documentation is an investment"

How to achieve it:

  • Show quick wins
  • Reward early adopters
  • Share success stories
  • Be patient

Conclusion: Start Small

You don't have to change everything at once. Start with:

  1. One project
  2. One team
  3. One type of task

When it works, scale up.

"Maybe developer's positions are going to be even more about writing and story telling than actual coding," I recently wrote on LinkedIn. Context Engineering is exactly about that - learning to communicate with AI as effectively as we communicate with people.


Oliver Kriska helps technical teams effectively introduce AI into their workflow. Not as revolution, but as evolution.

Published September 11, 2025
