Proactive vs Reactive: Rethinking How AI Gets Its Context

Author: Matthew Denman, Enterprise AI Architect & Speaker | Full-Stack AI Platform Builder | AWS Authorized Instructor
Date: October 10, 2025


After building AI assistants for the past two and a half years, I've discovered that one of the most important architectural decisions isn't about which AI model to use or how to write better prompts - it's about when the AI gets its data.

The Reactive Approach: Tool Calls

Most AI systems follow a familiar pattern. You give the AI a set of tools and let it decide when to call them:

System: "You have access to calendar_search and weather_api tools."
User: "What's my day looking like?"
AI: [Decides to call calendar_search]
AI: [Waits for response]
AI: "You have three meetings today..."

This reactive approach has clear advantages:

Flexibility: The AI can handle unexpected requests without pre-configuration

Efficiency: Data is only fetched when actually needed

Capability: Works for actions with side effects (sending emails, booking meetings)

But it also has trade-offs:

Reliability: The AI might not call the tool (even when it should), might call it incorrectly, or might hallucinate limitations

User Experience: Users see multiple responses - the AI thinking, calling tools, then finally answering

Latency: Each tool call adds round trips. Simple questions become multi-step processes

The reactive pattern works well for dynamic, user-driven requests. But there's a different class of problem: data you know you'll always need.

The Proactive Alternative: Pre-loaded Context

What if, for certain types of data, you could guarantee it's present before the AI responds?

We built a template system where prompt engineers can declare data dependencies directly in prompts using a simple {s:functionName(params)} syntax:

system: |
  You are a personal assistant for {userName}.

  Here's their schedule for today:
  {s:calendar(0)}

  Current weather at their location:
  {s:weather(userLocation)}

prompt: |
  Greet the user and help them start their day.

When this template renders, the functions execute before the AI sees the prompt. The AI receives:

You are a personal assistant for Sarah.

Here's their schedule for today:
9:00 AM - Team standup
11:00 AM - Client call with Acme Corp
2:00 PM - Product review
4:30 PM - 1-on-1 with manager

Current weather at their location:
Partly cloudy, 72°F

Greet the user and help them start their day.

Now the AI can respond naturally: "Good morning, Sarah! Looks like a busy day ahead. You have four meetings, starting with your team standup in an hour. It's a nice 72° out there - perfect weather for a walk between meetings if you need a break."

No tool calls. No waiting. No probabilistic behavior. Just contextually aware conversation from the first word.
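The pre-render step itself needs very little machinery. Here's a minimal sketch in Python (the article's platform is not shown; the regex, the `REGISTRY` dict, and the stub fetchers are illustrative assumptions, not the real system):

```python
import re

# Hypothetical registry mapping function names to data fetchers.
REGISTRY = {
    "calendar": lambda days: "9:00 AM - Team standup\n11:00 AM - Client call",
    "weather": lambda loc: f"Partly cloudy, 72°F ({loc})",
}

def render(template: str, variables: dict) -> str:
    """Execute {s:fn(args)} data functions, then fill plain {var} placeholders."""
    def run(match):
        name, raw = match.group(1), match.group(2)
        # Arguments may be literals or variable names from the render context.
        args = [variables.get(a.strip(), a.strip()) for a in raw.split(",") if a.strip()]
        return str(REGISTRY[name](*args))

    resolved = re.sub(r"\{s:(\w+)\(([^)]*)\)\}", run, template)
    return re.sub(r"\{(\w+)\}",
                  lambda m: str(variables.get(m.group(1), m.group(0))), resolved)

prompt = render(
    "You are a personal assistant for {userName}.\n\nSchedule:\n{s:calendar(0)}",
    {"userName": "Sarah"},
)
# By the time the prompt reaches the model, every {s:...} call has been resolved.
```

The key property is ordering: data functions run first, so the model only ever sees finished text, never the template syntax.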

This proactive approach has different trade-offs:

Advantages:

  • 100% reliability - data is always present
  • Single, clean response - better UX
  • Faster - one round trip instead of multiple
  • Deterministic - same prompt always gets same data

Trade-offs:

  • Data is fetched even if not ultimately needed
  • Requires knowing in advance what's needed
  • Best for smaller data sets that fit in prompts
  • Less flexible than dynamic tool selection

Moving Decisions from Code to Configuration

Here's where this gets really interesting. In traditional systems, developers hard-code what data gets loaded:

// Developer decides what every prompt needs
var promptData = new {
  userName = user.Name,
  currentTime = DateTime.Now,
  location = user.Location
  // Prompt engineer wants weather? File a ticket.
};

Every time a prompt engineer needs new data, they're blocked waiting for a developer to write code, test it, and deploy it.

With function-based templates, the prompt engineer controls the data:

# Week 1: Minimal context
prompt: |
  Hello {userName}

# Week 2: Add time-based greeting (just edit YAML)
prompt: |
  {s:greeting(timezone)} {userName}

# Week 3: Add personalized context (still just YAML)
prompt: |
  {s:greeting(timezone)} {userName}
  {s:getTodaySummary(userId)}

Each iteration is a configuration change, not a code deployment. Developers build the functions once; prompt engineers compose them endlessly.

Real-World Examples

Example 1: The Counselor Assistant

A therapist using an AI assistant needs immediate access to session history:

system: |
  You are a counselor working with {clientName}.

  {% if sessionType == "follow-up" %}
  Recent sessions:
  {s:recentSessions(clientId, 3)}

  Action items from last session:
  {s:actionItems(clientId)}
  {% endif %}

  Active treatment goals:
  {s:treatmentPlan(clientId)}

The conditional logic ensures expensive session data is only fetched for follow-up appointments, not initial consultations. The prompt engineer knows the data is needed, but also knows when it's needed - no AI decision required.
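The same gating is easy to see in plain code. A Python sketch with hypothetical fetcher names matching the template above; the point is that the session-history queries simply never execute for an intake appointment:

```python
def build_counselor_prompt(client: dict, fetch: dict) -> str:
    """Assemble the system prompt; expensive fetchers run only when their
    branch is taken, mirroring the {% if %} guard in the template."""
    parts = [f'You are a counselor working with {client["name"]}.']
    if client["session_type"] == "follow-up":
        # Only follow-up sessions pay for the history queries.
        parts.append("Recent sessions:\n" + fetch["recentSessions"](client["id"], 3))
        parts.append("Action items from last session:\n" + fetch["actionItems"](client["id"]))
    parts.append("Active treatment goals:\n" + fetch["treatmentPlan"](client["id"]))
    return "\n\n".join(parts)
```

Because the condition is evaluated from application state before rendering, the cost decision is deterministic rather than left to the model.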

Example 2: The Engineering Assistant

A developer's AI pair programmer adapts its context based on what the developer is doing:

system: |
  {% if recentProjects %}
  You are helping {userName} with {s:latestProjectName()}

  Recent commits:
  {s:recentCommits(projectId, 10)}

  {% if hasPendingReview %}
  Open pull requests needing your review:
  {s:openPRs(projectId)}
  {% endif %}

  {% if workingBranch != "main" %}
  Current branch: {workingBranch}
  Branch status: {s:branchStatus(projectId, workingBranch)}
  {% endif %}

  Tech stack: {s:techStack(projectId)}
  {% else %}
  You haven't started helping {userName} with any projects yet. Ask them
  if they would like to get started on their first AI-assisted project.
  {% endif %}

When the developer is on a feature branch, the AI gets branch-specific context. When there are pending reviews, those appear. For first-time users, the AI knows to offer onboarding instead of assuming project context. The prompt engineer controls which data loads based on application state, not AI interpretation.

Example 3: The Smart Sales Assistant

A sales rep's assistant intelligently loads pipeline data based on the day and quota status:

system: |
  You are {userName}'s sales assistant.

  {% if dayOfWeek == "Monday" %}
  Week ahead overview:
  {s:weeklyPipeline(userId)}
  {% else %}
  Today's follow-ups:
  {s:followUps(userId, 0)}
  {% endif %}

  {% if quotaProgress < 50 %}
  PRIORITY - You're at {quotaProgress}% of quota
  Hot leads (last 48 hours):
  {s:recentLeads(userId, 48)}

  Suggested actions:
  {s:quotaRecoveryActions(userId)}
  {% else %}
  Pipeline value: {s:pipelineValue(userId)}
  Quota progress: {quotaProgress}%
  {% endif %}

On Mondays, the assistant loads weekly planning data. When quota is behind, it automatically surfaces recovery actions and hot leads. The conditional data loading combines deterministic fetching with smart context management.

When to Use Each Approach

This isn't about eliminating tools - it's about using the right pattern for the right job:

Use Proactive Loading ({s:} functions) when:

  • Data is always needed for coherent conversation
  • Starting a new conversation or session
  • Context is relatively small (< 1000 tokens)
  • You want guaranteed reliability

Use Reactive Tools when:

  • User explicitly requests something mid-conversation
  • Data is large or rarely needed
  • Actions have side effects (sending emails, booking meetings)
  • Parameters can't be known ahead of time

Use both together:

system: |
  You're a personal assistant.

  # Proactive: Always load today's schedule
  Today's schedule:
  {s:calendar(0)}

  # Reactive: Make other dates available via tools
  For other dates, use the calendar_search tool.

This gives you guaranteed context for starting conversations, with flexible capabilities for dynamic requests.
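In request terms, the combination is a single payload: guaranteed context rendered into the system prompt, plus a tool definition for everything else. A sketch (the field names follow common chat-API conventions and are assumptions, not any specific vendor's schema):

```python
def build_request(user_id: str, fetch_calendar) -> dict:
    """Proactive: today's schedule is rendered into the system prompt.
    Reactive: other dates stay behind a calendar_search tool."""
    system = (
        "You're a personal assistant.\n\n"
        f"Today's schedule:\n{fetch_calendar(user_id, 0)}\n\n"
        "For other dates, use the calendar_search tool."
    )
    return {
        "system": system,
        "tools": [{
            "name": "calendar_search",
            "description": "Look up the user's calendar for a specific date.",
            "parameters": {"date": "ISO 8601 date string"},
        }],
    }
```

One fetch happens unconditionally at request-build time; everything else remains a capability the model may or may not invoke.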

The Architecture Matters

Behind the scenes, we use a factory pattern where functions are registered once and called by templates:

// Developer registers capabilities once; templates invoke them by name
factory.Register("calendar", async (context) => {
  var user = context.User;
  int daysOffset = int.Parse(context.Params[0]); // e.g. {s:calendar(0)}
  return await FetchCalendar(user, daysOffset);
});

factory.Register("weather", async (context) => {
  var location = context.Params[0]; // e.g. {s:weather(userLocation)}
  return await GetWeather(location);
});

Each function receives the active user, parameters, and full state. Functions handle their own authentication, caching, and error handling. The template system just calls them.
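What "functions handle their own caching and error handling" can look like, sketched in Python (the TTL, the fallback string, and the decorator shape are assumptions for illustration):

```python
import time

def cached(ttl_seconds: float, fallback: str = "(unavailable)"):
    """Memoize a data function so repeated template renders reuse one fetch,
    and degrade to placeholder text instead of failing the whole render."""
    def decorate(fn):
        store = {}
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and now - hit[0] < ttl_seconds:
                return hit[1]  # fresh cache entry: skip the fetch
            try:
                value = fn(*args)
            except Exception:
                # A prompt with a placeholder beats a failed render.
                return fallback
            store[args] = (now, value)
            return value
        return wrapper
    return decorate

@cached(ttl_seconds=300)
def weather(location: str) -> str:
    # A real implementation would call a weather API with its own auth.
    return f"Partly cloudy, 72°F ({location})"
```

Because each function owns its failure mode, the template layer stays a dumb dispatcher: it calls the function and splices in whatever string comes back.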

This separation means:

  • Developers focus on building reliable, performant functions
  • Prompt engineers focus on crafting effective prompts
  • Neither blocks the other

The Results

Since implementing this pattern alongside our existing tool system, we've seen:

Faster iteration: Prompt engineers ship improvements daily instead of waiting for deployments

Better UX for known contexts: Users get coherent, contextually aware responses immediately for predictable scenarios

More reliable initial responses: Guaranteed data presence for conversation starters

Clearer separation: prompt changes no longer compete with code changes for the same people's time

Each approach handles what it does best.

The Bigger Picture

The trend in AI development has been toward giving models more autonomy - letting them decide what to do, when to call tools, how to solve problems. That autonomy is valuable for open-ended tasks where you can't predict what the user needs.

But for known requirements - the data an assistant absolutely needs for its core function - there's an argument for determinism over flexibility.

A personal assistant that always knows your schedule doesn't need the ability to look it up. A medical assistant that always has patient history loaded doesn't benefit from having to retrieve it. For these scenarios, the decision about whether to fetch data isn't really a decision at all.

The architectural question isn't "which approach is better?" It's "which data should be guaranteed, and which should be available on demand?"

Get that distinction right, and you can build AI that combines reliability where it matters with flexibility where it's needed.


We're continuing to refine this pattern in our AI platform. If you're building AI assistants and struggling with context management, I'd love to hear about your approach. What's worked? What hasn't?
