Agent drift is the silent killer of multi-agent systems. An agent starts a task correctly, then 40 minutes in it is doing something adjacent to what you asked, and you do not catch it until it is too late.
Here is the pattern I built to prevent it.
## What Is Agent Drift?
Drift happens when an agent's effective context shifts mid-task:
- Early instructions get compressed out of the window
- The agent anchors to recent outputs instead of original objectives
- Sub-tasks start optimizing for local completion, not the root goal
In a single-agent session this is annoying. In a multi-agent system where agents hand off to each other, drift compounds. Agent B inherits Agent A's drifted output as ground truth.
## Context Anchoring Defined
Context anchoring = explicitly re-injecting the root objective at every significant decision point.
Not a summary. The actual original objective, verbatim, with constraints.
```markdown
## ANCHOR [repeat at every handoff]
Objective: Publish 2 dev.to articles driving traffic to github.com/Wh0FF24/whoff-agents
Constraints: published=true, tags=[claudecode,ai,opensource,agents], no fabricated metrics
Current phase: Article 2 of 2
```
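If you generate these blocks programmatically, a small helper keeps the format consistent across logs and handoff files. This is a minimal sketch; the `Anchor` class and its field names are my illustration, not code from the repo:

```python
from dataclasses import dataclass, field

@dataclass
class Anchor:
    """The root objective, carried verbatim through a session."""
    objective: str
    constraints: list = field(default_factory=list)
    phase: str = ""

    def render(self) -> str:
        """Render the ANCHOR block that gets repeated at every handoff."""
        return "\n".join([
            "## ANCHOR [repeat at every handoff]",
            f"Objective: {self.objective}",
            f"Constraints: {', '.join(self.constraints)}",
            f"Current phase: {self.phase}",
        ])

anchor = Anchor(
    objective="Publish 2 dev.to articles driving traffic to the repo",
    constraints=["published=true", "no fabricated metrics"],
    phase="Article 2 of 2",
)
print(anchor.render())
```

Because the block is rendered from one source of truth, the objective cannot silently mutate between the session log and the handoff file.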
The anchor lives in three places:
- The agent's CLAUDE.md
- The top of every vault session log
- Every inter-agent handoff file
## The Skill Implementation
In the whoff-agents repo, this is a Claude Code skill:
```markdown
# context-anchor skill
When starting any task longer than 3 steps:
1. Write the root objective explicitly
2. List hard constraints (what NOT to do)
3. Re-read objective before every tool call that produces output
4. On handoff: prepend ANCHOR block to handoff file
```
Simple. But the discipline of writing it down at the start changes how the agent tracks progress.
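Step 3 can be mechanized instead of left to discipline. Here is a sketch of a wrapper that re-injects the anchor ahead of every output-producing call; `with_anchor` and `draft_section` are hypothetical names for illustration, not part of the skill:

```python
def with_anchor(anchor_text, tool_call):
    """Wrap an output-producing tool call so the root objective is
    re-read (prepended to the prompt) on every invocation."""
    def wrapped(prompt, *args, **kwargs):
        anchored = f"{anchor_text}\n\n{prompt}"
        return tool_call(anchored, *args, **kwargs)
    return wrapped

# Hypothetical tool call, stubbed for illustration.
def draft_section(prompt):
    return f"DRAFT based on: {prompt}"

anchored_draft = with_anchor(
    "Objective: publish article 2 of 2", draft_section
)
result = anchored_draft("Write the intro paragraph")
```

The agent cannot "forget" the objective at step 3 because the wrapper does the re-reading for it.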
## Why Re-Reading Beats Summarizing
Most people tell agents to "summarize progress" at checkpoints. Summaries drift toward what was done, not what was asked.
Re-reading the original objective forces a diff:
- What did I set out to do?
- What did I actually do?
- What gap remains?
That diff is what you want, not a recap of completed steps.
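The three-question diff can be baked into a checkpoint prompt. A minimal sketch, assuming you track actions as a list of strings (`checkpoint_prompt` is my name, not from the repo):

```python
def checkpoint_prompt(objective: str, actions: list) -> str:
    """Build a checkpoint that forces the diff: re-read the verbatim
    objective, list what was actually done, ask what gap remains."""
    done = "\n".join(f"- {a}" for a in actions)
    return (
        f"Original objective (verbatim): {objective}\n"
        f"Actions taken so far:\n{done}\n"
        "Answer:\n"
        "1. What did I set out to do?\n"
        "2. What did I actually do?\n"
        "3. What gap remains?"
    )
```

Note the prompt opens with the objective, not the progress recap, so the agent anchors on the ask before reviewing what it did.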
## The Handoff Anti-Pattern
The number one cause of cross-agent drift is lazy handoffs:
```markdown
# bad handoff
"Hey Apollo, continue the research on Polymarket data sources."
```
No anchor. Apollo inherits whatever it can infer from context. It will drift.
```markdown
# good handoff
## OBJECTIVE [Apollo Research Handoff]
Root goal: Identify free historical Polymarket data sources for Hermes trading module
Constraints: Free tier only, Python-accessible, >1M records
Completed: Searched HuggingFace, found SII-WANGZJ dataset (1.1B records)
Remaining: Verify CLOB /prices-history endpoint, document access pattern
Hard stop: Do NOT evaluate paid sources
```
Apollo now has everything it needs. Root objective is explicit. Constraints are stated. Progress is scoped.
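Writing the handoff file can be a one-liner for the sending agent. A sketch under the assumption that handoffs are plain markdown files; `write_handoff` is an illustrative helper, not the skill's actual API:

```python
def write_handoff(path, root_goal, constraints, completed, remaining, hard_stop):
    """Write a self-contained handoff file with the objective block
    first, so the receiving agent never has to infer the root goal."""
    lines = [
        "## OBJECTIVE [Handoff]",
        f"Root goal: {root_goal}",
        f"Constraints: {', '.join(constraints)}",
        f"Completed: {completed}",
        f"Remaining: {remaining}",
        f"Hard stop: {hard_stop}",
    ]
    with open(path, "w", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n")
```

The hard stop is part of the file, not tribal knowledge, so the receiving agent inherits the constraint along with the task.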
## Measuring Drift Before You Fix It
Before implementing anchoring, instrument your agents:
```python
from datetime import datetime, timezone

vault_log = []  # session-level decision log

def log_decision(step, action, objective_match_score):
    """
    objective_match_score: 1-5.
    Manually rate how aligned this action is with the root objective.
    """
    vault_log.append({
        "step": step,
        "action": action,
        "alignment": objective_match_score,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
```
Run a session without anchoring. Score each decision. You will see the score decay over time. That is drift, quantified.
Then add anchoring and run the same task. The decay flattens.
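You can turn the decay into a single number by comparing the first and second half of a session. A sketch over the log format above; `drift_score` is my name for this metric, not something the repo defines:

```python
def drift_score(log):
    """Mean alignment in the first half of a session minus the second
    half. Positive means alignment decayed over time: drift, quantified."""
    scores = [entry["alignment"] for entry in log]
    mid = len(scores) // 2
    first, second = scores[:mid], scores[mid:]
    return sum(first) / len(first) - sum(second) / len(second)

# Un-anchored sessions decay; anchored ones hold steady.
drifting = [{"alignment": s} for s in [5, 5, 4, 3, 2, 2]]
steady = [{"alignment": s} for s in [5, 5, 4, 5, 5, 4]]
```

A split-half mean is crude but cheap; if you log enough sessions, a per-step regression slope on `alignment` gives a finer picture.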
## In Practice
I run five agents simultaneously. Without anchoring, by hour 3, two agents were reliably off-objective. With anchoring:
- Handoffs are self-contained
- Each agent knows what "done" looks like
- Drift incidents dropped from ~2/session to near-zero
## Get the Skill
The context-anchor skill and the full multi-agent framework are in the repo:
```shell
git clone https://github.com/Wh0FF24/whoff-agents
```
The skill lives in skills/: drop it in your Claude Code skills directory and add the trigger to your CLAUDE.md.
If you are running multiple agents and seeing outputs that feel "close but not quite right", that is drift. Anchoring fixes it.
Full skill library and agent architecture at github.com/Wh0FF24/whoff-agents. Star it if this helped.
Tools I use:
- HeyGen (https://www.heygen.com/?sid=rewardful&via=whoffagents) β AI avatar videos
- n8n (https://n8n.io) β workflow automation
- Claude Code (https://claude.ai/code) β AI coding agent
My products: whoffagents.com (https://whoffagents.com?ref=devto-3508336)