Kiteworks surveyed 225 security and IT leaders for their 2026 Data Security and Compliance Risk Forecast Report. Three numbers from it:
- 63% can't enforce purpose limitations on what their agents are authorized to do
- 60% can't terminate a misbehaving agent
- 33% lack evidence-quality audit trails entirely
And 51% of these organizations already have agents in production. So the gap isn't hypothetical. Agents are running without guardrails right now, in real environments, doing real things.

## What "enforce purpose limitations" actually means
The Kiteworks phrasing is specific. It's not "we don't have a policy." It's "we have a policy and can't enforce it."
That distinction matters. Most teams have some document that says "the support bot should only access customer records relevant to the active ticket." But nothing sits between the LLM deciding to run `SELECT * FROM customers` and the query executing. The policy exists on paper. The enforcement doesn't exist in code.
## What enforcement looks like
Here's a LangChain agent with an email-sending tool. No governance:
```typescript
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

const sendEmailTool = new DynamicStructuredTool({
  name: "send_email",
  description: "Send an email to a customer",
  schema: z.object({
    to: z.string().email(),
    subject: z.string(),
    body: z.string(),
  }),
  func: async ({ to, subject, body }) => {
    await emailService.send({ to, subject, body });
    return `Email sent to ${to}`;
  },
});
```
The agent decides to send an email. It sends. Nobody reviewed the recipient, the subject, or the body. If the LLM hallucinated the address or wrote something unhinged, it shipped.
Now with governTools():
```typescript
import { governTools } from "@sidclaw/sdk/langchain";
import { SidClawClient } from "@sidclaw/sdk";

const sc = new SidClawClient({
  baseUrl: "https://app.sidclaw.com",
  apiKey: process.env.SIDCLAW_API_KEY,
  agentId: "support-bot",
});

const governedTools = governTools(sc, [sendEmailTool]);
```
Same tool. But now when the agent calls send_email, the action gets evaluated against a policy set before executing. If a policy says email-sending requires approval, the action holds. A reviewer sees the recipient, subject, body, and the agent's reasoning. They approve or deny. The agent resumes or stops.
That's the enforcement the Kiteworks 63% is missing. Not the policy. The runtime check that the policy is actually followed.
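Conceptually, that runtime check is a thin wrapper around each tool's execute function: evaluate first, execute only if the policy allows. Here's a minimal sketch of that shape. To be clear, this is not the actual `@sidclaw/sdk` internals; `governTool`, `evaluatePolicy`, `awaitApproval`, and the `Decision` type are hypothetical names for illustration, and the stubs return hard-coded decisions.

```typescript
// Hypothetical sketch of a governance wrapper -- NOT the real @sidclaw/sdk.
type Decision = "allow" | "deny" | "flag" | "log";

interface ToolLike {
  name: string;
  func: (input: unknown) => Promise<string>;
}

// Stand-in for the policy engine; a real client would call an API here.
async function evaluatePolicy(tool: string, input: unknown): Promise<Decision> {
  return tool === "send_email" ? "flag" : "allow";
}

// Stand-in for the human approval step; hard-coded to "reviewer denied".
async function awaitApproval(tool: string, input: unknown): Promise<boolean> {
  return false;
}

function governTool(tool: ToolLike): ToolLike {
  return {
    name: tool.name,
    func: async (input) => {
      const decision = await evaluatePolicy(tool.name, input);
      if (decision === "deny") return `Blocked by policy: ${tool.name}`;
      if (decision === "flag" && !(await awaitApproval(tool.name, input))) {
        return `Held for approval and denied: ${tool.name}`;
      }
      return tool.func(input); // "allow" and "log" both execute
    },
  };
}
```

The key property is that the original tool's `func` is unreachable except through the policy check; the agent never gets a direct handle to the raw side effect.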
## What the policy engine does per tool call
Three things:
- Evaluates the action against priority-ordered policies. First match wins. Outcomes: `allow`, `deny`, `flag` (hold for approval), or `log` (allow but trace).
- If flagged, creates an approval request with the agent's identity, action name, input payload, reasoning, risk classification, and which policy triggered the hold.
- Records a trace event hash-chained to the previous event. Tampering with any record breaks the chain. That's the audit trail the 33% from the Kiteworks report don't have.
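The two mechanisms above (first-match-wins evaluation and a hash-chained trace) are simple enough to sketch. This is an illustrative toy, not SidClaw's implementation; the `Policy`, `TraceEvent`, `evaluate`, `appendTrace`, and `verifyChain` names are invented for this example, and real trace events would carry far more fields.

```typescript
import { createHash } from "node:crypto";

// Illustrative sketch -- not the actual SidClaw policy engine.
type Outcome = "allow" | "deny" | "flag" | "log";

interface Policy {
  priority: number; // lower number = evaluated first
  matches: (action: string) => boolean;
  outcome: Outcome;
}

// First-match-wins: walk policies in priority order, stop at the first hit.
function evaluate(policies: Policy[], action: string): Outcome {
  const ordered = [...policies].sort((a, b) => a.priority - b.priority);
  for (const p of ordered) {
    if (p.matches(action)) return p.outcome;
  }
  return "deny"; // a safe default when no policy matches
}

interface TraceEvent {
  action: string;
  outcome: Outcome;
  prevHash: string; // hash of the previous event, linking the chain
  hash: string;
}

// Each event's hash covers the previous event's hash plus its own content.
function appendTrace(chain: TraceEvent[], action: string, outcome: Outcome): TraceEvent[] {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "genesis";
  const hash = createHash("sha256")
    .update(prevHash + action + outcome)
    .digest("hex");
  return [...chain, { action, outcome, prevHash, hash }];
}

// Recompute every link; editing any earlier record breaks all later ones.
function verifyChain(chain: TraceEvent[]): boolean {
  let prev = "genesis";
  for (const e of chain) {
    const expected = createHash("sha256")
      .update(prev + e.action + e.outcome)
      .digest("hex");
    if (e.prevHash !== prev || e.hash !== expected) return false;
    prev = e.hash;
  }
  return true;
}
```

The tamper-evidence property falls out of the chaining: rewriting one event changes the hash every subsequent event was computed from, so verification fails at the first altered record.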
## What this doesn't solve
SidClaw governs actions. It doesn't filter LLM outputs for toxicity, check prompt injections, or validate that the agent's reasoning is sound. Those are different problems with different tools (Pangea, Lakera, etc.). This sits at the tool-call layer: the moment the agent decides to do something in the real world.
It also doesn't help if the 60% who can't kill a misbehaving agent don't have a policy that denies the misbehavior in the first place. You still need to define what's allowed and what isn't. The policy engine enforces. It doesn't write your policies for you.
- Kiteworks report: kiteworks.com
- Docs: docs.sidclaw.com
- TypeScript SDK: `npm install @sidclaw/sdk`
- Python SDK: `pip install sidclaw`