◆ FOR CTOS & VP ENGINEERING
How it's actually built.
Powered by Claude Agent SDK — the same engine that powers Claude CoWork. The runtime does the reasoning. We built the enterprise layer — multi-tenant isolation, zero-trust security, audit trails, and the infrastructure that makes AI agents as a service production-ready.
Context That Never Decays
The engine behind Claude CoWork — Claude Agent SDK
Claude CoWork is widely regarded as the most capable agentic AI available. The engine underneath it — Claude Agent SDK — is what shiftagent runs on. It uses tools, reasons in steps, maintains context across a complex multi-hour task, and never loses the thread. When we evaluated what to build on, there was no close second.
Our contribution is the enterprise layer: we took that engine and made it cloud-native, multi-tenant, and production-grade. Per-session containers, tenant-level isolation at every layer, persistent context across sessions, and full audit trails on every action.
Every agent session runs in its own secure container, spun up per tenant, per session, with full capability intact: tools, multi-step reasoning, code execution, file system access. Tenant-level isolation at every layer. Your data never touches another tenant's environment.
And when the session ends, the container is gone. Context is persisted securely between sessions — the agent's memory carries forward. The next session resumes exactly where the last left off, with full context of everything it's learned about your tenant.
No cross-tenant bleed. Ever.
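The lifecycle above can be sketched in a few lines. This is an illustrative model only; `SessionContainer` and `ContextStore` are hypothetical names, not shiftagent APIs:

```python
from dataclasses import dataclass, field
from uuid import uuid4

@dataclass
class ContextStore:
    """Persists agent memory per tenant between sessions (illustrative)."""
    _memory: dict = field(default_factory=dict)

    def load(self, tenant_id: str) -> dict:
        return dict(self._memory.get(tenant_id, {}))

    def save(self, tenant_id: str, context: dict) -> None:
        self._memory[tenant_id] = dict(context)

class SessionContainer:
    """One isolated container per tenant, per session."""
    def __init__(self, tenant_id: str, store: ContextStore):
        self.session_id = str(uuid4())
        self.tenant_id = tenant_id
        self.store = store
        # Resume exactly where the last session left off.
        self.context = store.load(tenant_id)

    def run(self, task: str) -> None:
        self.context.setdefault("tasks", []).append(task)

    def teardown(self) -> None:
        # Persist context securely, then the container and its state are gone.
        self.store.save(self.tenant_id, self.context)
        self.context = {}

store = ContextStore()
s1 = SessionContainer("tenant-a", store)
s1.run("reconcile fees")
s1.teardown()

# A new session for the same tenant resumes with full memory;
# a different tenant starts empty.
s2 = SessionContainer("tenant-a", store)
assert s2.context["tasks"] == ["reconcile fees"]
assert SessionContainer("tenant-b", store).context == {}
```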
◆ AGENT IN ACTION
While you sleep, it works.
Agent reasoning trace — step-by-step execution
Task detail — full context, timeline, output
The LLM Never Sees Real Credentials
PCI DSS by architecture, not by policy
In embedded payments, the LLM never sees API keys, OAuth tokens, PANs, or any sensitive data. This isn't a policy. It's enforced by architecture.
Every sensitive value is vaulted at the point of entry. The agent environment contains only aliases — opaque identifiers that mean nothing if extracted. When an agent needs to make an outbound call, our forward proxy intercepts the request, resolves aliases to real values at the network boundary, and authenticates the action via CIBA and OAuth 2.0. The LLM never participates in that exchange. It never knows the real credentials exist.
This is PCI DSS by architecture: the sensitive data never enters LLM context. Not "we don't send sensitive data to the model by policy" — we make it structurally impossible. The forward proxy is the only component that can resolve vault aliases. Nothing else can.
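The alias flow can be sketched as follows. The vault contents, alias format, and function names here are assumptions for illustration; the real scheme (and the CIBA/OAuth exchange, omitted here) lives entirely outside the agent environment:

```python
import re

# Only the forward proxy can read this mapping (illustrative).
VAULT = {"alias://cred/9f3a": "sk_live_real_secret"}

def agent_build_request(alias: str) -> dict:
    # The agent environment holds only the opaque alias.
    return {"url": "https://api.example.com/charge",
            "headers": {"Authorization": f"Bearer {alias}"}}

def forward_proxy(request: dict) -> dict:
    # Resolve aliases to real values at the network boundary.
    # The LLM never participates in, or observes, this step.
    resolved = dict(request)
    resolved["headers"] = {
        k: re.sub(r"alias://\S+", lambda m: VAULT[m.group(0)], v)
        for k, v in request["headers"].items()
    }
    return resolved

req = agent_build_request("alias://cred/9f3a")
out = forward_proxy(req)

# Agent side: alias only. Wire side: real credential.
assert "sk_live" not in req["headers"]["Authorization"]
assert out["headers"]["Authorization"] == "Bearer sk_live_real_secret"
```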
Every agent action is logged. Every tool call recorded. Every outbound request tracked with timestamp, session ID, tenant ID, action type, and approval status. A full enterprise audit trail — immutable, queryable, and complete. Access logs, action logs, CIBA approval records (pending/approved/denied) — all accessible per tenant, per session, per time window.
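A minimal sketch of the audit-record fields listed above. The shape is illustrative, not the production schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: a record cannot be mutated once written
class AuditRecord:
    timestamp: str
    session_id: str
    tenant_id: str
    action_type: str       # e.g. "tool_call", "outbound_request"
    approval_status: str   # "pending" | "approved" | "denied"

log: list[AuditRecord] = []

def record(session_id: str, tenant_id: str, action_type: str,
           approval_status: str = "approved") -> AuditRecord:
    rec = AuditRecord(datetime.now(timezone.utc).isoformat(),
                      session_id, tenant_id, action_type, approval_status)
    log.append(rec)
    return rec

record("sess-1", "tenant-a", "tool_call")
record("sess-1", "tenant-a", "outbound_request", "pending")

# Queryable per tenant, per session, per approval status.
pending = [r for r in log
           if r.tenant_id == "tenant-a" and r.approval_status == "pending"]
assert len(pending) == 1
```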
Credential flow
Agent
Sees aliases only
Forward Proxy
CIBA · OAuth 2.0
External API
Real credentials
Determinism Over Hallucination
LLMs are not accurate with numbers. We built around that.
LLMs are not accurate with numbers. This is documented, well-understood, and still the most underrated risk when deploying AI in financial operations. A hallucinated fee extraction in a compliance filing isn't a bug — it's a liability.
Our architecture separates reasoning from computation. The LLM reasons about structure: what is this document, where are the fee columns, what extraction strategy applies to this schema. Deterministic pipelines do the math. No LLM involvement in arithmetic. Multi-gate evaluation steps verify the output before it persists — cross-checking totals, flagging anomalies, confirming against prior session data.
Structured output schemas enforce the shape of every financial record. Memory systems persist verified data across sessions — the agent doesn't re-derive a number it already confirmed, it recalls it. Grounded generation means the LLM narrates findings; the structured data is the source of truth, not the LLM text.
Agents are continuously updated with the latest industry-standard extraction methodologies, scientific studies, and regulatory documentation — so the reasoning layer stays current with the industry it operates in.
Financial extraction pipeline
Structure Recognition
LLM identifies document type, fee columns, schema
Deterministic Extraction
Code parses values — no LLM involved in arithmetic
Multi-Gate Verification
Cross-checks totals, flags anomalies, confirms against prior sessions
Grounded Generation
LLM narrates findings; structured data is source of truth
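The split between reasoning and computation in the pipeline above can be sketched like this. `llm_identify_schema` is a stand-in for the LLM call; everything numeric is plain deterministic code, and the document format is invented for the example:

```python
from decimal import Decimal

def llm_identify_schema(document: str) -> dict:
    # Stand-in for the LLM step: it decides structure, never arithmetic.
    return {"type": "fee_statement", "fee_column": 1, "delimiter": ","}

def deterministic_extract(document: str, schema: dict) -> list[Decimal]:
    # Code parses values; Decimal avoids float drift in financial math.
    col, sep = schema["fee_column"], schema["delimiter"]
    return [Decimal(line.split(sep)[col])
            for line in document.splitlines()[1:]]  # skip header row

def verify(fees: list[Decimal], stated_total: Decimal) -> bool:
    # One verification gate: cross-check the computed total.
    return sum(fees) == stated_total

doc = "item,fee\nprocessing,12.50\ninterchange,0.75"
schema = llm_identify_schema(doc)
fees = deterministic_extract(doc, schema)
assert verify(fees, Decimal("13.25"))
```

The LLM's only output here is the schema; if it misidentifies a column, the verification gate catches the mismatch before anything persists.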