◆ FOR CTOS & VP ENGINEERING

How it's actually
built.

Powered by Claude Agent SDK — the same engine behind Claude CoWork. The runtime does the reasoning. We built the enterprise layer: multi-tenant isolation, zero-trust security, audit trails, and the infrastructure that makes AI agents as a service production-ready.

01 The Engine

Context That Never Decays

The engine behind Claude CoWork — Claude Agent SDK

Claude CoWork is widely regarded as the most capable agentic AI available. The engine underneath it — Claude Agent SDK — is what shiftagent runs on. It uses tools, reasons in steps, maintains context across a complex multi-hour task, and never loses the thread. When we evaluated what to build on, there was no close second.

Our contribution is the enterprise layer: we took that engine and made it cloud-native, multi-tenant, and production-grade. Per-session containers, tenant-level isolation at every layer, persistent context across sessions, and full audit trails on every action.

Every agent session runs in its own secure container, spun up per tenant, per session, with full capability intact: tools, multi-step reasoning, code execution, file system access. Tenant-level isolation is enforced at every layer. Your data never touches another tenant's environment.

And when the session ends, the container is gone. Context is persisted securely between sessions — the agent's memory carries forward. The next session resumes exactly where the last left off, with full context of everything it's learned about your tenant.
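The lifecycle described above can be sketched as follows. This is an illustrative model, not shiftagent's actual API; every name in it (ContextStore, SessionContainer, runSession) is hypothetical.

```typescript
// Illustrative sketch of the per-session container lifecycle described above.
// All names here are hypothetical, not shiftagent's real API.

interface TenantContext {
  tenantId: string;
  memory: Record<string, unknown>; // verified facts carried across sessions
}

// Hypothetical persistent store, keyed by tenant: context outlives the container.
class ContextStore {
  private store = new Map<string, TenantContext>();

  checkOut(tenantId: string): TenantContext {
    return this.store.get(tenantId) ?? { tenantId, memory: {} };
  }

  checkIn(ctx: TenantContext): void {
    this.store.set(ctx.tenantId, ctx); // persist before the container is destroyed
  }
}

// Ephemeral sandbox: created for one tenant, one session, then gone.
class SessionContainer {
  private destroyed = false;
  constructor(readonly tenantId: string, readonly sessionId: string) {}

  run(ctx: TenantContext, task: (ctx: TenantContext) => void): void {
    if (this.destroyed) throw new Error("container already torn down");
    task(ctx); // agent runs with full capability inside the sandbox
  }

  destroy(): void {
    this.destroyed = true; // the container and everything in it is gone
  }
}

function runSession(
  store: ContextStore,
  tenantId: string,
  sessionId: string,
  task: (ctx: TenantContext) => void,
): void {
  const container = new SessionContainer(tenantId, sessionId);
  const ctx = store.checkOut(tenantId); // resume with everything learned so far
  try {
    container.run(ctx, task);
  } finally {
    store.checkIn(ctx);    // memory carries forward
    container.destroy();   // the sandbox does not
  }
}
```

The try/finally shape is the point: context is always checked back in before the container is destroyed, so the next session for that tenant resumes with everything the last one learned, while the sandbox itself never outlives a single session.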

Per-session containers Context window management Session continuity Multi-tenant orchestration Stateful agent execution Ephemeral sandboxes
Per-session isolation
Tenant A · Session 7f3c
Claude Agent SDK Running
🧠 Context window 124k tokens
🔒 Isolated sandbox Active
📂 Tenant context Checked out
Tenant B · Waiting
Agent Runtime Standby
🔒 Isolated sandbox Dormant
Tenant C · Queued
Agent Runtime Queued
🔒 Isolated sandbox Pending

No cross-tenant bleed. Ever.

◆ AGENT IN ACTION

While you sleep, it works.


Agent reasoning trace — step-by-step execution


Task detail — full context, timeline, output

02 Zero-Trust Security + Audit

The LLM Never Sees Real Credentials

PCI DSS by architecture, not by policy

In embedded payments, the LLM never sees API keys, OAuth tokens, PANs, or any sensitive data. This isn't a policy. It's enforced by architecture.

Every sensitive value is vaulted at the point of entry. The agent environment contains only aliases — opaque identifiers that mean nothing if extracted. When an agent needs to make an outbound call, our forward proxy intercepts the request, resolves aliases to real values at the network boundary, and authenticates the action via CIBA and OAuth 2.0. The LLM never participates in that exchange. It never knows the real credentials exist.

This is PCI DSS by architecture: the sensitive data never enters LLM context. Not "we don't send sensitive data to the model by policy" — we make it structurally impossible. The forward proxy is the only component that can resolve vault aliases. Nothing else can.
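The alias flow can be sketched as follows, assuming a simple alias-to-secret vault map. The header layout, alias format, and function names are illustrative, not the real implementation.

```typescript
// Illustrative sketch of alias resolution at the network boundary.
// Vault contents, header names, and the resolve step are hypothetical.

type OutboundRequest = { url: string; headers: Record<string, string> };

// The vault is reachable only from the forward proxy, never from the agent sandbox.
const vault = new Map<string, string>([
  ["alias_key_7f3c", "real_secret_xyz"],
]);

// What the agent emits: aliases only. Extracting this buys an attacker nothing.
function agentBuildsRequest(): OutboundRequest {
  return {
    url: "https://api.payments.example/charges",
    headers: { Authorization: "Bearer alias_key_7f3c" },
  };
}

// The forward proxy is the single component allowed to resolve aliases.
function forwardProxy(req: OutboundRequest): OutboundRequest {
  const resolved: Record<string, string> = {};
  for (const [name, value] of Object.entries(req.headers)) {
    // Swap any vaulted alias for the real value at the boundary.
    resolved[name] = value.replace(/alias_[a-z0-9_]+/g,
      alias => vault.get(alias) ?? alias);
  }
  return { url: req.url, headers: resolved };
}
```

Because the vault lookup lives only inside forwardProxy, nothing running in the agent sandbox can turn alias_key_7f3c into a usable secret, which is the structural guarantee described above.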

Every agent action is logged. Every tool call recorded. Every outbound request tracked with timestamp, session ID, tenant ID, action type, and approval status. A full enterprise audit trail — immutable, queryable, and complete. Access logs, action logs, CIBA approval records (pending/approved/denied) — all accessible per tenant, per session, per time window.
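A record in such a trail might look like the following sketch. The field names and the in-memory store are assumptions for illustration; a production trail would write to immutable storage.

```typescript
// Illustrative shape of an audit record with the fields named above.
// Field and type names are assumptions, not shiftagent's actual schema.

type ApprovalStatus = "pending" | "approved" | "denied" | "n/a";

interface AuditRecord {
  timestamp: string; // ISO 8601
  sessionId: string;
  tenantId: string;
  actionType: "tool_call" | "outbound_request" | "ciba_approval";
  detail: string;
  approval: ApprovalStatus;
}

// Append-only log: records can be added and queried, never mutated or removed.
class AuditLog {
  private records: AuditRecord[] = [];

  append(r: AuditRecord): void {
    this.records.push(Object.freeze({ ...r }));
  }

  // Query per tenant, per time window, as described above.
  query(tenantId: string, from: string, to: string): AuditRecord[] {
    return this.records.filter(r =>
      r.tenantId === tenantId && r.timestamp >= from && r.timestamp <= to);
  }
}
```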

Zero-trust Forward proxy CIBA OAuth 2.0 Vault aliases Enterprise audit trail PCI DSS SOC 2

Credential flow

🤖

Agent

Sees aliases only

alias_key_7f3c
🛡

Forward Proxy

CIBA · OAuth 2.0

real_secret_xyz
🌐

External API

Real credentials

Audit Log ● Live
09:14:22 tool_call search_transactions · approved
09:14:25 outbound_request api.payments.internal · 200 OK
09:14:31 ciba_approval submit_chargeback · pending…
03 Financial Data Accuracy

Determinism Over Hallucination

LLMs are not accurate with numbers. We built around that.

LLMs are not accurate with numbers. This is documented, well-understood, and still the most underrated risk when deploying AI in financial operations. A hallucinated fee extraction in a compliance filing isn't a bug — it's a liability.

Our architecture separates reasoning from computation. The LLM reasons about structure: what is this document, where are the fee columns, what extraction strategy applies to this schema. Deterministic pipelines do the math. No LLM involvement in arithmetic. Multi-gate evaluation steps verify the output before it persists — cross-checking totals, flagging anomalies, confirming against prior session data.

Structured output schemas enforce the shape of every financial record. Memory systems persist verified data across sessions — the agent doesn't re-derive a number it already confirmed, it recalls it. Grounded generation means the LLM narrates findings; the structured data is the source of truth, not the LLM text.
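The split can be sketched as follows: the only thing taken from the LLM is the extraction plan (structure), while parsing and totals run as deterministic code on integer cents. All names here are illustrative, not the actual pipeline.

```typescript
// Illustrative sketch of the reasoning/computation split described above.
// The LLM's only contribution is structure; arithmetic is deterministic code.

// What the reasoning layer might emit: structure, never numbers.
interface ExtractionPlan {
  feeColumn: number; // index of the fee column the LLM identified
}

// Deterministic: parse "$1,234.56" into integer cents. No floats, no LLM.
function parseCents(raw: string): number {
  const m = raw.replace(/[$,\s]/g, "").match(/^(\d+)\.(\d{2})$/);
  if (!m) throw new Error(`unparseable amount: ${raw}`);
  return Number(m[1]) * 100 + Number(m[2]);
}

// Deterministic extraction over rows, using the LLM-supplied plan.
function extractFees(rows: string[][], plan: ExtractionPlan): number[] {
  return rows.map(row => parseCents(row[plan.feeColumn]));
}

// Verification gate: extracted line items must reconcile with the stated total.
function gateTotals(fees: number[], statedTotalCents: number): boolean {
  const sum = fees.reduce((a, b) => a + b, 0);
  return sum === statedTotalCents; // mismatch: flag for review, do not persist
}
```

Working in integer cents also sidesteps floating-point drift: the gate's equality check is exact, so a mismatch is always a real discrepancy to flag, never rounding noise.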

Agents are continuously updated with the latest industry-standard extraction methodologies, relevant research, and regulatory documentation — so the reasoning layer stays current with the industry it operates in.

Deterministic extraction Multi-gate verification Structured output schemas Grounded generation Memory persistence Financial-grade accuracy

Financial extraction pipeline

Structure Recognition

LLM identifies document type, fee columns, schema

Deterministic Extraction

Code parses values — no LLM involved in arithmetic

Multi-Gate Verification

Cross-checks totals, flags anomalies, confirms against prior sessions

Grounded Generation

LLM narrates findings; structured data is source of truth

Interchange fee 1.65% Verified
Monthly processing vol $2,847,293.14 Verified
Auth fee delta +$0.0034 Review
04 Embeddability

The Right AI for the Right Problem

Most companies are chasing the wrong thing

Every vertical SaaS company now has an implicit market requirement: ship AI. Most are chasing the wrong thing — route optimization, generic chatbots, document search. The real value is AI that runs the operations at the center of your product.

Building that yourself means: enterprise security compliance, multi-tenant architecture, LLM orchestration, context management, audit trails, CIBA approval workflows — months of work before a single skill runs. And that's before you've written a single line of domain logic specific to your vertical.

The <shift-agent> web component is one line of HTML. Shadow DOM isolation means your product stays yours — no style bleed, no script collision. Fully white-labeled: your logo, your colors, your domain. SSO, SCIM, and custom theming cascade through the partner hierarchy. Your customers never see shiftagent — they see your AI workforce.

shiftagent is the infrastructure layer. Your team focuses on what only you know: the domain logic, the edge cases, the industry relationships. We handle the rest.

Shadow DOM Web component White-label Multi-tenant SSO SCIM CIBA approval flows Real-time task interface
your-product.html
<!-- Embed the entire AI workforce -->
<shift-agent
  tenant="your-customer-id"
  theme="your-brand"
  vertical="payments"
></shift-agent>

What your customers get

AI workforce — branded as you
Playbook execution, not just advice
Real-time task & approval interface
Enterprise audit trail — built in
Zero-trust by architecture — always
SSO, SCIM, compliance-ready

◆ BUILD ON THE PLATFORM

Ready to build on the platform?

Early access is open for vertical SaaS companies and payment processors. Get the technical details and scope your integration.