How to Add Compliance Monitoring and Audit Trails to LangGraph Agents
LangGraph agents need decision audit records, not just LangSmith traces, to satisfy EU AI Act Article 12, HIPAA audit controls, SOC 2 CC7.2, and GDPR Article 22. This guide shows how to add compliance-grade logging to LangGraph using callbacks and state snapshots, with code examples for single agents, multi-agent graphs with decision_id correlation keys, and human override capture. Ghost SDK integrates in two lines of code and adds under 5 ms of overhead per decision.
Why LangSmith Traces Do Not Satisfy Compliance
LangSmith captures execution traces — spans for each LLM call, token counts, latency, prompt/response pairs. Compliance frameworks require decision records: per-decision entries that link the person affected (subject_id), the complete context at decision time, the reasoning chain, the chosen action, and the downstream outcome. LangSmith traces are mutable (can be deleted), stored in a vendor-controlled cloud (not under organization control), and organized by execution run rather than by decision event affecting a specific person. EU AI Act Article 12 requires automatic logging enabling post-hoc reconstruction of each operation; HIPAA §164.312(b) requires audit controls linking access to patient records; GDPR Article 22 requires meaningful information about the logic of automated decisions available per data subject. LangSmith satisfies none of these as a primary compliance record.
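To make the trace-versus-record distinction concrete, here is a minimal sketch of what a per-decision record might look like as a Python dataclass. The field names (subject_id, context, reasoning, action, outcome) come from the requirements above; the `DecisionRecord` class itself and its exact shape are illustrative assumptions, not a mandated schema.

```python
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Any, Optional

@dataclass
class DecisionRecord:
    """One append-only entry per decision affecting a specific person."""
    subject_id: str            # the person affected by the decision
    decision_type: str         # kind of decision, e.g. "triage_priority"
    context: dict              # complete input state at decision time
    reasoning: str             # the reasoning chain behind the decision
    action: str                # the chosen action
    outcome: Optional[str] = None   # downstream outcome, filled in later
    decision_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: a record keyed to a person, not to an execution run.
record = DecisionRecord(
    subject_id="patient-4471",
    decision_type="triage_priority",
    context={"symptoms": ["chest pain"], "age": 58},
    reasoning="Chest pain with age over 50: escalate to urgent.",
    action="urgent",
)
serialized: dict[str, Any] = asdict(record)
```

Note that the record is organized by subject and decision event, which is exactly the axis a span-per-LLM-call trace lacks.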
Adding Audit Logging via LangGraph Callbacks
LangGraph exposes callbacks at the graph, node, and edge level through BaseCallbackHandler. A compliance callback captures state at key decision points without modifying graph logic: on_chain_start captures intent and inputs, on_chain_end captures outputs and timing. Inside the callback, Ghost SDK's fire-and-forget capture() call queues the decision record asynchronously — no blocking, under 5ms overhead. The capture includes: decision_type (what kind of decision), subject_id (who was affected), context (complete input state), reasoning (LLM explanation), action (chosen decision), confidence (model certainty), and metadata (model_version for tracking LLM API updates). This pattern works with any LangGraph node and requires no changes to existing graph logic.
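The pattern above can be sketched as a callback handler. To keep the example self-contained and runnable, `GhostStub` stands in for the Ghost SDK client (its real `capture()` signature may differ), and `ComplianceCallback` mirrors the `on_chain_start`/`on_chain_end` hooks of langchain_core's `BaseCallbackHandler` without importing it; in production you would subclass `BaseCallbackHandler` directly.

```python
import time
from typing import Any
from uuid import UUID, uuid4

class GhostStub:
    """Stand-in for the Ghost SDK client: capture() is fire-and-forget.
    Here it appends to an in-memory queue; the real SDK ships records
    asynchronously off the request path."""
    def __init__(self) -> None:
        self.queue: list[dict] = []

    def capture(self, **record: Any) -> None:
        self.queue.append(record)

class ComplianceCallback:  # production: class ComplianceCallback(BaseCallbackHandler)
    def __init__(self, ghost: GhostStub, subject_id: str) -> None:
        self.ghost = ghost
        self.subject_id = subject_id
        self._starts: dict[UUID, tuple] = {}  # run_id -> (start time, inputs)

    def on_chain_start(self, serialized: dict, inputs: dict,
                       *, run_id: UUID, **kwargs: Any) -> None:
        # Capture intent and inputs at node entry.
        self._starts[run_id] = (time.monotonic(), inputs)

    def on_chain_end(self, outputs: dict, *, run_id: UUID, **kwargs: Any) -> None:
        # Capture outputs and timing at node exit, then queue the record.
        started, inputs = self._starts.pop(run_id)
        self.ghost.capture(
            decision_type="node_decision",
            subject_id=self.subject_id,
            context=inputs,
            reasoning=outputs.get("reasoning", ""),
            action=outputs.get("action", ""),
            confidence=outputs.get("confidence"),
            metadata={"model_version": outputs.get("model_version"),
                      "latency_s": time.monotonic() - started},
        )

# Simulated node run (in a real graph, LangGraph fires these hooks for you):
ghost = GhostStub()
cb = ComplianceCallback(ghost, subject_id="applicant-302")
rid = uuid4()
cb.on_chain_start({}, {"income": 72000}, run_id=rid)
cb.on_chain_end({"action": "approve", "reasoning": "income above threshold",
                 "confidence": 0.91, "model_version": "model-v1"}, run_id=rid)
```

In a real graph the handler is attached without touching node logic, e.g. via `graph.invoke(state, config={"callbacks": [cb]})`, since LangGraph accepts callbacks through its runnable config.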
Multi-Agent Graph Compliance with Correlation IDs
LangGraph supports nested subgraphs and multi-agent coordination. When a decision results from multiple agents (orchestrator → specialist → tool), the compliance record must link all steps. The pattern: generate a decision_id at the orchestrator level, pass it through config to all subgraphs, and include it in every Ghost SDK capture() call. This creates a linked audit trail for the complete decision chain. When a regulator requests the complete decision trail for a specific case, retrieving all records with the same decision_id returns the full reasoning path from orchestrator intent to final decision.
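The correlation pattern can be sketched as follows. The orchestrator generates one decision_id, passes it through the config's `configurable` dict (LangGraph's conventional channel for per-invocation values), and every capture includes it; a plain `capture()` function stands in for the Ghost SDK call, and the node functions are simplified hypothetical examples.

```python
import uuid
from typing import Any

captured: list[dict] = []

def capture(**record: Any) -> None:
    """Stand-in for ghost.capture(); appends to an in-memory store."""
    captured.append(record)

def specialist_node(state: dict, config: dict) -> dict:
    # Read the correlation key propagated from the orchestrator.
    decision_id = config["configurable"]["decision_id"]
    action = "flag_for_review"
    capture(decision_id=decision_id, agent="specialist",
            action=action, context=state)
    return {**state, "action": action}

def orchestrator(state: dict) -> dict:
    # One decision_id for the entire decision chain.
    decision_id = str(uuid.uuid4())
    config = {"configurable": {"decision_id": decision_id}}
    capture(decision_id=decision_id, agent="orchestrator",
            action="route_to_specialist", context=state)
    return specialist_node(state, config)

orchestrator({"claim_id": "C-88"})

# Regulator request: retrieve the full trail for one decision.
did = captured[0]["decision_id"]
trail = [r for r in captured if r["decision_id"] == did]
```

Filtering on one decision_id returns the full path from orchestrator intent to final action, which is the reconstruction a regulator asks for.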
Human Override Capture for LangGraph
EU AI Act Article 14 requires human oversight capability with documented override procedures. GDPR Article 22(3) requires human review rights for automated decisions. Ghost SDK's capture_override() method records human corrections: original_decision_id (linking to the AI decision), reviewer_id (who reviewed), original_action (what the AI decided), corrected_action (what the human changed it to), and reason (why the correction was made). Override records satisfy the Article 14 documentation requirement and are exported as JSONL in OpenAI fine-tuning format — human corrections that improve agent behavior over time.
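A sketch of the override flow, under stated assumptions: `capture_override` here is an in-memory stand-in for the Ghost SDK method (the field names match those listed above, but the real signature may differ), and `to_finetune_jsonl` is a hypothetical helper showing one plausible mapping of override records onto OpenAI's chat fine-tuning JSONL format, where the exact message wording is an illustrative choice.

```python
import json
from typing import Any

overrides: list[dict] = []

def capture_override(**record: Any) -> None:
    """Stand-in for Ghost SDK's capture_override()."""
    overrides.append(record)

# A human reviewer corrects an AI decision.
capture_override(
    original_decision_id="dec-7f3a",        # links back to the AI decision
    reviewer_id="reviewer-22",              # who reviewed
    original_action="deny",                 # what the AI decided
    corrected_action="approve",             # what the human changed it to
    reason="Income documentation was valid on manual review.",
)

def to_finetune_jsonl(records: list[dict]) -> str:
    """Export corrections as JSONL, one chat example per override."""
    lines = []
    for r in records:
        lines.append(json.dumps({"messages": [
            {"role": "user",
             "content": f"Decision {r['original_decision_id']}: "
                        f"the agent chose {r['original_action']}."},
            {"role": "assistant",
             "content": f"Corrected action: {r['corrected_action']}. "
                        f"Reason: {r['reason']}."},
        ]}))
    return "\n".join(lines)

jsonl = to_finetune_jsonl(overrides)
```

Each JSONL line is both an Article 14 documentation artifact and a training example, so the same override store serves audit and improvement.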