OpenAI Agents SDK: How to Add Compliance Audit Logging with AgentHooks and RunHooks
OpenAI Agents SDK exposes AgentHooks and RunHooks, lifecycle callback interfaces that are the natural integration points for compliance audit logging. This guide shows how to implement ComplianceAgentHooks (on_start, on_tool_start, on_handoff) and ComplianceRunHooks, with the definitive record captured from the RunResult, wired to Ghost SDK for EU AI Act Article 12, HIPAA, and SOC 2 compliance. It also covers Guardrails integration for pre-decision validation, multi-agent pipeline correlation with a shared decision_id, and a comparison with LangGraph and CrewAI approaches.
AgentHooks vs RunHooks: Which to Use
OpenAI Agents SDK provides two hook interfaces. AgentHooks fires for events tied to a specific agent: on_start when the agent begins executing, on_end when it produces its final output, on_tool_start/on_tool_end around each tool call, and on_handoff when the agent receives control from another agent. RunHooks fires for the same events scoped to the entire run: on_agent_start/on_agent_end for every agent in the pipeline, on_tool_start/on_tool_end at the run level, and on_handoff as control transfers. Note that RunHooks has no run-level completion callback; the final outcome of the pipeline comes from the RunResult that Runner.run returns. For compliance logging, AgentHooks captures agent-level reasoning and tool use, while RunHooks gives cross-agent visibility. Use both: AgentHooks for intermediate audit points, plus a capture on the returned RunResult for the definitive compliance record with full context and outcome.
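A minimal sketch of the two hook surfaces and the order in which they fire. The base classes below are local stand-ins so the sketch runs without the SDK installed; their method names mirror agents.AgentHooks and agents.RunHooks, and the driver at the bottom simulates what the Runner does in real use.

```python
import asyncio

# Stand-ins for agents.AgentHooks / agents.RunHooks so this sketch runs
# without the SDK installed; the method names mirror the real base classes.
class AgentHooks:
    async def on_start(self, context, agent): ...
    async def on_end(self, context, agent, output): ...
    async def on_tool_start(self, context, agent, tool): ...
    async def on_tool_end(self, context, agent, tool, result): ...
    async def on_handoff(self, context, agent, source): ...

class RunHooks:
    async def on_agent_start(self, context, agent): ...
    async def on_agent_end(self, context, agent, output): ...
    async def on_tool_start(self, context, agent, tool): ...
    async def on_tool_end(self, context, agent, tool, result): ...
    async def on_handoff(self, context, from_agent, to_agent): ...

class AuditAgentHooks(AgentHooks):
    """Appends an event per hook so the firing order is visible."""
    def __init__(self, trail):
        self.trail = trail

    async def on_start(self, context, agent):
        self.trail.append(("agent_start", agent))

    async def on_tool_start(self, context, agent, tool):
        self.trail.append(("tool_start", tool))

    async def on_handoff(self, context, agent, source):
        # `agent` is the receiving agent, `source` the one handing off.
        self.trail.append(("handoff", f"{source}->{agent}"))

async def demo():
    # Simulated lifecycle: in real use the Runner drives these calls.
    trail = []
    hooks = AuditAgentHooks(trail)
    await hooks.on_start(None, "triage")
    await hooks.on_tool_start(None, "triage", "credit_lookup")
    await hooks.on_handoff(None, "underwriter", "triage")
    return trail

trail = asyncio.run(demo())
```

The agent and tool names here are illustrative; the point is that each hook is an async method receiving the run context plus event-specific arguments.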
Implementing ComplianceAgentHooks
Subclass AgentHooks to add capture at key decision points. In on_start, record agent identity, the run's decision_id, and input context, storing them in a DecisionContext dataclass for correlation across the agent lifecycle. In on_tool_start, capture tool name, tool input, and a timestamp; tool calls are the primary source of consequential actions in most agent workflows (credit lookups, database writes, patient record access). In on_handoff, record which agent handed off control (the source argument) and which agent is receiving it; handoffs define the boundary between reasoning phases. Each capture uses ghost.capture() with a decision_type matching the agent's role, subject_id from context, and metadata including agent name and version. Attach the hooks on the agent itself, as in Agent(name=..., hooks=ComplianceAgentHooks()); the hooks argument to Runner.run takes RunHooks, not AgentHooks.
Implementing ComplianceRunHooks for Final Records
The definitive compliance record is written when the complete pipeline finishes. RunHooks has no run-level on_run_end callback, so there are two complementary capture points: ComplianceRunHooks.on_agent_end, which fires as each agent (including the last) produces its output, and a final ghost.capture() immediately after Runner.run returns, when the RunResult gives you the complete final_output, the new_items generated during the run (tool calls, handoffs, messages), and last_agent. At that point, call ghost.capture() with the definitive action (parsed from RunResult.final_output), the complete context, aggregate confidence if your agents emit confidence scores, and metadata including all agent names in the pipeline. This final record is the tamper-evident primary compliance record; the on_tool_start/on_tool_end captures are supporting audit points. All are linked by a shared decision_id generated at pipeline entry.
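A sketch of the post-run capture. The RunResult class below is a local stand-in modeling only the two attributes used (final_output and last_agent, both real attributes of the SDK's RunResult); capture_final_record and the ghost stub are illustrative names, not part of either SDK.

```python
import uuid
from dataclasses import dataclass

# Stand-in for the RunResult returned by agents.Runner.run(); only the
# attributes used below (final_output, last_agent) are modeled.
@dataclass
class RunResult:
    final_output: str
    last_agent: str

records = []
def ghost_capture(**record):  # local stub for the Ghost SDK call
    records.append(record)

def capture_final_record(result: RunResult, decision_id: str,
                         pipeline_agents: list, subject_id: str) -> dict:
    """Write the definitive compliance record once the pipeline finishes."""
    ghost_capture(decision_id=decision_id,     # same id as the hook captures
                  decision_type="pipeline_decision",
                  subject_id=subject_id,
                  action=result.final_output,  # the parsed final decision
                  metadata={"agents": pipeline_agents,
                            "last_agent": result.last_agent})
    return records[-1]

# In real use: result = await Runner.run(triage_agent, user_input, hooks=...)
decision_id = str(uuid.uuid4())
rec = capture_final_record(RunResult("APPROVED", "underwriter"),
                           decision_id, ["triage", "underwriter"], "customer-42")
```

Because decision_id is generated once at pipeline entry and threaded through both the hooks and this final capture, the primary record and its supporting audit points stay correlated.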
Guardrails Integration for Pre-Decision Validation
OpenAI Agents SDK Guardrails run functions decorated with @input_guardrail and @output_guardrail before and after agent execution. Integrate compliance capture here for validation events: a guardrail that trips on an input (e.g., PII detection, an out-of-scope request) raises InputGuardrailTripwireTriggered, and catching that exception around Runner.run is the place to generate a compliance record with action=REJECTED and the reason from the guardrail result. Documenting that the system's safety controls fired supports EU AI Act Article 12 record-keeping and helps evidence the human-oversight measures of Article 14. Use ghost.capture() with decision_type="guardrail_rejection" and include the guardrail name and trigger condition in metadata. Guardrail records share the run's decision_id, enabling auditors to see the full pipeline including rejections.
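A sketch of a rejecting guardrail plus its compliance capture. GuardrailFunctionOutput here is a local stand-in (the real SDK class has the same tripwire_triggered field); the PII rule, the guardrail name, and the ghost stub are hypothetical, chosen only to make the rejection path concrete and runnable.

```python
import re
from dataclasses import dataclass

# Stand-in for agents.GuardrailFunctionOutput; the real class also
# carries a tripwire_triggered flag inspected by the Runner.
@dataclass
class GuardrailFunctionOutput:
    output_info: str
    tripwire_triggered: bool

records = []
def ghost_capture(**record):  # local stub for the Ghost SDK call
    records.append(record)

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def pii_input_guardrail(user_input: str, decision_id: str) -> GuardrailFunctionOutput:
    """Hypothetical input guardrail: reject inputs containing an email address."""
    if EMAIL.search(user_input):
        # Rejection is itself a compliance event, tied to the run's decision_id.
        ghost_capture(decision_id=decision_id,
                      decision_type="guardrail_rejection",
                      action="REJECTED",
                      metadata={"guardrail": "pii_input_guardrail",
                                "trigger": "email_detected"})
        return GuardrailFunctionOutput("PII detected", tripwire_triggered=True)
    return GuardrailFunctionOutput("clean", tripwire_triggered=False)

out = pii_input_guardrail("score applicant jane@example.com", "run-123")
```

In the real SDK the decorated guardrail is attached via the agent's input_guardrails list, and a triggered tripwire surfaces as an exception from Runner.run; the capture shown here gives the auditor a REJECTED record either way.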