India DPDPA AI Compliance: Automated Decisions, Significant Data Fiduciaries, and the DPBI Framework
India's Digital Personal Data Protection Act (DPDPA), 2023, received Presidential assent on August 11, 2023 and applies to any organization processing digital personal data of India's 1.4 billion residents — including overseas AI companies offering services to Indian users. DPDPA is notable for two AI-specific features: a strict consent framework with no general "legitimate interests" exception for commercial AI processing (unlike GDPR, which allows legitimate interests for most B2B AI processing), and a "significant data fiduciary" (SDF) designation creating enhanced obligations including India-based DPO, independent audit, DPIA, and algorithmic accountability requirements. DPDPA does not include an explicit GDPR Article 22-style prohibition on automated decisions — but Section 11 (right to information) and Section 10 (SDF algorithmic accountability) create functional AI transparency obligations. The Data Protection Board of India (DPBI) enforces the law with fixed penalties: up to ₹250 crore (~$30M) for major violations. DPDPA Rules (DPDP Rules, 2025) are being finalized and will establish specific operational requirements including breach notification timelines and SDF criteria.
DPDPA Architecture: Data Fiduciaries, Data Processors, and Data Principals
DPDPA uses the same three-party structure as GDPR but with Indian terminology. Data fiduciary (=GDPR controller): any person or entity determining the purpose and means of processing personal data. AI companies and enterprises deploying AI systems that determine how Indian users' data is processed are data fiduciaries — they bear primary DPDPA compliance obligations. Data processor (=GDPR processor): any person or entity processing personal data on behalf of a data fiduciary. Cloud AI API providers (OpenAI, Anthropic, Google Cloud, AWS) processing Indian personal data under customer instructions are data processors. Data principal (=GDPR data subject): the individual to whom the personal data relates. DPDPA applies only to digital personal data — personal data in digital form, including data collected in non-digital form and subsequently digitized; purely paper records remain outside the Act's scope. Geographic scope: DPDPA applies to processing of digital personal data within India, and to processing outside India where it is in connection with offering goods or services to data principals within India. Foreign AI companies with no Indian presence that process personal data of Indian users for Indian-targeted services therefore fall within DPDPA scope.
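The territorial-scope test above can be expressed as a short decision rule. This is an illustrative sketch only — the `ProcessingActivity` fields and the `dpdpa_applies` helper are assumptions for demonstration, not terms from the Act.

```python
from dataclasses import dataclass

@dataclass
class ProcessingActivity:
    data_is_digital: bool            # DPDPA covers digital personal data only
    processed_in_india: bool         # processing takes place within India
    targets_indian_principals: bool  # goods/services offered to data principals in India

def dpdpa_applies(activity: ProcessingActivity) -> bool:
    """Sketch of DPDPA's territorial scope (Section 3), as described above."""
    if not activity.data_is_digital:
        return False  # purely paper records are outside scope (until digitized)
    return activity.processed_in_india or activity.targets_indian_principals

# A foreign AI service with no Indian presence, offering services to Indian users:
foreign_api = ProcessingActivity(data_is_digital=True,
                                 processed_in_india=False,
                                 targets_indian_principals=True)
print(dpdpa_applies(foreign_api))  # True
```

The key design point is the disjunction in the final return: either in-India processing or India-targeted offering alone brings the activity within scope.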
Section 6: Consent — The Strict Standard for AI Processing
Section 6 of DPDPA establishes consent requirements. For AI teams, the critical constraint is the absence of a general "legitimate interests" processing basis. GDPR Article 6(1)(f) permits processing for the controller's legitimate interests except where those interests are overridden by the data subject's fundamental rights — a basis broadly used by commercial AI platforms for behavioral analytics, personalization, and model training without explicit consent. DPDPA has no equivalent. Section 7 instead lists "certain legitimate uses" (the 2023 Act's narrower replacement for the 2022 draft's "deemed consent"), permitting non-consent processing only for specific purposes: data voluntarily provided by the data principal for a specified purpose; performance of state functions and provision of subsidies, benefits, and services; compliance with legal obligations and court orders; responding to medical emergencies to prevent death or injury; epidemic, disaster, and public-order response; and employment purposes. No Section 7 ground covers commercial AI processing of Indian personal data. Commercial AI platforms (behavioral profiling, recommendation engines, model training on user data) must obtain explicit, specific consent for each processing purpose. Section 6(4) requires that withdrawing consent be as easy as giving it. AI systems must implement consent withdrawal propagation — when a user withdraws consent, the withdrawal must stop AI processing, trigger data deletion requests to downstream processors, and prevent future processing without re-consent. Withdrawal does not affect the lawfulness of processing carried out before it.
Section 10: Significant Data Fiduciary — Enhanced AI Obligations
DPDPA Section 10 allows the Central Government to notify entities as "significant data fiduciaries" based on: volume of personal data processed; sensitivity; risk of harm to data principals; potential impact on sovereignty, integrity, national security, or public order; risk to electoral democracy; and scale of potential impact on rights. SDF designation triggers enhanced obligations relevant to AI: (1) Data Protection Officer physically based in India, reporting directly to the board — not a shared/offshore DPO; (2) Independent data auditor appointed annually to audit DPDPA compliance, specifically covering AI systems; (3) Data Protection Impact Assessment (DPIA) for high-risk processing activities including AI systems that affect data principals significantly; (4) Algorithmic accountability — DPDPA Rules are expected to specify transparency, explainability, and auditability requirements for AI systems operated by SDFs that use personal data to make significant decisions. Organizations that have or expect SDF designation should pre-build these compliance programs: audit trails for AI decisions, DPIA frameworks, India-based DPO role, and independent audit processes. The Rules will set deadlines after designation — building infrastructure pre-designation avoids compliance crunches.
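An audit trail for AI decisions is one piece of SDF infrastructure that can be pre-built. The record shape below is a hypothetical sketch — the final DPDP Rules will define actual requirements — showing one way to log each automated decision with a content hash for later integrity checks by an independent auditor:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_decision(model_version: str, principal_id: str,
                    input_summary: dict, decision: str, explanation: str) -> dict:
    """Build an auditable record for one automated decision (illustrative fields)."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "principal_id": principal_id,
        "input_summary": input_summary,  # minimized features, not raw personal data
        "decision": decision,
        "explanation": explanation,      # supports Section 11 information rights
    }
    # Hash the canonical JSON so an auditor can detect after-the-fact edits.
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["record_hash"] = hashlib.sha256(payload).hexdigest()
    return entry

record = log_ai_decision("credit-model-v3", "dp-001",
                         {"feature_count": 12}, "approved", "score above threshold")
print(sorted(record))
```

Storing the minimized `input_summary` rather than raw inputs keeps the audit log itself from becoming a secondary store of personal data.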
Section 9: Children's Data — Absolute AI Prohibitions
DPDPA Section 9 creates one of the strongest children's data protection regimes globally for AI. For data principals under 18, data fiduciaries must: obtain verifiable parental consent before processing; and comply with an absolute prohibition: data fiduciaries must not undertake processing of personal data of a child that is likely to cause any detrimental effect on the well-being of the child. Section 9 additionally creates categorical prohibitions: (a) tracking or behavioral monitoring of children; (b) targeted advertising directed at children. These are absolute prohibitions — there is no consent exemption. For AI systems: any AI recommendation system, behavioral analytics engine, or targeted advertising model must implement robust age verification and, for identified or likely-under-18 users, completely disable behavioral tracking and targeted advertising features. A nominal age gate in terms of service does not satisfy Section 9 — DPDPA Rules are expected to require verifiable age verification measures. AI systems found to have conducted behavioral monitoring of children face the maximum penalty tier (₹200 crore, ~$24M USD).
DPDPA Penalties, DPBI Enforcement, and the Rules Timeline
The Data Protection Board of India (DPBI) is the enforcement authority — a government-appointed board with quasi-judicial powers. DPBI can impose financial penalties after an inquiry process, with amounts fixed rather than revenue-based percentages: failure to implement reasonable security safeguards (Section 8) — up to ₹250 crore (~$30M); failure to notify the DPBI and affected data principals of a personal data breach — up to ₹200 crore; violation of children's data provisions (Section 9) — up to ₹200 crore; breach of significant data fiduciary obligations (Section 10) — up to ₹150 crore; breach of a data principal's duties (Section 15) — up to ₹10,000; any other violation of the Act or Rules — up to ₹50 crore. Caps apply per breach, so a single inquiry covering multiple violations can cumulate across these tiers. DPDPA Rules timeline: the draft DPDP Rules were published for public consultation in January 2025. Final Rules are expected in 2025, with compliance timelines set out in the Rules themselves. AI teams should use the Rules-finalization window to build compliance infrastructure — once Rules are final, enforcement timelines begin. Key Rules awaited: SDF designation criteria, breach notification format and timeline, consent mechanism standards, and algorithmic accountability requirements for significant data fiduciaries.
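Because the penalty caps are fixed amounts per breach tier, worst-case exposure across findings can be estimated with a simple lookup. A sketch over three of the tiers discussed above (amounts in ₹ crore; 1 crore = 10 million rupees; the dict keys are illustrative labels, not statutory terms):

```python
# Fixed penalty caps from the DPDPA Schedule, ₹ crore
PENALTY_CAPS_CRORE = {
    "security_safeguards": 250,   # Section 8 failures
    "breach_notification": 200,   # failure to notify DPBI / data principals
    "children_data": 200,         # Section 9 violations
}

def max_exposure_crore(violations: list[str]) -> int:
    """Sum the applicable caps to estimate worst-case exposure across findings."""
    return sum(PENALTY_CAPS_CRORE[v] for v in violations)

print(max_exposure_crore(["security_safeguards", "children_data"]))  # 450
```

This is a planning aid only: actual penalties are set by the DPBI after inquiry, considering the nature, gravity, and duration of the breach.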