Editorial Policy — how Tenet AI publishes regulatory content
How Tenet AI researches, drafts, reviews, and updates content covering AI compliance regulations (EU AI Act, HIPAA, SOC 2, GDPR, ISO 42001). Author identification, expert review, primary-source citation standards, AI-assistance disclosure, and correction policy. This page documents the human editorial process behind every article on tenetai.dev.
Authorship and expertise
Every article is attributed to a named author with verifiable professional context. Author identity is encoded in structured data (schema.org Person) and links to the author's public profiles on LinkedIn, GitHub, and Product Hunt. The lead author for regulatory and engineering content is Igor Fedorov, Founder and CEO of Tenet AI, who has shipped production AI systems in regulated industries (fintech, healthtech, govtech) and works directly with compliance leaders on EU AI Act, HIPAA, SOC 2, and ISO 42001 implementations. Where domain expertise beyond the lead author's primary specialism is required (clinical workflow, legal interpretation, industry practice), articles are co-authored or reviewed by an internal subject-matter expert or a named external advisor; the reviewer is disclosed in the article metadata and in the JSON-LD reviewedBy property.
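The attribution model above can be sketched as article-level JSON-LD. This is a minimal illustration, not the site's actual markup: every value below (the headline, the profile URLs, the reviewer name) is a placeholder.

```python
import json

# Illustrative sketch of the article JSON-LD described above.
# All concrete values are placeholders, not real site data.
article_jsonld = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example regulatory article",
    "author": {
        "@type": "Person",
        "name": "Igor Fedorov",
        "jobTitle": "Founder and CEO, Tenet AI",
        # sameAs ties the Person to verifiable public profiles
        "sameAs": [
            "https://www.linkedin.com/in/example",
            "https://github.com/example",
        ],
    },
    # reviewedBy discloses the subject-matter expert who reviewed the piece
    "reviewedBy": {
        "@type": "Person",
        "name": "Example External Advisor",
    },
}

print(json.dumps(article_jsonld, indent=2))
```

Search engines read this block from a `<script type="application/ld+json">` tag in the page head; the reviewer disclosed in the visible article metadata and the reviewedBy value are kept in sync.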
Use of AI assistance and human editorial control
Tenet AI uses large language models (currently GPT-4o and Claude Opus 4.x) to accelerate research, draft section structure, and surface candidate citations. AI is a drafting tool, not the editorial voice. Every published article is read end-to-end by a human editor before publication. The editor verifies every regulatory citation against the primary source (eur-lex.europa.eu, hhs.gov, NIST, FDA, FINRA, SEC, ICO, CNIL), replaces any unverifiable claim with a sourced statement or removes it, removes AI hallucinations and stale statistics, ensures every numerical claim has a dated source citation, and rewrites any section that reads as templated rather than substantive. This follows Google's guidance on AI-generated content: AI is acceptable as a production aid when supervised by a human expert who takes editorial responsibility for the output. Content that is mass-produced without such supervision falls under Google's scaled content abuse policy and is not published on this site.
Source standards and citation tiers
Every regulatory claim is linked to a primary source. We distinguish between three citation tiers.

Tier 1 — primary regulatory text: eur-lex.europa.eu (EU AI Act, GDPR), hhs.gov (HIPAA), aicpa.org (SOC 2 TSC), iso.org (ISO 27001, ISO 42001), congress.gov and ecfr.gov (US federal statutes and regulations), fda.gov, finra.org, sec.gov, federalreserve.gov, ftc.gov, cfpb.gov, ico.org.uk, cnil.fr.

Tier 2 — supervisory guidance: EDPB guidelines, NIST AI RMF 1.0, official supervisory authority interpretations, EU AI Office guidance, FCA and PRA consultation papers, Federal Reserve SR letters.

Tier 3 — secondary analysis: Big Four advisory papers, named law-firm client alerts, and academic publications, cited only when Tier 1 or Tier 2 is unavailable and the source is named in-text.

Statistical claims (penalty amounts, market size, adoption rates, breach frequency) are linked to a dated primary source. Extrapolated or modelled figures are disclosed explicitly along with their methodology.
Update, correction, and retraction policy
Regulatory text changes. Penalty amounts change. Supervisory authorities revise their interpretations. We treat content maintenance as part of the publishing commitment, not optional polish.

Quarterly review: every published article is re-read at least once per quarter; dateModified in schema.org metadata reflects the most recent meaningful edit, not cosmetic changes.

Triggered review: when a regulation enters force, is amended, or a supervisory authority issues new guidance, every affected article is flagged for review within seven days.

Correction notices: material factual corrections (wrong penalty amount, wrong effective date, wrong article number) are noted at the bottom of the affected article with the date of correction.

Retraction: articles that are no longer accurate and cannot be repaired are retracted with a notice explaining why and a redirect to the closest current article.
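The dateModified rule above can be sketched as a small update helper. This is an illustrative sketch of the policy, not the site's actual tooling: the metadata shape and the `meaningful` flag are assumptions for the example.

```python
from datetime import date

def touch_date_modified(metadata: dict, meaningful: bool, today: date) -> dict:
    """Bump schema.org dateModified only for meaningful edits.

    Cosmetic changes (typo fixes, whitespace, styling) leave the
    timestamp alone, so dateModified remains a trust signal.
    Illustrative sketch; field names mirror schema.org Article.
    """
    if meaningful:
        return {**metadata, "dateModified": today.isoformat()}
    return metadata

meta = {"datePublished": "2025-01-10", "dateModified": "2025-01-10"}

# Cosmetic edit: timestamp unchanged.
meta = touch_date_modified(meta, meaningful=False, today=date(2025, 4, 2))

# Material edit (e.g. a corrected penalty amount): timestamp bumped.
meta = touch_date_modified(meta, meaningful=True, today=date(2025, 4, 2))
```

The same distinction drives the correction-notice rule: a material edit both bumps dateModified and appends a dated correction note to the article.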
Conflicts of interest and competitive coverage
Tenet AI is a commercial software product. We are not impartial — we believe decision auditability is the right architecture for high-stakes AI, and our content advocates for that position. Where we compare Tenet AI to a competitor product (Langfuse, LangSmith, Arize, IBM watsonx, Datadog, Dagster, Trigger.dev), we follow three rules. First, we never describe competitor products inaccurately; capability statements are derived from competitors' public documentation. Second, we name explicitly where the competitor is the right choice — each comparison page contains a sincere "where the competitor wins" section. Third, we do not pay for, solicit, or accept editorial input from competitors, and we do not run comparative ads attacking competitors by name. Where an article cites a customer or pilot relationship, the relationship is disclosed in-text.
Independence from legal advice and reader feedback
Tenet AI is not a law firm and our content is not legal advice. Articles describe our reading of regulatory text and supervisory guidance as of the date of publication. Decisions with material legal or financial consequence should be confirmed with qualified counsel in the relevant jurisdiction. We name specific external advisors and law firms only where they have authored or reviewed an article. If a reader finds a factual error, an outdated citation, or a regulatory interpretation they believe is incorrect, they may email editorial@tenetai.dev with the article URL and the specific claim. We acknowledge every report and respond with a decision (correct, retract, disagree with reasoning) within five working days.