AI Risk Disclosures in the S&P 500: Reputation, Cybersecurity, and Regulation

Summary

This Conference Board/ESGAUGE analysis of S&P 500 Form 10-K filings (2023–2025) shows that AI is quickly becoming a mainstream enterprise risk. Disclosure of at least one material AI risk jumped from 12% in 2023 to 72% in 2025, with the increase concentrated in financials, health care, industrials, IT and consumer discretionary firms.

Across sectors, disclosures cluster around three broad themes: reputational risk, cybersecurity risk and legal/regulatory risk. Companies also call out privacy, intellectual property, supply-chain and operational risks tied to specific AI technologies — generative models, machine learning decisioning, computer vision, autonomy and dependency on third-party infrastructure.

Key Points

  • 72% of S&P 500 firms disclosed at least one material AI risk in 2025, up from 12% in 2023.
  • Top sectors increasing disclosures: financials, health care, industrials, IT and consumer discretionary.
  • Reputational risk is the most cited AI concern (38% of firms), driven by bias, hallucinations, privacy lapses and visible customer-facing failures.
  • Cybersecurity risk (20% of firms) emphasises AI as a force multiplier for attacks, as well as the vulnerabilities arising from vendor and cloud dependencies.
  • Legal and regulatory risk is rising as jurisdictions diverge and new AI-specific rules (e.g. the EU AI Act) create compliance and enforcement uncertainty.
  • Technology-specific exposures include generative AI (misinformation, copyright), ML decisioning (bias, opacity), computer vision/autonomy (safety and operational failure), and AI supply-chain concentration risks.
  • Other emerging concerns: intellectual property disputes, privacy/regulatory penalties, environmental footprint of model training/inference, workforce disruption and the future risk from agentic AI systems.
  • Disclosures are likely to evolve from broad statements to control-specific commitments: watermarking and provenance, bias-testing thresholds, post-deployment monitoring, AI red teaming and independent attestations.

Context and Relevance

This analysis matters for boards, C-suites, investors, risk and compliance teams because it documents how corporate disclosure practices are catching up with rapid AI adoption. The shift from pilot projects to mission‑critical systems means AI failures can quickly have financial, regulatory and reputational consequences. Regulators are already signalling higher expectations for oversight, and fragmented global rules will force firms to track jurisdictional divergence and strengthen cross-functional governance.

For practitioners, the report highlights where to prioritise governance: embed AI within enterprise risk frameworks; strengthen vendor oversight and cyber resilience; adopt measurable controls for bias, provenance and monitoring; and prepare clearer, decision-useful disclosures for stakeholders.

Why should I read this?

Because if you work in governance, legal, security or the C-suite, this is the short, sharp snapshot of where AI risk disclosure is heading — and what boards will be asked about next. It saves you the time of wading through dozens of 10‑Ks by pulling the trends, numbers and near-term actions into one crisp overview. Read it to know what questions to ask your teams tomorrow.

Source

https://corpgov.law.harvard.edu/2025/10/15/ai-risk-disclosures-in-the-sp-500-reputation-cybersecurity-and-regulation/