Something is changing in boardrooms from Singapore to New York, and you can hear it in the way executives discuss artificial intelligence. Not long ago, AI was the aspirational slide at the back of the deck: the futuristic gesture, the optimistic flourish.
These days the same slide carries a different weight. When it appears, the room goes quieter, the questions get more pointed, and no one is quite sure who should answer.
| Category | Details |
|---|---|
| Topic | Corporate AI Governance & Boardroom Accountability |
| Primary Concern | AI misinformation, hallucinations, deepfake fraud, biased automation |
| Key Stakeholders | Boards of Directors, CEOs, CIOs, Chief Risk Officers, Investors |
| Boards With Formal AI Oversight (S&P 100) | Approximately 54% disclosed |
| Boards With Both Oversight & Formal AI Policy | Roughly 28% |
| Investor Expectation | ~two-thirds want mandatory AI governance disclosure |
| Top Boardroom Worry (2025–26) | Technology and AI risk, ahead of macroeconomic concerns |
| Forecast Horizon | AI agents to handle most large-scale business transactions within 5 years (Davenport & Bean) |
| Critical Risks | Hallucinated outputs, prompt injection, copyright exposure, reputational damage |
| Action Priority | Build governance frameworks, foster internal AI testing capability, codify oversight in committee charters |
That change took time. It crept in through minor embarrassments: a chatbot inventing a refund policy, a hallucinated legal citation surfacing in a public filing, a deepfaked CFO instructing a junior employee to wire money. Each incident, on its own, looked like a footnote. Together, they changed the direction of the discussion. Investors who used to be courteous now put questions about AI misinformation to the board in writing, instead of letting the data team quietly fix things over the weekend.
The numbers tell part of the story. Surveys of legal and compliance leaders place technology risk at the top of the board’s worry list, ahead of macroeconomic concerns, yet fewer than one-third of companies have anything approaching a thorough AI governance plan. Roughly two-thirds of American investors believe every business should disclose how its board manages AI ethics. Of the S&P 100 proxy statements Glass Lewis examined, only 54% disclosed any board-level AI oversight at all, and only 28% disclosed both oversight and a formal policy. That gap is plainly visible, and when something goes wrong, it is the kind of gap that ends careers.

The pattern is hard not to recognize. Ten years ago, cybersecurity was considered an IT issue, until breaches began to produce CEO resignations and shareholder lawsuits. AI appears to be following the same trajectory, only faster and with more public theater. Writing for the MIT Sloan Management Review, Thomas Davenport and Randy Bean call 2026 a “level-set year” for artificial intelligence. The excitement is waning. The bills are coming in.
Decision makers are starting to absorb the action items, uncomfortable as they are. Agentic AI, the much-promised class of systems that can reason and act on their own, isn’t quite ready for serious deployment: prompt injection attacks and hallucinations keep humans firmly in the loop. Businesses that staked their productivity claims on complete automation are quietly walking those claims back and hoping no one notices. Davenport and Bean’s belief that agents will handle the majority of large-scale business transactions within five years seems, depending on the day, both optimistic and inevitable.
Practically speaking, what boards can accomplish is less glamorous than the technology itself. Codify oversight of AI in committee charters. Demand independent benchmarking instead of accepting the narrative management has crafted. Build internal capacity to audit and test models before clients use them. Measure, disclose, and own misinformation risk the way auditors handle financial restatement risk. In an odd and somewhat poetic role reversal, some directors are already using AI tools to pressure-test the very data their management teams submit.
As this develops, the business world seems to be undergoing a quiet reckoning. It’s not a crash and it’s not a scandal; it’s a gradual reevaluation of who is accountable when the machines speak for the company. Investors will keep asking. Regulators are getting warmer. The boards that come out of this well won’t be the ones with the most ostentatious AI strategy. They will be the ones who can answer clearly when someone eventually asks where they were.
