Companies Are Racing Into AI. Oversight Isn’t Keeping Up

Inside executive suites, the AI conversation has shifted.

The question is no longer whether to adopt artificial intelligence. It is how aggressively—and how safely—to deploy it across the enterprise. In many industries, the perceived stakes have escalated beyond incremental advantage. Leaders increasingly view AI not as a feature, but as a structural pivot point.

“There is this perception, rightly or wrongly, that making the right bets on AI is existential,” said Keith Enright, partner and co-chair of the Technology Innovation Group at Gibson Dunn and former global chief privacy officer at Google, during a recent installment of Newsweek’s AI Agenda webinar series, hosted by Suraj Srinivasan, professor at Harvard Business School. “Speed is a survival imperative.”

That framing has profound consequences. If velocity becomes synonymous with survival, the systems designed to slow organizations down begin to feel like liabilities.

Yet governance exists for a reason.

“They are created to slow things down so we don’t make big mistakes,” Srinivasan said in the webinar discussion, referring to compliance and oversight structures inside companies. Corporate governance mechanisms are not accidental friction; they are intentional design features meant to introduce deliberation into high-stakes decisions.

Artificial intelligence is now stress-testing that design.

The Compliance Debt Trap

When AI becomes a declared strategic imperative, pressure cascades quickly. Boards demand acceleration. CEOs push enterprise-wide deployment. Product leaders are told to move, and legal and compliance teams are reminded not to obstruct innovation.

That combination produces a predictable temptation.

“If you have a very established governance and compliance organization and policy environment inside of a company,” Enright warned, “the obvious and simplest solution… is create a short circuit.”

Instead of routing AI initiatives through established review channels, organizations carve out alternative paths—executive overrides, expedited sign-offs, streamlined approvals. It feels pragmatic. It preserves speed. It signals commitment to innovation.

It also fragments oversight.

When decisions migrate outside normal governance protocols, leadership loses situational awareness. Risk accumulates in pockets. Trade-offs are made without a coherent view of aggregate exposure.

The result is compliance debt—a buildup of unmanaged risk that does not immediately surface but compounds over time, particularly as artificial intelligence becomes embedded across HR systems, pricing engines, marketing platforms, customer service workflows and internal productivity tools.

In that environment, short-circuiting oversight is not a one-time deviation. It becomes a structural vulnerability.

In many cases, the problem begins long before AI is deployed.

“If you’re deploying new tech tools, and especially now AI, on top of messy, ungoverned knowledge, including conflicting policies and outdated content with no clear ownership, you’re not actually moving fast,” Philip Brittan, CEO of enterprise intelligence platform Bloomfire, told Newsweek in a recent interview. “You’re accelerating toward a problem you won’t see until it gets really expensive to fix.”

The alternative is more difficult but more durable: evolve the compliance apparatus itself. Increase throughput. Improve cross-functional coordination. Recalibrate risk assessment processes for a higher-velocity environment.

“The faster the car, the better the brakes that you need,” Srinivasan said during Newsweek’s webinar.

That metaphor is instructive. High-performance systems do not eliminate braking mechanisms; they strengthen them. Governance that cannot operate at speed becomes irrelevant—and governance that is bypassed entirely becomes dangerous.

“The organizations I’ve seen moving fastest have built governance systems early enough that they became a permission structure, rather than a constraint,” Paul McDonagh-Smith, senior lecturer at the MIT Sloan School of Management, told Newsweek.

“Governance is scaffolding: a structural condition to make ambitious experimentation possible without systemic collapse.”

Ownership Without Clarity

Speed pressures also expose a second structural tension: accountability.

Organizations have responded by minting new titles—chief AI officer, AI governance councils, cross-functional task forces. The instinct is understandable. When a risk spans the enterprise, companies seek clear ownership.

But artificial intelligence complicates traditional organizational structures.

“[AI] cuts across every domain, every business function,” Enright said. “It is a strategic business driver. It is a massive risk driver.”

That duality complicates organizational design. If everyone owns AI risk, no one does. But if a single executive is assigned responsibility without matching authority, the role becomes symbolic.

Leadership crises often stem from “a significant gap between what people are accountable for and what they have the authority and the ability… to execute,” Enright observed.

The challenge grows as AI spreads across more parts of modern organizations.

“AI usage is simply too sprawling to have one person or a small team solely responsible for the entire company,” Matt Kunkel, chief executive officer and co-founder of governance platform LogicGate, told Newsweek.

AI governance magnifies that gap. Product leaders want autonomy, legal teams want guardrails and boards want assurance. Few executives are eager to become the sole “throat to choke,” as Enright put it, for a technology that is simultaneously transformative and unpredictable.

There may not be a single elegant organizational blueprint. What matters is alignment: responsibility matched with power, authority matched with resources and oversight integrated across functions rather than siloed.

Titles alone will not close governance gaps. Structure must follow strategy.

The Limits of Notice and Consent

Beyond internal design lies a deeper structural challenge: Much of modern privacy law rests on notice and consent. AI strains that foundation.

For decades, digital compliance has relied on a straightforward premise. If companies inform users how their data will be used and obtain consent, processing is legitimized. The model is imperfect, but it has served as a legal anchor.

Enright argues that anchor has been loosening for years.

“As our lives become increasingly intermediated by thousands of complex technologies,” he said, “the idea that every one of those interactions… should have a disruptive moment where they’re explaining what’s happening and asking for your consent breaks everything.”

AI, again, intensifies that tension. Inference-based systems do not simply process individual data points; they derive patterns, predictions and conclusions from combinations of inputs. Sensitive information can be inferred from signals that appear benign in isolation. Privacy frameworks built around static categories of personal data struggle to keep pace with systems that generate new knowledge from aggregated behavior.

The solution is unlikely to be longer disclosures or additional checkboxes. Shifting the burden entirely onto individual users is neither practical nor effective.

Instead, governance in the AI era increasingly requires principled judgment inside institutions—deliberate internal processes capable of weighing benefits against harms, opportunity against exposure.

In other words, brakes.

The Real Test

Artificial intelligence presents an unprecedented concentration of opportunity. It also introduces volatility and structural strain inside organizations built for slower cycles of technological change.

The real governance test is not whether companies can slow innovation. It is whether they can sustain it.

Speed without structure creates fragility. Structure without adaptability creates stagnation. The task before leaders is to redesign internal systems so that acceleration and accountability reinforce rather than undermine each other.

The tools are not entirely new. Institutions have navigated disruptive transitions before. Regulatory landscapes have evolved. Governance mechanisms have been recalibrated.

“We know how to build brakes,” Srinivasan said.

The question is whether companies will invest in upgrading them before the curve sharpens.

Businesses are going to move forward one way or the other, as Enright noted. Forward motion may be non-negotiable. But durability is not automatic.

In the AI era, true resilience will belong to organizations that treat governance not as friction to bypass but as infrastructure to modernize—strong enough to handle speed, flexible enough to adapt and credible enough to withstand scrutiny when it inevitably arrives.
