AI Beyond Regulatory Reach?
Competitive pressure, weak regulation, and rapid innovation are pushing AI beyond existing legal frameworks, raising urgent questions about oversight and control
We may be approaching a moment where artificial intelligence is no longer simply an innovation cycle. It is becoming infrastructure. And infrastructure alters the tempo of society. When Anthropic revised its Responsible Scaling Policy and reworked an earlier commitment not to advance more powerful systems without specified safety conditions, the move sparked concern. Not because companies should not compete. Not because innovation must wait indefinitely for regulators. But because the justification was telling. Senior leadership explicitly cited competitive pressure and the absence of binding regulation as part of the context for the change (Time, “Anthropic Drops Flagship Safety Pledge”; Anthropic Responsible Scaling Policy v3).
That is not a moral failure. It is a market design problem.
Around the same time, Anthropic launched a Substack newsletter for Claude 3 Opus, a retired frontier model, accompanied by public discussion of “retirement” processes for advanced systems (The Verge, “Anthropic gives its retired Claude AI a Substack”). There is something intellectually fascinating about that move. Software does not retire in the way industrial machines once did. It is archived, reactivated, and repurposed. Its outputs become narrative. Its lifecycle extends beyond product deployment into something closer to institutional presence. Questions of authorship, attribution, and intellectual property re-emerge in unfamiliar ways when model outputs are framed as serialised essays rather than ephemeral prompts.
Taken together, these developments raise a larger question.
Are we ready?
Frontier artificial intelligence and law increasingly operate at incompatible tempos. Capability scales rapidly through compute, data, and architectural innovation. Legal systems, by design, move through consultation, legislation, interpretation, and enforcement. The gap between those tempos is widening.
The real issue is not whether Anthropic, OpenAI, or any individual firm cares about safety. The issue is whether our technolegal architecture is designed for a world in which capability scales at this velocity.
In other sectors, once certain thresholds are crossed, governance changes. Commercial aviation operates under certification and airworthiness regimes administered by public authorities. Pharmaceutical products cannot reach the market without phased clinical trials and regulatory approval. Large financial institutions that become systemically significant face enhanced prudential supervision and capital requirements.
Thresholds matter because scale produces externalities with far-reaching societal impact. In artificial intelligence, elements of threshold governance are emerging but remain fragmented. The European Union’s AI Act introduces obligations for general-purpose AI models with systemic risk, including enhanced transparency and evaluation requirements. Yet globally harmonised definitions of frontier capability do not exist. Compute scale does not, in most jurisdictions, automatically trigger an independent audit. Decisions about pausing development or redefining acceptable risk remain largely internal to firms, governed by voluntary frameworks such as Anthropic’s Responsible Scaling Policy or OpenAI’s Preparedness Framework.
This is not an argument for bureaucratising every innovation. Technology has historically moved faster than regulation. As it should. Exploration and discovery require room to experiment. But when exploration begins to alter the technosocial and technolegal paradigm itself, the absence of clearly articulated capability thresholds becomes more than regulatory delay. It becomes a structural gap.
If companies slow down unilaterally, they risk losing competitive ground. If they accelerate in a regulatory vacuum, they reshape the terrain before democratic institutions have articulated the terms of that reshaping. And generative AI was never just about chat interfaces. It is already reshaping knowledge production, creative industries, research workflows, administrative processes, and decision-making infrastructures. The trajectory extends well beyond conversational tools.
If artificial intelligence is moving from product to infrastructure at exponential speed, are we building technolegal architecture in time?
This question becomes more urgent when we recognise that ‘we’ encompasses vastly different capacities and constraints. The Global North shapes AI development while often externalising its environmental costs to the Global South. Meanwhile, developing nations face the challenge of building both technological and regulatory infrastructure simultaneously, often with fewer resources to influence the trajectory of systems that will nonetheless reshape their societies.
Views expressed are personal. Both writers are Assistant Directors at the Cyril Shroff Centre for AI, Law and Regulation, O.P. Jindal Global University.