Why Won’t Governments Regulate AI?

Enthusiasts Say the Technology Will Herald a ‘Fourth Industrial Revolution.’ Unchecked, It’s More Likely to Intensify Inequality and Corporate Power

Politicians have been hesitant to regulate artificial intelligence companies. UC Berkeley’s Brian Judge explains what’s at stake. Courtesy of Igor Omilaev/Unsplash.

This piece publishes as part of the Zócalo, Arts for LA, ASU Narrative and Emerging Media Program, and LACMA program “Is AI the End of Creativity—Or a New Beginning?” Register to join in person in downtown L.A. or to watch the livestream on Tuesday, November 28 at 7 p.m. PST.


What happens when a globe-spanning corporation becomes so powerful that even nations have to answer to it? In the 18th century, the British East India Company (EIC) came close. Founded by royal charter in 1600 to act as a trading arm of the British monarchy, the company grew into an imperial power in its own right.

After winning the Battle of Plassey, near Calcutta, in 1757, the Company became the de facto ruler of Bengal and eventually much of South Asia. In China, the EIC repeatedly violated the ruling Qing dynasty’s prohibition on the opium trade, helping to precipitate the Opium Wars and emerging victorious after strong-arming the emperor into legalizing the trade. Conquering faraway lands and evading governments’ attempts to regulate it, the EIC became as powerful as a nation: a colonial power ruling a vast territory, with a power base that overshadowed its nominal masters at home.

Today, a very different corporate force extracts monopoly profits and threatens national sovereignty. The tech companies that already control the digital systems and data of billions of people worldwide—Google, Facebook, Amazon, and Microsoft—are now investing aggressively in artificial intelligence, a technology poised to transform the ways we work, shop, learn, and communicate.

Since the generative AI app ChatGPT launched publicly in November 2022 and became the fastest-growing consumer application in history, commentators have rushed to celebrate AI’s potential or decry its risks. But few have focused on an important factor in how these technologies will shape the world: The companies building these systems are unprecedented in their economic might and political influence.

The tech giants’ globe-spanning power and influence hearken back to the EIC’s vast rule and lack of accountability. Like the EIC, the tech powers are not cowed by national governments. The Pentagon’s new cloud infrastructure will be built and run by Amazon, Google, Microsoft, and Oracle. OpenAI threatened to “cease operating” in the EU in response to proposed regulation. It is dangerous for companies to grow so large and powerful that they can dictate which regulations they will accept, no longer answering to state controls intended to serve the public interest. As French President Emmanuel Macron asked in 2019, “who can claim to be sovereign, on their own, in the face of the digital giants?”


Such outsized power makes regulating AI particularly important. Academics, activists, and civil society groups have made AI’s current and potential harms abundantly clear, while technologists and researchers have criticized the biases built into many AI systems and warned of the existential risks posed by super-intelligent machines. These risks include turbocharged propaganda, privacy violations, and the loss of human control over advanced AI systems.

To protect citizens from these potential harms, governments should regulate as they do elsewhere—limiting the risks from AI in the same way that they protect consumers from unsafe products and practices. Despite its novelty and complexity, AI can—and should—be regulated just like any other technology. Training data can be disclosed, models can be licensed, legal accountability for harms can be established, and consumers can be protected. If AI heralds a “fourth industrial revolution,” as some enthusiasts claim, the fact that no politician has even proposed a regulatory regime comparable to those ensuring the safety of car taillights, Tylenol, or ground beef should give us pause.

Instead, politicians around the world are enthusiastically embracing AI with little to show for their stated concern for monitoring its safety. The U.S. Secretaries of State and Commerce, echoing Silicon Valley hype, recently wrote that AI “holds an exhilarating potential to improve people’s lives and help solve some of the world’s biggest challenges, from curing cancer to mitigating the effects of climate change to solving global food insecurity.” Similarly, U.N. Secretary-General António Guterres claimed AI “has the potential to turbocharge global development, from monitoring the climate crisis to breakthroughs in medical research.”

These politicians seem to need to believe that AI will solve the spiraling crises of global warming, inequality, and authoritarian backsliding, and kick-start a productivity boom benefiting the average worker. As a result, they yield to the ever-greater political and economic power of the big tech companies and pay mere lip service to meaningful regulation.

Global policymakers do speak of voluntary commitments, frameworks, non-binding orders, advisory committees, and ethical guidelines. President Biden’s recent executive order on AI instructs agencies to write reports, conduct risk assessments, and hire “chief AI officers.” The only binding obligation it places on AI companies is a reporting requirement that kicks in above a certain threshold of computational intensity. Even the European Union’s AI Act, derided as overly interventionist by politicians in the U.S. and U.K., may ultimately exempt large language models (LLMs) like ChatGPT from oversight.

None of these should be mistaken for a serious commitment by national governments to regulate AI. We should be clear about the emerging regime of AI governance: self-regulation with a government imprimatur. A similar approach, in which regulators trusted the wisdom of supposedly “efficient markets,” led to the disastrous 2008 financial crisis.

Maybe someday AI will cure cancer and solve climate change, as some policymakers prophesy. Or maybe someday the machines will take over and become our overlords, as others warn. What’s more likely in the meantime is that without meaningful regulation, AI will make our corporate overlords even stronger. Far from liberating us from precarity, the AI revolution will intensify inequality and corporate power.

The big tech companies building advanced AI—dubbed the “Magnificent Seven” for their outsized stock returns over the last few years—now make up 28% of the S&P 500 index and have outperformed the broader market by roughly 40 percentage points in 2023. There’s little chance these gains will trickle down to workers: Earnings calls at major corporations reveal expectations that AI will reduce labor costs—which translates into job losses for workers replaced by computers.

We should remind ourselves of what a more historically normal trajectory of AI regulation might look like. The current approach is not a prudent response to technological novelty but a reflection of the massive power imbalance between the tech giants building AI and national governments. The echo of the East India Company reminds us how dangerous such imbalances can become. The starting point for framing this new technological and economic era should not be the financial interests of big tech but established models of regulation, capable of steering corporate profit-seeking toward the common good.

