Commentary

The coming AI backlash will shape future regulation

May 27, 2025


  • Tech companies and executives have gained significant influence within the federal government, including expanded access to sensitive data and a rollback of previous AI regulatory measures.
  • Despite claims from some industry leaders that AI oversight is unnecessary, widespread public concerns and documented problems—including privacy risks, algorithmic biases, and security breaches—underscore the need for responsible regulation.
  • Historical patterns show that as emerging technologies raise public alarm, demands for government intervention grow, making transparency and accountability essential for maintaining trust and the sector’s long-term success.
OpenAI CEO Sam Altman testifies before a Senate Commerce, Science, and Transportation Committee hearing titled “Winning the AI Race: Strengthening U.S. Capabilities in Computing and Innovation,” on Capitol Hill in Washington, D.C., U.S., on May 8, 2025. REUTERS/Jonathan Ernst

It is a glorious time for the tech sector. Many of its top chief executive officers accompanied President Donald Trump on his recent trip to the Middle East and garnered multibillion-dollar contracts for artificial intelligence (AI), data centers, and other emerging technologies. Some of the sector’s leading lights have been given top administration positions overseeing areas like AI, cryptocurrency, and space exploration.

And these leaders don’t just have greater access—they have fewer obstacles. Congress is considering legislation that would preempt state regulations on AI and bar their enforcement for the next 10 years. Elon Musk’s Department of Government Efficiency (DOGE) team has gained extensive access to sensitive U.S. government data and is deploying AI and large language models (LLMs) across federal agencies. Trump has repealed former President Joe Biden’s AI executive order that imposed guardrails on AI applications within the federal government.

But there is a coming AI backlash that could reverse many of these gains. In his 2023 Brookings Press book, “Techlash: Who Makes the Rules in the Digital Gilded Age?,” Tom Wheeler predicted that the combination of extraordinary tech innovation and highly concentrated wealth reminiscent of the early 20th century would usher in public demands for oversight and regulation to protect people from consumer harms, anti-competitive practices, and predatory behavior.

Since that book came out two years ago, the need for human guardrails has become even more apparent, notwithstanding claims by many tech titans that their sector requires little oversight. AI can deliver genuine benefits, such as enhanced efficiency and productivity, while still necessitating responsible regulation.

Public opinion polls show that large numbers of Americans harbor major doubts about AI. In a 2025 Heartland survey, 72% of U.S. adults said they had concerns about AI. Among other issues, they worry about privacy intrusions, cybersecurity risks, a lack of transparency, and racial and gender biases in AI algorithms.

Doubts about emerging technologies span both Republican and Democratic voters, with the electorate showing less polarization on tech issues than is common in other policy areas. Rather than falling along partisan lines, concerns about innovation are broadly shared, as both liberals and conservatives are increasingly hesitant to embrace certain new technologies until there is a demonstrated track record that they are safe, fair, and secure.

It is not just the public expressing doubts. Leading scientists, including those employed or financed by tech companies, have documented many problematic features of AI. They have written white papers and policy reports detailing their concerns and expressing the need for regulation. 

Major tech leaders have testified before Congress about how AI problems require oversight and regulation. OpenAI’s Sam Altman, for example, called for AI regulation in a 2023 congressional hearing. Sitting before Congress again in May 2025, Altman reversed his stance, saying everything was fine in his sector and there was no need for regulation.

There are many known and well-documented problems. AI hallucinations, in which algorithms generate imaginary facts, persist. Rather than receding as AI advances, this problem is actually spreading: as generative AI produces more false or misleading content, that material is increasingly being incorporated into LLMs and presented to users as factual information.

Data abuses are proliferating. DOGE staffers have gained extraordinary access to government data with little oversight, raising widespread concerns about privacy intrusions carried out under the banner of fraud investigations. As people witnessed these abuses in practice, backlash over AI’s unbridled application quickly emerged. Despite their short-lived presence, DOGE and Musk himself are now deeply unpopular, and Musk’s “chainsaw” approach to federal reform has fueled Democratic gains in state and local elections. His AI-generated budget cuts are expected to feature prominently in Democratic campaign ads in the year ahead.

Cyberattacks are occurring regularly, including at major firms that tout top-tier security protections. Leading tech companies—as well as federal, state, and local agencies—have been breached, including in Rhode Island, where a major hack may have exposed the confidential health data of hundreds of thousands of residents.

Facial recognition software is widely known to have systematic racial biases, with significantly lower accuracy rates for identifying non-white faces compared to white individuals. These issues extend beyond facial recognition, as AI systems trained on incomplete or unrepresentative data often produce inaccurate outcomes or identifications.

None of these issues will disappear simply because the current political leadership or Big Tech CEOs say they are not a concern. Ignoring widespread public doubts and documented abuses does not make the problems go away. Magic may be entertaining on stage, but disappearing acts don’t work in real life. 

Where is the public trust? 

The long history of technological innovation shows that adoption depends in part on public trust and confidence. When people do not trust emerging technologies, it slows adoption and fuels calls for greater oversight and regulation. If autonomous vehicles harm people or facial recognition is biased, there will be widespread demands for public accountability and oversight. Having top leaders proclaim that algorithms are fair and don’t discriminate doesn’t eliminate the considerable evidence to the contrary.

As AI abuses are documented and publicized, the political tides will turn and people will demand action. That is a common reform cycle with emerging technologies. Advocates tout wide benefits from new tools such as cars, television, and the internet, but as problems accumulate, the public calls for intervention.

Policymakers typically respond by introducing new oversight and regulations based on documented evidence. Political figures who dismiss technology concerns risk losing credibility, and public backlash tends to prompt corrective measures. History shows this pattern has repeated many times. Industry leaders should be transparent about AI’s capabilities and challenges to maintain the long-term viability of the sector and their own credibility.
