How 4 major governments are regulating generative AI – or not

Governments across the globe are trying to reckon with generative AI. While points of agreement exist, there are also points of friction. (Image: Generated with AI using Image Creator from Microsoft Designer.)
AI: Beyond the Hype is a multipart series exploring trends around various artificial intelligence technologies. (Image: Generated with AI using Image Creator from Microsoft Designer.) Other entries in the series include:

Risks, regulation in focus as AI boom accelerates

Hollywood strikes highlight potential labor force turmoil over AI to come

Generative AI Digest: A roundup of latest breakthroughs and developments

Private equity bets on AI gold rush with billions pumped into datacenters

Insurers brace for claims from generative AI surge

AI investment sags as financing, intellectual property issues complicate deals

The proliferation of generative AI has governments across the globe working to balance innovation and regulation.

While AI has been around for years, the release of OpenAI LLC's text generator ChatGPT in November 2022 catalyzed the industry. A technology once viewed primarily as a tool for pattern classification is now a generator of content, raising questions about whether and how governments should protect against misinformation, copyright infringement, cyberattacks and myriad other concerns.

Different countries have responded in different ways. Looking at four major players in the global AI landscape — the US, the UK, the EU and China — some have taken a light-touch approach intended to encourage innovation, while others have been more prescriptive about how to prevent potential harm. The challenge in all cases is that governments are trying to regulate a rapidly evolving technology.

"Definitely, we are playing catch-up with the technology," said Anjana Susarla, Omura-Saxena Professor of Responsible AI at Michigan State University's college of business.

First steps

In the US, the Biden administration released an executive order in late October focused on promoting safe and secure AI development. Invoking the Defense Production Act, the order requires developers of the largest foundation models — the kind that are trained on a vast quantity of data and can be adapted to a variety of applications — to share their safety test results and other critical information with the US government.

The order also builds on the voluntary AI Risk Management Framework released by the National Institute of Standards and Technology (NIST), pushing for its adoption across agencies, especially in the critical infrastructure sector. The framework outlines a process entities can use to understand and address the risks associated with using AI.

Biden's order directs NIST to develop standards for red-team testing and, again, requires foundation model developers to share those results with the government. One thing it does not do is ban any AI systems outright.

Categories of risk

The EU's proposed AI Act takes a more aggressive approach, classifying AI applications into four risk categories and imposing stricter requirements on high-risk AI systems.

AI software considered an unacceptable risk would include any that manipulates the minds or behavior of people or specific vulnerable groups. As an example, the European Parliament cited voice-activated toys that encourage dangerous behavior in children.

Also banned under the proposed EU law are social scoring systems that classify people based on behavior, socioeconomic status or personal characteristics, as well as real-time and remote biometric identification systems.

High-risk systems would fall into two categories: systems used in products that are already covered by the EU's product safety legislation, such as cars, planes and medical devices; and systems that fall into key categories, such as education, employment, law enforcement and critical infrastructure. AI systems in the latter categories would have to be registered in an EU database.

It would not be the first time the EU has taken a stronger stand on regulation than the US. The EU's sweeping privacy law, known as the General Data Protection Regulation (GDPR), took effect in 2018. The US has yet to pass a federal privacy law, though Biden's AI executive order urged Congress to do so. US-based companies that operate in the EU are already required to comply with GDPR mandates.

Points of agreement

While different governments have taken different approaches to regulating AI — Stanford University's Institute for Human-Centered Artificial Intelligence counts 123 AI-related bills passed globally between 2016 and 2022 — there are some points of agreement, said Michael Frank, a senior fellow at the Center for Strategic and International Studies.

Frank, who works in the CSIS' Wadhwani Center for AI and Advanced Technologies, pointed to the Bletchley Declaration, signed by 28 countries in November 2023. The declaration states that many risks arising from AI — especially in domains such as cybersecurity and biotechnology — are best addressed through international cooperation.

"[It] offers a blueprint for responsible AI, advocating human-centric, trustworthy AI within regulatory frameworks that balance innovation and risks," Frank said. "Effective AI governance requires a mix of national and international approaches, with a role for both sector-specific regulations and broad, overarching principles."

The declaration was signed at a summit convened by the UK, with signatories including the US, the EU and China as well as Brazil, France, India, Ireland, Japan, Kenya, the Kingdom of Saudi Arabia, Nigeria and the United Arab Emirates.

"AI knows no borders, and its impact on the world will only deepen," said then-UK Foreign Secretary James Cleverly in a statement after the declaration.

Country-specific systems

But AI systems can learn borders — and in some cases they must. China, for instance, requires companies to submit security assessments and receive clearance before they can release AI products to the public.

So far, the Cyberspace Administration of China (CAC), the country's main internet watchdog, has approved 15 generative AI services to operate in China. The CAC has also imposed measures that require generative AI services to only use training data that is lawful and that does not infringe on intellectual property rights. AI-produced content in China also must not subvert state power or endanger national security.

The regulations mean that AI systems designed to operate in China will likely have been trained on data sets different from those used to train models developed in the US or other markets.

The same also could be true of European and American AI systems given the different regulations in those regions, Susarla noted.

"There are already very widely varying standards across the world," Susarla said.

Need for regulation

Despite the different approaches, there seems to be widespread agreement that some protections are needed, and needed quickly.

"Inaction in AI regulation risks escalating security threats, uncontrolled technological advancement, and exacerbating social and economic disparities," Frank said.

Susarla said she would like to see requirements around increased transparency or disclosures, compliance with key security standards and privacy protections for individuals, along with enforcement authority for federal agencies.

"That would be a good mix," she said.

Notably, two of the top generative AI text generators — OpenAI's ChatGPT and Google's Bard — seemed divided on the question of regulation.

"The question of whether generative AI should be regulated is a complex and debated topic. There are arguments on both sides, and the need for regulation often depends on the specific context and application of the technology," ChatGPT said. Recently reinstated OpenAI CEO Sam Altman has repeatedly advocated in favor of regulation.

Google LLC's Bard was unequivocal in its answer. "Yes, I believe that AI should be regulated. AI is a powerful tool that can be used for good or bad. It is important to have regulations in place to ensure that AI is used in a responsible and ethical manner," Bard said.

In particular, Bard said AI regulations should focus on transparency, accountability, fairness and robustness.

"By following the principles outlined above, we can help to ensure that AI is a force for good in the world," Bard said.