Biden administration AI policy efforts to be complex balancing act in 2023

The White House, Congress and federal agencies will have to carefully consider the costs of where and how to regulate artificial intelligence technologies in the coming year, AI policy experts say.

While AI systems and related processes like machine learning and algorithmic decision-making techniques provide more efficient ways to analyze large amounts of data, those automated methods also sometimes lead to discrimination and civil liberties infringements. For instance, an algorithm designed to speed up hiring at a company might discriminate against women based on historical hiring data where mostly men were onboarded. In another instance, a facial recognition system trained on people with lighter skin might have challenges detecting the faces of people with darker skin tones.
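The hiring example above can be sketched in a few lines. This is a hypothetical illustration, not any real system: a naive frequency-based screener trained on skewed historical outcomes ends up scoring candidates by group membership, replicating the past bias. All names and numbers are invented for illustration.

```python
from collections import defaultdict

# Hypothetical historical hiring outcomes: qualifications are comparable
# across groups, but past decisions skewed heavily toward men.
history = [
    ("M", True), ("M", True), ("M", True), ("M", False),
    ("F", True), ("F", False), ("F", False), ("F", False),
]

def group_hire_rates(records):
    """Return the observed hire rate per group from (group, hired) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

rates = group_hire_rates(history)
# A screener that scores candidates by these historical rates would rank any
# male applicant above an equally qualified female applicant -- the bias is
# inherited directly from the skewed training data.
print(rates)  # {'M': 0.75, 'F': 0.25}
```

The same pattern underlies the facial recognition example: a model trained mostly on lighter-skinned faces has fewer examples from which to learn features of darker-skinned faces, and its error rate rises accordingly.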

Algorithms have become a focal point of proposed social media regulations in Congress. Some bills have addressed how recommended content harms users' mental health or spreads misinformation. Other tech policy matters, like federal privacy legislation and children's online safety, also touch on AI. These continued concerns about harmful algorithms have put lawmakers and regulators in a difficult position as they seek to balance spurring American innovation with keeping citizens safe from potential harms.

"Any policy that fails to take into account the different uses of AI systems will likely be both over- and under-protective, limiting relatively benign uses of the technology while failing to fully address uses that have an outsized impact on people's lives," said Austin Mooney, an associate attorney focusing on privacy, cybersecurity and emerging tech affairs at law firm McDermott Will & Emery LLP in Washington, D.C.


National strategy

The U.S. AI regulation landscape at the start of the new year reveals patchwork research efforts at the federal level. Several governing bodies have been directed to research and report to the White House about AI's impact on various sectors, though there are no lawmaking directives.

For example, the National AI Advisory Committee created in 2020 aims to explore AI research and development, as well as how AI can address new market opportunities. The National Institute of Standards and Technology is researching trustworthy AI technologies and technical standards.

The tension between wanting to encourage innovation while also mulling regulation poses a challenge for impactful federal AI policies, said Merve Hickok, senior research director at the Center for AI and Digital Policy and data ethics lecturer at the University of Michigan. She believes President Joe Biden's administration needs to develop a national AI strategy, which would better guide Congress and agencies in enacting AI laws and rulemaking.

Hickok said the strategy should focus on the creation of AI systems that are easier to understand and interpret.

With a lack of movement at the federal level, state and city government AI laws are more likely to surface in the coming months, said Ryan Hagemann, co-director of the policy lab at International Business Machines Corp. He cited New York City's Local Law 144, which restricts employers' use of automated employment decision tools in hiring by requiring bias audits and candidate notice. The law was set to take effect Jan. 1, but the city has postponed enforcement until April 15.

One potential AI area that could garner bipartisan support in Congress is requiring the creation of impact assessments on certain systems, Hagemann said. Impact assessments serve as risk mitigation tools that analyze an AI system's benefits, limitations and possible unintended effects.
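The components Hagemann describes can be sketched as a simple structured record. This is a hypothetical schema assumed for illustration only; the field names and the completeness rule are not from any proposed legislation or mandated format.

```python
from dataclasses import dataclass, field

@dataclass
class ImpactAssessment:
    """Illustrative record of an AI impact assessment: the system's benefits,
    limitations, and possible unintended effects, per the article's description."""
    system_name: str
    intended_use: str
    benefits: list
    limitations: list
    unintended_effects: list
    mitigations: list = field(default_factory=list)

    def is_complete(self):
        # A minimal (assumed) gate: every identified unintended effect
        # should have at least one corresponding mitigation on record.
        return len(self.mitigations) >= len(self.unintended_effects)

assessment = ImpactAssessment(
    system_name="resume-screener",
    intended_use="rank job applicants",
    benefits=["faster screening of large applicant pools"],
    limitations=["trained only on historical outcomes"],
    unintended_effects=["may replicate past hiring bias"],
)
print(assessment.is_complete())  # False until a mitigation is recorded
```

The value of such a record as a risk-mitigation tool is that it forces the developer to enumerate unintended effects before deployment, rather than after harm occurs.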

A U.S. Senate panel in September 2022 discussed regulating AI and other emerging technology, with proposed solutions including for the federal government to invest in testbeds, or hardware platforms that evaluate the performance, usability and ethics of AI models.

At the time, Sen. John Hickenlooper, D-Colo., who leads the Senate Commerce Committee's Subcommittee on Space and Science, told S&P Global Market Intelligence the U.S. should lead in establishing AI fairness standards. However, "what form that takes, whether it'd be a rulemaking or as legislation," remains unclear, the senator said.

The Biden administration is developing a national AI research entity for companies, universities and other stakeholders known as the National Artificial Intelligence Research Resource, or NAIRR. Lynne Parker, the former director of the White House's National Artificial Intelligence Initiative Office, told Market Intelligence last year that the NAIRR could guide Congress in developing legislation to crack down on harmful or abusive algorithms.

Nonbinding bill of rights

The Biden White House has made several efforts to bring more attention to AI regulations. The administration in September 2022 released guidance that called for more algorithm transparency following a roundtable discussion on contemporary tech policy topics.

The most high-profile effort the administration has made on AI was the October 2022 release of its AI Bill of Rights, which provided guidelines on developing technical standards and policies aimed at reducing unintended harms caused by AI systems. Among the recommendations included in the document were preventing discriminatory algorithms and granting users the ability to opt out of algorithmic targeting.

But experts say the document has weaknesses, mainly that it has no legal teeth and merely serves as a guide for what the administration thinks policymakers should be addressing.

Mooney from McDermott Will & Emery said he would not expect the document to have much of an effect in Congress because it does not contain an accompanying push from the White House's legislative affairs office.

Similarly, IBM's Hagemann believes the document lacks the details that would be needed to actually apply the recommendations it makes.

"It's great that [the White House Office of Science and Technology Policy] is putting out these principles," Hagemann said. "But there are still outstanding questions of implementation ... to advance trust for the AI within the government."

Automated balancing act

Arguably the greatest challenge in regulating AI systems is considering the effects that legislation has on use cases.

An automated modeling technique, for instance, can be used in less controversial settings like recommending a movie or clothing item to a user online. But the same technique may harm individuals in a different context, such as a credit-lending model that recommends smaller loan amounts for applicants of color. While lawmakers may seek to impose requirements on that modeling technique to make it fairer, those requirements may have unintended effects on other use cases.

In the meantime, less-regulated AI usage can help policymakers make more informed decisions about how best to regulate AI in the long term because it allows humans to better recognize algorithms' limitations, said Orly Lobel, a law professor at the University of San Diego. Lobel authored "The Equality Machine," a book that calls for a wide-ranging adoption of AI technologies.

That argument extends to data collection practices, Lobel said. For instance, to prevent algorithms from discriminating against certain populations, it might be necessary to collect further data on individuals so that a model predicts more accurately. The view contradicts current standards set in federal privacy legislation and an ongoing Federal Trade Commission rulemaking that seeks to limit the amount of data that companies collect from users.
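Lobel's point about data collection can be made concrete with a standard fairness metric: without collecting the sensitive attribute, disparate impact cannot even be measured. The sketch below computes a demographic parity gap (the spread in approval rates between groups); the lending numbers are invented for illustration.

```python
def demographic_parity_gap(decisions):
    """Spread between the highest and lowest group approval rates.

    decisions: list of (group, approved) pairs. Computing this at all
    requires that the group label was collected -- Lobel's argument.
    """
    groups = {}
    for group, approved in decisions:
        groups.setdefault(group, []).append(approved)
    rates = {g: sum(v) / len(v) for g, v in groups.items()}
    return max(rates.values()) - min(rates.values())

# Invented lending decisions: group A approved 2 of 3, group B 1 of 3.
lending = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False)]
gap = demographic_parity_gap(lending)
print(round(gap, 2))  # 0.33 -- a disparity invisible if group labels were never collected
```

This is the tension Lobel identifies: auditing a model for discrimination requires exactly the kind of sensitive data that data-minimization rules discourage collecting.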

Ultimately, any technology regulation that lies ahead will require changing or retraining AI systems, Mooney said. But discussing AI regulation may also simultaneously encourage innovation.

"If you deliberate [AI] more, you can actually create more innovative technologies," Hickok from CAIDP said. "It is more in line with American values and American overall strategy in the world to innovate ... deliberately."