Research — 17 Apr, 2024
By Zach Ciampa
The recent explosion of AI has sparked ongoing debate over its benefits and risks. In a prior report, we explored how consumers largely recognized practical uses for more evolved AI tools, yet their concerns over its potential for job replacement, fraud and misuse — and even for gaining sentience — remained highly prevalent. The US population now appears to be moving toward a consensus on the need for AI regulation, which would provide guidance in an uncertain time.
New technology has always played a role in human evolution, and fears have always followed. From the harnessing of fire to the introduction of electricity, hesitation has accompanied practically every innovation, and AI is no different. As a cultural phenomenon since at least the 1800s (with the invention of a steam-powered humanlike robot in 1868), AI may face a more pronounced bias. Generative AI developments have fueled practical concerns about how the technology could cause harm in human hands. These concerns fall into one of two buckets: that some individuals or groups will use AI to deliberately cause harm, or that companies racing to adopt AI to remain competitive will not consider its mass-scale implications. Although organizations will need to determine how best to navigate AI's role, federal regulation establishing a checks-and-balances system would help provide a greater sense of security from the potential fallout, not as a means to restrict business but rather to prioritize the well-being of human society.
An overview of consumer use and sentiment
According to 451 Research's 2023 Digital Maturity survey, more than half (53%) of US businesses plan to incorporate more AI into their applications over the next 24 months. Meanwhile, our 2024 Trust & Privacy survey found that more than one-third (35%) of US consumers have already used generative AI in the past year, and an additional 23% plan to.
As with many societal issues, consumers are evenly split over whether AI advancements will be helpful or harmful. However, a majority believe it will require regulation. In our Trust & Privacy survey, about two-thirds (65%) would be somewhat likely to support federal regulation of AI in the US. This is on par with attitudes around regulating data privacy, which 70% of consumers would be likely to support. Respondents to our Trust & Privacy survey who have either used AI or plan to use it see the most value in the following areas: general information search (45%), writing assistance (33%) and customer support (29%). However, that does not mean they are unaware of the potential downsides.
Consumers' top concerns largely reflect risks already prevalent in a ubiquitously connected world: scams/fraud (54%), risks to their data privacy (47%), misuse with ill intent (46%) and disinformation/misinformation (46%). Much of the regulation proposed to date, in the US and abroad, aims to tackle these types of nefarious acts.
Federal initiatives before generative AI
The Select Committee on Artificial Intelligence was an initiative by the Trump administration in 2018 to advise the White House on interagency AI research and development priorities and streamline federal AI efforts to ensure that the US remains a leader in AI.
The National AI Initiative Office was another effort, this time in 2021, to recognize the value of AI as a driver for both the US economy and security. A first for national AI strategy, it committed to doubling investments in AI research, established a series of national AI research institutes and provided formal AI regulatory guidance.
Federal initiatives after generative AI
The Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act is a bipartisan bill sponsored by senators Chris Coons (D-DE), Marsha Blackburn (R-TN), Amy Klobuchar (D-MN) and Thom Tillis (R-NC). It seeks to protect actors, musicians and other performers from unauthorized digital re-creations created or hosted by individuals or companies.
The Federal AI Governance and Transparency Act is a bipartisan bill drafted by US House Committee on Oversight and Accountability Chairman James Comer (R-KY) and ranking member Jamie Raskin (D-MD) to ensure that the government's implementation of AI will be used to improve government operations while prioritizing the protection of privacy, civil rights and liberties, and upholding American values.
The AI Bill of Rights is an initiative developed by President Joe Biden in conjunction with the White House Office of Science and Technology Policy that identifies five principles that should guide the design, use and deployment of automated systems as a means of protecting the American public in the age of artificial intelligence.
The five rights are as follows: the right to safe and effective systems; the right to algorithmic discrimination protections; the right to data privacy granting people agency over how their data is being used, as well as protection from predatory and/or abusive data practices; the right to notice and explanation of when and where AI-based systems are in use; and the right to human alternatives, consideration and fallback that allow an individual to access a real person who can consider and fix problems quickly.
The Biden administration put forth the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence to govern the development and use of AI safely and responsibly through governmentwide policy. It incorporates eight guiding principles to account for the views of other agencies, industry, members of academia, civil society, labor unions, international allies and partners, and what the US government deems to be other relevant organizations, all as a means to develop standards around AI responsibility, accountability and safety.
The AI Insight Forum is a series of nine closed-door forums led by Senator Chuck Schumer (D-NY) to gather perspectives from lawmakers, business executives, rights leaders and others on how the US Congress should develop AI legislation addressing topics such as workforce risks and disinformation.
The Task Force on Artificial Intelligence is an effort led by Rep. Jay Obernolte (R-CA) and Rep. Ted Lieu (D-CA) that seeks to produce a comprehensive report of guiding principles, forward-looking recommendations and bipartisan policy proposals developed in consultation with committees of jurisdiction.
The EU Artificial Intelligence Act
When it comes to legislation, the EU has a reputation for being ahead of the curve. For example, even before adopting its sweeping 2016 data privacy law, the General Data Protection Regulation, it had a predecessor in place for over 20 years: the 1995 Data Protection Directive. Since 2021, the EU has been working on similar measures that would introduce legal frameworks around the development and use of AI — and this was before the generative aspect had taken off. The intent, according to the governing body, was to take a human-centric approach and protect the rights of individuals. The legislation received overwhelming support in the European Parliament.
Titled the Artificial Intelligence Act, it primarily looks to regulate what the EU considers to be "high-risk" systems, including but not limited to those designed to be deceptive or exploitative or to quantify individuals as a means of profiling them. Systems that are deemed to be of minimal risk — which it says includes the majority of AI applications currently available on the EU market — are left unregulated, although that may change now that generative AI is part of the equation.
However, providers of general-purpose AI systems and models must still supply technical documentation and instructions for use, comply with the Copyright Directive, and publish summaries of the content used to train their models. At a high level, the AI Act seeks to minimize the possibility of intentional or unintentional harm while still allowing AI to be used in everyday capacities.
This article was published by S&P Global Market Intelligence and not by S&P Global Ratings, which is a separately managed division of S&P Global.
451 Research is a technology research group within S&P Global Market Intelligence. For more about the group, please refer to the 451 Research overview and contact page.