Artificial Intelligence: What Are The Key Credit Risks And Opportunities Of AI?

(Editor's Note: In this series of articles, we answer the pressing Questions That Matter on the uncertainties that will shape 2024—collected through our interactions with investors and other market participants. The series is aligned with the key themes we're watching in the coming year and is part of our Global Credit Outlook 2024.)

Artificial intelligence's (AI) potential to replace, transform, and regenerate human work processes promises significant efficiency and productivity gains. While this could be a boon for companies' financial and operating performance, it comes with dangers linked to data privacy, cybersecurity, and AI safety that could exacerbate operational and reputational risks if not properly governed and managed.

How this will shape 2024

Generative AI use will continue maturing in 2024.   Companies have begun rapidly developing, acquiring, and integrating generative AI into their operations. This promises to dramatically transform global markets, particularly as corporations expand their understanding of the nuances, options, and capabilities of AI-based technologies. As adoption continues to mature next year, it will likely result in an expansion of AI's capacity to generate content and perform complex tasks with relative autonomy, interactively, and in real time. But such expansion will come with increased data input requirements and the need to understand the effects of AI adoption on customer demand.

While efficiency gains should emerge, technical challenges will be key.   Improvements in productivity due to generative AI applications may meaningfully materialize next year in the form of cost savings and scalability benefits--particularly at large technology companies that made swift and focused investments in 2023. Productivity enhancements could reduce operating expenses and improve efficiency, though near-term benefits will be offset by investment requirements. Conversations about how AI influences operations-level success will gradually clarify and help define these technologies' benefits and how they are measured. Over the next few years, we expect AI to deliver a combination of efficiency gains (that may improve financial performance), stronger product differentiation, and shifts in competitiveness. These changes will demand thorough evaluation, not least to avoid technical pitfalls such as hallucination (where AI generates incorrect information that appears to be correct) and exacerbated bias (where algorithms contribute to unfair discrimination), as well as risk management issues, including those relating to data privacy and cyber risk.

Chart 1 [chart not reproduced]
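To make the hallucination risk noted above more concrete, the sketch below shows one naive way a company might screen generated statements against the source documents they are meant to reflect, flagging claims with weak lexical support for human review. This is a hypothetical Python illustration, not a method described in this report; the threshold and scoring are arbitrary, and production systems typically rely on retrieval-grounded checks, entailment models, and human oversight.

```python
# Illustrative sketch only: flag generated statements whose wording has little
# overlap with the source text they are supposed to summarize.
def token_set(text: str) -> set[str]:
    return {w.lower().strip(".,;:()") for w in text.split() if w}

def grounding_score(statement: str, sources: list[str]) -> float:
    """Fraction of the statement's tokens that appear in at least one source."""
    stmt = token_set(statement)
    if not stmt:
        return 0.0
    covered = {w for w in stmt if any(w in token_set(s) for s in sources)}
    return len(covered) / len(stmt)

sources = ["Revenue grew 8% in 2023, driven by cloud services."]
claims = [
    "Revenue grew 8% in 2023.",           # supported by the source
    "Revenue grew 25% on AI chip sales.", # likely hallucinated
]
for claim in claims:
    score = grounding_score(claim, sources)
    flag = "review" if score < 0.7 else "ok"  # threshold is arbitrary
    print(f"{flag:6s} ({score:.2f}) {claim}")
```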

Developed regions will enact AI-focused regulations.   New rules focused on education, governance, and protection are likely to be developed and deployed in 2024 as major economies react to the widespread adoption of AI technology. There is already evidence of this rapidly evolving regulatory environment. The European Union's 'AI Act' will set rules that establish obligations for providers and users, graduated by risk level as defined by the law. In August, China introduced a similarly groundbreaking law targeting the regulation of generative AI. The U.S. is taking a more decentralized approach, though elements of privacy legislation include provisions and protections applicable to AI, notably through a combination of data privacy laws and algorithmic accountability and fairness standards. On Oct. 30, 2023, U.S. President Joe Biden issued an executive order on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" to create safeguards. Similar regulations and policies are being developed or discussed in Canada and some Asian countries.

What we think and why

The opportunities offered by AI will vary across sectors, geographies, and company sizes.   Early adopters of AI technology will primarily be larger companies with deep pockets and the motivation to invest in custom technology stacks (though small- to medium-sized enterprises that are able to leverage open-source technologies will also lead adoption). This will be most prominent in developed economies, where capital spending capacity is higher and where companies can take advantage of AI's various open-source models, frameworks, and applications. Sectors with more flexible business models (including high-tech, banking, medical devices, education, media and entertainment, and telecommunications) and those with greater discretionary capital spending will likely yield earlier benefits from AI in terms of cost efficiencies, profitability, and competitive positioning. Sectors characterized by capital-intensive infrastructure demands and fiscal and operational rigidity are likely to prove laggards.

Companies confronting manufacturing and supply chain issues could turn to an incremental use of AI for analytical solutions to enhance their competitiveness--potentially deepening AI usage among automakers and other manufacturing sectors, many of which already use machine learning and telemetric instrumentation in their processes.
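As a hypothetical illustration of the incremental, analytics-first adoption described above, the sketch below applies a standard anomaly-detection model to synthetic machine telemetry (temperature and vibration readings). The data, the model choice (scikit-learn's IsolationForest), and the contamination setting are assumptions for demonstration only and do not reflect any issuer's actual systems.

```python
# Illustrative sketch only: flag unusual sensor readings in simulated telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated telemetry: readings of [temperature (C), vibration (mm/s)]
normal = rng.normal(loc=[70.0, 2.0], scale=[2.0, 0.3], size=(500, 2))
faults = rng.normal(loc=[85.0, 5.0], scale=[1.0, 0.5], size=(5, 2))
readings = np.vstack([normal, faults])

model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(readings)  # -1 marks readings flagged as anomalous

print(f"Flagged {int((labels == -1).sum())} of {len(readings)} readings for inspection")
```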

Charts 2 and 3 [charts not reproduced]

AI will have replacement, transformative, and regenerative effects on labor productivity.   Replacement, or job displacement, seems the most prominent fear related to AI's adoption but is likely overstated and will largely be limited to the automation of manual and repetitive cognitive processes. The transformative (and disruptive) effects of the technology will typically offer possibilities to augment people's workplace efficiency and effectiveness. This should develop further in 2024 as companies discover advancements in machine learning and analytics, and implement more and better telemetric instrumentation. The regenerative implications of AI refer to the technology's ability to intelligently redesign processes and create new types of jobs. Generative AI should continue its early experimental development of regenerative innovation in 2024, but we believe significant impacts likely remain some way off.

AI is likely to contribute to sustainability and social goals over the longer term.   We expect AI will progressively realize its potential to deliver social benefits beyond financial gains. Companies and public organizations will increasingly look at AI technologies as a sustainable tool to reduce the adverse effects of issues including climate change, supply chain disruption, and gender, social, and wealth imbalances. For example, in developing economies, the use of digital data coupled with advanced, machine-driven analytics and robotics could significantly widen access to healthcare through remote diagnosis, surveillance, and telemedicine, and promises improvements to agricultural production through automated irrigation and pest control.

What could go wrong

AI-related risks could worsen in the short term.   Data privacy and security risks, including cyber risk, could increase with the rapid growth in accessible data stored across the digital ecosystem. Additionally, threat actors can be expected to adopt new techniques, including sophisticated social engineering. Amplified by technologies that power deepfakes, AI could have negative implications for businesses, nation-states, and society if the technology is used illegally and unethically to spread misinformation.

Unequal access to AI could increase the digital divide.   Utilization of AI technologies will remain unbalanced and could exacerbate digital inequalities based on geography and socioeconomic differences. Access to education and digital infrastructure will significantly determine the extent to which AI helps or hinders a company's operating efficiency and revenue growth, and could thus weigh on its financial performance and creditworthiness.

Inadequate AI adoption may lead to nonfinancial risks for companies.   As adoption of AI technologies becomes more widespread, companies' operational and reputational risks may increase if development lags in education, governance, and protection. Depending on the consequences (e.g., regulatory breaches), nonfinancial risks have the potential to evolve into financial risks and hurt companies' financial health.

Regulatory complexity will likely increase.   AI regulations are rapidly evolving and vary by region, meaning companies face an increasingly complex compliance environment that heightens regulatory risk.


This report does not constitute a rating action.

Primary Credit Analyst: Sudeep K Kesh, New York, +1 (212) 438 7982; sudeep.kesh@spglobal.com
Secondary Contact: Miriam Fernandez, CFA, Madrid, +34 917887232; Miriam.Fernandez@spglobal.com
Secondary Credit Analyst: Simon Ashworth, London, +44 20 7176 7243; simon.ashworth@spglobal.com
