31 October 2023
This is a thought leadership report issued by S&P Global. This report does not constitute a rating action, nor was it discussed by a rating committee.
Highlights
Banks are adopting generative AI, which promises earnings growth, improvements to decision-making, and better risk management. But it also comes with new risks, concerns, and costs that banks will have to manage.
We believe testing of generative AI solutions will accelerate over the next two to five years, while benefits are likely to prove incremental.
Banks that successfully deploy AI could benefit from capabilities and efficiencies that lead to competitive advantages and differentiation. This may, eventually, have implications for our views on creditworthiness, primarily through three areas: business franchise, financial performance, and risk management.
Given banks' material investment capacity, management of large amounts of proprietary data, and often fluid business models, it was perhaps inevitable that they proved to be enthusiastic early adopters of machine and deep learning technology (so-called traditional AI). These systems have (for decades, in fact) been used to improve risk management processes, loss mitigation, fraud prevention, customer retention, and to deliver efficiency gains and profit growth.
For the same reasons, it is also little surprise that banks are now poised to take a further step by integrating more powerful 'generative AI' technology into their operations.
This new wave of AI promises to reshape the industry, at a steady and incremental rate, by providing new capabilities, revenue opportunities, and cost reductions. Over time, that could tilt the competitive landscape in favor of those banks that best utilize AI's potential. S&P Global Ratings believes that the changes AI will usher in could also have implications for our assessment of banks' credit quality.
Chart 1
To date, most AI use cases in banking have aimed either to automate tasks or to generate predictions. This work has been done by supervised and unsupervised machine learning (ML) models (and sometimes more complex deep learning models) that require significant computing capacity and large amounts of data. The application of machine learning in banking accelerated in the late 2000s with the development of Python for Data Analysis, or pandas--an open-source data analysis package written for the Python programming language. Pandas, along with other machine learning software libraries, such as scikit-learn and TensorFlow, made data structuring and analysis easier and more systematic, and thus opened the door to more accessible machine learning algorithms and powerful analytical frameworks. Financial analysis has also been a natural recipient of innovative, data-intensive methods from other disciplines--for example, life tables from insurance, and Monte Carlo simulations and stochastic modeling from physics--which, in turn, drove new developments in machine learning and related technologies.
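To illustrate the kind of workflow these libraries enabled, the sketch below structures a small, hypothetical loan dataset with pandas and fits a basic supervised scikit-learn model. The column names and figures are invented for illustration and are far simpler than the models banks actually run.

```python
# Illustrative only: hypothetical loan data, structured with pandas and fed to
# a simple supervised classifier from scikit-learn.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

loans = pd.DataFrame({
    "income": [42_000, 85_000, 31_000, 120_000, 56_000, 23_000],
    "credit_utilization": [0.85, 0.20, 0.95, 0.10, 0.45, 0.99],
    "defaulted": [1, 0, 1, 0, 0, 1],  # 1 = borrower defaulted
})

X = loans[["income", "credit_utilization"]]
y = loans["defaulted"]

# Scale features, then fit a logistic regression as a stand-in for the far
# richer credit models banks deploy in practice.
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
print(model.predict_proba(X)[:, 1])  # estimated default probabilities
```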
It is testament to the benefits of this earlier AI that (despite its complexities) banks, financial service providers, and the insurance sector emerged as some of its most active users. Machine learning in banking, financial services, and insurance accounted for about 18% of the total market, as measured by end-users, at end-2022 (see chart 2).
In their ML strategy, financial services companies seem to primarily rely on cloud-based machine learning services, such as AWS, Microsoft Azure, or Google ML (see chart 3). Furthermore, most (71%) still use private cloud environments, rather than the public cloud, according to a study by the TMT Research unit of S&P Global Market Intelligence, a division of S&P Global.
Chart 2
Chart 3
AI-led automation has principally facilitated banks' operational simplification and cost reduction, as banks have identified manual and mechanical tasks performed by staff and replaced them with computers that are not only cost effective but also less prone to operational failures. An example of AI in banking driving automation is Standard Chartered's document processing system, called Trade AI Engine, which was developed with IBM. It can review unstructured data in different formats, identify and classify documents, and learn from its own performance.
Banks have also used AI capabilities and data, both proprietary and external, to augment employees' capabilities, enabling them to perform tasks that were previously beyond them. For example, prediction and recommendation models have leveraged AI's ability (primarily through unsupervised machine learning) to analyze vast amounts of data and uncover hidden patterns that wouldn't be apparent to a human. This has facilitated more accurate and faster decision-making.
This pattern recognition has notably been employed in fraud detection and financial forecasting, where about 40% of financial services companies rely primarily on machine learning for both use cases, according to S&P Global Market Intelligence TMT.
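As a rough illustration of how unsupervised pattern recognition supports fraud detection, the sketch below flags outlying card transactions with scikit-learn's IsolationForest; the features, data, and contamination rate are hypothetical.

```python
# Illustrative anomaly detection on hypothetical card-transaction features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Simulated "normal" transactions: amount (USD) and hour of day.
normal = np.column_stack([rng.normal(60, 20, 500), rng.normal(14, 3, 500)])
suspicious = np.array([[4_800, 3.0], [5_200, 4.0]])  # large, late-night outliers
transactions = np.vstack([normal, suspicious])

# IsolationForest learns what "typical" looks like without labeled fraud cases.
detector = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = detector.predict(transactions)  # -1 marks likely anomalies for review
print(f"{(flags == -1).sum()} of {len(transactions)} transactions flagged")
```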
Other examples of AI use cases in banking are set out below (see table 1).
Table 1
Bank | AI application |
Barclays | Fraud detection using AI: The bank has an AI tool that seeks to prevent fraud by predicting potential instances using real-time monitoring of merchant payment transactions. |
Banco Santander | AI in risk management: Santander's Corporate and Investment Banking division developed an AI tool, called Kairos, that shows how a corporate client could be affected by economic events, creating prediction patterns that enable employees to make more informed investment and lending decisions. |
Bank of America | AI in research analysis: A platform, called Glass, helps sales and trading employees uncover hidden market patterns to anticipate client needs by consolidating market data across asset classes and regions with the bank's in-house models and machine learning techniques. |
Discover Financial Services | AI for credit underwriting: Through a partnership with an AI software provider, the bank can improve credit underwriting and reduce default rates. |
Source: Banks' websites and public announcements.
While AI-facilitated automation and prediction are common parts of banks' digital transformation (at least for larger and technologically sophisticated banks), investment in and adoption of tools driven by newer, generative AI-powered systems remain nascent (see chart 4). The possibilities (and risks) are thus yet to be fully tested. But the potential for the new AI to reshape banking seems vast.
It is also worth recognizing that this new wave of AI will deliver opportunities for the large and growing network of financial technology (fintech) companies. The resulting capabilities could magnify fintech's potential to disrupt the banking sector, and in doing so increase the pressure on banks to explore new applications for generative AI.
The powerful possibilities offered by generative AI stem from its ability to create content based on the analysis of large amounts of data, including text, images, video, and code. That capability means it can, for example, be used to summarize content, answer questions in a chat format, and edit or draft new content in different formats. More specifically, it means generative AI in banking could rapidly and cheaply (once the models are deployed at scale) generate hyper-personalized products and services, or accelerate software engineering, IT migration, and the modernization of programs. It could also augment humans' abilities, through AI chatbots or virtual assistants--this is the focus of a partnership between Morgan Stanley and OpenAI, the U.S. research laboratory behind ChatGPT.
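A minimal sketch of how such an assistant might be wired up is shown below, assuming the OpenAI Python SDK (v1.x), an API key in the OPENAI_API_KEY environment variable, and an illustrative prompt and model choice; it is not a description of any bank's actual implementation.

```python
# Minimal sketch of a generative AI assistant call. Assumes the OpenAI Python
# SDK (v1.x) and an API key in the OPENAI_API_KEY environment variable; the
# prompt and model choice are illustrative, not any bank's real configuration.
from openai import OpenAI

client = OpenAI()

def summarize_for_adviser(research_note: str) -> str:
    """Condense an internal research note into a short briefing for an adviser."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Summarize internal research notes for financial advisers "
                        "in three bullet points."},
            {"role": "user", "content": research_note},
        ],
    )
    return response.choices[0].message.content
```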
Global spending on artificial intelligence is expected to reach $166 billion in 2023 (with banking one of the largest contributors by industry at about 13%), rising to about $450 billion by 2027, according to a report by International Data Corp. (IDC), a provider of technology market intelligence and advisory services.
The ways in which generative AI will be used by banks are likely to hold some surprises, but it seems certain that the new technology will result in both an evolution and an expansion of AI's role within the banking sector.
Notable changes due to the application of generative AI in banking are unlikely to be immediate. We expect banks will continue testing generative AI models, and investing heavily in them, for the next two to five years, before scaling up deployment to customers and engaging in more transformative projects. Furthermore, the bulk of banks' near-term use cases will likely focus on incremental innovation (i.e., small efficiency gains and other improvements across business units) and will be based on specific business needs. Finally, we expect employees will remain in an oversight role, known as human-in-the-loop (HITL), to ensure results meet expectations (in terms of accuracy, precision, and compliance) as the technology matures.
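The HITL principle can be made concrete with a small, purely conceptual sketch: generated output is released automatically only if it clears simple checks, and is otherwise routed to a person. The confidence score, threshold, and banned phrases below are invented placeholders for a bank's real controls.

```python
# Conceptual human-in-the-loop (HITL) sketch: AI output is only released
# automatically when automated checks pass; otherwise a person reviews it.
from dataclasses import dataclass

@dataclass
class DraftReply:
    text: str
    model_confidence: float  # hypothetical score supplied by the AI system

CONFIDENCE_THRESHOLD = 0.90
BANNED_PHRASES = ("guaranteed returns", "cannot lose")

def route(draft: DraftReply) -> str:
    """Return 'auto-send' or 'human-review' for a generated customer reply."""
    if draft.model_confidence < CONFIDENCE_THRESHOLD:
        return "human-review"
    if any(phrase in draft.text.lower() for phrase in BANNED_PHRASES):
        return "human-review"
    return "auto-send"

print(route(DraftReply("Your card will arrive within five working days.", 0.97)))
print(route(DraftReply("This fund offers guaranteed returns.", 0.99)))
```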
Despite that gradual onset, the potential for wide-ranging application of generative AI means the banking sector is among those likely to experience the biggest impact from the technology's advance. On an annual basis, generative AI could add between $200 billion and $340 billion in value (9%-15% of banks' operating profits) if the use cases are fully implemented, according to a 2023 report by McKinsey & Co., a management consultancy.
Apart from new business use cases, banks are also likely to apply generative AI (through foundation models) to existing and older AI applications, with the aim of improving their efficiency. For instance, the digitalization and automation of customer-facing processes generates a digital data trail that generative AI can use to fine-tune both the service and its internal processes. This could then deliver further digitalization, including hyper-scale customization, that might enable better client segmentation and retention. Digital data trails could also be used to improve risk management, data collection, reporting, and monitoring.
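Client segmentation of the kind described above can be illustrated with a simple clustering sketch (a traditional ML technique rather than generative AI itself); the activity features and figures below are hypothetical.

```python
# Illustrative segmentation of clients from a hypothetical digital data trail.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

activity = pd.DataFrame({
    "logins_per_month": [2, 25, 30, 1, 22, 3, 28, 4],
    "avg_balance": [1_200, 15_000, 90_000, 800, 20_000, 2_500, 110_000, 1_500],
})

# Standardize so both features contribute comparably, then cluster clients
# into two segments (e.g., digitally active vs. occasional users).
scaled = StandardScaler().fit_transform(activity)
activity["segment"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
print(activity)
```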
How banks go about developing their generative AI capabilities is likely to depend on their scale and investment capacity. Options range from outsourcing (via contracting to a third-party) to in-house development, and a wide range of hybrid solutions involving the fine-tuning of existing models. While most generative AI applications in banking remain at early stages of development, the spectrum of projects and approaches is already apparent (see table 2).
Table 2
Bank | Use case |
Wells Fargo | The bank uses Dialogflow, Google's conversational AI, to power its virtual assistant, called Fargo. It is also using a large language model (LLM) to help clarify what information clients must provide to regulators. |
Mizuho | The Japanese bank has announced a trial with Fujitsu’s generative AI technology to streamline the maintenance and development of its systems. |
Morgan Stanley | The wealth management division is developing a service leveraging OpenAI's GPT-4 technology to help employees locate relevant in-house information, such as company insights across sectors and regions, information on asset classes, and data from capital markets. |
Goldman Sachs | Its developers are experimenting and testing generative AI tools to assist with code writing and testing. |
JP Morgan | The bank has applied to trademark a tool called IndexGPT that could act as a financial investment advisor. |
Source: Banks' websites and public announcements.
Generative AI's potential to benefit the banking sector more than other sectors comes from its ability to understand so-called natural language (language as it is commonly used). That is because banking relies heavily on knowledge-based jobs characterized by higher education, effective communication, human collaboration, and logical-linguistic skills--work in which natural language is central. While the impact of generative AI on various bank operations will differ, the benefits could be significant. For example, improvements in the productivity of customer-service employees could deliver cost reductions and efficiency improvements. Chart 5 summarizes some of the potential benefits we expect to emerge with increased application of generative AI in banking.
As with any deployment of new technology, the adoption of generative AI will come with risks, costs, and concerns. That is particularly the case for the new iteration of AI given the pace at which the technology is evolving, its deployment capabilities across entire business functions, and the potential breadth of its use across industries and society.
Those concerns include generic issues that are applicable to many industries, and others that are specific to banks. In the first basket are AI-related ethical concerns, such as the difficulty of explaining generated content or biases embedded in data. Selection bias in banking, for instance, might perpetuate profiling based on gender, race, or ethnicity, which could lead to unfair credit scoring and customer discrimination. Another key generic issue is environmental concern (and criticism) over the high levels of energy consumed by AI models--training a generative AI model consumes more energy in a year than 100 American homes, according to some estimates.
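One simple way banks can monitor for the selection bias described above is to compare model approval rates across customer groups. The sketch below computes a disparate impact ratio on invented data; the groups and the commonly cited 80% threshold are used purely for illustration.

```python
# Illustrative bias check on hypothetical credit decisions: compare approval
# rates across groups and compute a disparate impact ratio.
import pandas as pd

decisions = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1, 1, 1, 0, 1, 0, 0, 0],
})

rates = decisions.groupby("group")["approved"].mean()
disparate_impact = rates.min() / rates.max()
print(rates)
print(f"Disparate impact ratio: {disparate_impact:.2f}")  # values below ~0.8 often prompt review
```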
Issues could also arise as still-new AI regulatory frameworks mature, with the potential for differences to emerge in oversight and requirements across regions. That could affect many industries but will be particularly relevant for the banking sector, which is heavily regulated and faces higher conduct, reputational, and systemic risks than other sectors. As AI becomes increasingly regulated and new regulations extend across major geographies, banks could be exposed to the risk of fines or the regulatory suspension of some operations should models lead to poor customer outcomes (such as customer discrimination or data leaks), result in risk management failures that could have been avoided, or fail to meet transparency, safety, and robustness requirements.
Other AI risks that are particular to the banking sector include issues revolving around security and privacy, risks related to workforce displacement by AI, and the risk of escalating AI investment required to keep pace with the digital transformation (see table 3).
Table 3
Concern type | The risks for banks | Mitigation strategies |
Ethical concerns | The most prevalent types of biases that affect banks, and which could lead to customer discrimination in credit decisions and financial inclusivity issues, are: interaction bias--where AI absorbs bias from the users it interacts with; latent bias--due to correlations inherent in datasets; and selection bias--which occurs when datasets over- or under-represent certain groups. Banks' AI models can also suffer from explainability issues (black box problems), which refer to an inability to identify why a model made a specific decision. Other typical risks include copyright violations due to results that resemble existing content, and results that are fictitious or lack sense (known as hallucinations). | • Ensure compliance with algorithmic impact assessments (firms should demonstrate that their models comply with requirements for trustworthy AI systems). • Build methods to identify biases. • Update models regularly, using more and better data. • Use mathematical de-biasing methods, which manually adjust certain features to avoid bias. |
Security, privacy, and control risks | The potential exists for huge concentration of data at a few large private companies (known as critical third-party providers). Additionally, banks may violate customers' privacy rights by inadvertently, and without specific consent, gathering publicly available customer data for profiling and prediction. Data constraint risks occur because some internal and customer data is private and confidential. Its use to train generative AI models can therefore be risky as it may unintentionally expose data externally. Furthermore, malicious actors could weaponize generative AI, for example by creating deep fakes to fraudulently open new accounts, or by employing LLMs to generate phishing content. | • Incorporate privacy and protection by design, ensuring compliance with privacy regulations. • Collect customer data only with consent. • Maintain strict security procedures for AI models. |
Nascent AI regulation | Regulatory reactions, and regulators' responsiveness, to rapid AI developments and new use cases create room for regional differences and uncertainties in regulatory objectives and requirements. This may affect the competitive landscape. Regulation relating to AI varies by jurisdiction, meaning banks operating in multiple locations can face different rules. In Europe, banks face potential penalties of up to 7% of revenue for regulatory breaches under the EU AI Act. In China, interim measures regulating generative AI were issued in August 2023 and focus on services that are accessible to the general public. | • Enhance the transparency of AI models (particularly foundation models, which are the models powering generative AI). • Design explainability into AI processes and outputs, with a focus on explaining rationale, responsibility, data, safety, performance, and impact. |
Workforce risks | Potential for accelerated job displacement, particularly for jobs requiring mathematical and verbal intelligence, as opposed to social, creative, or perception skills. Short-term costs and risks due to the need to retrain employees to use AI to augment their jobs, and the challenge for banks of maintaining a human-centric AI approach. | • Share AI scope and knowledge with employees, providing training to re-skill them. • Use AI to augment existing jobs and empower employees to make better decisions. • Ensure AI models are intentionally inclusive and diverse, and incorporate human judgement. |
Investment required to integrate new AI with legacy infrastructure | Banks that fall short in investing in AI and upgrading IT infrastructure could experience bottlenecks due to limited graphics processing unit (GPU) capacity, networking capabilities, memory, and storage, any of which could pose execution and operational risks. | • Use AI coding tools to accelerate legacy code conversion, software development, and the migration and integration of legacy IT infrastructure. • Invest in higher-performance networking. |
Environmental cost | Although this applies to many sectors (not just banking), training AI models (particularly LLMs) is highly energy-intensive and can have a direct impact on a company's CO2 emissions. | • Measure AI-model environmental impact and compensate for it. • Optimize AI models to run on lower parameters and reduce the data they require. |
Source: S&P Global Ratings
The banking sector is a regulated services industry that relies heavily on technology. Banks' ability to design and implement strategies that effectively capture AI's operational benefits could, as with other new technologies (and potentially more so), have implications for our view of their credit quality (see chart 6).
Chart 6
AI strategies have the potential to provide competitive advantages to banks that have the capacity and flexibility to make best use of them. Well deployed AI could enhance operating revenues, by improving employees’ decision-making and by unlocking the revenue potential of clients--not least due to personalized services and products. And there could be a significant positive effect on costs, given the potential for a robust AI strategy in banking to simplify operations, reduce operating expenses, and thus improve efficiency and profitability.
Shifts in these factors can be relevant to our assessment of banks' business position, notably where they exacerbate the differences between banks' competitive positions by contributing to stronger franchises and more agile and profitable business models.
Generative AI in banking promises to exacerbate these differences by also playing a role in banks' ability to upscale and modernize legacy IT systems--notably with low-code/no-code software that could offer important savings. This could also reduce the operational risks and costs that arise from running banks on old infrastructure and labor-intensive systems. For example, we estimate that a 10% reduction in bank staff costs would, on average, improve return on equity by about 100 basis points and cost-to-income ratios by about 3 percentage points, based on S&P Global Ratings' Global Top 200 rated banks. These potential gains would have to be balanced against the investment in technology that they require, and against the opportunities for banks to reduce employee numbers (while maintaining revenues). It thus remains to be seen to what extent banks that successfully deploy AI strategies materially outperform those that are AI laggards.
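For intuition only, the back-of-the-envelope sketch below shows how a 10% staff-cost reduction feeds through to the cost-to-income ratio and return on equity. The starting figures are hypothetical, chosen merely to be broadly consistent with the averages cited above, and are not taken from any rated bank.

```python
# Hypothetical figures only: mechanics of a 10% staff-cost reduction flowing
# through to cost-to-income and return on equity (ROE).
revenue = 100.0
staff_costs = 30.0
other_costs = 25.0
provisions = 10.0
equity = 250.0
tax_rate = 0.25

def ratios(staff: float) -> tuple[float, float]:
    """Return (cost-to-income, ROE) for a given level of staff costs."""
    costs = staff + other_costs
    net_income = (revenue - costs - provisions) * (1 - tax_rate)
    return costs / revenue, net_income / equity

cti_before, roe_before = ratios(staff_costs)
cti_after, roe_after = ratios(staff_costs * 0.90)  # 10% staff-cost reduction

print(f"Cost-to-income: {cti_before:.1%} -> {cti_after:.1%}")
print(f"Return on equity: {roe_before:.1%} -> {roe_after:.1%}")
```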
AI also has the potential to enhance risk management and could thus influence our view of a bank's risk profile, albeit indirectly. For example, in credit risk, a bank that can accurately price risk and use patterns hidden in data to determine the likelihood that customers will repay debt (or that loans will become problematic) will improve its workout models, reduce problematic loans, and improve the accuracy of its provisioning. In effect, its risk-adjusted performance should improve relative to peers'.
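The link from sharper risk parameters to provisioning can be illustrated with the standard expected-credit-loss identity, ECL = PD x LGD x EAD; the figures below are hypothetical.

```python
# Illustrative only: how a better-calibrated probability of default (PD)
# estimate flows directly into the expected credit loss (ECL) a bank provisions.
def expected_credit_loss(prob_default: float, loss_given_default: float,
                         exposure_at_default: float) -> float:
    """One-period ECL for a single exposure: PD x LGD x EAD."""
    return prob_default * loss_given_default * exposure_at_default

exposure = 1_000_000   # exposure at default (EAD), hypothetical
lgd = 0.45             # loss given default, hypothetical

coarse_pd, refined_pd = 0.030, 0.022  # before/after a sharper model, hypothetical
print(f"{expected_credit_loss(coarse_pd, lgd, exposure):,.0f}")   # 13,500
print(f"{expected_credit_loss(refined_pd, lgd, exposure):,.0f}")  # 9,900
```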
There are numerous ways that AI could be used to enhance risk management practices (see table 4). Yet, poor deployment of AI could equally lead to reputational and operational risks that could be detrimental to our view of a bank’s risk position.
Table 4
Type of risk | Use case |
Credit | • More accurate and faster probability of default (PD) and loss given default (LGD) assessments. • Enhancement of early-warning asset-quality deterioration metrics (both internal and external--drawing on a customer's internet footprint). |
Market | • Forecasting of market trends. • Improved identification of insider trading, market manipulation, and predictions of the probability of trading misconduct. |
Operational (fraud and cyber) | • Identification of fraudulent card transactions based on behavioral data. • Using prior attack data to detect anomalies, prevent data breaches, and propose corrective action. |
Source: S&P Global Ratings
Not all banks, or indeed regions, will move at the same speed. Local competitive environments, regulatory developments, banks' investment capacity, and customer preferences will all play a role in determining the extent to which regional AI usage proves to be conservative or more transformative. For example, we expect large-scale, bespoke AI-driven client services to emerge first in countries where customers have more permissive attitudes to new technology, such as China, the U.S., the U.K., the Nordic countries, and Australia (see chart 7).
While generative AI is still in its infancy, it has the potential to become a general-purpose technology that is pervasive, enables complementary innovation, improves the quality of products and services, and reduces costs (similar to the internet revolution). Yet, in considering those potential benefits equal weight should be given to understanding the related risks and concerns (both known and yet to emerge).
It thus remains to be seen at what speed, and to what extent, it makes business sense for banks to invest in transformational AI strategies.
S&P Global Ratings Editorial
Paul Whitfield
Editorial Lead
S&P Global Ratings Editorial
Joe Carrick-Varty
Digital Designer