Research — 8 Mar, 2022
By Rachel Dunning and Nick Patience
Introduction
Artificial intelligence and machine learning have sat at or near the top of most technology adoption lists for the last few years, and have long been central to digital transformation projects, which have accelerated over that period. Indeed, in 451 Research's Voice of the Enterprise: AI & Machine Learning, Use Cases 2021 survey, 95% of respondents said AI/ML was very or somewhat important to their digital transformation efforts. As such, we expect plenty of progress and adoption in 2022. This report pinpoints six key trends that we believe should be on the radar of any AI stakeholder this year.
As a general-purpose technology — similar to electricity — AI can enable all sorts of business processes and automation across any industry. Its impact is felt across industries, countries and all levels of society. Our focus, predominantly on enterprise use of technology, still leaves a broad swath to examine, so we are identifying a small subset of what we will be concentrating on in 2022. These choices are a mix of technologies, macro-level strategies, politics and industries. However, we would emphasize that although AI has the potential to do so much, we remain entrenched in the early stages of its adoption by organizations. Chatter about the technology reflects numerous real-world use cases, but these remain narrow and confined to particular domains, which is why we intend to deepen our industry focus in 2022. We remain, we believe, in the "speculation phase" of AI.
Machine learning operationalization tools
AI and its machine learning subset are being adopted across enterprises to optimize, automate and augment digital transformation strategies. However, the technology comes with plenty of its own challenges — from deploying at scale to overcoming bottlenecks and skill shortages during development. As more enterprises adopt and push forward with their AI initiatives, it becomes all the more critical to address these challenges. One way to address this is with ML operationalization, or MLOps, tools.
According to 451 Research's Voice of the Enterprise survey data, nearly 3 in 4 enterprises have either invested in MLOps tools or plan to do so in the next 12 months, and we expect the adoption of MLOps to continue unabated in 2022. The definition of MLOps will also continue to morph. Some think MLOps refers specifically to what happens once a model has been deployed (management and monitoring), which is the classical definition. But a growing number take the term to mean the entire development process — from the acquisition of data and training of the model through deployment and beyond.
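The classical, post-deployment slice of MLOps — monitoring a live model for data drift and flagging it for retraining — can be sketched in a few lines. The sketch below is purely illustrative: the population stability index is one common drift metric, and the 0.25 threshold is a conventional rule of thumb, not a figure from any vendor's tooling or from this report.

```python
# Illustrative sketch of one MLOps monitoring task: detecting drift
# between the data a model was trained on and the data it now sees.
# Sample data and the 0.25 threshold are illustrative assumptions.
import math

def psi(expected, actual, bins=10):
    """Population stability index between two numeric samples."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    def dist(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        return [(c + 1e-6) / len(sample) for c in counts]  # smooth zero bins
    e, a = dist(expected), dist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train_scores = [0.1 * i for i in range(100)]       # what the model saw in training
live_scores = [0.1 * i + 3.0 for i in range(100)]  # shifted live traffic
drift = psi(train_scores, live_scores)
needs_retraining = drift > 0.25  # common rule-of-thumb cutoff
```

Tools in this market wrap checks like this with scheduling, alerting and automated retraining pipelines; the point here is only that a deployed model requires ongoing measurement, not a one-off launch.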
Driving this shift in definition is the evolving MLOps tools market, which, although young, is teeming with vendors and growing rapidly. From hyperscale cloud providers such as Google LLC and Amazon Web Services Inc. to specialists such as Modzy Inc., Iguazio Systems Ltd. and Seldon Technologies Ltd., many vendors aim to capitalize on this market.
AI regulation
The widespread and ever-increasing adoption of AI technology all but guarantees that it will become a regulated sector. Although we are still in the early stages, regulation is on its way and becoming a key enterprise concern: two-thirds of respondents to our Voice of the Enterprise: AI & Machine Learning, Infrastructure 2021 survey are either very or somewhat concerned about government regulation of AI. How concerned should they really be at this stage?
AI regulations will probably not come into force in any meaningful way in the U.K. or Europe during 2022, but progress will be made, especially within the EU. In April 2021, the European Commission proposed the AI Act. The act outlines various categories of risk, applying stricter regulation and restrictions to use cases in higher-risk categories, such as facial recognition, and outright banning others, such as systems that manipulate human behavior to circumvent users' free will or that enable "social scoring" by governments. The categories of risk, how they are measured, and how conformity assessments are carried out and by whom will all be major topics of debate throughout the year. The European Parliament is scrutinizing the act in various committees, and the European Council, the grouping of the heads of state and government of each member state, will want its say. The council is highly influential because of who sits on it, although it has no formal legislative power.
The U.S., in a midterm election year, will not pass significant AI-focused legislation in 2022, although work is ongoing on a bill of rights for the AI age. The Federal Trade Commission, however, may make some moves. The agency has already warned companies about issues such as algorithmic discrimination and how existing legislation can be used to go after those identified as stepping over the line. The FTC now counts Meredith Whittaker as a senior adviser on AI; Whittaker was a research professor who organized walkouts at Google in 2018 over its culture and other issues and left the company for good in 2019.
Meanwhile, China has already implemented two major laws in 2021: its Data Security Law and its Personal Information Protection Law. The former came into effect in September 2021 and forces companies to classify data based on its economic value and relevance to China's national security, while the latter, which went into effect in November 2021, dictates how citizens' data can be used. More is likely to come.
Platforms vs. applications
As with any area of software, some companies end up being the platforms on top of which other vendors' applications run. What do we mean by platforms? Development tools and environments, operating systems, infrastructure, databases, or any combination thereof. Myriad companies have tried to portray themselves as platforms, but few will actually become them — perhaps fewer than 10, and quite possibly fewer than five, eventually will. The fate of the rest lies in providing applications that solve problems, usually in specific industries such as finance or manufacturing.
In the world of AI, this increasingly looks like a cloud-first world evolving into a hybrid cloud world (including private clouds run on-premises). Although the cloud is where early AI experimentation takes place, it is by no means where all AI models end up being deployed, so traditional platform vendors are also a factor here.
The hyperscale cloud vendors are not satisfied with being platform providers alone. Witness their moves into contact centers, document classification and other areas: offerings that could be classed as full-fledged applications in their own right, but that also serve as building blocks to make existing applications seem more intelligent or to form the basis of completely new ones.
Then add the largest application providers, such as Oracle Corp. and SAP SE, which play both sides of the platform/application divide, and do so at massive scale compared with AI startups. Those startups face a squeeze from both the bottom and the top, and must offer something different: their understanding of how AI can transform businesses, automate processes and enable things that were simply not possible with traditional rules-based approaches. Here, the people involved become a key factor and often drive investment decisions.
Another consideration is how many platforms or applications customers will spread their AI models across. This becomes an MLOps issue, as discussed earlier in this report: models are not static, so they need to be managed, monitored and often retrained. This speaks both to the potential for a vibrant market of independent MLOps vendors, policing the performance of models neutrally regardless of where they run, and to the challenge facing companies trying to be new platform vendors or platform-application combinations. In 2022, we expect many more to give up trying to be either of those and focus on being value-added AI application providers running on other vendors' platforms.
Industry-specific applications
Vertical industry use cases will be a major focus for us, especially in the first half of 2022, driven in part by our forthcoming Voice of the Enterprise: AI & ML Use Cases survey data and reports. These will detail use cases in financial services, manufacturing, telecommunications, retail, energy/oil and gas and healthcare/life sciences.
We expect to see increasing take-up of more advanced use cases alongside the long-standing ones, such as fraud detection in financial services and churn analysis in telecom, which are well understood at this stage. More interesting are those seeking the next opportunities, such as using AI for employee safety in manufacturing, clinical trial analysis in healthcare/life sciences and vision analytics for infrastructure inspection in telecom.
Large language models
Large language models are pre-trained models that predict the probability of a sequence of words and can generate text. They can summarize documents, translate between languages and even write poetry (of a sort). They are deemed large mainly for their number of parameters, but also for the compute resources they require to run and the volumes of data needed to train them.
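At toy scale, "predicting the probability of a sequence of words" can be illustrated with a bigram model — a deliberate miniature of what billion-parameter models do statistically. The corpus and words below are invented for illustration; real LLMs learn from vast text collections rather than a one-line string.

```python
# Toy bigram language model: scores and extends word sequences.
# A deliberately tiny stand-in for large language models; the corpus
# is invented for illustration only.
from collections import Counter, defaultdict

corpus = "the model writes text the model writes code the model predicts words".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_probs(word):
    """P(next | word) estimated from bigram counts."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

def sequence_prob(words):
    """Probability of the whole sequence under the bigram model."""
    p = 1.0
    for prev, nxt in zip(words, words[1:]):
        p *= next_word_probs(prev).get(nxt, 0.0)
    return p

def generate(start, length):
    """Greedy generation: always pick the most likely next word."""
    seq = [start]
    for _ in range(length):
        probs = next_word_probs(seq[-1])
        if not probs:  # dead end: no observed continuation
            break
        seq.append(max(probs, key=probs.get))
    return seq
```

The gulf between this sketch and a production model is precisely the "large": billions of learned parameters in place of a lookup table, and attention over long contexts in place of a single preceding word.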
These models are fueling a resurgence of interest in AI for natural language processing, because they open up many more use cases than were possible with earlier NLP techniques. Google's introduction of transformer language models in 2017 changed this area: transformers consider all the words in a sentence at once, rather than in sequential order, and can process much longer strings of information. Models have grown enormously since then in both size and the time it takes to train them. So what might happen this year?
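The "all the words at once" property comes from self-attention: every position computes a weighted view over every other position in the same pass, rather than consuming the sentence word by word. A minimal numeric sketch follows; the tiny two-dimensional vectors are hand-picked for illustration, whereas a real transformer derives queries, keys and values from learned projections of word embeddings.

```python
# Minimal scaled dot-product self-attention sketch: each position's
# output is a weighted average over ALL positions' value vectors,
# with weights from query-key similarity. Vectors are illustrative.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def self_attention(queries, keys, values):
    """Attend over the whole sequence simultaneously."""
    d = len(keys[0])
    outputs = []
    for q in queries:  # every position attends...
        scores = [dot(q, k) / math.sqrt(d) for k in keys]  # ...to every position
        weights = softmax(scores)  # weights sum to 1
        outputs.append([sum(w * v[i] for w, v in zip(weights, values))
                        for i in range(len(values[0]))])
    return outputs

# Three stand-in "word" vectors, used as queries, keys and values alike.
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = self_attention(x, x, x)
```

Because every pair of positions is scored directly, the mechanism has no notion of left-to-right order built in, which is what lets transformers handle much longer contexts than sequential models.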
It would be nice to think they would reach the mainstream in terms of commercial viability, but we doubt that will happen. Instead, we think there will be better demonstrations of text generation as well as generation of other sequences, because these models can, in theory, generate any sequence of data. This includes generating computer code, images, virus mutations and proteins. This has the potential to open up all sorts of tasks to automation.
We also expect to see more investment, following what was a strong 2021 as investors bet on the future success of large language models, including a $124 million series A for Anthropic PBC in May, a $40 million series B for Hugging Face Inc. in March, a $110 million series C for Primer Technologies Inc. in June and a $40 million series A for Cohere AI in September.
Hardware accelerators
AI, particularly deep learning, can put a strain on traditional IT infrastructure, which is why a raft of startups emerged in the past few years to develop hardware accelerators to speed the execution of AI jobs. The big cloud vendors also chose to develop their own chips to do the same. According to our Voice of the Enterprise: AI & Machine Learning, AI Infrastructure 2021 survey, when asked what hardware resource would improve the performance of their AI workloads, 46% of organizations selected cloud-based accelerators, with 27% choosing cloud accelerators as the single most effective performance enhancement, 7 percentage points ahead of faster standard servers with x86 processors.
Of course, this survey data does not mean that such accelerators are not already in use — it points to future demand. In the same survey, 77% of organizations claim they currently use hardware accelerators, whether on-premises or in the cloud. That could be one graphics processing unit under a desk, a rack in a datacenter or widespread use of a cloud vendor's accelerators. This year will see more of the startups reach maturity and begin shipping in some volume, provided they can get their chips made by fabricators such as Taiwan Semiconductor Manufacturing Co. Ltd. or Samsung Group, which currently have limited or no spare capacity.
This article was published by S&P Global Market Intelligence and not by S&P Global Ratings, which is a separately managed division of S&P Global.