By Jean Atelsek and William Fellows
In their quarterly earnings calls, hyperscaler parent companies Amazon.com Inc., Microsoft Corp. and Alphabet Inc. touted their AI credentials and plans to invest ahead of an expected market shift enabled by large language models. But a steady drumbeat of optimization prevailed, with the companies taking their own medicine to keep spending in line with revenue growth. Reporting structures shed light on the hyperscalers' approaches to meeting these challenges.
Just as serverless processing ultimately relies on servers, artificial intelligence relies on compute, storage and bandwidth, resources the hyperscalers have in abundance. The explosion of interest in generative AI has created something of an arms race in using it for search and developer enablement, but at the same time, Amazon Web Services Inc., Microsoft Azure and Google Cloud are trimming their payrolls and rebalancing investments to accommodate slowing growth. In a way, the hyperscalers, like their customers, are attempting to protect their revenues by tying investments more closely to desired outcomes, rather than pursuing the growth-at-all-costs philosophy that prevailed during the pandemic. Organizational boundaries appear to reflect the ethos of the big cloud providers as they move into an uncertain future.
Generative AI takes the front seat
Announcements related to generative AI have been coming thick and fast since OpenAI LLC unleashed ChatGPT last fall. Among the hyperscalers, these releases are primarily associated with AI-powered enhancements to existing platforms and developer-focused offerings aimed at shortening the distance between application concept and running code.
Microsoft was first out of the gate with ChatGPT integrations into its Bing search engine, a preview of its ChatGPT-based Azure OpenAI Service and a test of Microsoft 365 Copilot, an add-on to its productivity suite that leverages data from a user's documents, presentations, emails, notes and calendars to respond to prompts related to Microsoft 365-based workflows. These releases came in the wake of its integration of OpenAI technology into GitHub Copilot, which was trained on open-source contributions to the popular code repository. During the first-quarter 2023 earnings call, CEO Satya Nadella boasted about the company's long-standing partnership with ChatGPT developer OpenAI and cited plans to "lead in the new AI wave across our solution areas and expand our total available market." Nadella emphasized new business enabled by the alliance: "We are now seeing conversations we never had … because [companies] have gone to OpenAI and are using their API. These were not customers of Azure at all. … These are all new workloads that we really were not in the game in the past, whereas we now are."
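For a sense of what the developer-facing side of this looks like, here is a minimal sketch of calling a chat model through the Azure OpenAI Service using the 0.x-era openai Python SDK; the endpoint, API version, deployment name and environment variable are illustrative placeholders, not details from the article.

    import os

    import openai

    # Point the 0.x-era openai SDK at an Azure OpenAI resource. The
    # endpoint, API version and key below are placeholders for
    # illustration only.
    openai.api_type = "azure"
    openai.api_base = "https://example-resource.openai.azure.com/"
    openai.api_version = "2023-05-15"
    openai.api_key = os.environ["AZURE_OPENAI_API_KEY"]

    # Azure routes requests to a named model deployment ("engine")
    # rather than a raw model name.
    response = openai.ChatCompletion.create(
        engine="example-gpt-deployment",  # hypothetical deployment name
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarize last quarter's cloud spend trends."},
        ],
    )
    print(response["choices"][0]["message"]["content"])

The deployment indirection is the notable design choice here: customers bind workloads to a named Azure deployment they control, which is part of how OpenAI usage lands on Azure as new workloads.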
Google LLC (which has legitimate claims to having kicked off the generative AI trend) responded with its own announcements encompassing new foundation models (PaLM), a new developer tool (MakerSuite), a conversational AI service (Bard) and product enhancements across its portfolio. The company continues to emphasize its pioneering work in AI and large language models, which have powered improvements to the Search experience for years. The challenge now (as is often the case for Google) is to expose its technical prowess in a way that is consumable and helpful externally. "Our North Star is getting it right for users," said Alphabet CEO Sundar Pichai during the company's first-quarter 2023 earnings call. Pichai stressed Alphabet's commitment to its AI principles and "information integrity," with the goal of "[helping] ensure the safety of generative AI applications before they are released to the public."
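MakerSuite is paired with a public PaLM API. As a minimal sketch, assuming the google.generativeai Python client and an illustrative model name (neither specified in the article), a text-generation call looks roughly like this:

    import os

    import google.generativeai as palm

    # Configure the client with a MakerSuite-issued API key
    # (placeholder environment variable here).
    palm.configure(api_key=os.environ["PALM_API_KEY"])

    # Ask a PaLM text model for a completion; the model name is
    # illustrative of the public API's naming, not taken from the article.
    completion = palm.generate_text(
        model="models/text-bison-001",
        prompt="Write a one-sentence summary of what a vector database does.",
        max_output_tokens=128,
    )
    print(completion.result)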
Amazon makes extensive use of AI in its retail business as well as via cognitive and managed services on AWS, and it is on its second generation of instances powered by custom-designed processors purpose-built to speed machine-learning training (Trainium) and inferencing (Inferentia). The company joined the generative AI fray in April with the release of Amazon Bedrock, a managed infrastructure offering in limited preview, providing API access to language models offered by AI21 Labs Ltd. and Anthropic PBC and pledging to make its own Titan large language models (LLMs) available to customers. Amazon CEO Andy Jassy referred to the larger implications of this new chapter on the company's first-quarter earnings call: "All of the large language models are going to run on compute. And the key to that compute is going to be in the chips. ... If you look at the really significant leading large language models, they take many years to build and many billions of dollars to build. And there will be a small number of companies that want to invest that time and money, and we'll be one of them." In keeping with its focus on application builders, Amazon Bedrock is designed to integrate with other AWS tools and enable buyers to customize and tune models using their proprietary data. On the developer front, the company released Amazon CodeWhisperer into general availability, allowing programmers to use natural language prompts to generate relevant code recommendations.
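As a rough sketch of the Bedrock model (still in limited preview at the time of writing), the call below uses boto3's runtime client; the region, Titan model ID and request shape are assumptions for illustration rather than details confirmed by the article.

    import json

    import boto3

    # Create a Bedrock runtime client; the region and model ID are
    # assumptions for illustration.
    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    # Titan text models take a JSON body with an inputText prompt and
    # an optional generation config.
    body = json.dumps({
        "inputText": "List three ways to reduce idle EC2 spend.",
        "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.5},
    })

    response = client.invoke_model(
        modelId="amazon.titan-text-express-v1",  # illustrative model ID
        body=body,
    )
    result = json.loads(response["body"].read())
    print(result["results"][0]["outputText"])

The single invoke_model entry point, parameterized by model ID, reflects Bedrock's building-block pitch: one managed API in front of first- and third-party models alike.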
The providers see an opportunity not only as users of LLM and generative AI technology for their own services but also as resellers of capacity to third parties. AWS's Jassy cited this as an example of the cloud opportunity writ large: "A lot of folks … don't realize the amount of nonconsumption right now that's going to happen and be spent in the cloud with the advent of large language models and generative AI. I think so many customer experiences are going to be reinvented and invented that haven't existed before. And that's all going to be spent, in my opinion, on the cloud." Microsoft's decision in January to invest an additional $10 billion in OpenAI, a partner since 2019, was a move to secure future workloads that will land on Azure.
If there is any doubt about the impact generative AI will have on the next generation of cloud computing, consider this trio of statements by the hyperscaler parent company CEOs:
– Microsoft's Nadella: "You can expect us to do what we've done with GitHub Copilot pretty much across the board."
– Alphabet's Pichai: "We are bringing our generative AI advances to our cloud customers across our cloud portfolio."
– Amazon's Jassy: "Every single one of our businesses inside Amazon are building on top of large language models to reinvent our customer experiences."
Given the demands of LLMs on compute infrastructure, the fact that only a handful of companies can afford that infrastructure and the scrum to deliver new services that rely on generative AI, it is not hard to imagine a condition of scarcity — and a corresponding price increase — in the accelerated compute capacity needed to fuel these efforts. As Jassy put it, "To date, I think a lot of the chips there, particularly GPUs, which are optimized for this type of workload, they're expensive and they're scarce. It's hard to find enough capacity." The providers have recently signaled a slowdown in buildout of their datacenter footprints — Google was the only one that projected more new regions than the year before — and this reprioritization is bound to affect the mix of server types in new and existing locations.
Cost optimization is top of mind
In the foreground of the first-quarter 2023 earnings calls was the drive to optimize costs, not only on customers' behalf but also internally. The need is especially acute given higher borrowing costs, macroeconomic uncertainty that has slowed cloud adoption, and the spending reprioritization needed to build out new AI-driven services. The big providers are feeling their customers' pain: While spending more to create new services is justifiable, knowing how much to spend and which services are worth creating requires stricter financial discipline than before.
Amazon CFO Brian Olsavsky drew a direct line between customer cloud cost optimization and AWS's slower revenue growth during the quarter. Echoing Jassy's annual letter to shareholders, Olsavsky said, "We're working to build customer relationships and a business that will outlast all of us … [and helping them] optimize their AWS spend so that they can better weather this uncertain economy." The company expects capital expenditures to be lower in 2023 than last year's $59 billion, with increased investment in LLMs and AI being more than offset by savings in the core fulfillment and transportation areas of its retail business.
Alphabet's Pichai pointed first to the company's own belt-tightening efforts: "We are making our data centers more efficient, redistributing workloads and equipment where servers aren't being fully used. ... Improving external procurement is another area where data suggests significant savings ... and we are taking concrete steps to manage our real estate portfolio." CFO Ruth Porat cited slower growth in the Google Cloud division as a result of customers optimizing their costs on Google Cloud Platform and said the company expects to increase capex on technical infrastructure while saving on office facilities.
Microsoft CFO Amy Hood also noted that customers are being more cautious with their cloud spending and stressed the importance of finding a balance between investment in new initiatives and continuing to execute at a high level on existing operations. When asked about the cost difference between AI and "classic" cloud workloads, Nadella said, "Accelerated compute is what gets used to drive AI, and the thing that we are very, very focused on is to make sure that we get very efficient in the usage of those resources." He also pointed out that continuing optimization on customers' behalf contributed to lower usage during the quarter, saying, "We incent our people to help our customers with optimization because we believe, in the long run, that's the best way to secure the loyalty and long-term contracts."
The three companies' cost-engineering moves also included the layoffs of recent months and the extensions, made over the past few years, of the useful lives of their servers and networking equipment.
Conway's Law and financial reporting lines
While it is natural to put AWS, Azure and Google Cloud on the same plane when it comes to public cloud, considering them in the context of their parent companies offers another view. Conway's Law suggests that systems designed by organizations tend to reflect the communication structures within those organizations. Jeff Bezos's 2002 "API mandate" stipulated that all Amazon businesses must expose their data and functionality, and communicate with each other, via externalizable APIs. The mandate is reflected in the excruciating level of detail in the platform's SKUs, and it spawned the well-known "two-pizza teams" that fueled the rapid creation of hundreds of services on the AWS platform.
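As an illustrative sketch of the mandate's core idea, and emphatically not Amazon's actual code, consider a team that exposes its order data only through a network endpoint that could in principle be externalized, rather than letting other teams reach into its datastore:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # The team's private data; under the mandate, no other team touches
    # this directly. All access happens through the API below.
    ORDERS = {"o-1001": {"status": "shipped", "items": 3}}

    class OrderAPI(BaseHTTPRequestHandler):
        def do_GET(self):
            # e.g. GET /o-1001 returns that order as JSON.
            order = ORDERS.get(self.path.strip("/"))
            self.send_response(200 if order else 404)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(json.dumps(order or {"error": "not found"}).encode())

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), OrderAPI).serve_forever()

An interface built for internal callers this way can be opened to paying customers with little rework, which is one reading of how internal Amazon plumbing became sellable AWS services.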
AWS is the only one of the big three clouds whose financial performance is reported as a stand-alone segment, separate from the parent company's other units. Microsoft lumps Azure infrastructure-as-a-service and platform-as-a-service sales into its Intelligent Cloud segment, along with revenues from licensing of SQL Server, Windows Server, Visual Studio and other software, as well as enterprise support, consulting and other professional services. Alphabet's Google Cloud division includes not only Google Cloud Platform services but also Google Workspace, its productivity suite (Microsoft's productivity software sits in its Productivity and Business Processes segment), and other enterprise services.
All three clouds are experiencing slower growth as they and their customers retrench in the face of economic uncertainty. All have pointed to fluctuating quarter-to-quarter sales due to the nature of the cloud operating model and are playing the long game by optimizing for loyalty (helping customers save) while also pursuing multiyear commitments and bigger deals. Slower growth does not mean less spending — as Jassy said, "Customers say they're cost optimizing to reallocate those resources on new customer experiences" — but it is fair to speculate that AWS's focus on adding and refining infrastructure "building blocks" has put it on the back foot when it comes to generative AI. Organizationally speaking, the strengths of Amazon's robotic warehouse technology and retail recommendation engine are unlikely to translate easily into the AWS side of the business.
Microsoft's headlong advance into consumerizing generative AI comes after years of touting the advantages of cross-pollinating information across its cloud-based productivity, developer and customer relationship products. Its integration of ChatGPT into the Bing search engine was a shot across the bow of Alphabet, whose Google Search properties have consistently accounted for a majority of its sales. Meanwhile, Google continues to point out that AI technology has been foundational across its Ads, Search, YouTube LLC and Cloud segments. Whether the decision to merge its DeepMind subsidiary (acquired in 2014) with its Brain team from Google Research into a new unit, Google DeepMind (which will be reported under Alphabet's unallocated corporate costs), will translate into greater sales and profitability is an open question.
This article was published by S&P Global Market Intelligence and not by S&P Global Ratings, which is a separately managed division of S&P Global.
451 Research is part of S&P Global Market Intelligence. For more about 451 Research, please contact 451ClientServices@spglobal.com.