As Nvidia Corp. prepares to acquire contract chip designer Arm Holdings Ltd, a new software agreement with VMware Inc. seems aimed at reassuring the industry that Nvidia can compartmentalize competing interests in its widening business portfolio.
Unlike Nvidia and other companies that sell chips and other devices, Arm makes money by licensing patented designs for chips and electronic components to hardware makers, who pay a licensing fee plus royalties on the Arm designs they build into their own products. Nvidia's Sept. 13 announcement of its intent to buy Arm immediately drew questions about whether Arm's noncompetitive business model would survive. Nvidia CEO Jensen Huang promised that it would. That promise carried significant weight with Arm customers, said Stacy Rasgon, managing director and senior analyst at Sanford C. Bernstein & Co. LLC, during an interview Sept. 14.
Now, Nvidia is bringing the noncompetitive model into its artificial intelligence software business, agreeing to make its AI software available to customers of three VMware cloud platforms. The AI agreement, announced Sept. 29, is part of a larger effort by VMware to expand its appeal beyond its traditional base in enterprise data centers. Most traditional data centers depend on Intel Corp.-based hardware and software standards, but they are increasingly migrating to cloud, AI and other applications that are not Intel-centric.
Arm's Neoverse N1 chip design is licensed to Amazon Web Services.
Nvidia may still have to step carefully around some Arm customers, however, especially cloud service providers. Amazon.com Inc.'s Amazon Web Services is a major customer of both Nvidia and Arm. AWS licenses Arm's central processing unit, or CPU, and uses it as the basis for its Graviton processors, according to James Sanders, a cloud analyst at 451 Research. Graviton is Amazon's own custom processor.
AWS is promoting the Graviton 2 CPU as an alternative to Intel's Xeon server processors. Graviton 2 could also end up competing with Nvidia under Huang's plan to enhance Arm CPUs with Nvidia's own AI technology. The goal of that plan is to shift the data-center market from one based on standard Intel servers to one based on AI and cloud-computing technologies, according to Paul Teich, the principal analyst at market research firm Liftr Insights.
It is not clear how AWS and other large cloud providers that design and build hardware for data centers will react to Nvidia's plan, Teich said. Cloud providers generally follow the demands of their own customers, many of which are actively migrating their on-premises data centers and private-cloud platforms to a mix of public-cloud platforms, among which AWS is the largest, Teich said.
Stable, high-performing AI and machine-learning platforms are one of the highest priorities for executives responsible for hybrid-cloud migrations, according to a survey of IT executives published in August by 451 Research, an S&P Global Market Intelligence company. Seventy-five percent of the mid-level and senior IT professionals surveyed said the pandemic was leading them to invest in new AI initiatives.
More than half of those surveyed by 451 that had already adopted some AI technology said their infrastructure would not be able to support the functions required to make future projects successful. They rated cloud platforms including AWS as the top platforms likely to help improve their AI capabilities.
Graviton 2 and other cloud-provider chips were originally designed to run AI applications such as Google Translate or Amazon's Alexa virtual assistant efficiently enough to reduce the need for additional servers as demand for the applications grows, Sanders said.
If Nvidia can fulfill its promise to make Arm CPU designs more power-efficient under AI applications, or can cut the cost for cloud providers of developing chips customized to their own platforms, it could expect an enthusiastic response from cloud providers, which care more about cost and power efficiency than about name-brand chips, Teich said.
"Having their own chips puts AWS in control of its own destiny in terms of deploying a major class of [AI] inferencing tasks on their own hardware," Teich said. "Ultimately, all the cloud providers are looking to avoid the Intel tax."