By Kenrick Cai
LAS VEGAS, April 22 (Reuters) – Google is deepening its push into enterprise software, signaling to investors at its annual cloud conference that AI agents, human-like digital assistants, are a linchpin of its strategy to monetize artificial intelligence.
At the three-day conference in Las Vegas that starts Wednesday, Alphabet CEO Sundar Pichai and key Google executives will seek to position the company's AI tools as production-ready infrastructure for enterprise customers, who are emerging as the industry's most reliable revenue stream.
Other top AI companies including OpenAI and Anthropic have aggressively shifted resources to business customers in recent months.
Mountain View, California-based Google announced on Wednesday that it was unifying a set of AI products under the name “Gemini Enterprise.” Most notably, this involves rebranding and bulking up Vertex AI, a tool that allows cloud customers to select from a variety of AI models to use for business purposes.
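The core idea behind Vertex AI's model catalog — routing different business tasks to different underlying models — can be sketched in a few lines. This is a hypothetical illustration, not the real Vertex AI API; the catalog entries, model names, and `pick_model` function are all invented for the example.

```python
# Hypothetical sketch of per-task model selection, loosely mirroring the idea
# behind a model catalog like Vertex AI's. Names below are illustrative only.

MODEL_CATALOG = {
    "summarize": "gemini-pro",    # illustrative model identifiers,
    "code": "claude-sonnet",      # not real catalog entries
    "default": "gemini-flash",
}

def pick_model(task: str) -> str:
    """Return the catalog entry for a task, falling back to a default model."""
    return MODEL_CATALOG.get(task, MODEL_CATALOG["default"])
```

In practice the catalog would hold deployed model endpoints rather than strings, but the dispatch pattern — one registry, many interchangeable models — is the same.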
Google also announced a set of new governance and security features for AI agents. Agents are powerful digital assistants that can plan, decide, and act autonomously, a fast-growing field that has sparked worries over safety, reliability and oversight.
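The plan-decide-act loop described above, and the governance concern that comes with it, can be illustrated with a minimal sketch. Everything here is hypothetical — the function names, the allowlist, and the stubbed planner are invented for the example and do not reflect Google's actual products; the `guardrail` check simply stands in for the kind of policy controls vendors are now attaching to agents.

```python
# Illustrative plan/decide/act agent loop with a governance guardrail.
# All names are hypothetical; this is not a Google or Vertex AI API.

ALLOWED_ACTIONS = {"search_inventory", "draft_email"}  # governance allowlist

def plan(goal):
    """Break a goal into candidate actions (stubbed for illustration)."""
    return [("search_inventory", goal), ("delete_database", goal)]

def guardrail(action):
    """Policy check: block any action outside the allowlist."""
    name, _ = action
    return name in ALLOWED_ACTIONS

def act(action):
    name, arg = action
    return f"executed {name} for {arg!r}"

def run_agent(goal):
    results = []
    for action in plan(goal):
        if guardrail(action):   # decide: act only when policy allows
            results.append(act(action))
        else:
            results.append(f"blocked {action[0]} by policy")
    return results
```

Running `run_agent("restock washers")` would execute the permitted inventory search and block the disallowed database action — the autonomy that makes agents useful is exactly why the oversight step matters.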
“There’s definitely a strategic shift as the models become much more sophisticated,” Google Cloud CEO Thomas Kurian told Reuters in an interview. The primary use case of Vertex AI recently shifted from “old-style machine learning” to a sudden explosion in users building their own custom AI agents, Kurian said.
Google is seeking to outflank both its traditional cloud rivals and AI upstarts as pressure mounts to prove returns on massive generative AI spending.
Google Cloud, once seen as a laggard to rivals such as Amazon and Microsoft, has gained traction with enterprise customers, powered by massive bets on AI and years of heavy investment in data centers, custom chips, and networking gear.
At GE Appliances, that shift is already tangible. Marcia Brey, a senior executive and Google customer, told Reuters that Google’s suite of tools and the enterprise data already stored in Google Cloud allowed her logistics and distribution team to deploy AI faster compared with other products the company had tested.
NEW GOOGLE CHIPS
The company unveiled two new custom tensor processing units (TPUs) on Wednesday, called the TPU 8t and 8i.
“Both have been sort of architected and designed end to end for (what’s) called the age of agents, and sort of the unique requirements of agent-based solutions and applications,” Google’s Vice President and General Manager of Compute and AI Infrastructure Mark Lohmeyer said in an interview with Reuters.
Google designed the TPU 8t for training the large language models that underpin chatbots such as Anthropic's Claude. The training chips are deployed in pods of 9,600 that can be linked to scale to 134,000 chips, the company said, and combined with other Google technology can be strung together into clusters of 1 million chips for the largest training jobs.
Google's TPU 8i is tuned for the type of computing needed to generate instant responses from AI agents, a process known as inference. The company boosted on-chip memory to achieve the improved performance, and said the chip delivers 80% better performance on latency-sensitive inference tasks than the prior generation, called Ironwood.
AGENTS OVER CODING
In addition to traditional enterprise providers and other hyperscalers, a new class of competitors is quickly emerging in enterprise AI: model providers.
So far, coding assistants and plug-ins that connect AI models to existing enterprise software have emerged as the model providers' most lucrative channels for AI revenue and for recouping their heavy investments.
After early success powered by the raw strength of their models, OpenAI and Anthropic are now pushing downstream, marshalling resources into applications that utilize those models to perform specialized tasks, including agent-building tools.
But while rivals are pushing hard on their coding products, Google, by contrast, kept coding largely out of the spotlight at its cloud conference. Kurian instead cast the AI battleground as one defined by agents, governance and enterprise deployment, saying that some coding announcements were being held back for its I/O developer conference in May.
“Some people are using the models to write code. They can use Gemini and also other tools like Claude,” he said. “But in other cases, we have unique things. There’s capability in the platform that nobody else offers.”
The long-term bet on building out a vast suite of in-house offerings, from models to chips, rather than relying on third-party vendors, has given Google an edge over other large cloud providers.
This has helped Google to grow its overall cloud market share to 14% at the end of 2025, though it still trails rivals Amazon and Microsoft, according to data from Synergy Research.