At Davos, the hype around AI gives way to a focus on return on investment

Hello and welcome to Eye on AI. In this edition…a report from Davos…OpenAI is ‘on track’ to launch a device in 2026…the Anthropic CEO on China chip sales…and is Claude Code Anthropic’s ChatGPT moment?

Hi. I’m in [location], Switzerland, this week for the World Economic Forum. U.S. President Donald Trump’s visit tomorrow is the main topic of conversation here. But when people aren’t talking about Trump and the tariffs he has imposed on European allies that oppose his attempt to take control of Greenland from Denmark, they’re talking a great deal about AI.

The promenade in this ski town turns into a tech trade show during the World Economic Forum. The logos of well-known software companies and consulting firms are plastered on shopfronts, and the signage promotes various AI products. But while last year’s Davos was filled with hype around AI agents, and with anxiety that DeepSeek’s R1 model (which debuted during the event) might render the capital-intensive plans of U.S. AI companies obsolete, this year’s AI conversations feel more sober and practical.

The business leaders I’ve talked to here in Davos are more focused than ever on how to get business returns from their AI spending. The era of pilot projects and experimentation seems to be coming to an end. So is the time of just imagining what AI can do. Many CEOs now realize that implementing AI on a large scale is neither easy nor inexpensive. Now, there’s much more attention on practical advice for using AI to have an impact across the enterprise. (But there’s still a bit of idealism here, as you’ll see.) Here are some of the things I’ve heard in conversations so far:

CEOs take control of AI deployment

There’s broad agreement that the bottom-up approaches—giving every employee access to ChatGPT or [other tools], for example—which were popular at many companies two years ago at the start of the generative AI boom, are now in the past. Back then, CEOs assumed that front-line workers, being closest to the business processes, would know best how to use AI to make those processes more efficient. That turned out to be wrong—or, more precisely, the benefits were hard to measure and rarely led to significant changes in either revenue or profit.

Instead, top-down, CEO-led initiatives aimed at transforming core business processes are now considered essential for getting a return on investment from AI. Jim Hagemann Snabe, the chairman of [company] and former co-CEO at [company], told a group of fellow executives at a breakfast discussion I moderated here in Davos today that CEOs need to be “decisive” in identifying where their businesses should use AI and in pushing those initiatives forward. Similarly, both Christina Kosmowski, the CEO of IT and business data analytics company LogicMonitor, and Bastian Nominacher, the co-founder and co-CEO of process mining software company Celonis, told me that board and CEO support is an essential part of enterprise AI success.

Nominacher had some other interesting points. According to research commissioned by Celonis, companies that set up a center of excellence to figure out how to optimize work processes with AI saw an eight-times better return than companies that didn’t. He also said that having data in the right place is essential for successful process optimization.

The race to become the orchestration layer for enterprise AI agents

There’s clearly a competition among SaaS companies to become the new interface layer for AI agents working in companies. Carl Eschenbach, Workday’s CEO, told me that he thinks his company is well-positioned to become “the main entrance to work” not only because it has access to key human resources and financial data but also because the company already manages onboarding, data access, permissions, and performance management for human workers. Now it can do the same for AI agents.

But others also want this opportunity. Srini Tallapragada, Salesforce’s chief engineering and customer success officer, told me how his company is using “forward-deployed engineers” at 120 of Salesforce’s largest customers to bridge the gap between customer problems and product development. They’re learning the best way to create agents for specific industry sectors and functions, which they can then offer to Salesforce’s broader customer base. Judson Althoff, Microsoft’s commercial CEO, said that his company’s Data Fabric and Agent 365 products are becoming popular among big companies that need an orchestration layer for AI agents and a unified way to access data stored in different systems and silos without having to move the data to a single platform. Meanwhile, Snowflake CEO Sridhar Ramaswamy thinks that his company’s deep expertise in maintaining cloud-based data pools and controlling access to that data, combined with its new ability to create its own AI coding agents, makes it ideally suited to win the race to be the AI agent orchestrator. Ramaswamy told me his biggest concern is whether Snowflake can move fast enough to achieve this goal before OpenAI or Anthropic move down the technology stack—from AI agents to data storage—potentially replacing Snowflake.

Here are a couple more insights from Davos so far: while there’s still a lot of fear that AI will cause widespread job losses, those losses haven’t shown up in economic data yet. In fact, Svenja Gudell, the chief economist at recruiting site Indeed, told me that although the tech sector has seen a large drop in jobs since 2022, this trend started before the generative AI boom and is probably due to companies “right-sizing” after the large-scale hiring of the pandemic rather than to AI. And while many industries aren’t hiring much at the moment, Gudell says global macroeconomic and geopolitical uncertainty are to blame, not AI.

Finally, in response to one of this week’s major AI news stories, Snabe, the Siemens chairman, had an interesting answer to the question of how AI should be regulated. He said that instead of trying to regulate specific AI use cases, as the EU AI Act does, governments should more generally require that AI align with human values. And the most important regulation to ensure this, he said, would be a ban on AI business models based on advertising. Ad-based AI models, he argued, will push companies to maximize user engagement, with all the negative effects on mental health and democratic consensus that we’ve seen from social media, but much worse.

With that, here’s more AI news.

Jeremy Kahn

@jeremyakahn

Beatrice Nolan wrote the news and subsections of Eye on AI.