Anthropic CEO Dario Amodei’s 20,000-word essay arguing AI ‘will test’ humanity is a must-read—but his remedies are more noteworthy than his warnings

Dario Amodei, CEO of AI firm Anthropic, released a 20,000-word essay on Monday titled The Adolescence of Technology. In it, he warned that AI is about to “test who we are as a species” and that “humanity is on the cusp of receiving almost unthinkable power—and it’s far from clear whether our social, political, and technological systems are mature enough to handle it.”

The essay has created a massive buzz on social media. But it’s important to note what’s new here and what isn’t.

Amodei has been worried about AI’s catastrophic risks for years. He has warned about AI assisting in the development of biological or chemical weapons, powerful AI breaking free of human control, the potential for widespread job losses as AI grows more capable and is adopted across more industries, and the dangers of concentrated power and wealth as AI use expands.

In his latest essay, Amodei repeats all these concerns—sometimes using sharper language and shorter timeframes for when he expects these risks to occur. Headlines about the essay have, understandably enough, focused on Amodei’s direct outline of AI risks.

Among AI companies, Anthropic is known for having perhaps the strongest focus on AI safety—a priority that has actually helped it gain commercial momentum with large businesses, as previous coverage of Amodei’s firm has detailed. That’s because many of the steps Anthropic has taken to ensure its models don’t pose catastrophic risks to humanity have also made them more reliable and manageable—traits most businesses care about.

So in many respects, Amodei’s essay is as much a novella-length marketing pitch as it is a passionate prediction and call to action.

This isn’t to say Amodei is being insincere. It’s just to highlight that his essay serves multiple purposes: what he believes is needed to safeguard humanity’s future as AI progresses aligns closely with Anthropic’s current brand position in the market. For example, it’s notable how often Amodei mentions the “constitution” Anthropic developed for its AI model Claude as a key factor in reducing various risks—from bioterrorism to the model escaping human control. The constitution is one feature that sets Anthropic’s models apart from those of competitors such as OpenAI and Elon Musk’s xAI.

More notable than the risks Amodei outlines in the essay are the concrete solutions he advocates for. For example, he argues that wealthy people have a duty to help society deal with AI’s potential economic impacts—including supporting those who might lose their jobs to AI. He states that all of Anthropic’s cofounders have promised to donate 80% of their wealth to charity. Additionally, Anthropic employees have individually pledged billions of dollars in company shares to nonprofits, and the firm is matching those donations.

He criticizes other Silicon Valley figures for not taking similar steps, saying: “It saddens me that many wealthy individuals (especially in tech) have recently adopted a cynical, nihilistic view that philanthropy is inherently fraudulent or useless.”

Amodei suggests that AI companies—like his own—should collaborate with enterprise clients to guide them toward AI uses that create value through new business lines and revenue growth, not just by cutting labor costs. “Enterprises often choose between ‘cost savings’ (doing the same work with fewer people) and ‘innovation’ (doing more with the same number of people),” Amodei writes. “The market will eventually produce both, and any competitive AI company must cater to both. But there may be space to nudge companies toward innovation when possible—and that could buy us some time. Anthropic is actively exploring this.”

He also says businesses have a responsibility to find creative ways to reassign employees whose jobs are disrupted by AI instead of just laying them off. He raises the idea that “in the long run, in a world with massive total wealth—where many companies grow significantly in value due to higher productivity and concentrated capital—it might be possible to pay employees even after they stop contributing traditional economic value.” And he notes that Anthropic is considering several “potential paths” for its own employees, which it will share publicly later.

Finally, Amodei calls for government action to redistribute wealth. He says the most straightforward approach would be a progressive tax system—either general or targeted at the excessive profits he expects AI companies to soon generate. (Currently, Anthropic and most other AI-focused firms are deeply unprofitable, though Anthropic told investors last year it aims to break even by late 2028.)

To wealthy groups that would fight such a tax, Amodei offers a “pragmatic case to the world’s billionaires: it’s in their interest to back a well-designed tax. If they don’t, they’ll end up with a bad one created by a mob.”

Headlines about Amodei’s essay have naturally centered on his prediction that 50% of entry-level white-collar jobs will be lost within one to five years. He made the same forecast last week at the World Economic Forum in Davos, but his comments were overshadowed by coverage of U.S. President Donald Trump’s speech at the event.

Amodei also writes in the essay that AI with human-level capabilities across all areas will arrive within the next two years—his most explicit prediction yet of when this landmark moment in computing history will happen. (He explains that this human-level AI will take time to diffuse through society, which is why he expects it to displace only 50% of entry-level knowledge workers within five years.)

Science fiction author William Gibson famously said: “The future is already here—it’s just not evenly distributed.” The same caveat applies to Amodei’s forecasts: while he has mostly been right about when certain AI capabilities would emerge, he hasn’t always been accurate about their impacts.

For example, early last year, Amodei claimed that AI would write up to 90% of software code within six to nine months. That turned out to be mostly true for Anthropic itself—the company recently revealed its Claude CoWork tool was almost entirely written by Claude. But it wasn’t accurate for the broader software industry: at most other businesses, AI writes about 20% to 40% of code (though that figure is rising, up from nearly 0% three years ago). So Amodei isn’t a perfect prophet, but he’s still someone worth listening to.