Explaining what effective altruism is, where it came from, or what its adherents believe would fill the rest of this article. But the basic idea is that EAs (as effective altruists are called) believe you can use cold, hard logic and data analysis to determine how to do the most good in the world. It's "Moneyball" for morality, or, less charitably, a way for hyper-rational people to convince themselves that their values are objectively correct.
Effective altruists were once primarily concerned with near-term issues such as global poverty and animal welfare. But in recent years, many have shifted their focus to long-term issues like pandemic prevention and climate change, theorizing that preventing disasters that could end human life altogether is at least as good as addressing present-day miseries.
The movement's adherents were among the first people to worry about the existential risk of artificial intelligence, back when rogue robots were still considered a sci-fi cliché. They beat the drum so loudly that some young EAs decided to become AI safety experts, taking jobs aimed at making the technology less risky. As a result, all of the major AI labs and safety research organizations bear some trace of effective altruism's influence, and many count believers among their staff members.
No major AI lab embodies the EA ethos as fully as Anthropic. Many of the company's early hires were effective altruists, and much of its initial funding came from wealthy EA-affiliated tech executives, including Dustin Moskovitz, a co-founder of Facebook, and Jaan Tallinn, a co-founder of Skype. Last year, Anthropic received a check from the most famous EA of all: Sam Bankman-Fried, the founder of the failed crypto exchange FTX, who invested more than $500 million in Anthropic before his empire collapsed. (Mr. Bankman-Fried is awaiting trial on fraud charges. Anthropic declined to comment on his investment, which is reportedly tied up in FTX's bankruptcy proceedings.)
Effective altruism's reputation took a hit after Mr. Bankman-Fried's downfall, and Anthropic distanced itself from the movement, as did many of its employees. (Both Mr. Amodei and Ms. Amodei rejected the movement's label, though they said they were sympathetic to some of its ideas.)
But the ideas are there if you know what to look for.
Some Anthropic staff members use EA-inflected jargon, talking about concepts like "x-risk" and memes like the AI Shoggoth, or wear EA conference swag to the office. And there are so many social and professional ties between Anthropic and prominent EA organizations that it's hard to keep track of them all. (Just one example: Ms. Amodei is married to Holden Karnofsky, a co-chief executive of Open Philanthropy, an EA grantmaking organization whose senior program officer, Luke Muehlhauser, sits on Anthropic's board. Open Philanthropy, in turn, gets the lion's share of its funding from Mr. Moskovitz, who also personally invested in Anthropic.)
For years, no one questioned whether Anthropic's commitment to AI safety was genuine, in part because its leaders had been sounding the alarm about the technology for so long.