After OpenAI’s Chaos, Anthropic Has An Opportunity (And Its Own Untraditional Board)

Life got interesting for Anthropic two weeks ago when OpenAI nearly lit itself on fire. Anthropic had been operating comfortably in OpenAI’s shadow, collecting billions in investment from Amazon, Google, and others as it developed similar technology with an increased focus on safety. Then, as OpenAI’s chaos rolled on, companies that had built entirely on GPT-4 looked for a hedge. And Anthropic was there waiting for them.

Anthropic is now in prime position to take advantage of OpenAI’s misstep, but it has its own untraditional board structure to contend with. The company is a Public Benefit Corporation, with a board that serves its shareholders. But a separate Long-Term Benefit Trust (LTBT) will select most of its board members over time, with a mandate to focus on AI safety. The Trust has no direct authority over the CEO, but it can influence the company’s direction, setting up another novel governance structure in an industry now painfully aware of them.

“The LTBT could not remove the CEO or President (or any other employee of Anthropic),” an Anthropic spokesperson told me. “The LTBT elects a subset of the board (presently one of five seats). Even once the LTBT’s authority has fully phased in, and it has appointed three of five director seats, the LTBT would not be able to terminate employees.”

Several OpenAI employees left in late 2020 to start Anthropic. With serious technical ability and concern about the dangers of AI, the group raised $7 billion, expanded to around 300 employees, and built Claude, an AI chatbot and underlying large language model. Anthropic now works with 70% of the largest banks and insurance companies in the U.S. and has high-profile clients including LexisNexis, Slack, and Pfizer. It announced billion-dollar investments from Google and Amazon this fall.

The founders of Anthropic claim to be even more concerned with safety than OpenAI, but were aware of the pitfalls of their ex-employer’s board structure. So they created a traditional board responsible to shareholders and installed the LTBT to pick board members — a departure from OpenAI’s non-profit model.

The Trust consists of “five financially disinterested members” there to help “align our corporate governance with our mission of developing and maintaining advanced AI for the long-term benefit of humanity,” the company said. Effectively, it’s an effort to sync Anthropic’s governance with its mission while insulating the company from dogmatic chaos.

“Stability is a key,” said Gillian Hadfield, a University of Toronto law professor and ex-OpenAI policy advisor who spoke with Anthropic as it was structuring the Trust. “They don’t want their company to fall apart.”

The Trust is not risk-free. Board members will have responsibilities to shareholders, but they won’t easily forget those who nominated them and why they did it. They’ll have to find a way to balance the two. The structure should make Anthropic more stable than OpenAI but not entirely immune to a repeat of the Altman situation.

“Could you see it happening with Anthropic? Yes, I think we could,” Hadfield said. “I’m proud and supportive of the fact that these companies are thinking deeply and structurally about the potential risks. And they’re thinking about how would we distribute the benefits.”

Anthropic’s leadership is also close to the Effective Altruism movement, which has ties to ex-FTX CEO Sam Bankman-Fried as well as to some of the board members who ousted Altman two weeks ago. The Long-Term Benefit Trust has at least two members connected to Effective Altruism. Paul Christiano, the founder of the Alignment Research Center, is a prolific writer on EA forums. Zach Robinson, the interim CEO of Effective Ventures US, runs a firm tied directly to the movement.

Many Effective Altruists subscribe to a philosophy called Longtermism, which holds that the lives of people deep in the future are as valuable as lives today. So they tend to approach AI development with exceptional caution. The theory sounds righteous on the surface, but its critics contend that the state of the world generations from now is hard to predict, and that acting on such speculation can lead longtermists to behave rashly.

Yale Law School’s John Morley, who helped architect the Trust’s structure, declined to comment. Amazon declined to comment. Google didn’t respond to a request for comment. Amy Simmerman, a partner at Wilson Sonsini Goodrich & Rosati who also worked on developing the Trust, didn’t respond.

Anthropic’s governance should be stable enough to make customers feel comfortable working with the company, at least in the coming years. That’s a significant benefit after OpenAI’s chaos showed the risks of betting on a single model. And those betting on the company seem to be aware of its structure and happy it’s in place, even if it adds some uncertainty.

“This long-term benefit trust is a little bit different. My sense is there’s some level of security in implementing something like that,” said Paul Rubillo, an investor who participated in Anthropic’s Series C round in May. “We’re in uncharted waters, right?”