AI: A Coup, a Cult, and Corporate Governance

Sam Altman, the CEO of OpenAI, speaks during a talk at Tel Aviv University in Tel Aviv, Israel, June 5, 2023. (Amir Cohen/Reuters)

The week of November 20, 2023: The OpenAI wars, Argentina’s election, nuclear power, electric vehicles, and much, much more.

On Friday, November 17, Sam Altman, the CEO of OpenAI, was fired. OpenAI is the artificial intelligence company formed to engineer an AGI (artificial general intelligence), an automated system that could reason as well as Homo sapiens, before, presumably, overtaking the old dullard. Ambitious!

So far OpenAI has been best known, beyond Silicon Valley anyway, for ChatGPT, but in the last week or so it has made the headlines with a boardroom battle. The statement explaining Altman’s forced exit was opaque (he had allegedly not been “consistently candid” with the board), but the wording of the closing paragraph told those who had been following OpenAI what the real issue was:

OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity. In 2019, OpenAI restructured to ensure that the company could raise capital in pursuit of this mission, while preserving the nonprofit’s mission, governance, and oversight. The majority of the board is independent, and the independent directors do not hold equity in OpenAI. While the company has experienced dramatic growth, it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter.

The company’s founders, who had included Altman, Ilya Sutskever (who became OpenAI’s chief scientist), the inevitable Elon Musk, and others (science fiction devotees all, if I had to guess), regarded AI, and then AGI, as a technology that could potentially deliver miracles but — Skynet waits in the shadows — might also wipe us all out. OpenAI was thus set up to benefit “all” of humanity, a task that raises definitional problems (who is to say what “benefit” is?), except when it comes to avoiding our extinction, an objective that most — some hard-line environmentalists aside — would share. Given the founders’ fears about AI/AGI, “safety” was to be OpenAI’s prime directive. That final paragraph includes a clear hint that, in the board’s view, “dramatic growth” was threatening what OpenAI was meant to stand for. To understand what (briefly) tore the company apart, look there.

The issue of reconciling the company’s principles with the need to attract the capital it would need to grow had already been (it was thought) dealt with by the formation of the for-profit vehicle referred to in the statement. The profits (or their possibility) could attract the investors and employees OpenAI would need if it were to grow, but, as detailed on the company’s website, they would be capped:

The for-profit’s equity structure would have caps that limit the maximum financial returns to investors and employees to incentivize them to research, develop, and deploy AGI in a way that balances commerciality with safety and sustainability, rather than focusing on pure profit-maximization.

That would normally make some would-be investors pause, as would the fact that the nonprofit’s board, with its obligation to “humanity,” would be in overall charge, but during periods of mania, prudence can fly out of the window.

Semafor (November 21): 

“People get hung up on structure,” Vinod Khosla, whose venture capital firm was among the first to invest in OpenAI’s for-profit subsidiary in 2019, said at an AI conference last week. “If you’re talking about changing the world, who freaking cares?”

Within a day or two of Khosla making that remark, the fault lines embedded into OpenAI by its structure were tearing the company apart. 

Microsoft had not been that bothered by the structure either. It first invested $1 billion in OpenAI and then invested billions more (up to an additional $12 billion). It now owns 49 percent of the company, but, wary of attracting the attention of antitrust enforcers, has no board representation.

However, as reported in the New York Times (November 20), it had taken some precautions:

But as a hedge against not having explicit control of OpenAI, Microsoft negotiated contracts that gave it rights to OpenAI’s intellectual property, copies of the source code for its key systems as well as the “weights” that guide the system’s results after it has been trained on data, according to three people familiar with the deal, who were not allowed to publicly discuss it…

This, crucially, gave Microsoft the clout it would need after Altman was pushed out. As the Times reported, OpenAI was quickly reminded that “it needed Microsoft far more than Microsoft needed OpenAI. Microsoft developed and provided the vast computing power that runs OpenAI, and negotiated a slate of legal and commercial deals to protect it if something went wrong there.”

That was just as well. A power struggle had erupted between, to borrow the description used by Karen Hao and Charlie Warzel in The Atlantic, “the company’s two ideological extremes—one group born from Silicon Valley techno-optimism, energized by rapid commercialization; the other steeped in fears that AI represents an existential risk to humanity and must be controlled with extreme caution.” Both factions shared the conviction that OpenAI would deliver AGI, and both believed that AGI could be dangerous, but Altman’s team was focused more on growth. A group gathered around Sutskever, tapping, it seems to me, into intriguingly primeval fears about forbidden knowledge, had different priorities.

Hao and Warzel:

Anticipating the arrival of this all-powerful technology, Sutskever began to behave like a spiritual leader, three employees who worked with him told us. His constant, enthusiastic refrain was “feel the AGI,” a reference to the idea that the company was on the cusp of its ultimate goal. At OpenAI’s 2022 holiday party, held at the California Academy of Sciences, Sutskever led employees in a chant: “Feel the AGI! Feel the AGI!”…

The more confident Sutskever grew about the power of OpenAI’s technology, the more he also allied himself with the existential-risk faction within the company. For a leadership offsite this year, according to two people familiar with the event, Sutskever commissioned a wooden effigy from a local artist that was intended to represent an “unaligned” AI—that is, one that does not meet a human’s objectives. He set it on fire to symbolize OpenAI’s commitment to its founding principles. In July, OpenAI announced the creation of a so-called superalignment team with Sutskever co-leading the research. OpenAI would expand the alignment team’s research to develop more upstream AI-safety techniques with a dedicated 20 percent of the company’s existing computer chips, in preparation for the possibility of AGI arriving in this decade, the company said.

The wooden effigy was purely symbolic, but it was also evidence of a kind of superstitious dread rarely associated with an engineering project. This doesn’t appear to have gotten in the way of developing ChatGPT and other products. There was, however, reportedly some irritation in the growth camp about the extent of the resources to be devoted to the “superalignment” effort referred to above.

And now, wait for it, “Effective Altruism” (EA), a cult, or something adjacent to one, comes into the story. “Alignment” — working to ensure that AI follows human orders, and does so in ways compatible with (unspecified) human values — has become something of an obsession with believers in EA, a once not nutty idea that, in somewhat mutated form, has taken root in the AI sector. According to the New York Times (November 18), the Rationalist and Effective Altruist movements make up “a community that is deeply concerned that A.I. could one day destroy humanity. Today’s A.I. technology cannot destroy humanity. But this community believes that as the technology grows increasingly powerful, these dangers will arise.”

Altman has described EA as “incredibly flawed” and showing “very weird emergent behavior.” EA is also connected to a notion known as long-termism, which Richard Waters and John Thornhill explain in the Financial Times:

[L]ong-termism is based on a belief that the interests of later generations, stretching far into the future, must be taken into account in decisions made today. The idea is an offshoot of effective altruism, a philosophy shaped by Oxford university philosophers William MacAskill and Toby Ord, whose adherents seek to maximise their impact on the planet to do the most good. Long-termists aim to lessen the probability of existential disasters that would prevent future generations being born — and the risks presented by rogue AI have become a central concern.

All this conjures up visions of philosopher-kings (with, it seems, prophetic powers, but little in the way of humility) with a worldview dominated by paranoia, apocalypticism, and immense self-regard. Accepting long-termism would be a recipe for technological stagnation, both societally and at the corporate level, and unfortunately OpenAI’s unusual structure had opened the door for these ideas to be embedded in the company at a high level.

Sutskever is known to be sympathetic to EA. That’s foolish, but senior executives can get away with foolish ideas if they have enough to offer a company. The real problem rested with the nonprofit’s independent directors, who had no significant interest in the company’s economic development. This was underlined by the fact that they had no holdings in its equity (a restriction not necessarily required of independent directors). As independent directors, they were meant to act, in some sense, as OpenAI’s conscience. That would normally be fine, but, given the company’s overriding mission, it could represent a severe risk to those who had invested in OpenAI for dollars, not, uh, “humanity.”

Three board members were closely involved with EA, including Helen Toner, a director of strategy at Georgetown University’s Center for Security and Emerging Technology. She has been reported by the New York Times as saying that the board’s mission was to ensure that the company creates artificial intelligence that “benefits all of humanity.” If the company was destroyed, she maintained, that could be consistent with its mission, words unlikely to prove reassuring to financial investors. 

Interestingly, Toner has claimed that AI is closer to alchemy than to rocket science, an indication, even if only as a metaphor, of the encroachment of magical thinking into this area. When the time came, she voted for Altman’s ouster, as did another board member with ties to EA, RAND Corporation scientist Tasha McCauley.

Not for nothing are many of those who have these fears about AI known as “doomers.” Some of their thinking was clearly visible in the recent call for a “pause” in certain types of AI development, a call that, if heeded (and if they were right about AI’s destructive capability), could have benefited the likes of Xi and Putin and other undeserving or dangerous types. As I have noted in earlier articles, there are legitimate reasons for concern about AI. But apocalypticism ought to have no place in a boardroom.

As mentioned above, OpenAI’s success increased the divide between those who worried about where AI was headed and those who, led by Altman, wanted to press on. Matters came to a head, and the board voted Altman out.

The Atlantic article appeared on November 19, at the halfway point in this drama. All Hao and Warzel could really do was guess about what would happen next:

The company was founded in part by the very contingent that Sutskever now represents—those fearful of AI’s potential, with beliefs at times seemingly rooted in the realm of science fiction…

Altman’s firing can be seen as a stunning experiment in OpenAI’s unusual structure. It’s possible this experiment is now unraveling the company as we’ve known it, and shaking up the direction of AI along with it. If Altman had returned to the company via pressure from investors and an outcry from current employees, the move would have been a massive consolidation of power. It would have suggested that, despite its charters and lofty credos, OpenAI was just a traditional tech company after all.

Spoiler: That, mercifully, now appears to be well underway. 

By November 22, Altman was back as CEO, helped by a staff revolt and, critically, backing from Microsoft, which had been given little advance warning of his firing, and was not best pleased. Chaos within OpenAI, a company into which it had invested billions, could make a mess of its plans to integrate AI into much of its work. It had promptly hired (or begun hiring) Altman and OpenAI’s president to lead a new advanced AI unit within Microsoft, and had offered jobs to other OpenAI employees who wanted to come over.

And so, the New York Times reported on November 21:

Mr. Sutskever and others critical of Mr. Altman were jettisoned from the board, whose members now include Bret Taylor, an early Facebook officer and former co-chief executive of Salesforce, and Larry Summers, the former Treasury Department secretary. The only holdover is Adam D’Angelo, chief executive of the question-and-answer site, Quora.

The OpenAI debacle has illustrated how building A.I. systems is testing whether businesspeople who want to make money from artificial intelligence can work in sync with researchers who worry that what they are building could eventually eliminate jobs or become a threat if technologies like autonomous weapons grow out of control.

There’s been a debate as to how genuine those fears really were. Some suspect that leading AI players were trying to frighten credulous or rent-seeking politicians into establishing a regulatory regime that favored incumbents. As the New York Times reported on May 30:

In a blog post…. Mr. Altman and two other OpenAI executives proposed several ways that powerful A.I. systems could be responsibly managed. They called for cooperation among the leading A.I. makers, more technical research into large language models and the formation of an international A.I. safety organization, similar to the International Atomic Energy Agency, which seeks to control the use of nuclear weapons.

Mr. Altman has also expressed support for rules that would require makers of large, cutting-edge A.I. models to register for a government-issued license.

None of these proposals sound particularly friendly to new entrants into the industry. 

At the same time, there seems little doubt that some at OpenAI were truly worried about the threat that AI could pose to humanity (and whatever his motives, it should be noted that Altman has publicly put himself in that camp on more than one occasion). 

Nevertheless, it wouldn’t do to overstate this fear: 

Before Mr. Altman’s return, the company’s continued existence was in doubt. Nearly all of OpenAI’s 800 employees had threatened to follow Mr. Altman to Microsoft…

Those do not strike me as the actions of people who think that AI might bring about the end of the world. Interestingly, Sutskever has since tweeted his own satisfaction that the crisis had passed. He remains with OpenAI, although he no longer sits on the board.

Writing for the Financial Times, John Thornhill, who is a founder of Sifted, an FT-backed site about European start-ups, lamented the failure of OpenAI’s corporate structure:

It would be tempting to conclude that OpenAI’s experiment of having a not-for-profit board, responsible for ensuring the safe development of AI, overseeing a for-profit commercial business, should be scrapped. 

It’s a temptation that should be eagerly accepted. 

Thornhill: 

Microsoft…has already called for the governance structure to change. And OpenAI’s rapidly reconstituted three-man board, which now includes a former chief executive of Salesforce and a former US Treasury secretary, seems better suited to carrying out traditional fiduciary responsibilities. Yet it would be a tragic mistake to abandon attempts to hold the leadership of the world’s most important artificial intelligence companies to account for the impact of their technology. Even the bosses of those companies, including Altman, accept that, despite AI’s immense promise, it also poses potentially catastrophic risks.

Well, some of those bosses may have drunk rather too much of their own Kool-Aid. Others may have been making it up as they went along. The best way ahead is not for AI to be supervised by a self-appointed priesthood that believes that it knows best what is good for humanity. Such an arrangement would inevitably mean that much of AI’s promise would either be squandered — or be exploited by our competitors. AI is not going to be uninvented. And the more that AI is developed (an inevitable side effect of commercialization), the more we will know about what can go wrong (and how to fix it), and how to prevent it being used to gain an advantage over us. We can be sure that the Chinese, say, or the Russians will not let long-termism or EA get in the way of their AI programs. The more that AI is developed in the U.S., the more it can be used to bolster America’s defenses. 

Thornhill frets that it “would be a tragic mistake to abandon attempts to hold the leadership of the world’s most important artificial intelligence companies to account for the impact of their technology.” But who is talking about doing that? The managements of these companies, like their counterparts in most businesses, are answerable to shareholders, the rule of law, and regulatory supervision, the last of which seems bound to increase.

Not only that, but the mere fact of having their products out in the marketplace will expose them to a high degree of scrutiny, and the financial rewards that flow from success ought to incentivize both superior products and good behavior. There will, of course, be exceptions, hence the need for a rulebook. The key will be to ensure that the regulatory structure is one mostly designed to build guardrails rather than roadblocks.

If AI takes off as some now predict, it may trigger major (and perhaps significantly destabilizing) changes to the jobs market. But no single company can manage that transition. Much like the regulatory structure within which AI companies will have to operate, that will, quite properly, be the responsibility of government.

The lesson left by the battle over OpenAI is that a traditional corporate structure with traditional corporate objectives (generating a return for shareholders) is the way that AI companies should go. They should not set themselves up as some sort of arbiters for society.

Wisely, Microsoft is looking for ways to increase its presence on OpenAI’s board, whether directly or indirectly, but some of the published commentary on the possible composition of that board shows that changes in OpenAI’s structure should not stop there.

Bloomberg (November 22):

Job one for the interim board will be finding new directors who can strike a better balance between OpenAI’s business imperatives and the need to protect the public from tools capable of creating content that misinforms, worsens inequality or makes it easier for bad actors to inflict violence.

The reconstituted board should reflect greater diversity, said many people, including Ashley Mayer, CEO of Coalition Operators, a venture capital firm. “I’m thrilled for OpenAI employees that Sam is back, but it feels very 2023 that our happy ending is three white men on a board charged with ensuring AI benefits all of humanity,” she wrote on the social media site X. “Hoping there’s more to come soon.”

Coalition Operators has $12.5 million under management, not what you would call a titan in the investment world, but Bloomberg has a corporatist message to promote. Nevertheless, while a company should be run prudentially (and creating content that would make it “easier for bad actors to inflict violence” would, I imagine, fail that test), broader societal issues such as tackling inequality should be a matter for a democratically elected government, not some C-suite.

Bloomberg opinion columnist Parmy Olson, writing on November 24:

When OpenAI’s leaders return to work on Monday they’ll have one thing at the top of their to-do list: figure out what to do about the nonprofit board that nearly killed them.

They’ve already begun setting up a governance structure that will guide them in a more commercial direction, and though that’s great news for OpenAI’s investors, it does fly in the face of its founding principles of prioritizing humanity while building super-intelligent machines. OpenAI’s leadership can do something about that. They must think carefully about the remaining board members they add, and not just to look progressive.

When it comes to the development of AI, the manner in which humanity can be “prioritized” (however that may be interpreted) should be left to governments, not companies, to decide. OpenAI should junk that part of its charter and focus solely (although certainly responsibly) on those narrower, more traditional corporate objectives, such as generating a return for its shareholders, in its case, I’d imagine, through innovation that, if history is any judge, could be good news for humanity. Nevertheless, the final decision on how humanity handles the consequences of that innovation — if they are important enough to justify it — should be left to humanity’s democratically elected representatives, not to some carefully selected, suitably progressive cabal.

The Forgotten Book

Capital Matters has a fortnightly feature, The Forgotten Book, which is written by our new National Review Institute fellow, the writer and historian Amity Shlaes. We live in an age of short attention spans, and one of Amity’s objectives is to introduce readers to books or other primary sources that warrant a second look.

With her Capital Matters column, Amity will dedicate herself to sharing with Capital Matters readers older, forgotten books, along with new books that aren’t getting the attention they perhaps warrant.

Her latest column can be found here, and is focused on the way that greater information about tax rates helped Americans decide where they wanted to live:

Taxes, of course, drive migration as well. Pointing this out 30 Novembers ago would elicit long, absurd lectures about “snowbirds” and the draw of, say, Florida’s climate. Rebuttal became easier in the 1990s, when the Internal Revenue Service teamed up with the Census Bureau, and, perhaps inadvertently, infused some reality into the discussion.

Together, the IRS and Census began to publish, state-to-state, and even county-to-county migration patterns. What emerged were correlations so tight that the role of tax in American migration became harder to deny.

The Capital Record

We released the latest of our series of podcasts, the Capital Record. Follow the link to see how to subscribe (it’s free!). The Capital Record, which appears weekly, is designed to make use of another medium to deliver Capital Matters’ defense of free markets. Financier and National Review Institute trustee David L. Bahnsen hosts discussions on economics and finance in this National Review Capital Matters podcast, sponsored by the National Review Institute. Episodes feature interviews with the nation’s top business leaders, entrepreneurs, investment professionals, and financial commentators.

In the 146th episode, David recaps last week’s debate with Oren Cass on the role of public policy in economic life, offers five principles to consider for the use of the state in a market economy, and identifies five problems with the new right’s newfound fondness for state intervention.

No Free Lunch

Earlier this year, David Bahnsen launched a new six-part digital video series, No Free Lunch, here online at National Review. In it, we bring the debate over free markets back to “first things” — emphatically arguing that only by beginning our study of economics with the human person can we obtain a properly ordered vision for a market economy…

The series began with a discussion with Fr. Robert Sirico of the Acton Institute. Later guests include Larry Kudlow, Dennis Prager, Dr. Hunter Baker, Ryan Anderson, Pastor Doug Wilson, and Senator Ted Cruz. 

Yes, the six-part series now has seven parts. 

Enjoy.

The Capital Matters week that was . . .

Electric Vehicles

Matthew Lau:

Inconvenient evidence continues to mount against the activists who insist the centrally planned transition to electric vehicles will be a smooth, fuel-efficient, inexpensive ride. Sales and consumer demand are falling behind manufacturers’ expectations. Ford just announced that it is delaying $12 billion in planned investments on EVs and that its $3.1 billion loss in its electric division through three quarters of 2023 already exceeds what it originally projected to lose for the entire year. On top of this, a pair of recent studies lays bare that the ongoing government push for electric vehicles will impose severe costs on taxpayers, with relatively meager environmental benefits…

Argentina

Andrew Stuttaford:

Neither Argentina’s political class nor its voters have been known for their fondness for free-market economics. This makes it all the more likely that Milei’s victory was as much a vote against Argentina’s latest economic crisis as one for, uh, anarcho-capitalism. Nevertheless, the country’s economic crisis is the product of more than 70 years (with the occasional interruption) of economic nationalism, characterized, as economic nationalism tends to be, by industrial policy, high tariffs, capital controls, rent-seeking, and all the rest. Reflecting the corporatist strains that ran through Peronist ideology in its original form, labor unions were and are another major element in the Peronist system. The result (which some of America’s “national conservatives” might want to ponder) has been a miserable failure.

Dominic Pino:

[T]hose in the U.S. who might be supportive of Milei’s free-market agenda should also be cautious of imputing too much weight to his victory. Just as his win does not represent an advance in the global tide of authoritarianism, it also does not portend a global free-market movement.

Milei has trained his fire on specific concerns of Argentinian voters related to the economy and political corruption. Argentina faces uniquely awful economic circumstances, with triple-digit inflation and a poverty rate around 40 percent. The current vice president, Cristina Fernández de Kirchner, is basically in office so she can’t be imprisoned for crimes she committed when she was president. The U.S. has economic and corruption problems, but not like that…

Nuclear Power

Pieter Cleppe:

Compared with nuclear power, solar is much cheaper and easier to install. But the problem of designing scalable storage systems capable of dealing with the problem of intermittency for both solar and, for that matter, wind (the sun does not always shine and the wind does not always blow) has yet to be resolved. They are not, at least for now, sufficiently reliable to serve as the bedrock of energy supply. Perhaps batteries and alternatives such as hydrogen may be able to help make up for these shortcomings in the future, but we are not there yet. Nuclear power, by contrast, can serve as the stable backbone of an electricity grid…

Wind Power

Andrew Stuttaford:

The offshore wind industry has very little to do with markets in any real sense but owes almost everything to political or regulatory distortions of the energy market, specifically the imposed replacement of reliable energy sources with one that (until a solution can be found for the intermittency problem — the wind doesn’t always blow) is not only intrinsically unreliable, but will not do much for the climate, and comes with a steep opportunity cost. Moreover, it seems as if the economic viability of some of these projects rested on, among other flawed assumptions, ultra-low interest rates. Who could have thought they might revert somewhat closer to the mean, or that the money-printing of the last decade or so might have inflationary consequences?

Economics

Dominic Pino:

Nobody has to figure out at the national level, or the city level, how many total turkeys are needed ahead of time. It’s an impossible question to answer. That’s why we have markets instead of centrally planned Thanksgiving.

If you start from the point of view that people want turkeys, the problem becomes much easier to solve. If people want turkeys, they’ll make plans on their own to get them. You don’t have to solve the problem of how the turkey gets to their houses. They’ll solve it in the way that makes the most sense to them…

Economic Freedom and Mobility

Steve Hanke & Stephen Walters:

This problem — bad policies forcing people to uproot their lives in pursuit of more freedom in a friendlier political climate — may not be self-correcting. Cato’s rankings, which its authors have been refining for over two decades, are suggestive. New York’s raw freedom score is not only the nation’s worst but has been declining steadily since 2000 (from -0.60 to -0.75); ditto California’s (down from -0.36 to -0.51). By contrast, the influx of freedom-lovers to New Hampshire over the years has raised its score to the highest on record (up from 0.46 in 2000 to 0.73 today).

The lack of meaningful political competition and, thus, freedom in some states can have tragic consequences for their (remaining) residents…

To sign up for The Capital Letter, please follow this link. 
