The Bluestocking: From Napoleon to Elon Musk
A conversation about AI, Leviathan and the hero of the modern world (the taxpayer).
Happy Monday!
Since OpenAI—ultra-buzzy tech unicorn, creator of ChatGPT—has been in the news so much this weekend, I thought I would run a transcript of my recent interview with David Runciman, professor of politics at Cambridge University, about his new book The Handover, on the similarities between states, corporations and AIs. All are agents—they act—and all are bigger than any individual leader.
This transcript was AI-generated, then edited and condensed by me, from our talk at the Cambridge Literary Festival. Since we spoke, the new chair of the OpenAI board tried to bring back the company’s founder, Sam Altman, who was ousted on Friday. When that attempt failed, the interim CEO was pushed out, and Altman got a job offer from Microsoft directly, which he accepted late on Sunday.
Today, 500 out of 720 OpenAI employees have taken Altman’s side in the dispute and have called for the board to resign. It’s still unclear what the original impetus for firing Altman was. It’s a proper clownshow at ostensibly one of the coolest, most promising corporations in the world—which provided a useful place to start talking about how AIs, corporations and states resemble each other, and what democratic politics has to offer in taming them.
Helen Lewis
I'm delighted to be joined by David Runciman, who is a professor of politics at Cambridge University, former host of Talking Politics and current host of a podcast called Past, Present and Future. We are here to talk about his book, The Handover, which takes in a fairly vast sweep of ideas about states, corporations and artificial intelligence, or AI.
I want to start with artificial intelligence because it is a subject that is constantly in the news. On Friday night, we heard that Sam Altman, the very charismatic founder and head of a company called OpenAI, which is behind ChatGPT, was summarily fired by his board in circumstances that no one really understands. What do you think happened there, and what does it illustrate about that fast-changing world of AI?
David Runciman
One feature of the world of AI is that it’s dominated by these very charismatic men—not all men, because he was replaced by a woman at OpenAI—but he had become the public face of AI. He was at Rishi Sunak’s big summit, and ChatGPT has become the sort of non-human face of AI. But then suddenly, out of the blue, this guy who seemed to be riding high gets fired on a Friday evening. By all accounts, he had no idea it was coming. Microsoft, one of the biggest investors, were told one minute before he was fired. And it’s treated as this extraordinary mystery. And it turns out, probably what’s going on is that OpenAI itself is a slightly weird robotic organization, with its own structures and internal wiring. It’s a bit like him being fired by ChatGPT.
Helen Lewis
So OpenAI was founded as a research nonprofit institution, then everyone goes, Oh my God, we struck gold, we’re gonna make a fortune. And then a profit-making division is founded on top of that, but there is then a very odd governance structure where people on the board are not tasked with maximizing shareholder value. They are there to guard the integrity of the product, and also—largely unspoken—make sure it doesn’t kill us all.
David Runciman
No one seems to understand what’s going on. And one of the things I say in the book is that we’ve all been in meetings, where at the end of the meeting, you think, how did that happen? Even though we were in the room where it happened.
And the New York Times said the explanation of this mystery is the weird structure of the company. Because in this company, there’s a group of people who are affiliated—this is where it gets all a bit Twilight Zone—with a movement known as Effective Altruism, whose most prominent figurehead was Sam Bankman-Fried, who is now going to go to jail for a hundred years. Three of them are on the board. And they are really worried that AI is going to kill us all.
But what was unusual about this company is that they could [fire Altman]. There are only six people on the board, a tiny board; they just needed to flip one of the others. And they could get this guy out because they think he’s too interested in making money. The moral of this is: there are these super-smart people building these super-smart machines, but mediating between the two are organizations with their own internal structures. And actually, those internal structures are just as important as the algorithm inside the machine and the brain inside the humans.
I mean, Sam Altman can be the smartest guy in the world, he still got fired, because those people got the majority on the board.
Helen Lewis
Silicon Valley, which is about building systems and algorithms, is also struggling still with the Great Man theory of history. And I think that’s fascinating. What is the worth of OpenAI without Sam Altman? Is the product the thing or is actually the evangelism of him and what he represents? Which is the same question I think we’re having about “What is Tesla or SpaceX or Twitter/X without Elon Musk?” or “What is Meta without Mark Zuckerberg?”
You touched on this in the book, that we want these things to have agency. So we often find it convenient to assign that all to a person. We want Mark Zuckerberg to be sitting there with a big dial marked racism, turning it up and down on Facebook, rather than it being a chaotic outcome of very complicated systems that are beyond any human brain to understand.
David Runciman
One of the themes of my book is that in a mysterious world of all sorts of complicated kinds of agents, we look for the human decision-makers because we’re more comfortable with that. But that does also feed into this creation myth in Silicon Valley.
Silicon Valley would like us to believe that the most successful companies are successful because the people who came up with the idea are the smartest people: Mark Zuckerberg in his dorm room, a man in a shed who has an idea. And it’s such a brilliant idea that it takes over the world. And that is just not true. You can have the most brilliant idea, and if you’re just one person, you will not take over anything but your shed. What takes over the world is the corporate structure that's built around it.
The best counterexample [to the first theory] is Jeff Bezos and Amazon. So Jeff Bezos is also a genius, right? But he didn’t have a genius idea. The idea that it’s cheaper to sell books not by having a bookstore but by packaging them up and posting them to people is probably about 100 years old. Jeff Bezos is a genius at building a company. And he understood how to scale up the idea that everyone else was having, in a ruthless corporate way.
States and corporations rule this world. And their powers are the ones that we don't understand well enough.
Helen Lewis
That comes back to some of your previous work on democracy and the appeal now in a very chaotic, fast-moving world of a strongman—someone like Donald Trump.
Also, Amazon now makes the majority of its money from Amazon Web Services—essentially hosting data in the cloud—and Amazon Marketplace, where it just allows people to do what’s called “drop shipping”. Stuff that’s made cheaply in China—you act as the intermediary and sell the pens on Amazon, which takes its cut. Amazon has become a kind of rentier in the middle of that system, hasn’t it?
David Runciman
This book is not saying states and corporations are terrible, right? We are very lucky to live in worlds that are organized by these things, because they have incredible efficiencies. And their efficiency comes from the fact that they are impersonal. They are impersonal machines, because personal politics is inefficient and corrupt and dangerous. And bad people do terrible things.
The great thing about the American state for all its flaws is that it is an impersonal machine, and the presidents come and go. The reason for me that Donald Trump is so dangerous is that he is hyperpersonalizing the impersonal machine. There are huge advantages to living in a world where big corporate political structures cannot be captured by individual human beings. We are all rich and prosperous and living in peaceful, relatively peaceful societies for that reason.
But then the other thing is the great Silicon Valley myth is that it's the cutting edge of entrepreneurship, and the free market. And the best idea wins out because the best idea is what people want to buy. So Tesla: [Elon Musk] makes better cars than everyone else. But so much of how Silicon Valley makes its money is by creating services that capture government agencies. And [PayPal founder and Palantir boss] Peter Thiel has understood that the best monopoly is the state’s monopoly.
He was one of the first people to come out publicly for Donald Trump in 2016, which in Silicon Valley was a very brave thing to do, because they are all ostensibly centrist.
Helen Lewis
Well, a coalition of progressive libertarians, which are not really two things that go together. How important is Peter Thiel to the story you tell in this book?
David Runciman
One of the things about Peter Thiel is people always say: he was there at the beginning of Facebook. Genius.
Well. He told Mark Zuckerberg to sell Facebook for a billion dollars, because it wasn't gonna get any bigger. And Mark Zuckerberg overrode him. And the reason that Peter Thiel has all that money is because he was wrong.
Peter Thiel is such a weird mixture of different things. He’s the libertarian who makes all his money from the state. He understands that power in this world is not genius: it is personal connections, and corporate heft. He backed Donald Trump because he calculated it as a sort of bet, right? You back Donald Trump, assuming he’s gonna lose, doesn’t matter. If he wins, suddenly, you’re the guy who spotted Donald Trump’s going to be the next president. And Donald Trump invites you to help pick his cabinet.
So it’s about personal connections. But also you need corporate power to get the tax breaks. You can’t scale up outside of the corporate form. But then you're at the mercy of the board. And even Peter Thiel, even Sam Altman, you can’t do it as an individual. Thinking about how a corporate board is structured, which is the most boring subject known to humankind, is also really important.
Helen Lewis
There's also a very strong case of what people say versus their revealed preferences. Everybody who works in AI is saying: Oh, I'm a bit worried it might suddenly achieve consciousness and kill us all. Versus what they're actually doing, which is investing hugely more in it.
It's interesting that Oppenheimer came out this year, because there was a similar kind of rationale happening [with the nuclear bomb], which is, it's probably going to be very dangerous, therefore, we shouldn't be the ones that are in charge of it. Is that how you read that situation, too?
David Runciman
What Oppenheimer makes me think of is that, over the summer, I agreed to write an article for The Guardian, for which they set up a Twitter account where I just followed the people that Elon Musk follows. So Elon Musk followed about 300 people. He was followed by 150 million. I couldn’t recreate that, but I could follow the 300 people that he followed. It was very interesting, quite revealing, including getting a clear sense of his political views: very pro-Russia, very anti-Ukraine.
But he then switched the algorithm on Twitter. So you’ve got many, many more ads. And if you are, as I was, Elon Musk on Twitter, all of the ads were for Oppenheimer and Napoleon. And it was clear that what they thought was: if you're Elon Musk, these films are really going to appeal to you, particularly Napoleon. And then lots of people were saying, Elon, you should see this film, it's about you—
Helen Lewis
It ends really well for you.
David Runciman
—because you are the Napoleon of our world, you are the Oppenheimer of our world. But of course he's not.
Helen Lewis
Did you do a cleansing ritual when that experiment ended?
David Runciman
I'd never really been on Twitter. So this was my initiation into Twitter, which was to be Elon Musk on Twitter. So I found it extremely freaky. But then I just stopped. And I've had no desire to go back.
Helen Lewis
Elon Musk has done the world a great service by ruining Twitter. We talk about “the algorithm” in very mysterious terms, but it is clear he has changed the algorithm. Now, if you buy a blue tick from him for $8 a month, your replies get higher up in the mix, you're more likely to show up in the For You feed. He has essentially been the man with his big dial marked racism, and he’s been turning that dial.
The other thing is that he has himself just plunged into it. There is no greater example of what we call Poster’s Brain than Elon Musk.
David Runciman
You’d think he would pay someone to do this for him, because he can’t have the time. But no one would tweet like that.
Helen Lewis
If it helps, he did tweet about four years ago that “50% of my tweets are written from the porcelain throne”. So I think we’ve answered the question about where he finds the time.
But I want to go back to Napoleon. There’s a hell of a book to be written by the right nerdy military historian saying that the unsung hero there is the quartermaster. The supply lines worked, and it was good, and then they didn’t. But we tidy that away because we want the simple story of a human agent.
David Runciman
There are social structures that organize our world for us and that allow individual human beings a kind of supercharged agency. In the modern world, two in particular stand out: the modern state, which was partly invented by Napoleon—or he was there at the beginning of it—and then, crucially, the modern corporate form.
These two things have certain features in common, and they are not dissimilar to robots in their superpowers. They live longer than human beings, they have the capacity to follow through on decisions in a way that involves enormous organizational structure and complexity. They are algorithmic—democracy is an algorithm, right? You get inputs, and you get outputs.
But in the modern state, those outputs have enormous power, because of the machine that lies behind it. In modern states, the quartermaster is the civil service. Now, these are the people that keep this machine going. And these machines have been the engines of prosperity and economic growth. And they are the things that will kill us.
In Silicon Valley, there’s another movement connected to Effective Altruism, which is called the existential risk movement, which are the people who think we don't spend enough time thinking about the things that have a small chance of ending everything.
There are four primary areas of risk. So one is AI killer robots. One is catastrophic climate change. One is what they call bioterror or error, and one is nuclear war. So all four of those actually are in the hands of states and corporations. If any one of those four, including the killer robots, do finally come to get us, it won't be because we lost control of the killer robots. It’ll be because we lost control of the political and economic machines that organize our world.
The one that is at present massively neglected is nuclear war. If it happens, the thought is that a mad person will have done it: Putin is crazy, or whoever it is, maybe Netanyahu is. Actually it will happen because extraordinarily complex, algorithmic, political, bureaucratic structures have been set up within states, so that when Donald Trump presses the big red button, it actually happens. And it happens because there is a whole infrastructure behind that. If you want it to stop happening: one, don’t elect Donald Trump; but two, change the infrastructure.
Helen Lewis
The other way it could happen is by someone fat-fingering something.
David Runciman
The great appeal of AI systems, particularly for states and corporations, is their efficiency—and what they mean by “efficiency” is they strip out human error.
But in warfare, mutually assured destruction is a game of chicken. The way you bypass that in a game of chicken is you strip out the human beings and make sure that it's a robot that presses the button, because the robot won't have qualms. China and America are currently involved in an arms race which is premised on this. The Chinese state is massively investing in AI weapons technology, precisely because in this great game of chicken that may be played over Taiwan, you want to signal to the other side that you’ve got fewer humans involved in your process than they do.
Helen Lewis
There was a famous thought experiment once that said the nuclear code should be embedded in the heart of a bodyguard, who walks around with the president all the time. And if he wants to launch a nuclear weapon that will kill 80,000 people, maybe he first of all has to carve into the heart of the guy that's been walking around with him.
David Runciman
You think Donald Trump wouldn’t do that?
Helen Lewis
Yes, he would do it with a steak knife in between bites. But the point is to humanize these processes.
In AI, there’s a lot of talk about this idea of the singularity, which is the moment that … we are no longer in control of AI. But you talk about an earlier form of singularity, when the first thinking machines were created—the modern states. So tell me about Hobbes’s Leviathan.
David Runciman
Thomas Hobbes wrote a book called Leviathan in 1651, which I read and reread at various points. But at one point, I was struck by something I’d never really focused on before, which is that the very first line of the book says the state is a robot. The word he uses is automaton—this is 1651; the word robot wasn’t invented until the 1920s.
The book has at the front this famous picture of a giant—an artificial, Frankenstein’s-monster kind of person—made up of lots of little people. He calls it a decision-making machine, and the reason it will be so powerful and effective is it won’t be human. Its decisions will have human input—in the same way that Google works—but then the machine will produce outcomes that have superhuman qualities to them; they will be more durable. And Hobbes said, If we build these things, we will build a much safer world. It looks like it will be a scarier world, with giant, lumbering things, but it will be a safer world. And it couldn’t be more dangerous than the world he lived in—the world of the Thirty Years War, probably the worst war ever: genocidal horror across Europe, driven by wars of religion.
So let’s have these impersonal, inhuman states that don't have feelings and passions, don’t believe anything, right? The state doesn’t believe in God or not believe in God, it's just a bloody robot. And he was right. It is a safer world. He said, it’ll be a more prosperous world—you can build corporate forms of this [robot] that can invest for the long term, and so on. The combination of these powerful states with powerful corporations is usually the trigger for extraordinary economic growth.
Helen Lewis
You go on to talk about corporations. And it seems to me, in those early days, the boundary between those two was maybe more porous than it is now. Take the East India Company, which ran its own military and had revenues higher than the GDP of many states.
David Runciman
Yeah, the East India Company had an army. I mean, the things that Tesla and Meta don’t have—the two things that even the most powerful companies in the world still don’t have—is an army and a currency. These are things that states have maintained their monopoly on. And when Mark Zuckerberg said he was going to launch his own currency, Libra, the US Treasury killed it like that.
The US state is still the most powerful robot in the world, because it has an army and nuclear weapons, and it has the dollar.
Helen Lewis
That makes me think of Sam Bankman-Fried. To me, the story of crypto is fascinating for those reasons, because everyone wanted a pure form of money, as you said, this idea that you would be outside government control, none of this human messiness.
And then unfortunately, what happened was, as soon as something went wrong, no one was coming to save them. Unlike a conventional state bank system, there is no deposit protection. Lots of people are maybe more libertarian in theory than they are in practice. When it comes to it, people really want the state. Covid being another really good example of that, right?
David Runciman
The polling during Covid showed that the majority of people thought the restrictions, the lockdowns, didn’t go far enough.
Helen Lewis
Something like 20% of people wanted a 9pm curfew.
David Runciman
It is wired into us—an understanding that the Leviathan is our protector. We saw this in 2007–2008: the people who ran the banks went running to the state, saying, Help me, help me, help me.
Peter Thiel wants a cryptocurrency because he thinks actual money is corrupted by human beings. He thinks the dollar is a dirty human thing. And if you can’t have gold—which is what they all want, the gold standard, which bankrupted the world and gave us Hitler—if you can’t have gold, what you want is a currency that is incorruptible not just by human error, but by human volition.
Helen Lewis
I went to a conference a couple of weeks ago, which had a panel on cryptocurrency. One man said: I like Bitcoin, because Bitcoin is like if gold and the internet had a baby.
David Runciman
Gold and the internet?
Helen Lewis
That sounds like the worst thing ever.
Helen Lewis
You mentioned Sam Bankman-Fried. Unlike Facebook with Sheryl Sandberg or Google with Eric Schmidt, he didn’t call in a “grown up” to mature the company from a start-up into something more conventional. Is crypto incapable of that?
David Runciman
I do write in the book about the development of these tech companies, and one striking feature of them is they start with these rule-breaking tech bro CEOs. Then they get a board. They get investors, they get very, very powerful and wealthy. And they all get CEOs who are very boring Harvard MBAs: Steve Jobs is replaced by Tim Cook. It’s that transition, because they know there is strength, actually, in the old-fashioned way of doing it. Rule-based and rule-governed.
The interesting one to me is Tesla. During the time that I was being Elon Musk on Twitter, he bought Twitter for $44 billion. And people were like, This is crazy, right? He seemed to have bought this failing thing. But there was one day where his net worth went up by more than he paid for Twitter, because the Tesla share price skyrocketed. His net worth increases or decreases by more than $10 billion a day on a regular basis. Either he is going to be the exception to that rule, and he can keep running these companies like a personal fiefdom, or he is not.
Helen Lewis
There is a very fragile house of cards with Tesla, I think. […] Let’s take a question.
Question 1
I have to teach a course around the debate: is AI good or bad?
David Runciman
I've done some panels where the question is, are you a utopian or a dystopian? And five years ago, it was: is the internet good or bad for democracy? The answer is always both. It is always both. AI is both a force for good and a force for bad. Of course the modern state is a force for good, and of course for bad. It has made us richer, healthier, and more prosperous. And it also created the worst wars in human history, and it’s capable of genocide.
Whether it's a force for good or for ill depends on how we design it. This is my point—how we design institutions that control it.
Helen Lewis
I would say the same. My initial thought was: it's like asking if the invention of the knife is good or bad. And if it's helping you to eat a wider range of foods: good. If you’ve been stabbed in the heart, bad.
A better example is probably nuclear weapons. And that is the kind of level of power I think we're talking about with AI. If you think about the bombing of Hiroshima, it created the conditions for the Pax Americana under which we have all grown up. Most of the people in this room have not grown up somewhere that was riven by civil war or international war. But that is scant consolation to you if you're one of the citizens of Hiroshima in 1945.
And I think, exactly to David’s point, in the case of nuclear weapons, there are international treaties governing them, there is a whole architecture governing their use and trying to restrain them, efforts to bring down the number of warheads that states have. And those are the terms in which I think we should be thinking about AI.
Question 2
In response to what you just said about needing the right social, political and economic structures: what should we be doing to ensure we have those?
Helen Lewis
Let’s talk about that in relation to climate change.
David Runciman
I use this example in the book. I’m not sure exactly how accurate this figure is, but roughly it has been estimated that 70 percent of carbon emissions are either directly, or more often indirectly, the responsibility of 100 corporations.
So the argument about climate change is often: how can we change human behaviour so that we don’t despoil our world? But changing human behaviour is really hard. You just have to look into yourself, your own habits—it’s really hard. It’s hard even to recycle stuff into the right bins, right? It is not hard to rewire corporations, actually. It is just very, very political. Well, this is a political world. So if those 100 corporations behaved differently, it would make a bigger difference.
Helen Lewis
You mention Mariana Mazzucato’s book, The Entrepreneurial State, about the level of innovation that has been created by the state. And I think climate change, and the energy transition, is a very good example of the way that innovation usually requires [state] backing.
In Britain, we could be a world leader in renewable energy, if we put money in from the state. Realistically, companies will bend towards the incentives like flowers towards sunlight.
David Runciman
It does require taxation. Also, go back to where we started: the great myth of Silicon Valley is that this is about individual entrepreneurship. The internet was invented by the American military [ARPANET].
The one thing that states have over corporations is an insane appetite for risk when they are under existential threat, which is why war is the great engine of innovation. So during the Second World War, the American state wasted more money than any organisation ever has in human history, on the worst projects imaginable, the kind of things that would get you fired if you were head of the board, and you said, Let's build this, and it sank. Let's put this up in the air, and it fell down. But they also invented out of that, and then the Cold War, the internet. No one else does that apart from states and states can do it, because they have taxpayers to bail them out.
Helen Lewis
The great hero—me, paying some tax.
David Runciman
The taxpayer is the hero of the story of the modern world.
That’s all for now. See you on Friday for your regular Bluestocking, and if you enjoyed this, why not forward it to a friend?
Edited on 22 November, to correct what AI researchers mean by “singularity”. Thanks to Tom Chivers for the feedback.
AI joke: no truth in the rumour that the Geordie AI is called YI
I don't think "corporations and states are AIs" should make us *less* worried about AI existential risk. Corporations and states are slow AIs that run on humans, with well-understood alignment mechanisms (the market, elections). And they still do bad things all the time! That should IMHO make us *more* worried about fast AIs running on silicon with alignment mechanisms that either don't work very well (RLHF) or are entirely absent.