How Effective Accelerationism Divides Silicon Valley

Émile P. Torres

Notes

Paris Marx is joined by Émile Torres to discuss Silicon Valley’s recent obsession with effective accelerationism, how it builds on the TESCREAL ideologies, and why it shows the divide at the top of the AI industry.

Guest

Émile Torres is a postdoctoral fellow at Case Western Reserve University. They’re also the author of Human Extinction: A History of the Science and Ethics of Annihilation.

Support the show

Venture capitalists aren’t funding critical analysis of the tech industry — that’s why the show relies on listener support.

Become a supporter on Patreon to ensure the show can keep promoting critical tech perspectives. That will also get you access to the Discord chat, a shoutout on the show, some stickers, and more!

Links

Transcript

Paris Marx: Émile, welcome back to Tech Won’t Save Us.

Émile Torres: Thanks for having me. It’s great to be here.

PM: Absolutely. I was looking back through the history of the show. Obviously, you were in the Elon Musk series that we did recently. And I was shocked to see that you hadn't been on a proper, regular episode of the show in over a year. And I was like: Okay, we need to change that. And there are some things that we definitely need to talk about. So, happy to have you back on the show for us to dig into all this.

ET: Thanks for inviting me.

PM: Of course, this is where I force you to thank me for saying nice things about you. I do this to everybody. When you were on in the past, we talked about these concepts that have become, I guess, quite familiar to people at this point: longtermism and Effective Altruism. Obviously, when we were talking about Sam Bankman-Fried and all this crypto stuff, these ideas were in the air and seemed to be becoming more popular through the pandemic moment and everything that was going on then. But you have been writing about these further with Timnit Gebru, who, of course, was on the show earlier this year. And you talked about this broader set of ideologies, called TESCREAL, which is obviously an abbreviation. I was wondering if you could talk to us about what this bundle of ideologies is and what that acronym stands for, and then we can go from there.

ET: Sure. So the acronym TESCREAL stands for a constellation of ideologies that historically grew out of each other. And so consequently, they form a single tradition. You can think of it as a wriggling organism that extends back about 30 years. So the first term in the TESCREAL acronym is transhumanism. And in its modern form, it was founded in the late 1980s, early 1990s. So it goes back about 30 years. And it's hard to talk about any one of these ideologies without talking about the others. They shaped each other; they influence each other. The communities that correspond to each letter in the acronym have overlapped considerably over time. Many of the individuals who have contributed most significantly to the development of certain ideologies also contributed in non-trivial ways to the development of other ideologies. So the acronym itself stands for a bunch of big polysyllabic words, namely: transhumanism, Extropianism, singularitarianism, cosmism, rationalism, Effective Altruism, and longtermism.

Yes, so these ideologies are intimately linked in all sorts of ways. And they have all become influential within Silicon Valley, if not in their current forms, then through their legacies: their core ideas and central themes have been channeled through other ideologies like longtermism, Effective Altruism, rationalism, and so on, that are currently quite influential within Silicon Valley. There are many people in the San Francisco Bay Area, etc., in Big Tech, who would explicitly identify with one or more of these ideologies. So that is the TESCREAL bundle. And the order of the letters corresponds chronologically to the emergence of these different ideologies over time. Transhumanism is really the backbone of the TESCREAL bundle. Longtermism could be thought of as something like the galaxy brain that sits atop it, because it binds together all sorts of important ideas and key insights from the other ideologies to articulate a comprehensive worldview, or what you might call a normative futurology: claims about what the future could and should look like that have been championed by people like Elon Musk, and so on.

PM: So we’ll talk about their normative futurologies through this episode, but I guess for people who hear you name off those terms, I think let’s briefly go through them just so it’s clear what we’re talking about. Transhumanism, I think, is quite obvious — this idea that we’re going to enhance the human species with technologies, merge human and machine. These are ideas that I think we’ve heard of, and that have been around, as you say, for a while. So, this will not be new to people. Extropianism, I feel like might be a word that is a bit less familiar. What would that mean?

ET: So, that was the first organized transhumanist movement. Its emergence roughly coincides with the establishment of modern transhumanism, so really the very early 1990s. The founder of the Extropian movement was a guy named Max More, whose name was originally Max O'Connor, but like many Extropians, he changed it to better reflect his Extropian and transhumanist worldview. Another example: his wife is Natasha Vita-More, with a hyphen between Vita and More, so more life. There are a bunch of other examples that are somewhat humorous. But yes, Extropianism was a very techno-optimistic interpretation of the transhumanist project. There was a strong libertarian bent within that movement, a belief that free markets are the best way forward in terms of developing this technology in a safe and promising way.

In fact, Ayn Rand's "Atlas Shrugged" was on the official reading list of the Extropians. And there was this Extropian institute that Max More founded. Part of the reason that the Extropian movement was sort of the first platform for transhumanism, and really established transhumanism, put it on the scene in Silicon Valley, was the Extropian mailing list. They had this listserv where people from all over the world could contribute. This is how Nick Bostrom; Eliezer Yudkowsky, who's a leading rationalist and one of the main AI doomers out there today; Anders Sandberg; and Ben Goertzel, who maybe we'll talk about in a moment because he's the founder of modern cosmism, were all able to make contact and cross-pollinate ideas to develop the transhumanist vision of what the future ought to be.

PM: This is basically just Ray Kurzweil, right? This is the idea that we're going to have the computers reach the point where they gain this human intelligence and we kind of merge. I guess it's similar to transhumanism in some ways?

ET: Exactly. So I would say that the next three letters in the TESCREAL acronym are just variants of transhumanism with different emphases and maybe slightly different visions about what the future could look like. But, ultimately, they are rooted in the transhumanist project, this aim to develop advanced technologies to radically re-engineer the human organism. So with singularitarianism, the emphasis is on the coming technological singularity. There are a couple of different definitions of that. For Kurzweil, it's about humans merging with AI, radically augmenting our cognitive capabilities, becoming immortal, and so on. And ultimately, that will accelerate the pace of technological development to such a degree that beyond this point, the singularity, we cannot even comprehend the phantasmagoria of what the world will become. But it will involve dizzyingly rapid change, driven by science and technology.

So, continuing with the metaphor of the singularity, which is taken from cosmology: there's an event horizon beyond which we cannot see. And that'll mark a new epoch in cosmic history. I think it's the fifth of the six epochs Kurzweil identifies in his grand eschatology, his grand view of the evolution of the cosmos as a whole, and the sixth epoch culminates with us, or our cyborg-ish or purely artificial descendants, spreading into space. So, there's this colonization explosion, and the light of consciousness then being taken to all the far reaches of the accessible universe. And ultimately, as he puts it, the universe wakes up. These are simply their terms. And the term singularitarianism was coined by an Extropian, a guy whose name, I think, is Mark Plus, so another guy who changed his last name.

PM: I feel like, though, when you talk about Kurzweil, he's the kind of person who reminds me of those people who are always predicting the apocalypse is going to come, like these really religious folks. He reminds me of someone like that, but it's like a secular techno-religion: constantly predicting that the singularity is going to happen. And then the date just keeps getting pushed, because it doesn't happen, because it's basically a religious belief, not something that's founded in anything concrete.

ET: Yes, I would completely agree. The connections between singularitarianism, and transhumanism more generally, and traditional religion, like Christianity, are really significant and extensive. It's not uncommon for people to describe an ideology, a worldview, and so on as a religion as a way to denigrate it, right? That's just a facile means of criticizing something. But in this case, the connections really are quite significant. So, modern transhumanism emerged in the late 1980s, early 1990s. But the idea of transhumanism goes back to the early 20th century. And the reason I mention this is that it was proposed initially, and explicitly, as a replacement for traditional religion. Christianity declined significantly during the 19th century. And if you look at what people were writing back in the latter 19th century, early 20th century, they were reeling from the loss of meaning, of purpose, of eschatological hope, hope about the future. All of that was gone.

So, you have basically a bunch of atheists who were searching for something to fill that void, and transhumanism was proposed as a solution to this problem. Through science and technology, we could potentially create heaven here on earth. Rather than waiting for the afterlife, we'll do it in this life; rather than heaven being something otherworldly, we create it in this world, through human agency, rather than relying on supernatural agents. So, it's very religious. And in fact, a number of people have, in a critical mode, described the technological singularity as the techno-rapture. And you're totally right, that's consistent with all of this. Kurzweil himself, as well as other transhumanists and Extropians, have proposed their own prophetic dates for when the singularity is actually going to happen. According to Kurzweil, the singularity will happen in 2045.

PM: It reminds me of Elon Musk predicting self-driving cars every few years. It's the same thing. But I think we'll come back to this point about religion. Moving through the acronym: cosmism, I think that is something that is probably quite familiar to people as well, right? Is this basically what Elon Musk is proposing when he says that we need to extend the light of humanity to other planets, and this idea that we kind of need to colonize space in order to advance the human species?

ET: Yes, definitely. So, I'm not sure about the extent to which Musk is conversant with modern cosmism. But nonetheless, the vision that he seems to embrace, this vision of spreading into space and expanding the scope and size of human civilization, is very, very consistent with the cosmist view. Cosmism itself goes back to the latter 19th century; there were a bunch of so-called Russian cosmists. But at least with respect to the acronym, Timnit Gebru and I are most interested in cosmism in its modern form. So, as I mentioned before, this was first proposed by Ben Goertzel, who's a computer scientist and transhumanist, was a participant in the Extropian movement, and has close connections to various other letters in the acronym that we haven't discussed yet. But he was also the individual who popularized the term Artificial General Intelligence (AGI). So, there's a direct connection between modern cosmism and the current race to build AGI among companies like OpenAI and DeepMind, Anthropic, xAI, and so on.

So, that's part of the reason why I think modern cosmism is included in the acronym. We felt like if we didn't have the 'C' in TESCREAL, something notable would be missing. Cosmism basically goes beyond transhumanism in imagining that we use advanced technologies not just to reengineer the human organism, not just to radically enhance our mental capacities and indefinitely extend our so-called healthspan, but also to spread into space, reengineer galaxies, and engage in what Goertzel refers to as spacetime engineering. So, actually intervening on the fundamental fabric of spacetime to manipulate it in ways that would suit us, that would bring value, what we would consider to be value, into the universe. So, that's the notion of cosmism. And really, it doesn't take much squinting for the vision of what the future should look like, according to cosmism, to look basically indistinguishable from the vision of what the future ought to look like from the perspective of longtermism. There's a slightly different moral emphasis with respect to these two ideologies. But in practice, what they want to happen in the future is basically the exact same.

PM: That's hardly a surprise. And, if we're kind of moving through the acronym still, rationalism would be the next one. And I feel like, going back to what we were saying about religion, correct me if I'm misunderstanding this, but it's kind of like: We are not appealing to religious authority, or some higher power, to justify our beliefs or our views of the world, or what we're arguing, but rather, we're referring to science and these things that are observable, and so you can trust us, because, I don't know, we're engineers and scientists, and blah, blah, blah? Is that the general idea there? Or does it go a bit beyond that?

ET: That is definitely part of it. Even though many people, including individuals who are either in the rationalist community, or were members of the community and then left, have described it as very cultish. And part of the cultishness of rationalism is that it has these charismatic figures like Yudkowsky, who are considered to be exceptionally intelligent. I saw some posts on the rationalist community blogging website, "Less Wrong." So, that's the platform out of which rationalism emerged, and it was founded by Yudkowsky around 2009. Somebody had provided a list of great thinkers throughout Western history. And three of the names were Plato, Immanuel Kant (a very famous German Enlightenment philosopher), and then Yudkowsky. So there's a lot of deference towards these figures, whose authority generally isn't questioned. A lot of people won't question Yudkowsky, or others who supposedly have these really high IQs, for fear of appearing unintelligent, embarrassing themselves, and so on. There is a sort of irony there: it's all about rationality and thinking for yourself, not just following authority, but it is very hierarchical.

I would say the exact same thing about EA. It's also very cultish. And I don't use that word loosely here. I could provide 50 examples, most of which are jaw-dropping, such that, in the end, it's just impossible for someone to look at these examples and say: No, EA is not cultish, or rationalism is not cultish. No, it is very much a cult. And the core idea with rationalism: it was founded by this transhumanist who participated in the Extropian movement, who was also a leading singularitarian along with Ray Kurzweil. I'm referring here to Eliezer Yudkowsky. So he and Kurzweil were leading singularitarians; in fact, Kurzweil, Yudkowsky, and Peter Thiel founded the Singularity Summit, which was held annually for a number of years, and included speakers like Nick Bostrom, who I've mentioned before, as well as individuals like Ben Goertzel. So all of these people were part of the same social circles. Ultimately, what motivated rationalism is this idea that, if we're going to try to bring about this utopian future in which we reengineer the human organism, spread into space, and create this vast multigalactic civilization, we're going to need a lot of "really smart people" doing a lot of "really smart things."

And one of those smart things is designing an artificial general intelligence that facilitates this whole process of bringing about utopia. And so, consequently, if smartness is absolutely central, if intelligence is crucial for the realization of these normative futurologies and the utopian vision at the heart of them, then why not try to figure out ways to optimize our rationality? To identify cognitive biases, neutralize them, use things like Bayes' Theorem (for anybody out there who's familiar with that) and tools from decision theory to figure out how to make decisions in the world in the most optimally rational way? Ultimately, from a distance, that might sound good. Because nobody wants to be irrational. Nobody sets out to be irrational. But when you look at the details, it turns out it's just deeply problematic. I mean, it's based on a narrow understanding of what rationality means.

For example, Yudkowsky once argued, in one of his "Less Wrong" posts: imagine you have to choose between two scenarios. In the first scenario, there's some enormous, just unfathomable number of individuals who suffer the nearly imperceptible discomfort of having a speck of dust in their eyes. In the second scenario, a single individual is tortured relentlessly and horrifically for 50 years straight. Which scenario is worse? Well, if you are rational, and you don't let your emotions influence your thought process, and if you do the math, or what Yudkowsky calls 'if you just shut up and multiply,' then you'll see that the first scenario, the dust speck scenario, is worse. Because even though the discomfort is almost imperceptible, it's not nothing. And if you multiply not nothing by some enormous number, which is how many people have this experience, then that total is itself enormous. So, compared to 50 years of horrific torture, the dust specks actually come out much worse. That, I think, exemplifies their understanding of what rationality is, and the sort of radical, extremist conclusions that one can end up at if one takes seriously this so-called rationalist approach to making decisions.
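To make that "shut up and multiply" arithmetic concrete, here is a minimal sketch with entirely made-up numbers. The disutility values and the population size below are illustrative assumptions, not figures from Yudkowsky's post (which uses the vastly larger 3^^^3); the only point is how a tiny harm times a huge population swamps one enormous harm.

```python
# Hedged illustration of the "shut up and multiply" argument.
# All numbers are made up; the point is only that a tiny per-person harm,
# multiplied by a large enough population, outweighs one enormous harm.

dust_speck_disutility = 1e-9   # nearly imperceptible harm per person (hypothetical unit)
torture_disutility = 1e6       # 50 years of torture for one person (same hypothetical unit)
number_of_people = 10**30      # stand-in for an "unfathomably large" population

total_dust_harm = dust_speck_disutility * number_of_people  # 1e21

# Under this strictly aggregative arithmetic, the dust specks come out "worse".
print(total_dust_harm > torture_disutility)  # True
```

The critique in the conversation is precisely that treating this kind of aggregation as the whole of rationality leads to conclusions most people would consider extreme.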

PM: And so, I think what you have kind of laid out there for us, really shows us how all of these pieces, as you were saying, come together in longtermism at the end of the day — that really kind of mathematical view of the population and how you’re calculating the value of individuals and stuff in the end, but also we want to spread out to multiple planets. And we want to ensure that we have people who are digital beings living in computers on these different planets, because that’s equal to actual kind of flesh and blood people that we consider people today. And so, all of these kind of, I think we would consider, odd kind of ideological viewpoints building over the course of several decades to what we’re seeing today. And I don’t think we need to really go through Effective Altruism and longtermism, because we’ve done that in the past. And I think the listeners will be quite familiar with that.

Before we talk about further evolutions of this, what I wanted to ask you was, once you started writing about TESCREAL, and once you started putting all of this together, what do you think that tells us about kind of the state of the tech industry and the people — often powerful, wealthy men — who are spreading these ideas and becoming kind of obsessed with this view of the world? Or how they understand the world? What does that tell us about them and the tech industry as a whole, that this is what they’ve kind of coalesced around?

ET: A couple of things come to mind. One is that there are many individuals, including people in positions of power and influence within Silicon Valley, especially within the field of AI or AGI, Artificial General Intelligence, who started off as Effective Altruists, or longtermists. Just as a side note, in the media there's a lot of talk about EA; there were two individuals on the board of directors of OpenAI who were pivotal in ousting Sam Altman, and so on. In this context, EA is really shorthand for longtermist EA. Because EA is kind of a broad tent, and there are a bunch of EAs who think longtermism is nuts and want nothing to do with it. But the community as a whole, and a lot of its leaders, have been moving towards longtermism as their main focus. So there are a bunch of individuals who gained positions of power within AI, in Silicon Valley, and so on, who started off being Effective Altruists, in particular longtermists.

And then there are a number of people, and Musk is perhaps a good example, who did not start off as longtermists. They couldn't. I mean, the term was only invented in 2017. And the origins of longtermism go back about 20 years, and Musk probably wasn't all that familiar with that literature. But nonetheless, they have this idea about what they want to do, which is to colonize space, merge humans with AI, and so on. And then they turned around and noticed that there's this ideology on the rise called longtermism that provides a kind of superficially plausible moral justification for what they want to do anyway. So, Musk wants to colonize space, maybe out of boyhood fantasies of becoming multiplanetary, saving humanity by spreading to Mars, and so on. And then he looks at the longtermists and [sees them] developing this "ethical framework" that says: 'What I'm doing is, arguably, the most moral thing that one could possibly be doing.' So, that gestures at one thing that this bundle of ideologies reveals about Silicon Valley.

It's a very quantitative way of thinking about the world. The fact that this particular utopian vision appeals to these individuals, I think, does say a lot about them. Because this vision was crafted and designed almost entirely by white men, many of whom are at elite universities, mostly at Oxford. And the vision is deeply capitalistic. I mean, some people describe it as capitalism on steroids. It's also very Baconian, in the sense that it embodies an imperative that was articulated by Francis Bacon, who played a major role in the scientific revolution, on the philosophical side of it. [Bacon] argued that what we need to do is understand nature. So he's arguing for empirical science: understand nature. Why? Because once we understand nature, then we can subdue, subjugate, and ultimately conquer the natural world. The longtermist vision is very capitalistic, it's very Baconian, and it's all about subjugating nature.

In fact, a central concept within the longtermist tradition is that of existential risk, which is defined as basically any event that would prevent us from realizing this techno-utopian world in the future among the stars, full of astronomical amounts of value by virtue of there being astronomical numbers of future individuals. And Bostrom himself offered a more nuanced definition in a 2013 paper, where he said that an existential risk is any event that prevents us from reaching technological maturity. What is technological maturity? It's a state in which we've fully subjugated the natural world and we've maximized economic productivity to the physical limits. Why does that matter? Why is technological maturity important? Well, because once we've fully subjugated nature and maximized economic productivity, we'll be optimally positioned to bring about astronomical amounts of "value" in the future.

All of this is to say that the utopian vision is, I think, just deeply impoverished. It's just capitalism on steroids. It's just Baconianism to the extreme. It very much is an embodiment of certain tendencies and values that were developed within the Western tradition, and it embodies a lot of the key forces that have emerged within the West, like capitalism and science and so on. One of the most notable things about the TESCREAL literature, as I've written before, is that there's virtually zero reference to what the future could, and more importantly should, look like from the perspective of, for example, Afrofuturism, or feminism, queerness, disability, Islam, various Indigenous traditions, and so on. There's just no reference to that. It's just a very Western, white, male view of what the future ought to look like. So, I think that's another reason that this cluster of ideologies is so appealing to tech billionaires. And by virtue of it being so appealing, it reveals something about these tech billionaires themselves. This is the world that they ultimately want to bring about and want to live in.

PM: It's a real view from the top of the pyramid, from the people who have kind of won the game of capitalism and want to ensure they can remain in their positions and not be challenged. And just to add to what you were saying, you've talked about how it is like capitalism on steroids. And you see this in the writings, and in what these people are speaking about when they promote these ideas, people like Marc Andreessen or Elon Musk. I feel like they have, over time, become much more explicit about it: they believe that to realize this future, or to make a better future for everybody, when they do make reference to people beyond themselves, technology fused with capitalism, with free markets, is what is essential to achieve that. It's to basically say the government needs to step out of the way; the government can't be regulating us or trying to halt the technological future that we're trying to achieve, because that is ultimately not just bad for us, but bad for everybody.

And I feel like this piece of it, obviously, the idea of using technology to dominate nature, and things like that have been around for a really long time. But I feel like this particular piece of it, or this particular aspect of it is potentially much more recent as well? If you think about what people who were thinking about technology or developing technology might have thought in, I don’t know, the first half of the 20th century or even a little bit after that, there was a very strong relationship between technology being developed and the State, and the role that the state played in funding it. And then those ideas shift, most notably in the 1980s in particular, where all of a sudden the state is the enemy and the tech industry is pushing back against the hierarchies of the corporation and the government and the bureaucracy and all that.

I think you can just see like these ideas taking root in that moment or kind of reemerging in a particular form, that now kind of continues to evolve over the course of many decades to the point where we are today, where we have these people who are the titans of industry, who are at the top of this industry that has really taken off since the internet boom, in particular, and who now feel that they are the smartest people in the world, the most powerful people in the world, that they know what the future should look like, and how to develop a good world for humanity. And so naturally, it needs to be these ideas that also kind of elevate them, and make it so that their position is justified ideologically within the system, so that they are not challenged, and they are not going to be pushed out of the way for some other kind of form of power to take their place.

ET: Another thing I've pointed out in some articles, and this ties back to something I was mentioning a few moments ago, is that longtermism, and the TESCREAL ideologies in general, not only say to the rich and powerful that you are morally excused, that you have a moral pretext for ignoring the global poor, but that you're a better person. You're a morally better person for focusing on the things that you're working on, because there are just astronomical amounts of value that await in the future, amounts of value that utterly dwarf the total amount of value that exists on Earth today, or that has ever existed over the past 300,000 years that Homo sapiens have been around. And consequently, lifting 1.3 billion people out of multidimensional poverty, which is, in absolute terms, a very good thing, is, relative to the amount of value that could exist in the future, a molecule in a drop in the ocean. If you're rational, if you're smart.

PM: As these people most definitely are — very rational, very smart.

ET: Yes, the most effective way of being an altruist, then, is to do what you're doing anyway: try to merge our brains with AI, try to get us into space, try to build AGI. It's wild stuff.

PM: Now, we need to put a cap on this pyramid that we've been building of these ideologies. Because I think it won't be a surprise to any listeners of this show that the tech industry and the leaders of the tech industry have really been, like, black-pilled. Their brains have been filled with brain worms in the past few years; they have been intensely radicalized publicly, in a way that maybe some of them would have only said this stuff in private in the past. And, obviously, you mentioned Peter Thiel earlier; he has long been one of the pushers of right-wing ideology, and quite radical right-wing ideology, within Silicon Valley.

But I think to see so many of these people speaking out and echoing these right-wing conspiracy theories and these right-wing ideas is more novel, not in the sense that they've never been right-wingers, but that they have adopted quite an extreme right, a hard right, even a far-right perspective on the world, that they are championing more and more directly. And I feel like you can see that most evidently in this embrace of what they're calling effective accelerationism, or techno-optimism, in the past few months. How would you describe this idea of effective accelerationism? And how does it build on these existing TESCREAL ideologies that we've been talking about already? Or is it not really different from them at all, just giving them a fresh coat of paint?

ET: I think the best way to understand effective accelerationism (the acronym is e/acc, which they pronounce "ee-ack")...

PM: And they love to stick it in their Twitter bios, or their X bios, and really champion it.

ET: It’s very fashionable right now.

PM: And I think one thing to say about it that is maybe distinct from TESCREAL: those are particular ideologies, as we've been talking about, that have this philosophical grounding to them. Certainly there is some of that with effective accelerationism, but with e/acc, I feel like one thing that is potentially distinct is that it does seem designed in particular for meme culture, and to try to tap into that to a certain degree. I don't know how effective that part of it has been. But it does seem like the idea is: this is something that needs to go up on social media, and we need to make something that's appealing and easy to understand, to really divide people into an us and a them. And this idea seems to be trying to pick up on those sorts of things. But I wonder what else you see there?

ET: I think one way to understand e/acc is as just a variant of TESCREALism. There are a bunch of points to make here. One is that you mentioned just a few minutes ago that a lot of the champions of the TESCREAL bundle, or the TESCREAL ideologies, have a strong libertarian proclivity, we could say, and that's true. But actually, I think this gets at a key difference between e/acc and EA, by which I mean longtermism, or the longtermist community: their main disagreement concerns the role of government. A lot of the individuals who helped to shape the longtermist ideology, going back to the Extropians, were very libertarian; there's this notion that the best way to move technological progress forward, and ultimately to realize the transhumanist project, is for the State to get out of the way. And then you had some individuals in the early 2000s who started to realize that actually, some of these technologies, the very technologies that will get us to utopia, that will enable us to reengineer humanity and colonize space, may introduce risks to humanity, and to our collective future in the universe, that are completely unprecedented.

So, some of them began to take seriously this idea that maybe the State actually does have a role, and there should be some regulations. One expression of this is the writings of Nick Bostrom, who points out that the mitigation of existential risk (again, any event that prevents us from creating utopia) is a global public good. In fact, it's a transgenerational global public good. And those sorts of public goods are oftentimes neglected by the market. The market just isn't good at properly addressing existential risk. Well, if the market isn't good at that, then maybe you need the State to intervene and to properly regulate industries and so on, to ensure that an existential catastrophe doesn't happen. So, this realization has defined the EA, longtermist, rationalist (and so on) tradition of thinking. Basically, they're libertarian about everything, except for some of these super powerful, speculative future technologies, like AGI. Molecular nanotechnology would be another example. That's where the State should intervene. But maybe that's the only place where the State should intervene.

PM: So, I guess this is a split that we're seeing, in particular, with this AI hype machine that we've been in for the past year, and we just saw it play out very clearly with the OpenAI stuff. On the one hand, you have these people who call themselves AI safety people, I believe is the term they use, who believe that AGI is coming, that we're building these tools that are going to have computers reach human-level intelligence or beyond, but that we need to be scared of this, we need to be regulating it, and we need to be concerned about what this future is going to be. And then on the other hand, you have these AI accelerationists, who feel: Take off all the guardrails, we need to plow through, because ultimately this AI is going to be great for society, even though it has some risks.

And I guess you also see that in the term effective accelerationism, where the idea, and of course you hear Sam Altman talk about these things, and Marc Andreessen has expressed it in his techno-optimism manifesto, is: Don't put any rules and restrictions on us, because this is going to be the most amazing thing for humanity if you just allow us to continue developing these technologies, push forward even faster, accelerate them, basically, is what they would say. And it's through the market, not through any government involvement, that we're going to improve society. So is this the schism that's really playing out? And is this part of the reason that we have the elevation of this effective accelerationism in this moment in particular, because of this AI divide that is playing out and how prominent it has become?

ET: I think there are two things to say here. One is, insofar as there are existential risks associated with AGI, what is the best way to effectively mitigate those risks? The accelerationists would say: It's the free market; you fight power with power. If you open source software and enable a bunch of different actors to develop artificial general intelligence, they're going to balance each other out. So if there's a bad actor, well, then you've got five good actors to suppress the bad actions of that other actor. And so the free market is going to solve that problem. The second thing, then, is an assessment of the degree to which AGI poses an existential risk in the first place. So, not only would they say that the best way to address existential risk is through the free market, through competition, pitting power against power. Many of them would also say: Actually, the existential risks associated with AGI are really minuscule anyway. A lot of the claims that have been made by Bostrom and Yudkowsky and the other so-called doomers are just not plausible, they would argue.

And so one of the arguments, for example, is that once you have a sufficiently intelligent artificial system, it's going to begin to recursively self-improve. So, you'll get this intelligence explosion. And the reason is that any sufficiently intelligent system is going to realize that, for a wide range of whatever its final goals might be, there are various intermediate goals that are going to facilitate it satisfying or reaching those final goals. And one of them is intelligence augmentation. So if you're sufficiently intelligent, you're going to realize: Okay, say I just want to get across town, or I want to cure cancer, or colonize space, or whatever. If I'm smarter, if I'm better at problem solving, if I'm more competent, then I'll be better positioned to figure out the best way to get across town, cure cancer, colonize space, whatever.

Once you have a sufficiently intelligent system, you get this recursive self-improvement process going, a positive feedback loop. And this is the idea of FOOM: we create AGI and then, FOOM, it's this wildly superintelligent system, in a matter of maybe weeks or days, maybe just minutes. And a lot of the e/acc people think that FOOM is just not plausible, it's just not going to happen. So consequently, they would say that the argument for AGI existential risk is kind of weak, because a key part of it, at least in certain interpretations, is this FOOM premise, and if the FOOM premise is not plausible, then the argument fails. So I think those are two important ideas.
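As a purely illustrative toy model of the FOOM intuition described above (not anything the doomers or the e/acc people actually compute; the starting level, improvement rate, and number of cycles are arbitrary assumptions), the positive feedback loop looks something like this:

```python
# Toy caricature of the "recursive self-improvement" / FOOM story.
# Every number here is an arbitrary assumption, chosen only to show how a
# proportional feedback loop produces explosive growth.

capability = 1.0         # system capability, in arbitrary units
improvement_rate = 0.5   # each cycle, the system improves itself in proportion
                         # to how capable it already is (the feedback loop)

for cycle in range(30):
    capability += improvement_rate * capability  # self-improvement step

print(round(capability))  # roughly 190,000x the starting capability after 30 cycles
```

The e/acc objection Torres describes is aimed at exactly this premise: nothing guarantees the loop runs unchecked, or on a timescale of days or minutes, so in their view the existential-risk argument built on it fails.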

PM: Just to cut in here, to make it a bit more concrete: as you say, on the one hand, you have the AI doomers basically saying: Oh my God, the AGI is going to be so powerful, it's going to be a threat to all of us. And as you're saying, the effective accelerationists, as we're calling them, would say: No, not really. But then Andreessen, for example, would build on that and say that the AI is not a threat, because it's actually going to be an assistant for all of humanity that's going to make it easier for us to do a whole load of things, thinking back to what Sam Altman said about it being your doctor or your teacher, or whatever.

But Andreessen, of course, goes further than that, and says there will be these AI assistants in every facet of life that will help you out. And then, of course, there's also the, I think, more ideological statement, if people go back and listen to my interview with Emily Bender, where, exactly as you were saying, they're saying that the AGI will allow us to enhance or augment human intelligence, because intelligence, whether it's computer or human, is the same thing. So, as soon as we have computer intelligence, that's increasing the total amount of intelligence in the world. And once we have more intelligence, everything is better off and we're all better off, and everything is just going to be amazing if we just let these computers improve over time and we don't hold them back.

And of course, the other key piece there, when you talk about the free market, is Andreessen's argument that we can't have regulation, because regulation is designed by the incumbents to ensure that they control the AI market, and then, as you're saying, you can't have these kinds of competitors developing their own AI systems. If you think about Andreessen as a venture capitalist, you can see the material interest he would have in funding companies that could potentially grow, instead of having the market dominated by existing monopolies like Google or Microsoft, or whatever. But that's just to make the points that you're making more concrete, and to show how they express themselves in the ideologies that these people have. I'm happy if you have further thoughts on that.

ET: I think that’s exactly right. So, both e/acc and longtermists, or traditional TESCREALists — you might call them — both of them are utopian. Andreessen says he’s not utopian. But if you look at what he writes about what awaits us in the future, it’s utopian.

PM: Andreessen also puts down religion at every opportunity he has. There’s one line, if I can just read it for you where he says, “We believe the ultimate moral defense of markets is that they divert people who otherwise would raise armies and start religions into peacefully productive pursuits.” But almost every line in his manifesto starts with ‘we believe, we believe.’ It’s this faith-based argument that techno-optimism is the thing that’s going to make everything better if we just believe in the tech founders and the AGI, or whatever.

ET: Totally! And so the e/acc people themselves have described their view basically as a religion. One of them said something like: It’s spirituality for capitalists. To put it differently, it’s like capitalism in a kind of spiritual form.

PM: It feels like going back to what you were saying earlier, when you were talking about how there were these atheists who were seeking out some kind of religious experience or some spirituality, and found transhumanism, or these other ideologies, to fill this hole that existed within them. And it very much feels like that. Obviously, I think there are some other incentives for people like Marc Andreessen, Sam Altman, Elon Musk. But I think that there's also, especially when you think about how these ideas or ideologies get a broader interest from the public or from people in the tech sector, this kind of yearning for a grander narrative or explanation of our society, of humanity, of our future, or whatever.

ET: Totally. And so maybe it's worth tying together a few ends here. Both of these views are deeply utopian, techno-utopian. They think the future is going to be awesome, and technology is the vehicle that's going to take us there. The longtermists have more of an emphasis on the apocalyptic side of that. So, maybe the default outcome of creating AGI isn't going to be utopia, everything being awesome; actually, it could be catastrophe. And this apocalyptic aspect of their view links back to the question of libertarianism: this is where the State plays a role, enabling us to impose regulations to avoid the apocalypse, thereby ensuring utopia. So one side is more techno-cautious. And the other is more "techno-optimist." It's more like techno-reckless: they just judge the existential risk to be very low and think it's all going to be fine, because the free market is going to figure it out. And the more apocalyptic longtermists say: No, actually, we shouldn't count on everything being fine. Actually, the default outcome might be doom, and it's really going to take a lot of work to figure out how to properly design an AGI so that we get utopia, rather than complete human annihilation.

The two sides of AGI also find expression in the term 'godlike AI,' which is used by longtermists and doomers and so on, as well as by people like Elon Musk, who has referred to creating AGI as summoning the demon. So, the idea is basically that AGI is either going to be a God who loves us, gives us everything we want, lets us live forever, colonize space and so on, or it's a demon that's going to annihilate us. And again, the e/acc people are like: No, no, all of that's just kind of science fiction stuff. What's not science fiction is utopia, though. I think this is really the key difference between e/acc and EA longtermism, the most significant difference. There are then minor differences in terms of their visions of the future. So longtermists care about value in a moral sense; maybe this is happiness, or knowledge, or something of that sort. And, drawing from utilitarian ethics, they argue that our sole moral obligation in the universe, well, if you're a strict utilitarian it's the sole moral obligation, and even if you're not a strict utilitarian you could still say it's a very important obligation, is to maximize value. This notion of value maximization is central to utilitarianism.

I should point out: utilitarianism historically emerged around the same time as capitalism, and I don't think that's a coincidence. Both are based on this notion of maximization. For capitalists, it's about profit; for utilitarians, it's this more general thing, just value, happiness, something like that. That's what you need to maximize. Whereas the e/acc people are not so concerned with this metric of moral value. They care more about energy consumption: a better civilization is one that consumes more energy. And they root this in some very strange ideas that come from a subfield of physics, thermodynamics. So if you read some of the stuff that e/acc people have written, they frequently cite this individual named Jeremy England, who is a theoretical physicist at MIT. And he has this legitimately, scientifically interesting theory that the emergence of life should be unsurprising given the laws of thermodynamics. Basically, in living systems, matter just tends to organize itself in ways that more optimally dissipate energy. And that's consistent with the second law of thermodynamics.

I don't need to go into all the details, but basically, the e/acc people take this to an extreme and they say: Okay, look, the universe itself, in accordance with the laws of thermodynamics, is moving towards a state of greater entropy. Living systems play a role in this, because what we do is take free energy and convert it into unusable energy, thereby accelerating this natural process of the universe heading towards a state of equilibrium. And so the goal, then, is to create bigger organisms, meta-organisms; they can be corporations, companies, civilizations, and so on. All of these entities are even better at taking free energy and converting it into dissipated energy, thereby conforming to what they refer to as the will of the universe. Maybe people are struggling to follow, and if that's the case, I think it's an indication that you are following along, because it is very strange. But that's basically what the e/acc people are doing. What matters to them is a larger civilization that converts the most energy possible into unusable, dissipated energy; that's what dissipated energy means, just energy you can't use anymore. And so, for them, the emphasis is not so much on maximizing moral value.

It's about creating these bigger and bigger civilizations, which means colonizing space, which means creating AGI that is going to help us colonize space, new forms of technological life, and, with respect to human beings, maybe merging with machines, and so on. All of this is conforming to the will of the universe by accelerating that process of turning usable energy into dissipated energy, increasing the total amount of entropy in the universe. And so what they're ultimately doing here, and it's very strange, is saying that what is the case in the universe ought to be the case. Anybody who's taken a class on moral philosophy is going to know that that's problematic: you can't derive an "ought" from an "is." Just because something is the case doesn't mean that it should be the case. But that is basically what they're doing. And maybe one thing to really foreground here is that even though the "theoretical basis" is a little bit different from the longtermists', in practice, their vision of what the future ought to look like is indistinguishable from the longtermist view. We should develop these advanced technologies, artificial intelligence, merge with machines, colonize space, and create an enormous future civilization. That is exactly what the longtermists want as well.

So, if there's a map, imagine a five-foot by four-foot map that shows you where different ideologies or different positions are located, e/acc and longtermism would be about an inch apart. In contrast, the AI ethics people, Emily Bender and Timnit Gebru and so on, would be like three feet away. So, if you stand far enough from the map, e/acc and longtermism are in the exact same location. Yes, they disagree about the extent to which AGI is existentially risky, and they disagree very slightly about what we're trying to maximize in the future. Is it energy consumption? Or is it value, like happiness or something like that? But other than that, they agree about so much. I mean, they talk about the techno-capitalist singularity. So obviously, that's drawing from this tradition of singularitarianism, the S in TESCREAL. One of the intellectual leaders of e/acc, Beff Jezos, recently revealed to be Guillaume Verdon, has founded a company called Extropic. And although the term extropy didn't originate with the Extropians, they were the ones who popularized the term and provided a more formal definition.

So his company itself is named Extropic, which gestures at the Extropian movement. He's mentioned that the e/acc future contains "a pinch of cosmism," to quote him. So, there are all sorts of connections between e/acc and longtermism and the other TESCREAL ideologies that substantiate this claim of mine, that e/acc really should just be seen as a variant of TESCREALism, with a different emphasis on a few minor points. But otherwise, it's exactly the same. I mean, there are even interviews with the e/acc people in which they talk about how mitigating existential risk really is extremely important, that it really does matter. They just think that existential risk is not very significant right now, and that the longtermists and so on have this very overblown view of how dangerous our current technological predicament is. So, the debate between the e/acc people and the EAs, longtermists, and so on should be understood as a family dispute. Family disputes can be vicious, but it is a dispute within a family of very similar worldviews, very similar ideologies.

PM: I think that's incredibly insightful and gives us a great look at these ideologies and how it all plays out. And I just want to make it a bit more concrete again for people. Obviously, you think about Elon Musk when you think about this energy piece of it, and how they're incentivized to use more energy, and how they believe that society is better if we just use more energy. Elon Musk hates the degrowth people and the suggestion that we might want to control those things, and that maybe we can't just infinitely expand on this one planet. And so obviously, his vision of how we address the climate crisis is one that's about electrifying everything and creating a ton more energy, so we can still have private jets and personal cars and all this kind of stuff. Nothing has to change; we actually keep using more and more energy. It's just that now it's all electrified, and not using fossil fuels. So, the crisis is fixed.

And if you look at someone like Jeff Bezos, when he presents this idea of humanity going into space and living in these space colonies that are orbiting around Earth, and he wants the population to expand to a trillion people, he basically sets up this choice that we have to make, where we either stay on planet Earth and accept stasis and rationing as a result, or we go down his path and go into space. And then we have this future of dynamism and growth, because we just keep expanding and all this kind of stuff; we're using more energy, there are more and more people. So, that's just to give an example of how this plays out. And of course, you see the energy talk in Andreessen's manifesto as well, where he's saying we need to be producing and using more energy, because that is the way that we get a better society.

And to close off our conversation, I just want to talk about some of the other influences that seem to be coming up here, in Andreessen's manifesto, which, I think it's fair to say, is one of the key documents of this effective accelerationist, techno-optimist movement at the moment, at least of the various writings that are out there. Some of the names stood out to me. He makes a direct reference to Nick Land, who is this far-right, anti-egalitarian, anti-democracy philosopher, thinker, I don't know how you'd want to describe him, who explicitly supports racist and eugenic ideas. And Andreessen presents him as one of the key thinkers in this movement. And then another one that he directly cites, though not by name there (he includes the name later in his list of people), is Filippo Marinetti, who, of course, was an Italian Futurist. That was a movement that was basically linked to fascism, that was part of Italian fascism. So these are some of the people that he is calling attention to when he talks about what this vision of the world is. What does that tell you? And what else are you reading about the types of influences that Andreessen and these other folks are citing when they talk about this?

ET: So actually, I think this gets at another possible difference between e/acc and EA longtermism. The EA community, at least according to their own internal community-wide surveys, tends to lean a bit left, whereas I think e/acc tends to lean quite right. That being said, the longtermists, and really the TESCREAL movement itself, have been very libertarian. I mean, that's a key feature of it. Again, they're libertarian about everything, except for these potentially really dangerous technologies. Actually, just as a side note, there was an individual who works for Eliezer Yudkowsky's organization, the Machine Intelligence Research Institute, or MIRI, who responded to somebody on Twitter asking what the term is for e/acc, except with respect to advanced nanotechnology, AGI, and so on. And this individual's response, his name is Rob Bensinger, you can find his tweet if you search for it, was that the term for e/acc, except with respect to AGI and so on, is 'doomer.'

So a lot of those doomers were accelerationists from the start, and then they came to realize: Oh, actually, some of these technologies are super dangerous, so maybe we need to regulate them. Now they're called doomers. But they are accelerationists about everything except for AGI and maybe a few other technologies. So, all of that is to say that there still is a pretty strong right-leaning tendency within the TESCREAL community. But I think the e/acc movement is probably even more unabashedly right-wing in many respects. Every now and then they reference the dangers of fascism and so on, but a lot of their views are fascistic, or they're aligning themselves with people who have fascistic leanings.

Elon Musk is maybe another example here. Even the intellectual leader of the e/acc movement, along with Andreessen, Beff Jezos, Guillaume Verdon being his real name, has referenced Nick Land in some of his work, although apparently he wasn't familiar with Nick Land until he started writing the e/acc manifesto several years ago, which is up on their Substack. But he has explicitly said on Twitter that he's aligned with Nick Land. And Nick Land is a far-right guy who's been instrumental in the emergence of the so-called Dark Enlightenment; Nick Land wrote a book called The Dark Enlightenment. And in fact, the Dark Enlightenment also has roots in the "Less Wrong" community. "Less Wrong" was the petri dish out of which the neo-reactionary movement emerged, and the neo-reactionary movement is very closely tied to, and overlaps significantly with, this notion of the Dark Enlightenment.

So, a lot of it is worrisome. As I mentioned before, there are two broad classes of dangers associated with TESCREALism. One concerns the successful realization of their utopia: what happens if they get their way and create an AGI that gives us their utopia, everything they want? What then? Well, I think that will be absolutely catastrophic for most of humanity, because their utopian vision is so deeply impoverished. It's so white, male, Western, Baconian, capitalistic, and so on, that I think marginalized communities will be marginalized even more, if not outright exterminated. But then the second category is the pursuit of utopia, and this subdivides into the doomers and the accelerationists. Why are the doomers dangerous in their pursuit of utopia? Well, because they think these technologies, like AGI, could destroy us if we don't get AGI right, and if we do get AGI right, it will bring about utopia. Therefore, extreme measures, including military strikes, even at the risk of triggering thermonuclear war, are warranted to prevent the AI apocalypse.

So, that is why I think the doomers are super dangerous. And they are increasingly influential within major governing entities: the United Nations, the UK government, the US government, and so on. That scares me. The accelerationists, on the other side, are dangerous because not only do they minimize the threat of some existential risks, but the current-day risks that are the primary focus of AI ethicists like Gebru and Emily Bender are just not even on their radar. They don't care about them. Social justice issues are a non-issue for them. So accelerate, accelerate, accelerate; if people get trampled during the march of progress towards these bigger and bigger, more energy-intensive civilizations, so be it. Maybe that's sad, but sorry, the stakes are so high, and the payoff is so great, that we can't put any brakes on progress.

And this just ties into the right-leaning tendency to neglect, minimize, disregard, or denigrate social justice issues. People like Andreessen and the other right-leaning, or even far-right, e/acc people just don't care about social justice issues. They just don't care about the harms that AI is already causing. There's almost no mention, in either the doomer or the accelerationist camp, of the use of AI in Israel and how it has played a part in what some would describe as a genocide that's unfolding in Gaza. To them, this is small fish. I mean, these are just minnows, and we're talking about whales. So all of this is very worrisome, and the fact that it is so right-leaning is unsettling.

PM: Absolutely. I'll put a link to the Israel article in the show notes so people know what you're talking about. I would also just say, you can see those ideas infecting policymakers as well, as we've been talking so much about regulation. We saw in the UK that the Conservative government made an explicit push to be a leader in AI regulation, a kind of regulation that aligns with what these doomers, or whatever you want to call them, are saying, in order to focus on this far-future stuff rather than the reality. And then a couple of weeks ago, the deputy prime minister in the UK was saying: We're going to roll out an AI hit squad to reduce the number of people working in the public sector, and to get it to take over immigration and these other kinds of key areas, which is very worrying. And this is exactly the type of thing that people are warning about. But just to bring it back to Effective Accelerationism and these ideologies: they're also very clear on their enemies, the decels or the decelerationists, and, of course, Andreessen points to the communists and the Luddites, very much, I think, the people who appear on this show and push back against them. And I'm sure you consider yourself part of that class as well.

ET: So, one thing to disambiguate: that term decel is an umbrella term that can include all sorts of people, from Emily Bender and Timnit Gebru to Eliezer Yudkowsky. But those are two radically different groups. Take AI safety, which you mentioned earlier; Yudkowsky is part of the AI safety world, and a lot of individuals who work in AI safety are "doomers." AI safety came directly out of the TESCREAL movement, so it is an appendage of this TESCREAL organism. And again, the idea is that AGI is going to deliver us to utopia, but it might also destroy us, so let's try to figure out a way to build a safe AGI; hence, AI safety. That contrasts very strongly with AI ethics, which is much more concerned with non-speculative, actual real-world harms, especially those that disproportionately affect marginalized communities. And so, with the term decel, there are two things to say. One is that doomers are, in a sense, decels, and the e/accs like to position themselves as the enemies of the doomers, of this version of decel.

But again, that is a family matter, a dispute among family members, because their vision of the future is almost identical. The only big difference between them is their probability estimates of AGI killing everybody if it's built in the near future. So that's one point. And then there are the other individuals who would be classified as decels; I do think that they are enemies of Andreessen, and these would be people like myself, who don't have this bizarre kind of religious faith in the free market, who don't think that the default outcome of developing these advanced technologies is that everything is going to be wonderful. I mean, look around at the world: there's just overwhelming evidence that technology does make some things better, but it also makes a lot of things worse.

And again, the situation in Palestine is one example, but there are a billion others that could be cited here. So what does that imply? It implies that we need to be cautious, careful, and prudent, to think ahead, and to get governments to implement regulations that protect vulnerable, marginalized peoples around the world. That's the best way to proceed. So, just to disambiguate the term decel: it could mean the doomers, who are basically the same family as the e/accs, or it could mean these people in the AI ethics community, who are a world apart from the doomers and the e/accs.

PM: So, consider what type of decel you want to become; veer more toward the Luddite and communist side of things, to use Andreessen's framing. I would also say, think about what our own pushback to Effective Accelerationism might look like. Shout out to Molly White, friend of the show, who added e/ludd to her username: Effective Luddism, I guess. Big fan of that. But Émile, always great to speak with you, to get an update on how these ideologies are progressing, and to get more of the history so we can understand exactly the brain worms that are infecting the tech class and the powerful people who make so many decisions that affect our lives. Thanks so much.

ET: Thanks for having me. It's been wonderful.