The Fight Over the Future of OpenAI

Mike Isaac


Paris Marx is joined by Mike Isaac to discuss the drama around Sam Altman being temporarily removed from OpenAI, what it means for the future of the company, and how Microsoft benefits from its partnership with the company.


Mike Isaac is a technology reporter at the New York Times. He’s also the author of Super Pumped: The Battle for Uber.

Support the show

Venture capitalists aren’t funding critical analysis of the tech industry — that’s why the show relies on listener support.

Become a supporter on Patreon to ensure the show can keep promoting critical tech perspectives. That will also get you access to the Discord chat, a shoutout on the show, some stickers, and more!



Paris Marx: Mike, welcome to Tech Won’t Save Us.

Mike Isaac: Thank you. Thank you for having me.

PM: Really excited to chat. Obviously, I’ve been following your work for ages. Loved your book, “Super Pumped.” Always meant to have you on the show, and for some reason, it just hasn’t happened until now. But with all this going on, it was finally time for Mike Isaac to come on to Tech Won’t Save Us. So I’m very excited.

MI: We needed another implosion to make it happen. I’m glad we did. Thank you so much!

PM: Always love a good tech implosion [laughs]. So you’re on the show, because you’ve been reporting on this drama that we’ve all since seen playing out over the past few days. Really started on Friday, with the announcement that Sam Altman was getting the boot from OpenAI, which seemed like a real shocker. So what happened exactly on Friday and what was your reaction when you heard this news?

MI: We’re recording this on Tuesday, and the past few days have been a super blur. And it was funny, everyone was like: Well, time to go home for Thanksgiving, it’s Friday. Nothing’s going to happen because no one wants to work on a Friday. I mean, there’s the news-dump-before-a-holiday thing. Anyway, around, I want to say, 12:20 p.m. or something, my colleague Cade Metz gets word that this is happening. And then seconds later, a blog post hits from OpenAI, from comms, essentially giving this pretty scathing letter ousting Sam Altman, the CEO of what is probably one of the most valuable private AI startups in the world, from the company. Which is crazy, because he’s not a household name; most people don’t know who Sam Altman is. I spend a lot of my week explaining who this guy is. But he’s a well-known figure in Silicon Valley and has been around for a while. He ran Y Combinator, which is a startup incubator out here, which all these companies have come out of. But his celebrity has really risen in the past 18 months, I want to say, as OpenAI, this generative AI company that produced ChatGPT, has just gained momentum. The valuation has shot up in the weird mechanics of funding rounds that happen, and everyone’s trying to get in there. And all the other companies are freaked out and want to compete with them. So it was just a big shock for them to say: Not only are we firing our CEO, we’re doing it for cause because he’s been not consistently candid with us over time.

PM: A euphemism for lying, basically.

MI: Totally. Even if some bad shit went down at a company, you don’t usually stab them on the way out the door. The decorum in the Valley is to give them a polite goodbye. And everyone says: They did a great job. And they’re going to be an advisor, which of course they’re not. And they just walk out. But that is not what happened here, and everyone freaked out. It was crazy.

PM: It’s well known that you kind of fail your way up in Silicon Valley. We can talk a bit more about OpenAI, but think back to 2018, when Elon Musk tried to take it over. Even he got the nice version: We let him go away, nicely. What they said at the time was that there was a conflict of interest because Elon Musk’s Tesla was working on self-driving technology, so he decided to leave because of this conflict between OpenAI and Tesla. But we find out later, or at least Sam Altman said more recently: Oh, no, actually what happened is Elon thought we were behind, tried to take over the company, and yelled at founders, who were like: No, we’re not going to let you do that. And so then he took off and stopped giving them the money that he promised them, the billion dollars of funding. So this kind of stuff happens, but usually you don’t get the: This guy’s a liar, we’re kicking this guy out.

MI: That’s the gentleman’s agreement, I guess, amongst these people. It can be very dramatic, and often is, especially when millions or billions are at stake. But they’ll just keep it quiet for a while, and then maybe it slowly trickles out over time or something, if people care about it. Because, again, startups are messy. A lot of this shit happens a lot of the time and doesn’t get reported, because most people don’t care. Or it’s not valuable enough, or it’s a stupid little app, whatever. It’s not important enough. But this happened to be Elon Musk and all the big wigs around here. So contrasting that squabble to this very high-profile squabble: it was immediately apparent something was off, but it also kicked off this spiral in the Valley of everyone trying to figure out what he did. You read a blog post, and you’re like: Okay, there’s a laundry list of things that CEOs get removed for. Did he do fraud? Did he have some sort of sexual scandal? Did he do, whatever? Just, what are the cardinal sins of getting booted from a company as the CEO?

PM: All of those rumors, to be clear, were floating out there. Those and many other things, like whether there was more human labor involved than they were saying, and all these other kinds of speculations. Because there was this void into which people were placing everything they were thinking about Sam Altman and OpenAI, and these technologies, and everything. Do we know, though? Have we figured it out yet, why he was actually booted from the company, or is that still a gray area?

MI: Still up in the air, as far as I know. To your point, the information void was what got the whole thing spinning. It was amazing. It was on Twitter; it was on Hacker News, the Y Combinator website, which is where every engineer in the Valley goes to post and ruminate. It was on the OpenAI Reddit forum, and then just people texting, calling, meeting, etc. The gossip mill was in full effect. But I think that was a real board miscalculation, because that fed the hype train around this, and then the story became: What did this guy do? And I think it was very clear that they didn’t have a communication strategy. It was just sort of: Okay, we’ve got to push this guy out, send a message, and then assume everyone would be like: Okay, let’s go do it. And that was just not what happened. It was wild.

PM: What has been reported over the last few days is essentially that there was a schism or a divide within the company over how to approach its mandate, how to approach its technologies. And from what I’ve read, the suggestion was that the chief scientist, Ilya Sutskever, I might be butchering that pronunciation, was very much the one who initiated this push by the board to get Sam out. Can you talk to us about what that divide is, the different camps that exist within the company, and the thinking on these AI tools and how to approach them?

MI: We’re still trying to figure that out. The other dynamic here is that there are certainly folks who are talking and spreading a lot of stuff to journalists. And if you’re paying attention, you can tell who it is, who’s being told what, and who’s being used as a mouthpiece for someone trying to angle their way in. My colleagues and I have been very careful throughout all this. If you go back and look at the many tweets or stories, there’s a lot of wrong stuff out there, and you’re going to have to wait and see what shakes out. But it’s been fascinating, because they’re using media very effectively to try to push this campaign to reinstate Sam. Normal people don’t know this is going on. They’re just sort of like: Okay, there’s a headline, I don’t really know what that means. But here’s the new drip, drip, drip or whatever. And I think that was actually working for a while. It’s gone sort of back and forth. But anyway, to your question, there are differing camps in this company and what they believe in. It’s super convoluted, and I don’t even know all the details of these philosophies. But it essentially boils down to the effective altruists versus the accelerationists; those are the two camps.

Effective altruists, and this is a butchering of their philosophy, are essentially very concerned about the effects of AI on the world, and are often labeled as Doomers because they have their Skynet thing in their heads all the time: Terminator is going to become self-aware and nuke us all. Then there are the accelerationists, which is probably the camp Sam Altman falls into. Which is like: No, we need to speed up development of this tech, because think of all the people who are not being served by it being out there. If there’s a continent full of kids in Africa who don’t necessarily have direct access to medicine, equally across different countries, shouldn’t we be able to at least give them a chatbot doctor, so they get a minimum viable standard of care? Even if it’s not as good as a human. That’s the thinking, and there are certainly logical holes in that argument, but that’s how they think about it. So that has been a point of contention inside the company, but really, across the Valley and across these camps for years and years now. OpenAI is almost a perfect vehicle — and its implosion, a perfect vehicle — for how that divide has really split people at this point.

PM: I think it’s fascinating to hear you describe it that way, because when you look at Altman, as a character, as the head of OpenAI, his actions very much align with this accelerationist camp. But then if you look at his rhetoric, he’s using the rhetoric of both camps: We need to get these tools out there, people need them. But then at the same time: AI might destroy the world and kill humanity, and all this kind of stuff. What do you make of the way both of those arguments, which don’t seem to fit together, are still deployed?

MI: I have a lot of admiration for the political animal that Sam Altman is, I guess is what I would say. He has gone from cult Valley figure to figure on the international stage in a very short amount of time. And I think that also probably plays into why some folks on the board are wary of him. But he’s very smart, and he uses the language of the EA folks. And look, I do think that everyone involved in these conversations believes parts of what they’re saying, if not all of it. I do believe that the Doomers, as they call them, the EA folks, are legitimately concerned; this is something that is on their minds. Just as I think that Sam probably also believes in the we-need-to-speed-this-up stuff. But if you boil it back down to the original purpose of: This is a nonprofit, we’re doing this for ideals, and then watch it spin out of control as money gets involved, as it becomes a more lucrative enterprise. That, to me, is the more salient divide throughout a lot of this. The way I keep talking about it, internally, is idealism running into capitalism and fighting it out. And it seems like one side is winning. If Sam makes his way back in and gains certain concessions, if he rids himself of the board, if he is able to change the profits or the structure of the company, that definitely says something about who won this fight, if that makes sense.

PM: Absolutely. I want to go back to the events of the weekend and how it all played out in just a second. But I think this is an important point that you bring up right there as well. Because a lot of these companies that we’re used to are private for-profit companies, and then eventually they go public. That is kind of the route in Silicon Valley. But OpenAI was set up in 2015 as a nonprofit, explicitly to not have this profit motivation, as it was trying to figure out how to create an AGI that wasn’t going to kill us, or whatever. That was at least part of the justification. To what degree does that structure of the company, and the fact that it built a for-profit arm underneath it in 2019, play into the broader dynamics that are happening here?

MI: I think it fucked everything up, in the sense of what they wanted it to be. It would probably be a very different company, and these people probably wouldn’t have been involved in it. That’s the whole genesis of it: they wanted it to be a nonprofit, and they didn’t want market forces dictating what this should be. But it would have been a lot cleaner, and there would probably have been a lot fewer board struggles and power plays at the top — if they had just made it into an LLC from the beginning — but that’s not what happened. So originally they created this as a nonprofit structure; any profits are capped and then funneled back into the nonprofit instead of paid out to shareholders, which is how normal companies work. And that was supposed to tamp down any incentives for people to do things other than work towards the mission of creating AI that can help the world, sort of thing. To condense some of it, I think over time it started becoming more complicated, based on different founders’ side activities, or different founders’ opinions on the direction they should go or who should be leading, or the amount of money that could be made out of this.

Sam basically made it into a for-profit company backwards. There are these diagrams of the governance structure floating out there in the ether. I think the meme is something like: Who could have thought this governance would be a crazy thing? And it’s very convoluted. I don’t know what those are called, but: if this, then yes; if this, then no. One of those decision trees. Anyway, the governance was broken to begin with, because all of the machinations that Sam and others did were to make it easier to recruit talent by making upside a possibility, because AI talent is very expensive, or to have different share sales to outside investors. Because the legitimate problem they had was: it’s very hard to get people to give you millions of dollars if there’s no upside for them. Their philanthropy efforts hit a wall. People were like: Yeah, great, whatever, but I’ll give you a million dollars, not a billion dollars, if you’re not going to give me some stake in this. So I would say it was part necessity, and then part lucrative: What do we have here, and how are we going to capitalize on this? If that makes sense.

PM: I found the Elon Musk story — about him trying to take over the company, and all that kind of stuff — really interesting as well, because the way that Sam Altman tells it, in hindsight, seems set up in such a way as to justify the move toward creating a profit model. Because it was like: Elon Musk promised us a billion dollars, and then he tried to take over OpenAI, and we didn’t let him, and he stopped giving us the money. And so we needed to find money somewhere, so that we could pay for all this stuff. And then the following year, 2019, is when they set up the for-profit subsidiary, or part of the company, or whatever. I don’t know if that’s the accurate telling of the tale, but that’s the telling that Sam Altman has to justify it. I don’t know if you have thoughts on that.

MI: We can go by what’s out there in public. All I can do is say: They’re saying this stuff on the record, so we can point to that. But again, the truth on a lot of this stuff is a mixture of who’s willing to talk and whatever. And this company in particular, with all these intense personalities at the top, is going to be really hard to get to ground truth on, in terms of what actually happened, I guess is what I would say.

PM: It makes perfect sense. So I want to go back to this weekend. We talked about Friday: this news comes out, there’s all this speculation happening about why it happened, all this kind of stuff. Then we move into Saturday, and all of a sudden the story is: Oh, by the way, Sam Altman might be coming back. So what is happening on Saturday?

MI: That was so good. I remember getting a tip from someone before it got out. The problem, and the good thing, about working at the Times is there are many layers to getting something out: you need to verify, you need to get multiple people to talk to you about it, and then you need to get editors on board and have it signed off. Those are good, proper checks. But it also puts me at a disadvantage when someone’s tweeting random shit that they hear, and then can go back and say: Well, I was right, because I tweeted five things and one of those turned out to be right. But anyway, I heard about it. I remember this, and all my colleagues I’m working with on the story were like: What the fuck? First of all, we didn’t believe it, because it sounded too stupid to be true. But it slowly became apparent. Sam, once he was fired, put out the statement saying: Oh, I loved my time there, whatever. And then Greg Brockman quits, who is his co-founder, was the president there, has a lot of technical knowledge, and was actually a big loss. And slowly, the investors, employees, and even executives start to rally around Sam.

So I think at some point, he realizes: I’m not done here. I actually can make an effort to pressure the board. The board is four people; it’s very tightly knit. They’re very interestingly obstinate people who are willing to take a lot of pressure, but they can’t be removed by anyone except themselves, basically. They would need to take a board vote to dissolve the board or agree to resign. And so Sam wages this pressure campaign from the outside, basically, all across Twitter and using media outlets to get different messages across. I’ve been in between, more trying to go back and review the stories and the different driplets that are out there. And it seems to work. The fact that the board reengaged shows that it was working to some degree; he didn’t take it lying down. The other part that made it work was that all the employees were like: Well, guess what, we’re going to quit. Which is actually existential to the company, because you can live without your CEO, but you can’t live without 760 of the 770 employees who work there.

PM: Which is interesting, because the tech companies do like to make us feel it’s the other way around all the time.

MI: That’s totally right. I’ve been thinking about this for a long time. Employees do have more power than they realize, but they are treated so well that they don’t usually have any reason to leave, or at least push back. But when shit gets really bad is when they start doing stuff, or banding together, which is what happened here. Even the interim CEO who was named, this woman, Mira Murati, was basically smart enough to read the tea leaves and see that everyone wanted Sam. I mean, Sam’s superpower here was marshaling support from everyone at the company and using that effectively to push back on the board. So just to keep the timeline going: we walked through that, Sam’s just pushing, pushing, pushing, pushing. It seems like talks break down at some point, and the board issues another letter basically slamming the door shut on Sam. So over the weekend, I can’t even remember what day it is, it’s all but inevitable that he’s coming back. And then: Oh, he’s not coming back. And holy shit, everything is collapsing. There’s a ticking clock, at least for Microsoft, which we haven’t even talked about.

PM: We’ll get into that.

MI: Because they need to do it before the market opens on Monday. So, dramatically, on Sunday night, Satya Nadella, the Microsoft CEO, tweets: We support OpenAI, and also we’re hiring Sam Altman and Greg to come join our company, and anyone else who wants to can come. Basically doing a talent raid on OpenAI, which everyone lost their shit over. It was wild.

PM: And it came right around midnight Pacific time. So it’s like: Oh, okay, this is really happening. Just capping off a really wild weekend. And of course, the one thing I would add to what you were saying is that, about an hour or two before that, we got the news from OpenAI that they had chosen a new CEO, which was very much saying…

MI: I forgot about that.

PM: Yeah, saying: Sam is not coming back, because we’ve chosen this guy, Emmett Shear, who used to lead Twitch, I believe it was, to be the CEO. Anything we should know about that?

MI: Well, it’s telling that I forgot about that, because he’s such a non-entity in all of this.

PM: And immediately, people started digging up his old tweets, and saying just the worst kind of stuff. I don’t even know if I want to get into it.

MI: I know. It’s totally gnarly. And I’m like: Nope, not touching that.

PM: I’ll put a link in the show notes. If people want to know more.

MI: Go look at the Forbes story. It was very good and very gross. It tells you the level of rushing that was done here to get this stuff out. Because the way you usually do this, you go through (literally there are agencies you can hire) and scrub your problematic tweets after years of having this stuff up, so you’re not in a Travis Kelce situation, or whatever that is I’m talking about. Although that guy’s tweets rule. His squirrel-feeding tweet was probably my favorite thing I’ve ever read. I think Emmett Shear is an interim CEO who they were like: Well, we trust this guy to lead in the interim. I’m not sure how they decided that, but clearly they made a very large miscalculation. I think it was someone they trusted. They apparently also talked to other people who turned it down, probably smartly. And immediately the employees are revolting. They’re like: No, we do not accept this new guy. This is not going to happen. So to your point, that happens, all hell breaks loose, and then at the end of the night, Satya says: The guys are coming over here. Now that would be an end if it was an end, but it’s not the end.

PM: You would think that’s the end. I do want to come back to Microsoft, because I want to continue on our timeline, and then we can return and more deeply discuss what is going on there. But as you say, you would imagine: Okay, Microsoft hired Sam Altman and Greg Brockman, and that’s the end of the story. There are probably going to be some final things that happen over at OpenAI, maybe some more people are going to leave, whatever. But you would imagine that’s the conclusion. But then we move into Monday, and all of a sudden the story is, again: Oh, by the way, Sam Altman might still go back to OpenAI. So what, then, is happening to still have this moving forward?

MI: Yes, besides everyone being delirious on Sunday night at one o’clock in the morning, everyone was like: Oh damn, this is a crazy capstone to this whole thing. And Microsoft is an important player here, because they have this very expensive partnership with OpenAI. They’re the single largest investor, upwards of 10 to 13 billion dollars invested, although the caveat there is that most of that is in computing power, not straight cash. And so everyone’s like: Damn, coup de grâce. Satya just killed OpenAI and stole their talent, and it’s crazy. But for me, it didn’t end there, because it was still playing out; there was still talk of the board or whatever. And slowly, after talking to people, I started figuring it out. I was like: Satya would be better off with a living OpenAI compared to a dead one where he has to suck up all the talent, start from scratch inside of Microsoft, and pay all the salaries. There’s probably regulatory or legal risk there that I haven’t even thought of, even though it’s not a full-on acquisition. Maybe there’s a legal argument where it’s a merger, or where absorbing this much talent, plus some version of the IP that comes with the employees, opens them up to legal risk. I don’t know.

But it was clear to me that the situation they have right now, with a functioning OpenAI, is much more desirable to them than restarting from scratch. So if you believe that, which I do, you start to see that announcement as a bargaining chip against OpenAI’s board. Then Satya goes on TV, CNBC and Bloomberg, and does interviews basically not saying a lot, but basically saying: I need to kill the board. He straddles the line. This is the first time he kind of says publicly: Well, we welcome Sam and them. But he made it sound like their hiring wasn’t a done deal. And so I think people start catching on: Okay, they’re still trying, and they’re still trying to force the board to do something. Which is a crazy amount of pressure on the board. There’s the CEO of a trillion-dollar market cap company, or whatever, bearing down on them. Satya Nadella is an incredibly powerful person who is basically just railing against these four people. That was Monday?

PM: Yeah, that was Monday.

MI: So we’re recording on Tuesday. And the status quo over the past few days has basically been: employees threatening to quit, and the board taking up negotiations with Sam again, on and off, but I think as of right now, they’re negotiating again. And in the story that we just put up on the board, they basically want containment around Sam if he were to come back. They are demanding some concessions, and Sam is trying to create insurance for himself, basically. The reason it’s been locked in a sort of stalemate is the board members. These people are zealots, man, true believers in the safety of the company and in the mission that they’re on. Which is why you haven’t seen them cave, because any other person who didn’t really give a shit probably would have stopped a long time ago.

PM: Really, though. That gives us a good in to talk a bit more about Microsoft as well, because Satya Nadella does seem really good at this kind of stuff and at managing this relationship, even though he was totally blindsided by the board’s decision to get rid of Sam. You can see in those interviews he did with Bloomberg — and I believe CNBC, I only watched the Bloomberg one — that he has a good command of this. So if you’re a major tech company like Microsoft, I guess there are two paths you can really go. One where you’re building this massive AI division in house to become a real competitor and player in this space, like Google has done or like Meta has done. Or another path where you have this partnership with OpenAI, and that allows you access to these tools without having it directly in house. So why did Microsoft choose this direction, and what benefits does it get from this relationship with OpenAI?

MI: We’ve been thinking about that for a while. My colleague Karen Weise, who regularly reports on Microsoft, made some really good points. The arm’s-length relationship actually does behoove them. First of all, it was never clear they could ever even swallow OpenAI with a straight-on acquisition. If it was a different regulatory environment, I imagine Satya probably would have at least considered it, if not done it. But there are benefits to keeping it spun out. They have a 49% stake; that’s a giant stake. They get all the upside of this partnership by saying it’s powering Bing, which, again, not many people use, but they still believe that generative AI search is a way to take on Google’s search dominance. It’s going to power not just Bing, but all of their products, the Microsoft Office suite. Microsoft is this fascinating company that’s huge and super powerful and makes me go to sleep. But really, they should be more heavily covered in different ways, I guess is what I would say.

PM: I think the fact that they genuinely make us go to sleep, works for Microsoft pretty well.

MI: Totally. No one gives a shit about them. Because they’re all like: Well, what is Facebook doing today, or whatever?

PM: Exactly. What’s Google up to with its search engine dominance, and how is Facebook fucking up the world today? And it’s like: Oh, I’m just using Microsoft Word on my computer and whatever.

MI: Clippy isn’t going to kill me.

PM: At least, unless we’re all going to be turned into little clippies.

MI: That would not surprise me. So, to get back to my point, Microsoft reaps many of the benefits. Also, if OpenAI sold to someone else or somehow went public or whatever, it would be something of a payday. But Microsoft doesn’t really need that money; they need the tech, they need the competitive advantage they have here. If you believe that this is going to be some version of the future, then they have the biggest competitive advantage compared to Meta, Google, Amazon — I have no idea what Apple’s doing, but probably Apple too. So I think they get to have their cake and eat it too, in that regard. There’s also a reason that it didn’t get done inside of these big companies. I’ve been thinking about this for months now: AI development and AI research is not new. It is not a new thing.

This has been going on forever. And Google and Facebook, now Meta, and Microsoft have been spending tens of millions, if not billions, of dollars hiring people to do this very thing for more than a decade. But the fact that this startup was able to get away with putting this half-tested product out into the waters and then explode caught them all off guard. And I think Microsoft pouncing on that was a really smart move by Satya. So now, regardless of the cheering he got on Sunday, and regardless of the many articles I see saying: Microsoft is the winner here. He’s still trying to salvage this deal, and is still in some version of trouble, because he just wants to hit reset on it. He wants to go back to Friday at 11 o’clock in the morning, basically.

PM: I wonder, to pick up on a few points of what you’re saying there, I guess some speculations I have around Microsoft, and see what you think. If we’re thinking about the way that a startup — so to speak, or a smaller company — was able to really push this kind of AI stuff forward, it seems they certainly got the resources of having the partnership with Microsoft. And they had a bit more freedom by not having these big bureaucracies to work with. But I also wonder if they were able to unleash these products in a move-fast-and-break-things way that a Google or a Meta or something wouldn’t have been able to, because they were smaller and don’t have the same level of scrutiny and regulatory pressure and stuff like that. Or at least didn’t at the moment they launched these products, as they would have if, for example, OpenAI was inside Microsoft and Microsoft tried to do it on its own.

MI: I pitched that exact story a few months ago; I think we wrote it. But that’s exactly right. People don’t give a shit about OpenAI because they don’t know what it is, so they can do whatever they want. But Facebook has spent the past seven years getting its ass kicked in front of Congress, and of course that’s given them PTSD about doing anything that could be even remotely controversial. Google, I have feelings about, because they’re just this giant, lumbering, slow-moving company with so many divisions, so much politics and infighting, so many people who think they’re smarter than other people, that getting something out the door is hard. And look, I’m not saying that they should be doing these things, I’m just saying this is the reason why they now find themselves behind. So that’s exactly it: being the little guy can really pay off. This is probably some version of the Innovator’s Dilemma, the Clayton Christensen philosophy, which is: Once you get big enough, you’re going to die, or at least not be on it. But I think this is probably another reason why the partnership helps Microsoft, in the sense that they don’t have to worry about this getting killed or bogged down internally.

PM: Absolutely, and I felt like for Google too, OpenAI going forward and pushing out ChatGPT and pushing out DALL-E then gives it permission to push out its similar AI products. We can debate whether they're at the same level or whatever. But if it had put out Bard without the context of OpenAI having released ChatGPT, I think we would have been having a very different conversation about these tools than what has happened.

MI: I will say that Congress moved faster in the States than it did in, let's say, 2016, and there's probably a hangover from 2016. That's not saying that they're going to do anything, but Biden issued a whole AI executive order, I think is what it was, saying: Here are the guidelines. And it wasn't a crazy number of things they should or shouldn't do. But they weren't doing that six or seven years ago. Josh Hawley, who is a prominent Republican senator here, was calling AI meetings on the Hill all the time. They're moving faster, I guess is what I would say, which is probably indicative of some embarrassment over six or seven years ago. And then probably just how everyone in tech is freaking out over this and saying how important it is, and that folks need to be on it way more compared to last time, is what I would say.

PM: Absolutely, you can see how, on the one hand, it is good that they are being a bit more proactive about these things, even if we can still criticize the approach that they're taking, and how it's still a bit too driven by what the CEOs say rather than by the watchdogs and critics and things like that. But to give you another point on Microsoft, the other thing that comes to mind when I think about what you're describing is, I feel like they have set up these partnerships to be able to jump on these moments of hype and integrate them into their products, regardless of whether they will be the next big thing, to keep the investors happy. Like Microsoft was still a big pusher of the Metaverse stuff when Meta was really pushing that and making us believe it was the future. It was integrating it into its products and all that kind of stuff. And then that quietly got shuffled off to the corner earlier this year when AI became the next big thing.

Then it already had this partnership with OpenAI that it had made back in 2019, so it could say: Okay, OpenAI, you're the big leader here, you're getting all the attention. Let's deepen that partnership now, because we already have it. And then the benefit comes to Microsoft of still having access to these tools without facing the regulatory pressure and scrutiny that would come with having that division in house or trying to acquire OpenAI in that moment. And the one other thing I would add is that, for me, I wonder if the goal is really to benefit from rolling out the Metaverse, or from rolling out the AI tools, or if it's much deeper, to say: We are trying to dominate in the cloud computing data center space. And by getting everyone to adopt these AI tools, even if we don't control them or own them ourselves, that's going to require a lot more computing power. We're going to be building out all these data centers. That gives us a lot more influence in whatever this infrastructural piece of the industry becomes.

MI: Totally agree. I think there's probably a little bit of all of it in there. But I mean, look, again, Satya is very smart, as is Zuckerberg. With most of these guys, if they believe something may work, there's no downside to just being like: We're going to try this. It'll be a net positive for the entire network of businesses that we run, and fuck it, just move forward toward it. We can do this, and we can make these very big and expensive bets, because we have an underlying business that is throwing off a ton of cash already. Which is absolutely true for Facebook, absolutely true for Microsoft, absolutely true for Google and for Apple and for Amazon. All of these companies have businesses that are funding these next things.

I think Zuckerberg actually does believe in the Metaverse. He has staked a lot on it: his reputation, the company, a lot of it. But I think he actually does kind of believe that this either should or might be a thing. The other companies buy into these things to varying degrees, but I also think the strategy is kind of like a VC strategy, which is just risk tolerance: why not plow at least some portion of it into this in case it becomes a thing? And then if it doesn't, then whatever, everyone will forget about it, and we can just shunt our Metaverse division into a corner of Redmond, Washington, or something. Then in a few years, we'll lay them off or transfer them and kill the division. It's fucked up, but I feel like that's how it works!

PM: Totally! And they had the cash to make the bet, and the investors still expect them to have something to do with whatever is exciting or getting all the hype in the moment. So Microsoft can say: Look, we're doing the Metaverse, or look, we got the big AI partnership. When really they're using that, I think, in service of their more foundational businesses, to further cement those or shore them up, or whatever we would want to say. So we're talking about Microsoft there, its role in this, how it benefits from it. But this story is not over yet, and this is obviously speculation. I'm sure we'll have actual answers by the time the show goes live on Thursday.

MI: Maybe!

PM: Where do you see this going in the next few days, do you think this ends anytime soon? Or is it even possible to tell at this moment?

MI: It has changed dramatically, not even day to day, hour to hour, from this morning to right now. And so I keep thinking we're getting closer to something. Look, the outcomes are either Sam comes back, or he doesn't. But if he doesn't, he sure doesn't seem satisfied with not coming back. And he's sure pressing every lever possible to get back in. So I think the board is trying to safeguard the company, and also accept that Sam might have to come back in some capacity, because they're going to implode if they don't have him back. I think everyone needs to take a breath on the cheerleading and take a breath on the imminent return, because the momentum builds. And Sam and his allies have worked this very well. But I don't know how long that's going to take. Some people that we've talked to said: We're in this for the long haul. And other people are like: It's coming in an hour. It's crazy. I don't know, I would like to have it happen before Thanksgiving so I can cook a pumpkin pie or something.

PM: To close it off, then, what do you think this means for Sam Altman? Obviously, this is a man whose star has really risen over the past year as this AI hype has taken off. He was certainly already in a powerful position in the Valley, having headed up Y Combinator in the past, before going over to OpenAI. But his profile and his power, I think it's fair to say, have significantly increased over the past year. So being pushed out of this company, and now potentially going back into it, or going to Microsoft or whatever, what does that mean for him? We also had these stories that he was out there looking to raise money to start some other companies, particularly a chip company or something. So what does this mean for him going forward and his future position in the Valley?

MI: I think it's mixed. I think there's a version of it where he comes out on top. He's been out in front of everyone for a while now, on web pages across the world, on TV, whatever. And if he goes back, he wins. If he goes to Microsoft, it might be a harder road, but he still kind of wins. But I also think the flip side of that is people are rooting through his garbage now in a way that they were not before. You've seen these stories circulate all across Twitter, on Reddit, on Hacker News, or wherever. I think the rumor mill of "what did he do" has got people digging around, and I couldn't tell you either way whether he has to worry about that or not. All the attention is on him, and that is good and bad. And that is the case for a lot of people whenever this shit happens.

PM: I love that. Stay tuned for the sequel that is going to come after this whole drama and process maybe concludes, we'll see. Anyway, Mike, fantastic chat, fantastic to dig into this with you and learn about everything that's happened over the past little while, but also the bigger picture of what it all means. Really fantastic to chat. I know that you'll be back at some point. Thanks so much.

MI: Anytime, thank you for having me. I appreciate it.