Chatbots Won’t Take Many Jobs

Aaron Benanav

Notes

Paris Marx is joined by Aaron Benanav to discuss OpenAI’s claims that generative AI will take our jobs, how previous periods of automation hype haven’t resulted in mass job loss, and why we need to ensure it doesn’t further empower employers.

Guest

Aaron Benanav is an Assistant Professor of Sociology at the Maxwell School at Syracuse University and the author of Automation and the Future of Work. Follow Aaron on Twitter at @abenanav.


Transcript

Paris Marx: Aaron, welcome back to Tech Won’t Save Us.

Aaron Benanav: It’s really great to be here. Thanks for having me, Paris.

PM: I’m very excited to chat with you again. It’s been two and a half years since you were last on the show. I just can’t believe it’s been that long, to be quite honest. Sometimes it’s hard to even think I’ve been doing the podcast for that long. Last time you were on the show, we were talking about what was happening in the mid-2010s, when there was all this scare and hype around whether robots were going to take all of our jobs, and what really came out of that. Of course, with everything that has been happening with ChatGPT and Generative AI lately, I was thinking about that period, especially as there’s been more talk of these technologies taking a bunch of jobs. Of course, I jumped into your DMs recently to ask what you were thinking about it.

Then you wrote an article in the New Statesman recently giving your thoughts on what was going on here. I thought it would be a good opportunity to have you back on the show and to discuss all of this with you. I want to lay a bit of the groundwork, a bit of that history, out for the listeners, and then we can dive into it. In 2013, there’s this Oxford paper that says 47% of jobs will be lost to automation in one to two decades. That kicks off a bunch of sensationalized coverage about automation and job loss in the years that follow. There are a number of other studies that follow on this one that say: Oh, my God, so many jobs are going to be lost, and there are these questions of what is going to happen. The idea is that plenty of jobs are going to be eliminated, including all driving professions, like truckers and taxi workers.

Also, the media covered all kinds of robots serving coffee and taking care of the elderly and operating Amazon warehouses, and even more, with the message that all of these robots were just on the cusp of taking away all these jobs, and asking what we were going to do after. That kicked off campaigns for universal basic income, and even bigger visions, like “Fully Automated Luxury Communism,” but then the expected job losses didn’t come. Now, we’re all hearing about how there’s a tight labor market, where there aren’t enough workers, even at the same time as there’s talk, once again, of technology taking all of our jobs. If we go back to that period in the mid-2010s, what was going on then that you think really stands out? Why didn’t that mass wave of job destruction that many people were expecting actually come to pass?

AB: There’s a sense of déjà vu in our current moment. We see this paper that’s come out from OpenAI saying that ChatGPT and related technologies are going to take, they say, 49% of our jobs, which is 2% more than the Frey and Osborne paper from 10 years ago predicted. It’s a really good time, you’re right, to look back on the original context of the 2013, 2014, 2015 period, which was a major period of automation hype, and to think about what was going on then and what happened with all these claims about robots. It’s important to say, too, that back then there was already this sense that it wasn’t just robots, but also artificial intelligence in the form of simpler machine learning and deep learning algorithms, which are also, in many ways, the basis of the ChatGPT and Generative AI revolution.

When I went back and looked at the Frey and Osborne paper, one thing I found very interesting is that the paper tried to take a methodology that was applied to the question of offshoring: what kinds of jobs in the US were susceptible to offshoring? Could we look at the tasks those jobs involve and guess which jobs are likely to move overseas, where you have remote workers often working for much lower wages? So what jobs can be offshored and what can’t? Frey and Osborne took that idea and tried to say: What if we applied the same thing to computers? What if we looked at the kinds of jobs that computers can do, that they could take over, that US workers are currently doing?

I think you already see there what the problem is. Human beings in other countries may have a different set of skills. There might also be limitations to the work that they can do from far away, from a distance, but they still have human minds, and they have human cognitive capabilities. The same wasn’t true for the computers and robots and machine learning that Frey and Osborne were referring to. The other problem is that in the original paper, as in the one that OpenAI just published, the evaluation of which jobs could actually be done by computers was just done by a bunch of computer experts who didn’t really know anything about what those jobs actually required. So there’s something so silly and self-referential about it. Then the other thing about that paper, which is also repeated in the new one, is that they asked the technology itself: in the Frey and Osborne paper, they asked a machine learning algorithm to categorize jobs and say which ones it could replace.

In the current paper, they asked ChatGPT to do the same thing. That’s how you know. When I see that, that’s when my bullshit meter goes through the roof, because there’s something so silly about that particular strategy. But in any case, it did lead to this huge wave of hype, as you said, and we heard so many stories about machines, robots, and computers that were going to take over all of our jobs, or at least 47% of them, as Frey and Osborne said. I’m always looking back at those technologies and seeing which ones have succeeded and which failed. The truth is that the vast majority of them, like 99% of them, have failed. They didn’t work out. They were just startups that were riding that same wave of hype to ply their wares, and almost all of them failed. Of course, a lot of startups fail, but you don’t ever hear the stories about that in the media reporting.

PM: They just move on. There was all the hype, and then they don’t talk about how it all failed and didn’t work out; they just move on to the next thing that gets hyped up and is the next thing that the tech industry wants us to be excited about. I think it’s so fascinating to hear you describe that paper and everything that came out of it. But in particular, how, once again, we’re asking the technology to predict the impact that it’s going to have, as if the technology has any kind of brain behind it, or any kind of understanding; it’s just pulling stuff out. As I was talking about with Emily Bender, it’s kind of a very advanced autocorrect sort of function. There’s no intelligence behind it. It’s not making a real prediction or what have you; it’s just this tool that we’ve created.

AB: It’s important to say, in retrospect, what went wrong with the technologies. You see the statistics about robots — the number of robots in the economy, across all these different economies — rising so quickly, and the price of robots falling. That was what really excited people and made them think that there was just this incredible transformation going on. It’s important to know that a lot of the robots-per-worker statistics you see are about robots in manufacturing, which is a sector that accounts for a smaller share of the workforce. Just on that basis alone, it’s important to know that the robots that really haven’t worked out are robots in the service sector. You might see some experiments with using robots as busboys and staff and so on. For the most part, robots in services, outside of Amazon warehouses, where they do very specific jobs more like a factory job, haven’t worked.

What I tell my students, which is what I think everyone should know, is that the main thing that robots do is pick up heavy things and move them from place to place. So whenever you hear about innovations in robots, ask yourself the question: Is this robot picking up something heavy and moving it from place to place? You’ll find that that’s what the vast majority of robots still do, and that’s why most robots are in the car industry. Something like 50% of all the robots deployed in Europe are deployed in the German car industry. They do a few other jobs, like welding, painting, some simple types of assembly. Those are jobs that robots have been doing for a very long time. What all the analysts in manufacturing say is that a lot of the robots we see are not qualitatively different from the robots we’ve seen over the past 20 or 30 years. And that’s when these robots made the biggest difference.

The ones like self-driving cars that are operating outside of the factory context run into all the problems that you mentioned already, which are still there and present with Generative AI: these tools are just not very good at operating in open and unpredictable environments. They tend to fail a lot, and that’s been true of all the efforts to bring robots out into those unpredictable spaces. The robots in manufacturing are usually in cages — they’re actually separated on the floor from interacting with other human beings. The so-called collaborative robots, which are allowed to work around human beings, are legally required to move more slowly and exert less force, which just means that they’re not good at doing most of the things robots are used for, and they represent a very small portion of the robots.

But insofar as robots in services are operating in these unpredictable environments, they also need to be battery-powered, like Spot the robot dog. I think its max run time is 90 minutes or something. There are all these problems with doing these things with robots, and the machine learning stuff, as you guys have already said before, is just very limited in terms of what it’s able to do and what kinds of embodiment it can provide to the robot. Those are the main technological reasons why these things failed. And as in the past, every time you hear this proclamation by tech people — that they’ve finally closed the gap between computers and human beings — just know that every other time they’ve said that, it turned out that the gap was much larger than what they had thought.

That goes all the way back to the 19th century, when you have these images of steam-powered robots that were going to replace human beings. That gap is always larger than what they say, which doesn’t mean it might not be closed one day. It’s hard to say what the future holds, but most people in the field, those who are not basing their predictions on hype or moneymaking, are much more circumspect. One researcher said that AI is always 40 years away, no matter when you ask people how far away it is. It’s always 40 years, which is just researchers’ way of saying we have no idea how to get there.

PM: I think those are all such important points. It’s great to have that context on where these robots are actually being used. Your point about the Spot robot makes me think: Okay, we’re kind of held off right now from these robots being deployed in dystopian ways because we don’t have the battery technology. But if we get those battery advances that are promised for electric cars, that they keep saying are right on the horizon, then oh, man, we’re really in for a dystopian future with those robots [laughs]. You mentioned the robots in car manufacturing, many of them being caged off from the rest of the workforce — you also see that in Amazon warehouses. Where a lot of these robots are in use is also a very caged-off area of the warehouse, where a lot of this stuff is happening. It’s not on the same floor, or in the same areas, as where the human workers are going to collect things and whatnot. I think that’s an important point to make. I wonder, if we look back at that period, just keeping on this for a little bit longer: obviously, we didn’t have these technologies eliminate a whole load of jobs, as was expected. But what did we see happen to jobs since the mid-2010s, through the rest of that decade, and how was technology affecting the quality of jobs?

AB: That’s a really good question. So the point of my book, which came out in 2020, was to contradict the robot and AI hype of that time. A lot of what I based my argument on is that the way we see technology’s progress in the economy, the way we measure it, is by looking at labor productivity growth rates. People often find that confusing, because they think: Well, isn’t that a measure of the productivity of workers? We’re interested in the productivity of robots. But the way those statistics are designed, they pick up all of the increases in efficiency that come from augmenting human labor, or even replacing human labor with robots. They’re not a measure of what human beings contribute to production. They’re just a measure of how much you produce per hour of human work in total.

So, the decade of the 2010s, which was the decade of this incredible automation hype, turned out to be the decade that saw the lowest rates of productivity growth since they started the modern measure, which came about during World War II and its immediate aftermath. There’s this huge contrast between the hype story and what we know by looking at the data. What that led me to say is that in reality, there was a lot of insecurity in that decade, and that really affected a lot of workers. What I stressed in the book was that the recovery from the Great Recession was very weak, so it just took a long time for all the people who lost their jobs, who dropped out of the labor force, to find a place in the labor force again. Wage growth really didn’t pick up until the very last year. It was one of the longest periods without a recession since they’ve been measuring it.

But wage growth only picked up in the very last year or two before the COVID recession, and even then it wasn’t as significant as predicted. So at the time, the Fed was saying: We don’t think that these low unemployment rates are actually reflective of tightness in labor markets, because we don’t see significant wage growth. I attributed that, and that’s a whole long story, to this tendency of especially mature capitalist economies to just grow more and more slowly, as more and more of their jobs turn into services, and services tend to see lower rates of productivity growth. I think that tendency of jobs to shift into services swamped out a lot of the technological effects.

Although actually, even in manufacturing, they recently redid the numbers. They found that in the United States, at least, productivity growth in manufacturing was zero over that whole decade, on balance across all the firms. So there’s just a real contrast between what people were saying and what actually happened. The part of the story that I can talk about more, if you like, is also about how technologies were implemented in that context. Because I think the use of technologies to surveil, watch over, and collect all this data about workers changed power relations within the workplace. That’s a really important part of the story as well.

PM: Absolutely, I think it is as well. If we look at the hype and narrative around robots taking all of our jobs, my feeling is that those narratives distract us from what is actually happening with technology, from how technology is actually used to change jobs in the ways that you’re talking about.

AB: I think that there were a number of people who were analyzing this in terms of Digital Taylorism, which is a very good framework for initially thinking about it, though it also shows some limits with regard to what’s happening today. The basic idea is that managers are trying to collect information about how workers do their jobs, especially workers who are skilled, workers who have some trade that makes it necessary to pay them more money. So managers try to figure out: What is it that these workers are doing? How can we make it so less skilled workers can do these jobs with technology? If you can do the same job with less skilled labor, you can pay everyone who does that job less money. I would say that a key example of this was Uber versus taxi drivers. People talked about how taxi drivers, in the past, needed to know the area where they drove really well; they needed to know all of these back routes and special ways to get around traffic in order to make more money.

Google Maps was able to figure out what a lot of those special routes were. Also, because they were tracking the way people drive and the way people responded to traffic patterns, they were able to give everyone the capacity to use the special routes that used to be available only to people who had driven for a long time and knew the area. There’s a deskilling that happens there. There’s a way that this software, Google Maps, is collecting all this information that makes it possible for anybody with a car and a license to start doing that work, maybe not as well as a taxi driver, but to do the same work without the skills. We were promised that Uber was going to automate away driving, but actually, all it did was open up the possibility for a lot more people to do this work, and especially to do it part time, than before. And they did, of course, pay drivers less.

Now, when it turned out that the promises about self-driving cars didn’t pan out, Uber used its technologies and its platform more and more to try to shape how workers worked. They started paying workers less for each ride. They invented all these bonus systems — Veena Dubal and others have really talked about this a lot — and started to use these algorithmic techniques to manage the work and to figure out ways to get workers to be more available while paying them less in total. I think that’s an important part of the story that takes it a little bit beyond the Digital Taylorism account, because when these techniques were invented in the 20th century, in scientific management, they required a lot more managers. If you were going to use workers who didn’t have the skills of your former workforce, you needed a lot more managers to be there watching what those workers did. What these digital technologies clearly make possible is that you can extract all this information from workers — you can surveil them — by having the algorithms watch them.

There are all of these biases built into that, and all these disturbing features of algorithmic management systems, but you don’t need as many managers as before. There’s another economic theory that I think is very powerful for thinking about this, which is called efficiency wage theory. To summarize — I think it’s got some issues — the basic idea is that when you don’t know exactly what workers are doing, you have to pay them a little bit more. You have to try to get them to identify with the firm, and also make the threat of job loss more significant, if you can’t really observe what they do. So what we know about a lot of these technologies — as they’re used by Amazon, Uber, and everywhere else — is that they make it much easier for the firm to really identify the productivity of a specific worker and to threaten those workers if they don’t do their jobs according to what Amazon or these other companies want from them.

With all those surveillance technologies, it wasn’t just that they were deskilling workers; they were making it easier to track the minute differences in contributions across workers, and that on the whole made it possible to pay workers less. You see it with truckers: anywhere there’s more capacity to track and collect data on workers, you’ve been able, through this decline in efficiency wages, to pay workers less. All these things are just turning people into robots. Since we can’t replace workers, they’re trying their best to use the technologies we have to make workers more robotic, to attach all these wearable technologies to them, to make them more and more robotic in their activities at work. It’s pretty bad.

PM: That’s the real dystopian scenario, but it’s not the one that you hear about with all of these endless predictions of robots taking all of our jobs and things like that. It sounds very different, but the implications of it are very sinister and really suck for a lot of workers, who lose a lot of their power in the workplace or have their jobs redefined in this way, such that the employer has so much more power over them. They have less ability to push back on working conditions, to get better pay, and all these other things, which is what we’ve seen from companies like Uber and Amazon over the past decade. I want to start bridging our conversation to what’s going on now.

During the first year or two of the pandemic, there was another wave of writing, of articles arguing that we were about to see a massive surge of investment in automation technologies and robots because of the experience of the pandemic and what employers went through when workers were in lockdown and couldn’t come to work, all this sort of stuff. Now, of course, we’re in this moment where we have a very tight labor market, based on everything that we can see, even as the Fed and other central banks around the world keep raising rates. It seems like the economy is still holding up. There are still quite a number of people in work. They’re not losing jobs to the degree that economists and central bankers wanted to see. What did we actually see in the aftermath of that period of the pandemic and the predictions that there was going to be a ton of new investment in automation that would take away a lot of jobs?

AB: The short answer is that no, that didn’t happen. It didn’t happen for many reasons. One of them is that, as I’ve just been explaining, the technologies are just not up to the task. The technologies just can’t do what their proponents claimed. It’s also that, in the pandemic, the future became really uncertain, and getting robots to actually do work better than the people currently doing those jobs requires massive investments. It requires all of this tinkering with the work process to actually make it work well. The pandemic was not a period in which companies were saying: Let’s massively invest and transform the whole way we do things. No one knew what the future was going to be, and so companies just didn’t respond that way.

That’s generally what we’d expect from pandemics. As a shock, they create a lot more uncertainty about the future, even much further into the future, and that tends to reduce the degree to which businesses invest. So what was the story of the pandemic? It was that middle-class and professional workers were sometimes able to use new technologies to work from home and to protect themselves from being exposed to the virus, whereas many poorly paid human beings — so-called essential workers, everyday hero workers — had to step up to the plate, put themselves in danger, and continue to work during the pandemic. And I think the pandemic had two effects.

One is that it made people appreciate, if only for a moment, that it’s really true that we can’t do the things we need to do without many other people. We have to trust other people; we have to depend on them. There’s a kind of relationship of solidarity there that was only briefly felt, but I think it’s very important for us to recognize. The other thing it did was give people a sense that there are other things that are more important than money. You see that a lot. I think that when we talk about what’s going on with jobs today, it’s really important to know that it’s not necessarily that there’s just this incredible demand for work. It’s that all these businesses shut down for a number of years, so there’s some pent-up demand as a result. But it’s also that if you had a restaurant in a dense and wealthy neighborhood of a city like New York or Boston, the workers who used to work in those restaurants just couldn’t afford to live there anymore, and they left.

They went back home, wherever they’re from. They lived off of their pandemic savings, because there were these boosts to people’s incomes and savings at that time. They estimate that there are about two and a half million workers missing from the labor force in the US; the US and the UK have been the countries hardest hit by this. Part of it is older workers who were afraid of getting sick and retired early. Part of it — which is why you see all this talk about restaurants that can’t hire people — is that it’s really in the wealthier neighborhoods and cities where all these restaurant jobs have come roaring back, but those areas have become so expensive that people just left and aren’t living there anymore. Or they took the time of the pandemic to find other things to do. A lot of people died or got long COVID, and that just meant they weren’t around.

Of course, a big issue, especially in the US, is that there was just a lot more care work for people who had long COVID, for elderly people, for children. Especially as these things became more expensive, people couldn’t afford to work. They had to drop out of the labor force to take care of the people around them, and that’s been a big problem. In the US, you have an overall lower labor force participation rate because the country never solved its problems with eldercare and childcare. So it’s a complicated story of what’s going on. None of it fits with the “robots are taking our jobs” or “AI is taking our jobs” story. But I think it’s important to point out those real things that have been going on, that have been creating a lot of tension in the economy, and that will continue to do so for some time.

PM: It’s also a big difference from the general narratives that we hear about why people aren’t working. There’s been a big push by financial publications, places like the Wall Street Journal, to say people just don’t want to go back to work or whatever. When actually, there are a lot of reasons for the tighter labor market right now that, if they really cared to understand, are very understandable.

AB: And they’re putting children to work.

PM: They’re very excited about opening up child labor in Alabama and some other places again, which is just terrible. I think that also gives you quite a contrast with the narratives that robots are taking our jobs or whatever, when on the other hand we’re lessening the regulations on child labor so that more companies can hire children. It’s just wild. I don’t think we really need to go on more.

AB: That would be a disturbing headline. Like, the children are taking our jobs! [both laugh]

PM: Let’s move on. Obviously, we’re talking about how there’s this tighter labor market right now. But what we’re also seeing in this moment is all of this excitement around ChatGPT and Generative AI, these tools that have really exploded over the past year as the boom and the hype around cryptocurrencies and the Metaverse have tailed off. So there needed to be something else, and this is where Generative AI really comes in. You have your large language models; you have your image generation tools, things like Midjourney and Stable Diffusion. Of course, the narrative that we’re hearing now is that these new technologies, these artificial intelligence technologies, are what is going to wipe out a ton of jobs. As you said, OpenAI had a report that said that 49% of jobs are at risk because of this. So what are they actually saying about the potential impacts of these new technologies on work, and whose jobs in particular do they imagine are in the crosshairs?

AB: In methodological terms, they just took over exactly what Frey and Osborne did. They do mention that there were critiques of that perspective, and I think it’s very important to know what those critiques said. A number of researchers did redo the Frey and Osborne numbers. They found that the machine learning algorithm had just miscategorized all these jobs, as I’m sure the ChatGPT model did in the current paper. But what further research showed — and there were some researchers at the OECD who really put this point to me strongly — is that the entire way of thinking about how jobs change, which Frey and Osborne adopted and which this new paper also adopts, is just wrong. It’s just not the case that there’s some threshold of tasks in a job, and if a computer or robot can do those tasks, then the job disappears. There’s a false assumption there, and it misleads the public. When these papers and this methodology get translated by journalists and others into a larger media framework, you get asked these questions.

I got asked this question a lot after I wrote the “Automation and the Future of Work” book. The first question is: So what jobs have already disappeared? What jobs have gone away, and what’s next? When you listen to the automation hype people answer that question, it’s very interesting. The truth is that not that many jobs have gone away — we still have waitstaff at restaurants; we still have nurses. As for the jobs that have really gone away, like they’re gone: even there, it’s not totally true. But travel agents, that’s a really important one, I think. There used to be a lot more travel agents; now there are very few. There used to be a lot more people who manually read utility meters, and those are increasingly being replaced. Tollbooth workers are increasingly being replaced. But even so, it’s very hard to identify jobs that have totally collapsed in terms of their employment. So there’s just something wrong with that methodology.

Part of the reason is that as technologies change, the content of the work people do just changes. To be a school teacher today is just different than it was 20 years ago, 50 years ago, 100 years ago. It doesn’t mean the number of school teachers has declined. Actually, the number keeps growing, because productivity growth is low. But the kinds of tasks those workers do just change with changes in the technology. There are a few really important corollaries to that insight. One is that when you look at this database that the researchers from OpenAI were using, they were acting as if every job is a fixed set of tasks, and if a computer could do those tasks, then it could do the whole job. In reality, the way you do a job is just very different across workplaces. There’s a database called O*NET — it’s an attempt to figure out, on average, what a job requires.

But in reality, jobs look very different in different places, and that’s true across firms within one country. It’s really true across all the different countries of the world. There are a lot of reasons for that. One example I like to think about: if you’re on a film crew in Hollywood, versus in Bollywood in India, or Nollywood in Nigeria, the tasks that you do probably just look very different, having to do with different access to technology. But there are also all these legal frameworks, all these collective bargaining agreements; there’s worker power. All of those things shape how technology changes work. It’s not just a story about technology. It’s a story about economics. It’s a story about politics. And it’s a story about other social factors. All of that is kind of effaced in the approach that these researchers took.

But in the end, the point is that the methodology is exactly the same in this paper as in the Frey and Osborne paper. They just took this database called O*NET and categorized jobs in terms of the tasks they involve. They asked a bunch of computer experts which of those tasks ChatGPT and related technologies could accomplish. They said 49% of jobs could see 50% or more of their tasks taken over by computers, and that’s the headline statistic that they’re giving. But even for a job that had 50% of its tasks replaced by a machine — and we can talk about how bad these things are at predicting that — there’s no reason to believe that a job that changes by 50% will necessarily go into decline and disappear. It could mean that what those workers do changes.

PM: I think it’s really fascinating to hear you outline that because it gives us some good insight into what the company is actually saying and what is behind these headlines and statistics that we’re now seeing around ChatGPT taking a ton of jobs. I think we can also see how people who are very invested in this industry, who are invested in these technologically determinist narratives, are really echoing and trying to push this notion that AI is going to eradicate all this work and all these jobs. I saw venture capitalist Jason Calacanis tweet the other day, and he’s been on a real tweeting binge lately. People might remember all his tweets about Silicon Valley Bank about a month or so ago. He was tweeting, and I’m quoting here, “AI is going to nuke the bottom third of performers in jobs done on computers, even creative ones, in the next 24 months. White collar salaries are going to plummet to the average of the global work force & the speed at which the top performers can write prompts.”

This is what he’s arguing; there was a long thread that followed that, and I didn’t read the whole thing. I’m certainly not going to read it out on this podcast. But then I also noticed in the replies that the first reply that came up for me was from a guy named Scott Santens, who was agreeing with this and pushing this notion that it was going to eradicate a ton of work, and that this was going to have huge consequences. For people who don’t remember, Scott Santens is a big proponent of universal basic income, and back in the mid-2010s, he was really pushing this notion that robots and AI, at that time, were going to eradicate a ton of jobs, that all drivers, truck drivers in particular, were going to be out of work, and that this was going to transform economies, because it worked for his argument that what we need is a basic income in order to respond to this.

It’s interesting to see the same people gravitating toward this narrative once again, because it serves particular agendas that they have, in arguing in favor of the idea that technology is going to eradicate all this work, and what are we going to do about it? So I would like your thoughts on that, but I also want to note, you mentioned in your previous answer how being a teacher has changed over the past hundred years and will continue to change into the future. One of the things that some of the proponents, people like Sam Altman, argue is that ChatGPT and these Generative AI tools are going to be able to replace a ton of teachers and doctors and make education and medical advice so much easier for people to gain access to. What do you make of these grand claims that are being made around ChatGPT and Generative AI, based on the things that you’ve been telling us?

AB: I think it’s a really good question, and it’s very hard to predict the future. What I focused on in my intervention is just showing how bad the methodology is on which these predictions are being made. It’s harder to know what’s actually going to happen and how work will change. Something I focus on a lot is that over the past hundred years, productivity growth in services has been very low. So if you want to teach more children, you need to put more teachers in schools. Everybody knows the metric of how good a school is: isn’t it often the student-teacher ratio? Better schools have more teachers for every student. That just shows you how low productivity growth has been, that we still think of teaching as this bespoke model.

It’s possible that these technologies will make teaching easier. There have been claims like this for a long time, of course; with Khan Academy, we’ve already been traveling down this pathway. There are potentials in those technologies to make education better. Maybe there are potentials that will help poorly performing students to get assistance that lets them rise up towards the average student. One of the big reasons why middle-class people are so scared of these technologies is: What if the tutoring that I’m able to get my kid, because it costs thousands and thousands of dollars, is now available in a less good way, but a way that still does something, at a cheaper price that’s more accessible to working-class families? That’s a real, big fear that those people have. It could happen. I really don’t know.

It could be the case that these technologies help educators in some way; you can at least imagine a world in which that’s possible. You can imagine many worse worlds in which these technologies are used for really nefarious purposes, even in education. But it seems very unlikely, at least to me — given the limits of these technologies — that they’ll actually be able to replace classes and replace teachers. They might make teaching slightly more efficient, which would be amazing, because teaching is a field that’s so inefficient: you need more and more teachers to teach more and more people. But I think if you look at the whole trajectory of change there — from the availability of online courses, to Khan Academy, to all these things — you’ll just see that the actual benefits have been much less in every round than what the proponents claimed. So that’s what I would say about teaching.

PM: It’s interesting to me that we can see that there are potential ways that it can be used positively to help, as long as we know the parameters where it’s useful, and we’re not distracted by the hype around how these technologies work versus how people like Sam Altman might want us to think that they work, because it benefits the company. But then I feel like we’re also in this environment, in this economic system, where we can see how technologies are actually deployed and how they’re actually used, where you can very much see the notion that ChatGPT can serve as a teacher or a doctor or a nurse or something being used to say: Oh, now we don’t need to hire so many teachers, because ChatGPT is going to take over and do some of this stuff.

I feel like, unfortunately, that’s the more likely scenario, just based on how things are going and the unfortunate state of the society that we live in; that’s how these technologies tend to roll out. They’re always used in a way that empowers capital — which is not something I have to tell you; you’re very aware of it — instead of improving the world. So I’m more than happy for you to comment on that, but also, let us know how you think ChatGPT might actually affect some of the work that’s out there, even if it’s not going to eliminate 49% of jobs or whatever, as we might be misled to believe.

AB: Well, I think that what you just said is really important. You can think about some of the nefarious ways it might change jobs, or just the standard capitalist ways that it might change jobs, which often tend to be nefarious. Think about that model I was mentioning before, of how Google Maps was able to collect all this information about routes and then make any driver able to get around traffic in the way that formerly only skilled drivers could. The internet is just full of information produced by skilled cognitive workers. One of the main sources used to build translation software is the Canadian Parliament and the UN — all of these bodies that have to produce the same text in multiple languages — and then have translators who meticulously do that work, because it has to be perfect.

These databases are able to take all that information, and what they might be able to do is more and more reduce the skill level that’s needed to do that work, in a way that will ultimately polarize jobs rather than just replace them. You could imagine, especially in fields like computer programming and legal and technical writing, that there might be ways to absorb all this information that comes from skilled workers and is already available in various kinds of digitized forms, making it so workers who aren’t as well trained could look at the version of a text that ChatGPT has produced and then edit it. They might be able to achieve levels of productivity that used to be achievable only by skilled workers. But then, on the other hand, as we know, ChatGPT just isn’t up to the job.

There are still going to be tons of computer programmers: not just lower-skilled programmers checking for mistakes in what ChatGPT writes, but also all these programmers who are concerned with building systems that ChatGPT just isn’t very good at building. Again, what this points to is that you can’t think about these things just in terms of technology. You have to think about a wider set of economic and social factors. Just to consider the economic factors here: no one knows whether increasing programmers’ productivity by 10% or 20% will result in a loss of employment for programmers of 10% or 20%. It’s much more likely that if programming gets even a little easier, and a little cheaper as a result, the demand for it could explode, or it might just increase a little bit.

But the point is that you never know whether bringing down the price of something will mean that fewer people will be employed in it, or whether, on the contrary, more people will end up being employed in it. That’s how goods go from being luxury goods to mass goods: they become cheaper, and then suddenly they’re more available. I think it’s really important to point out that, as Gary Marcus and Emily Bender and others have said, there’s a lot about these technologies that could be used in really dangerous ways. They can be used by scammers, by people with really bad goals, bad actors, as they say, to do all kinds of things that could make our lives substantially worse and also reduce workers’ productivity. I think that’s important to talk about as well.

PM: Absolutely, I completely agree with you on that point. Maybe as we start to wrap up our conversation: we’ve been talking about how, when it comes to these big promises and bold statements that robots or AI are going to eliminate all of this work, we see time and again, and we can go very far back, right to the 1800s, that those predictions don’t tend to play out as people expect at the time. The sensationalist headlines don’t get realized a few years down the line, when we actually see the impacts of these technologies. So I think it’s very likely that, once again with ChatGPT and Generative AI, we won’t see the predicted job losses, but that doesn’t mean that there won’t be impacts, as you’re saying.

I want to put this question to you. In the same way that we saw in the 2010s, these technologies were not used to eliminate work, but rather to improve algorithmic management, to ensure less autonomy for workers, to move more workers into a gig economy where they’re carved out of employment relations. Do you think that there’s a risk that, instead of eliminating a ton of jobs, these technologies, like Generative AI, are implemented in such a way that employers get even more power over workers, or maybe over workers in different sectors of the economy than previous technologies allowed? That might be the ultimate impact of these technologies, and that is another reason why we shouldn’t be distracted by these grander claims and should be paying more attention to what actually might be happening here.

AB: I think that there are really a lot of ways you can imagine that happening. Microsoft is already releasing enterprise versions of ChatGPT. One of the things that these might allow employers to do — much like what I was talking about with Amazon warehouse workers or truck drivers — is build a better surveillance infrastructure. Generative AI in an enterprise context might make it easier to know what particular workers are doing, without having to ask them, by looking through all of the data they’re generating and producing a summary of it. That might make it possible to better track what individual workers are contributing, and then to lower the efficiency wages, to replace carrots, as it were, with sticks. None of us work well when we are surveilled. No one does good work when we feel like people are constantly watching us — we can’t lose ourselves. Of the few joys that people have in work, one of the main ones is this feeling of losing yourself in your work and not constantly reflecting on what you’re doing.

But the more we worry that we’re being tracked as we work, the less we can lose ourselves in what we’re doing. I think these new technologies will probably have that effect, just like the old ones. It’s irresistible to employers to just start generating a lot of data, and also to be able to understand more of what that data means with these tools. That’s why it’s really important for workers to organize themselves and fight back, because what you see is that in countries where workers are stronger, like Sweden and the other Scandinavian countries, they have more of a say over how technologies are implemented. With a stronger working class, it’s more possible to imagine that we could ban certain forms of algorithmic management. Frank Pasquale has also talked about this: just say, No, a company is not allowed to gather this kind of information on its workers.

You can imagine legal changes. You can imagine empowered workers negotiating how technologies are implemented. I think you can also imagine just a different world, where research into these technologies takes a different form: not based on move fast and break things and trying to figure out the most profitable ways to use things, but on ways to actually meet people’s human needs and produce a better life where we all flourish. I think you can see in some of these technologies the threads that don’t get followed, threads that could actually lead to an improved life for people, but that are just not the focus of the researchers who are getting paid a lot of money to try to figure out what you can do with these technologies. Again, on the Generative AI stuff, I agree with Emily Bender and Gary Marcus: I just think it’s a lot more limited than what the proponents are saying. But that doesn’t mean that it’s going to be totally useless. It is going to change things; it’ll just do it a lot more gradually and a lot less severely than what the proponents are claiming.

PM: I think that is an important point, and always something that we need to keep in mind with these technologies and how they are rolling out, and also in how we’re talking about them and thinking about them in the moment. A good rule of thumb is always not to be distracted by the hype and the PR narratives of the companies, and to actually try to get a good grasp on what is really going on, so you can understand the potential implications, instead of getting distracted and then, a few years down the line, realizing that some really negative things have happened that you maybe could have curtailed or lessened if you had realized them earlier.

I would just say, on your point about workers having some power around this: one thing that we’re seeing right now in the United States is that the Writers Guild is renegotiating with the Hollywood studios. One of the things that they immediately put on the table was a clause in the contract around Generative AI, to ensure that even if it’s used for some aspects of scriptwriting, the ultimate credit goes to the writer, and the technology can’t be credited in that sort of way. That’s just one example of where, if you have that kind of power, you can potentially push back and try to get some wins on this early on. But if you’re not in a union, or you don’t have that kind of collective power, it becomes much more difficult to ensure that employers aren’t implementing it in a way that takes away the rights of workers.

To close off our conversation: last time we spoke, we talked a bit about what a post-scarcity world might look like. We talked about how “Fully Automated Luxury Communism” is probably not going to happen and is pretty unrealistic, because technologies are not really taking away jobs and work in the way that is often predicted. It’s been two and a half years since we talked, so I wonder how your thinking on this has evolved, as you’ve continued to think about the ways that technologies are deployed in society and how they might be deployed in a way that is actually beneficial for all of us, instead of just benefiting employers and making our work even worse and worse paid, as we too often see.

AB: When I was studying all of the AI hype, I was thinking about their own internal model or vision of a better world. I think that their vision is one where we can use technologies to meet everyone’s wants, every last thing, every whim that people have. They’re trying to think about a world where we have these kinds of super-powered computers that can just do everything. What’s interesting is that those visions often draw from science fiction, like “Star Trek” or the Culture series, or Cory Doctorow’s “Down and Out in the Magic Kingdom.” But as the title of that book suggests, if we actually read that literature, what it shows is that even if we lived in a world with limitless resources, there would still be a lot of reasons why people have conflicts, why people are unhappy. A lot of that has to do with the fact that human beings are meaning-making animals. We care a lot about the meaning of things, and we fight over the meaning of things.

I think that that insight from that literature is so much of what’s being lost in all of the Generative AI talk, in all of these efforts to say: No, no, these machines do understand; they are making meaning. When in fact, as your last guest pointed out, that’s a really bad way to understand what these technologies are doing. I think that if you read that literature and think about what’s going on, we can try to save the future from Silicon Valley and from its vision, and realize that the much more viable future vision we could have — one that would radically transform our lives — is not a world where we try to meet everyone’s last wish and whim, which you can’t do anyway, no matter how many resources you have. There are just experiences that are unique; there are a lot of reasons why you can’t do that.

But we could get to a world where we meet people’s needs, especially in a world where we’re facing devastating climate change, where there are still many people who can’t eat. We saw during the pandemic how few of our resources had really gone into health care in this deeper sense. Not just health care, but also care for people in all these other senses: mental health care, child care, elder care, and so on. We could get to a world where we use our resources, our human resources, our technologies, to really, securely meet people’s needs. Every hype cycle is a scare about how these technologies are going to take away your security, or even the tiny dream of security that you might have. There’s no reason why technologies have to do that. We could use technologies to improve our ability to meet people’s needs with human labor. We can use them to create a world where no one has to worry anymore about going hungry, about not having a place to sleep, about the Earth burning up due to climate change.

That would be a world where I think humanity would really be transformed, even if people still have all these wants and wishes that they can’t fulfill. In fact, as psychologists or psychoanalysts will tell you, desire, unmet desire, is a really important part of what it is to be a human being. Having everything at your fingertips isn’t always the best thing for you mentally. So I think we can envision a world like that, and I think these technologies can contribute to it. I was a critic of the “Fully Automated Luxury Communism” literature, and also of these different ideas about how we can use very rapid computer processing power to plan a whole economy with a computer like the one in Westworld, a computer that just plans everything for everyone, while human beings are just cogs in a machine.

I think that stuff is really silly, but what you are seeing with information communication technologies, and even Generative AI, are more and more ways to imagine that people could coordinate themselves without bosses, with less of a role, or even no role, for markets. You could imagine a different research program that could lead to a world where people not only meet their needs, but feel like they have some say in the world that we live in and the future toward which we’re driving. I think it’s really important not to get lost in Silicon Valley’s utopia, which supports a whole silly venture capitalist strategy. You’re just watching these people go from, literally last year, everything was crypto, and now everything is Generative AI. It’s just so obvious that this is just part of this endless hype cycle.

This is under a lot of pressure, it should be said, due to rising interest rates. The real basis of the hype stuff is coming under pressure. We should reject both their utopias and the corresponding dystopias, and we should worry about what these technologies are doing to us. They’re having really negative effects on people’s mental health, especially for children, and especially for young girls. So we should think about all those negative effects. We need to create our own positive visions to fight for. I think that’s really important, and I worry sometimes that in the tech space there aren’t enough of those kinds of counter-visions, counter-utopias, or realistic counter-visions of where we can go. I would like to contribute to that with my work.

PM: I’m totally on board for that. I think it’s fantastic to have us thinking more in that way. Just pulling on what you said, and maybe to reconnect it and tie it all up into a bow for us: I think it’s really interesting to think about the narratives that we have around technologies, which we’ve been talking about through this entire conversation, and around the notion that they’re going to eliminate all these kinds of work that we know are quite important. I was talking to James Wright recently about the efforts to automate care work, eldercare work, in Japan, and how that has not worked out. But there was a moment in the mid-2010s when this was the model that we were all going to emulate: Japan was going to show how this was going to happen, and then it was going to roll out to the rest of the world.

I think that the risk there, and the risk with so many of these technologies, is that it distracts us from the recognition that these are things that do need to happen, and that ultimately we need to further incentivize this work, make this work better, and be able to deliver these care systems, health care, and all these other services that people rely on. The question is not: Do we have good enough technology to be able to do it so that we don’t need workers? It’s always inherently a political question, and I think that always framing it around the technology, around what the technology is going to do, distracts us from that more political conversation and from the ability to think about where we’re putting our resources and what things actually matter to us.

Instead of just thinking: Are we going to develop technology that’s going to be good enough for us to have better healthcare or what have you? We don’t need to wait for the technology; rather, we just need the political momentum and the political will to act on those sorts of things. I think that’s very much the type of thing that you are talking about in your work, certainly when you talk about what a better kind of society might look like, even if we use technology to help us achieve it.

AB: I thought that episode you did with James Wright about eldercare in Japan was really great, because that case study really illustrates something that we can think about throughout the whole economy. Many case studies have generated the same result, which is that automation, a term that already has so much hype around it, works best when it’s bottom up rather than top down. That is to say, it works best when it’s about solving problems that workers identify in their work process. When you live in a society that is so hell-bent on using technology to replace workers, to try to figure out how we can avoid dealing with their needs or their concerns at work, you don’t get as good results.

If we’re going to have eldercare robots, they’re going to have to be developed in consultation with eldercare workers and eldercare patients, as well as engineers. They’re all going to be involved in that process. But that would be a process in which there’s much more distributed and shared power across these different groups in society. I think that’s exactly the vision of technological change that OpenAI and these other Silicon Valley institutions do not want you to think about. They want you to think about technology as something that comes from on high, from them, that will either save or destroy the world, while we’re all just spectators in their mad-scientist propaganda.

PM: In their grand projects in the world that they’re trying to realize. I completely agree, and it’s such an important insight to have when we think about technologies and who should really be behind technological development, pushing it forward, deciding the types of things that we’re working on and trying to achieve. Obviously, the whole venture capital system, the whole Silicon Valley model, is set up in such a way that it’s the complete opposite of that, completely flipped on its head. It’s all coming from above, from these wealthy people who are choosing what the focus should be and how those technologies are actually implemented. In many cases, as we’ve been talking about, it’s in a way that is very much against workers and against the public, to further enrich the venture capitalists and the people at the very top. Aaron, it’s been great to have you back on the show to get your insights on all of these questions. Thank you so much for taking the time.

AB: It’s a real pleasure to talk to you again. Can’t wait until I have more work to share, so I can come back on another time.

PM: I’m looking forward to it.
