What if, instead of eliminating jobs, AI were to make our work easier?

Daron Acemoglu’s name frequently comes up in speculation about who might be the next winner of the Nobel Prize in Economics.

Summary

  • Daron Acemoglu, professor of economics at MIT, argues that the development of generative AI technology should not be geared towards automating tasks and replacing human labour; instead, regulatory measures should be put in place to achieve more pro-worker AI development.
  • Generative AI could help people in a wide range of occupations improve their skills and perform their work more efficiently, making AI an assistant to humans rather than a competitor. If we do not choose this path, generative AI will create inequality, just as earlier technologies aimed at a high degree of automation did.
  • Acemoglu is concerned about the impact of AI on democracy in Western countries. In authoritarian countries, such as China, the development of artificial intelligence is increasingly limiting the little freedom citizens currently have.
  • Daron Acemoglu is a Turkish-American economist whose main research fields include economic growth, inequality, and institutional development. Acemoglu is an Institute Professor of Economics in the Department of Economics at the Massachusetts Institute of Technology (MIT). He is one of the most cited economists in the world: According to Google Scholar, his research has already been cited well over 220,000 times.
How do you see the current trajectory of generative AI impacting the future of work?

There’s a real difference between the current trajectory and what is feasible. I think the current trajectory will lead to a bleak future. There have been important advances in generative AI, but they are being used primarily for the two things the industry has so far focused on: automation, and the digital-ad-based monetisation of your personal data. Both of these are problematic. Automation increases inequality and doesn’t actually deliver the kinds of productivity gains that people hope for. Digital ads have created a toxic environment and an entire business model built around it. But there are very promising directions for generative AI to go in, and those focus on creating tools that would be more useful for workers and individuals.

Large language models are merely one application of generative AI, and not necessarily the most important one. Generative AI is an information retrieval system. We live in an age where information is abundant. Everything anyone knows seems to be on the internet, but useful information is highly scarce. Good luck trying to find something useful, especially if you are trying to perform complex tasks. For example, if you are an electrician, the set of tasks that you are required to perform is getting more and more complex. But right now, if you want to find information on the internet to help you carry out a complex task, you can’t. You need to consult a more experienced electrician or get an advanced degree in electrical engineering yourself. Neither is practical, especially when there is a shortage of electricians.

What generative AI could do for an electrician is take pictures and basic descriptions of the problem the electrician is dealing with, filter the available data for information useful in that context, and rapidly provide advice that enables a less experienced electrician to perform more complex tasks than they could unassisted.

The general principle here is that generative AI is a very good technology for taking a large set of relevant information and recognising which parts of it are useful in a given context. It can then present that information to individuals who can act on it to perform more complex tasks, or to do things they like as consumers, as individuals, as political participants, and so on. Looking at things from that perspective, you can see that we are definitely not going in that direction.

Basic applications such as the ones I described above are very easy to create, as I have outlined in my work. Given the current state of knowledge, such applications could be developed in education, in blue-collar work and in the legal system. But this is not where much of the attention in the industry is going.

In your work you present arguments for more ‘pro-worker’ AI development. How can this be achieved?

Government regulation and labour organisation involvement will be required, and workers’ voices need to be heard. But more broadly, and even more importantly in some sense, norm changes will have to come from civil society.

Let’s consider renewables by way of comparison. We’ve made tremendous technological progress in renewables over the last 15 years, which means that for consumers they are now cost-competitive with fossil fuels, even though 20 years ago they were ten times as expensive.

That came from incentives in the industry, but those incentives were driven by some regulation and a large amount of civil society action, both from consumers and through other pressure on companies. This is the sort of model that I have in mind for AI.

The idea of disruption is ingrained in many tech companies. They often say that they want to break things. This does seem to lead us onto the path of more and more automation. How could we change course?

I think there are several reasons why the industry is seeking increased automation. Firstly, they’ve done it before. Secondly, given the way the US corporate world is organised, there is high demand for automation technologies. And thirdly, I think the industry is very preoccupied with autonomous machine intelligence, which pushes many engineers and company executives towards automation. If these people think that the best way to further science and establish their reputation, as scientists or as companies, is to show that a machine or algorithm they have developed can match or outperform humans, then it’s only a very small step from there to automation. Those are the dynamics. If you want to change course, you have to change all of these dynamics.

To change the demand for generative AI-based tools, I think we need to create a better corporate environment in the US, one where CEOs and managers focus not just on cost cutting, but also on increasing workforce productivity and providing better tools for their workers. Creating a more competitive environment, perhaps one in which the tech industry did not have a monopoly on data, would open up better potential for new entrants with new ideas.

Another important element is changing the ethical and social responsibility tenets of the industry. Norm change needs to happen as well, which is why civil society is such an important part of the process. In an ideal world, there would be a process of change that would alter both the financial incentives and the priorities of the tech industry.

Is big tech already too big? Do we need to break up the tech giants?

I don’t think either breaking up big tech or limiting the size of the largest corporations by itself is going to be a solution. But it is an important part of a menu of things to consider, because these companies are too powerful. If they use their power to resist redirecting technology and to monopolise the AI market, they will create a real roadblock to more beneficial change.

Why do I say that just breaking them up will not be enough? Or that it’s not even so relevant? Think of Facebook, or Meta. If you break the company into Facebook, Instagram and WhatsApp, their business model is not going to change. These three companies are going to continue doing exactly what they have been doing in terms of the type of products and their incentives to grow very rapidly, collect personal data and monetise it with digital ads. None of that will change. So, what we need is a change in that entire ecosystem. But because Meta is such a powerful company, there will be significant resistance to changing the ecosystem. Breaking the company up may be a political rather than a purely economic move.

In your book The Narrow Corridor (2019), for example, you argue that technological innovations can impact the future development of political institutions. How do you think AI will affect politics in our societies?

It is already affecting it. When it comes to authoritarian countries, especially China, I think we have fairly good evidence that AI is strengthening the Communist Party and weakening civil society, dissent and communication. It’s becoming a very potent brainwashing tool in China. Moreover, China is also developing these technologies and exporting them to other countries.

A more complex issue that needs to be studied more is the impact of AI in the more democratic parts of the world. Social media is highly reliant on AI algorithms. Generative AI has played a small role so far, but platforms such as X, Facebook and Reddit are now heavily using AI tools. So the question is, what are the consequences? I don’t think we have a definitive answer to that question.

There is quite a bit of evidence, even if controversial, that AI-based algorithms and the specific way in which digital ads are presented are polluting the informational environment. The same applies to companies that use algorithms to attract people to their platforms and keep them there. This is creating echo chambers or filter bubbles and playing on people’s emotional cues, and hence preventing people from engaging in political debate or dialogue. If that really is going on, and it begins to happen on a massive scale, it’s going to make democratic citizenship very hard. And if democratic citizenship becomes damaged, democracies become damaged.

I think there is a danger of us becoming increasingly pacified if we become completely dependent on algorithms and divorced from our real social networks and real community life. And as we become more and more pacified, that will be detrimental to democracy.

We are very much in the early stages of the process, so it’s difficult to say with certainty exactly where we are going. But there is cause for concern. So yes, I am quite worried about the effects of AI on democracy.

In your latest book, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity (2023), you and Simon Johnson write extensively on China’s approach to AI. What is China doing in this field?

China is pushing parts of the AI frontier in a very unproductive direction. It invests heavily in monitoring and surveillance technologies, facial recognition, censorship, and so on. China has become a leader in these technologies and is exporting them to other countries around the world.

An argument that the tech industry and some economists and policy makers sometimes make is that if we regulate AI, that will hand leadership in this area to China. I don’t think that’s true. China is going to push for leadership in facial recognition and other things, but it lags behind in many other areas of generative AI. That creates significant elbow room for the US and Western Europe to set a regulatory agenda targeted at directing technology in a different way. And if they do that, China will follow.

This is what happened in the energy sector, which I mentioned earlier. The concern was that if Western Europe took an anti-carbon stance, that would hand a comparative advantage in many industries to China. But when European countries started to invest in carbon abatement and mitigation, that triggered the spread of technological changes to China, which then furthered improvements in solar panels, for example, through large-scale production and moving down the cost curve. I expect we would see the same sort of complementarity with AI.

Of course, as long as the Chinese Communist Party is in power, they are going to double down on facial recognition and surveillance technologies in any case. A more nuanced approach is required. We might try to contain China by reducing its impact on surveillance in other countries, while at the same time taking the right regulatory approach to other technologies.


Text by Johanna Vehkoo
Translation by Leni Vapaavuori and Nick Moon