Artificial intelligence will change the way we produce new ideas and innovations

Ari Hyytinen
Ari Hyytinen, Professor of Economics at Hanken School of Economics and Helsinki Graduate School of Economics (Helsinki GSE), specialises in industrial economics and applied microeconomics. He studies topics such as innovations, inventors and entrepreneurship.

Summary

  • In economics, general-purpose technology refers to inventions that have a wide variety of uses, enable many new applications, and have a transformative impact on society. In human history, such technologies have included writing, railways and the internet. The latest addition to the list is artificial intelligence.
  • Artificial intelligence applications, such as large language models, enable more advanced automation, but they can also revolutionise the way inventions come about and research is done.
  • Ari Hyytinen, Professor of Economics at Hanken School of Economics and Helsinki Graduate School of Economics (Helsinki GSE), explains how artificial intelligence may affect work and productivity, and how it will change our understanding of intelligence and creativity.

 

In the book “Suomen kasvu” (Finnish Economic Growth) published in 2019 you wrote: “It seems that we have only seen the beginning of technological development related to machine learning.” With the benefit of hindsight, how would you characterise the recent development of artificial intelligence?

In retrospect, you can see that not all writing ages well, but in my opinion the ideas I put forward in that particular chapter have turned out to be pretty accurate. What has surprised us all, however, is the speed of technological development. In the last twelve months or so we have seen the development of many versions of large language models, or LLMs, such as ChatGPT.

Can this pace of development continue?

There is nothing to suggest that this type of technological progress is slowing down. This could be partly attributed to the fact that LLMs have become available to large numbers of people. Companies, researchers and other specialists have been able to conduct their own experiments, which has resulted in a self-reinforcing cycle where one person's insight benefits someone else. It also seems highly unlikely that the development of the underlying computing technology and the computational capacity used to train large language models will slow down anytime soon, as more computing power and resources seem to be readily available.

When you say AI is a general-purpose technology, what do you mean by that?

The concept is not yet fully established, and its scope is still being discussed within economics, but at least the following elements are associated with a general-purpose technology:

  1. The technology and its various applications are widely adopted in the economy and society.
  2. The technology evolves, is reshaped and improves over time, for years or even decades.
  3. The technology enables a wide range of new innovations and applications across many industries and sectors.

Relatively recent general-purpose technologies include the computer and the internet. If we go back a little further, we come across technologies such as electricity, the internal combustion engine and the printing press. And going even further back, we find discoveries such as the use of iron, the wheel, the domestication of animals and the cultivation of land.

Many of the earlier general-purpose technologies made work easier for humans or replaced humans in the process. We have recently seen newspaper headlines warning of jobs most at risk of being replaced by AI. Is it time for guidance counsellors to start warning young people about the risk of certain occupations disappearing?

Personally, I would avoid such a choice of words. While AI will change many jobs and replace others, that’s only one side of the story. Technological advancement will create entirely new occupations and jobs.

But we do need to react to the transformative effect of new technologies on the labour market. What does this mean for those young people who are currently studying and acquiring skills for working life? I would emphasise the positive aspects. Technologies can be put to good use in all sectors. They can perform tedious repetitive routines so that humans can focus on the more difficult cases where human input is important. Education and training should provide students with the skills they need to use new technologies in their respective fields.

Just recently I gave the first lecture of my course Economics of New Ventures and Innovations. As part of the course, I ask students to evaluate the significance of AI from an economic perspective. They are free to choose a specific angle, but they are required to familiarise themselves with the technology. I also encourage them to use AI to help them write their reports. Young people seem keen to do that, particularly when gently pushed and encouraged.

Future predictions for AI range far and wide. What can economics tell us about its proven effects on labour productivity, for example?

There are studies that have explored a few rather narrow questions, and it takes a long time for an overall picture and evidence to accumulate. To be able to assess cause-and-effect relationships, we need quite specific research designs, and here we are only in the very early stages. However, there was a study that analysed the impact of large language models on programmers’ work. It was a randomised trial in which some programmers used artificial intelligence for assistance and others did not. If I recall correctly, the results suggested that programmers who used AI were more than 55 per cent faster than those who didn’t. Some of the most recent studies that analysed the performance of management consultants and advisors produced similar results.
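To make the design concrete, here is a minimal sketch of how such a randomised comparison of completion times might be analysed. Everything in it is an assumption for illustration: the group sizes, the simulated times (scaled to roughly match the magnitude mentioned above) and the choice of a Welch t-test. It does not reproduce the study itself.

```python
# Hypothetical illustration of analysing a randomised trial that compares
# task-completion times with and without an AI assistant. All numbers are
# simulated; they are NOT the data of the study mentioned in the interview.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated completion times in minutes (log-normal, as durations often are).
control = rng.lognormal(mean=np.log(160), sigma=0.3, size=50)  # without AI
treated = rng.lognormal(mean=np.log(71), sigma=0.3, size=50)   # with AI

time_saved = (control.mean() - treated.mean()) / control.mean()
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)

print(f"Mean time without AI: {control.mean():.1f} min")
print(f"Mean time with AI:    {treated.mean():.1f} min")
print(f"Time saved:           {time_saved:.0%} (Welch t = {t_stat:.2f}, p = {p_value:.2g})")
```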

A few years ago AI was generally referred to as simply a form of automation. Is this still a valid assessment? Does AI have any special features compared to previous general-purpose technologies?

It does have some features of automation. For quite some time now, new technologies have enabled us to selectively replace human input with machines or instruments. With machine learning and AI, we can expect humans to be replaced in various other phases of work as well.

Which way this development will go remains unknown, but what I find interesting about AI is that it may well change the way scientific discoveries and inventions are produced. It may contain elements that enable knowledge and understanding to be produced more efficiently.

In what way?

New inventions often tend to be based on previously produced knowledge and understanding, which means they are essentially the result of recycling, recombining and reorganising existing knowledge. With large amounts of accumulated data, it would be possible to generate an enormous number of combinations based on existing inventions. While most of these combinations are irrelevant, artificial intelligence could screen them for useful ones and thereby help scientists produce new knowledge much faster.
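As a toy sketch of this recombination idea (an illustration, not a description of any real system): enumerate combinations of existing building blocks, then screen them with a scoring function standing in for a model that predicts usefulness. The component names and the scoring rule below are invented.

```python
# Toy sketch of "recombinant" idea generation: enumerate combinations of
# existing building blocks, then screen them. The usefulness() function is a
# deliberately trivial stand-in for an AI model that predicts which
# combinations are worth pursuing; components and scores are invented.
from itertools import combinations

components = ["electric motor", "battery", "camera", "GPS receiver", "microphone"]

def usefulness(pair):
    # Placeholder score: a real system would use a trained model here.
    return 1.0 if "battery" in pair else 0.2

candidates = list(combinations(components, 2))             # all pairwise ideas
screened = [c for c in candidates if usefulness(c) > 0.5]  # keep promising ones

print(f"{len(candidates)} raw combinations, {len(screened)} pass screening:")
for pair in screened:
    print("  " + " + ".join(pair))
```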

Since the structure of scientific articles is very formulaic, could AI be a useful tool in writing them?

Many of my colleagues around the world, myself included, have experimented with it. I give AI a bullet list and ask it to draft an outline of the article from that list. Or if I need to describe, in fluent English, the statistical data we have used in our research, I can ask AI to check the text for fluency and proper paragraph breaks. These are examples of routine tasks involved in research that are not really part of the research itself, unlike producing ideas and insights. Scientists could use the time they save to address more difficult questions.
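For the first of those tasks, here is a minimal sketch of what such a request could look like, assuming the OpenAI Python client; the model name, prompts and bullet points are illustrative assumptions, not a prescribed workflow.

```python
# Minimal sketch of asking a language model to turn bullet points into an
# article outline, assuming the OpenAI Python client (openai >= 1.0) and an
# API key in the environment. Model name and notes are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

bullets = """\
- research question: does AI assistance speed up programming work?
- data: randomised trial with treatment and control groups
- result: the assisted group finished tasks substantially faster
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You draft outlines for academic articles."},
        {"role": "user", "content": f"Draft a section outline from these notes:\n{bullets}"},
    ],
)
print(response.choices[0].message.content)
```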

I believe that many scientists already use AI almost routinely for the purposes I described. But the assistance it offers is limited: it will not assume any of the scientist's responsibility or conduct research independently.

Could artificial intelligence help us prepare more accurate forecasts of future economic development?

On the contrary; one could even say that at this stage the advances in AI are creating more uncertainty about long-term economic trends. That said, machine learning technology is largely based on prediction, so it may well make us better equipped to predict economic trends. But even with AI, it might be difficult to produce much more accurate short-term economic forecasts, because things happen unexpectedly all the time in our extremely fast-paced world. No algorithm, no matter how good, is able to capture unexpected changes, or shocks as we also call them.

In the past, general-purpose technologies often slowed down productivity growth in their early stages. Is there any indication that this has happened with AI?

I think it may be too early to tell. But as we have seen in the past, when a general-purpose technology begins to spread, adaptation is required, for instance in the form of investments and new applied innovations in production. Innovations require investments, and the results are not necessarily measurable as economic growth and productivity improvements straight away. Instead, the rewards of hard work can be reaped a little later. In the scientific literature, it has been suggested that the spread of a general-purpose technology may show up in the statistics as a productivity J-curve: measured productivity first dips while firms invest and adapt, and only rises later.
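A stylised numerical sketch of the J-curve idea, with invented figures: early intangible investment is a cost that measured output misses, so measured productivity dips before it climbs.

```python
# Stylised sketch of a productivity J-curve. During early adoption, firms
# divert resources into unmeasured intangible investment (training, process
# redesign), so measured productivity first dips and only later climbs.
# All figures are invented for illustration.
true_gain = [0.00, 0.01, 0.02, 0.05, 0.09, 0.14, 0.20, 0.27, 0.35, 0.44]
intangible = [0.00, 0.04, 0.05, 0.04, 0.03, 0.02, 0.01, 0.00, 0.00, 0.00]

for year, (gain, cost) in enumerate(zip(true_gain, intangible)):
    measured = gain - cost  # what productivity statistics would pick up
    bar = "#" * int((measured + 0.05) * 100)  # crude text chart
    print(f"year {year}: measured {measured:+.2f} {bar}")
```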

Some people, including Daron Acemoglu, Professor of Economics at MIT, have voiced concerns about the impact of artificial intelligence on our societies, specifically about excessive automation and implications for inequality. Do you share these concerns?

The questions Acemoglu raises are big and important for our societies. As far as I understand, his concerns about automation and artificial intelligence are related to the direction of technological development, who decides which way it should go, and what it all means in terms of democratic development and income inequality. My personal opinion is that we shouldn’t give free rein to AI development. It is absolutely necessary to discuss the role governments play in it, and how well equipped and prepared they are to respond to the potentially harmful repercussions of artificial intelligence. At the same time, however, we must be careful not to over-regulate a promising technology and cause its development to slow down.

Many European IT companies and data activists have expressed concerns about the EU's stringent regulation, which does not allow European companies to collect the data sets needed to train AI. They also warn that Europe therefore risks losing the development race to Chinese and American tech giants.

It is certainly true that if the EU creates too many obstacles for data-driven development, it will be very difficult for European companies to lead the way in developing new technologies that make extensive use of data. Last semester, I visited the University of Bologna in Italy for research purposes. While I'm not a technology researcher, I experimented with ChatGPT during my research visit to figure out what it could do for me. Then, quite suddenly, ChatGPT was blocked in Italy for about a month in the spring, apparently because it failed to meet some of the authorities' requirements. This affected me too, and it got me thinking about how well-meaning but overly stringent regulation can end up harming technological progress.

In recent discussions, artificial intelligence has also been portrayed as a threat. Some say it is advancing too quickly and threatens to escape its human creators. Even leading AI developers such as Sam Altman of OpenAI, the developer of ChatGPT, and Geoffrey Hinton, dubbed the godfather of AI, have admitted they are worried. This brings to mind films like The Terminator, WarGames and 2001: A Space Odyssey. At the same time, other leading scientists say that any fears of renegade AI are unfounded. Who should we believe?

The intense discussion on AI between leading scientists and developmental psychologists is, as far as I can tell, something altogether new. And while it is cross-disciplinary, in some respects it is surprisingly polarised. Some are convinced that we will soon face the dilemmas depicted in science fiction films, such as artificial intelligence developing its own awareness and escaping its human creators. Some developmental psychologists, on the other hand, try to bring us back down to earth by pointing out that artificial intelligence is only able to perform very simple tasks and is nowhere near becoming sentient. In most scientific discussions, participants reach common ground rather quickly, and the debate that follows revolves around finer details and nuances. The way I see it, the polarisation of scientists who are well versed in the topic shows that we are on the verge of something new.

In the context of AI, something that often comes up is its inability to create anything truly original. It has no imagination and only draws on the data it was trained on. But will its ability to combine existing data in new ways eventually produce GPT models that have an imagination?

This is a very interesting question. Artificial intelligence does force us to ask new questions about what we mean by intelligence, understanding and creativity. This leads to deep philosophical, psychological and economic questions. I have done research to establish which factors explain why some people become inventors. I think that AI development is heading towards, or has already reached, a point where it can support creative processes and generate new knowledge precisely because it is able to combine existing knowledge. This ties in with the idea put forward back in the 1950s by economist Zvi Griliches, who said that some inventions are “an invention of a method of inventing”. This characterises large language models pretty well. This may be the very first time in human history that we are on the verge of seeing creativity enhanced by something automatic or automatable, and I find it fascinating.

 

Finnish text by Tuomo Tamminen
Translation by Leni Vapaavuori and Nick Moon