We have to weigh the risks and rewards of adopting artificial intelligence, and that’s a task the tech giants can’t be trusted with

Juliette Powell
Juliette Powell is an independent researcher, entrepreneur, and keynote speaker at the intersection of technology and business. Her consulting services focus on global strategy and scenarios related to AI and data, banking, mobile, retail, social gaming, and responsible technology.

Summary

  • The rapid advancement of artificial intelligence has left both governments and citizens struggling to keep up with the pace of the technology revolution. The companies at the forefront of developing artificial intelligence are mainly interested in the financial rewards and less concerned with the long-term effects of their technologies.
  • That is why it is essential that we start regulating artificial intelligence, argues Juliette Powell, a researcher, entrepreneur, and keynote speaker. Powell and Art Kleiner co-authored the book “The AI Dilemma: 7 Principles for Responsible Technology”, published in 2023.
  • Despite the staggering pace of progress and the problems associated with it, Powell is hopeful. Engineers developing AI systems have started to openly talk about the flaws in their inventions, and authorities no longer blindly trust big tech executives who try to convince us of the superiority of their products.
Your latest book is called The AI Dilemma. Which dilemma are you referring to?

Artificial intelligence, or machine learning, is something that could potentially benefit us all on the planet in the right hands, but in the wrong hands, it could be disastrous, not just for people but for the planet itself. And so we, as a global community, have to decide what we want to do with this technology. Do we want to put it at the forefront of the technology race, or do we want to put it in support of humanity? It seems like we’re trying to do all of these things simultaneously, and a lot of good things are falling through the cracks, unfortunately.

You wanted to have this interview just before the Tampere Conversations event so that you could give us the most recent examples from a very fast-moving industry. What are the most interesting developments you have seen in these two months since your participation in Tampere Conversations was confirmed?

On February 4, a deepfake scammer stole $25 million from a multinational company’s Hong Kong office. It was a first-of-its-kind AI heist. In a video conference, an employee was instructed to transfer funds. The victim was the only real person on the call. The scammer used publicly available audio and video to replicate the appearance and voices of employees, including the chief financial officer. No arrests have been made, but it is the latest in a string of concerning deepfake incidents. Many are calling this US presidential election the deepfake election, because before it has even really begun, we’re seeing many of these tools fall into the wrong hands and be used to impersonate candidates here in the United States and elsewhere. It is just the beginning. In 2024 we’ll see something like 60 elections around the world, and there’s a potential that the majority of the human population will fall, to a certain extent, under authoritarian control. These technologies have so much sway right now.

So it’s not that two months have made such a big difference in the availability of the technology. They have just made a huge difference in my own understanding of the impact of not just the US election but all of these elections happening simultaneously around the world, and what that might mean for all of us.

One of the reasons why I wanted to come to Finland was to get your perspective on things. I’ve always found the Finnish people to be ahead of the curve when it comes to data, when it comes to trust within your own organizations and your own population, and your ability to monetize data to the benefit of all. I have a lot to learn from you all.

Recently there have been vocal calls for more regulation of AI and Big Tech. The General Data Protection Regulation (GDPR) adopted in the EU is unpopular, with many experts saying it has only burdened companies with more bureaucracy while failing to effectively protect our data. Many people say regulation puts European businesses at a competitive disadvantage against American or Chinese companies. What is your stance on regulation?

My dissertation at Columbia University was on the limits and possibilities of self-regulation in artificial intelligence. After doing hundreds of interviews addressing that specific problem for five years, I can tell you that self-regulation in AI just does not work, at least not from the American Big Tech perspective.

But I think that this call for regulation is very, very important. I think it’s particularly remarkable that this all-encompassing Artificial Intelligence Act (AIA), the world’s first comprehensive AI law, is coming from the European Union, for the simple reason that Europeans don’t have as much skin in the game when it comes to large companies pushing AI to the average person in the world, let alone when it comes to military purposes. At the same time, I do think that the GDPR has awakened a few big tech companies.

Why do you feel self-regulation doesn’t work?

Many organizations are looking at AI as an opportunity for exponential growth and exponential monetization. But when they weigh the risks against the rewards, they seem to do so only in terms of the systems themselves. Many organizations are not thinking about the longer-term lawsuits. They’re not thinking about the fines that will be coming out of the EU’s AIA. And they’re certainly not talking about the reputational risk, or how much it will cost them if they deploy AI and end up hurting people. Hurting people can look different depending on who you are, where you are in the world, and the particular system or app you happen to be targeted by.

What I’m not necessarily seeing discussed is that self-regulation is de facto baked into the AIA and much of the attempted regulation around artificial intelligence: we’re relying on companies to self-regulate because they know more about the technology than the governments that are trying to regulate them, let alone the regulators themselves.

Part of the issue there is that you have all of these competing orders of worth. What’s more important, saving people’s lives or getting that multibillion-dollar Department of Defense contract? Regulation often comes after we hurt people, after we break things. Sadly, with all the lawsuits that we’re seeing, I wouldn’t be surprised if that were the case here as well.

We all need to do what I call a calculus of intentional risk. The risk framework around the AIA very much allows for that.

Last spring the EU fined Meta for violating the GDPR. In response, Meta now offers a paid version of Facebook and Instagram in Europe, and those not willing to pay accept that their data may be used for advertising. I don’t know anyone who pays to use Facebook. Are we being too lazy and ignorant?

I don’t have an answer for that. I think it could be answered not just on an individual level, but also in terms of where we all are at that particular moment in time. One thing that I would love to see is the ability for people to own their own personal data, and to see what kind of havoc that would wreak on the monetization of their data and how people might share in that monetization. If people owned their own data, they could decide: “No Facebook, no Meta, you’re not getting my data, but if you’re doing cancer research, you’re welcome to it”. That would allow people to be in real control of their lives. Your personal data determines how you are perceived in the world and the opportunities you receive.

Right now I don’t think that people are in control of their own data and how their lives are shaped by their data trails. We have this illusion of control because when we press the Amazon button, we can buy things or pay for things instantaneously. But that’s not real control. Real control means really weighing the costs and benefits of making a decision. Real control takes time to deliberate and choose, so most of us just take shortcuts and defer to the limited choices offered.

In your book you give examples of how governments have misused AI. If the governments of rich, liberal West European nations can’t be trusted with AI, who can? To quote your book, who watches the watch robots?

That’s a fantastic question that my co-author Art Kleiner came up with. He is a historian, and this is a question that we’ve been asking throughout our human history. Once you ascend to the upper levels of power, generally you want more. If you are aiming for unbridled power, how much can you be trusted to share power?

With great power comes great responsibility. I think of the billions of people who are coming online every year who have no idea of how much algorithms shape their lives and their future. They are potentially the next generation of technology leaders who could help shape that future and potentially control it. Part of the issue is that we keep throwing technology at our very human problems. Who can we trust to watch the watch robots? I believe that diverse humans from around the world should always be overseeing the watch robots.

Take, for example, the Bletchley Declaration, signed at the UK’s AI Safety Summit last November to coordinate global cooperation on AI safety. It was signed by 28 nations, including the UK, India, the US, and China. The fact that Yoshua Bengio, a Turing Award-winning AI scientist and member of the UN’s Scientific Advisory Board, chaired the writing group of the summit’s report also gives me more confidence in who watches the watch robots. I’ve been lucky enough to work with Bengio. I trust him because, from what I can tell, he is not motivated by money or power. Moreover, he is supported by a diverse group of leading AI academics in that role.

In your book you set principles for responsible technology. What gives you hope that humanity might actually adopt those principles one day?

The EU’s AIA and the Bletchley Declaration are both examples of creative friction, the final principle in my book.

That creative friction is found in the clash of ideas that occurs when people from diverse backgrounds, cultures, and ideologies come together to discuss and build the future of AI together. It takes much longer to deliberate, but the results are more productive for more people.

Another example of a principle in action: be intentional about risk to humans. Systems engineers can see the data on their servers and evaluate these risks. They can identify the companies and individuals who are bad actors from their data trails. But they haven’t in the past. One reason is that when they tell their bosses, the bosses don’t want to hear it. By knowing what their clients are using their systems for, companies become de facto liable when those uses are illegal. They could get sued for knowing that third parties are using their systems for nefarious purposes like human trafficking and pedophilia. So what gives me hope is that those same systems engineers are now speaking out about what they’re seeing.

As a result, since last summer we’ve seen companies like Salesforce implement acceptable use policies around their AI systems. The fact that you can no longer use their systems for these nefarious purposes gives me hope.

When I think of the 60 elections around the world this year that I mentioned earlier, what gives me hope is that many of these populations are taking deepfake threats seriously and are actively finding ways to distinguish between a deepfake and a real video. That is another example of being intentional about risk to humans.

Before, we were very focused on engineers and technical people just talking to technical people, and government people just relying on Big Tech or other leaders to guide them. Now we’re seeing events like Tampere Conversations, where government, business, engineers, and people from around the world are using creative friction to discuss how to reduce AI risk while also exploring its potential benefits for humanity. I consider that a big step forward.

 

Text by Tuomo Tamminen
Translation by Leni Vapaavuori and Nick Moon