Swiss perspectives in 10 languages

‘In 2023 governments woke up to the realities of AI’

Prof Stuart Russell
Stuart Russell is a Professor of Computer Science at the University of California, Berkeley, holder of the Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible AI. © Didier Ruef

Artificial intelligence scientist Stuart Russell was in Geneva this week to attend the annual global summit AI for Good. SWI swissinfo.ch caught up with him to discuss the challenges the technology poses and how to better regulate it.

Russell, a British AI expert and author, has a busy schedule. He meets us in a Geneva hotel early in the morning, having already given an interview to another journalist. He spoke at several events of the UN’s two-day AI for Good Summit, which wrapped up on July 7.

Russell has been coming to Geneva almost every year since 2017 to attend the summit, which aims to bring together artificial intelligence and robotics innovators, as well as leaders in the humanitarian field, to advance AI as a driver of sustainable development.

Despite the slightly provocative tagline at a time when AI is increasingly criticised, he strongly believes that the technology should focus on what good it can do for humanity. He recalls how annoyed he felt when a previous AI conference in which he took part focused mostly on how to monetise advertising via search engines and social media.

For him, part of the solution is not to focus on how AI will replace humans, but on what needs the technology can meet that humans cannot.


“The business model is not ‘can I save money by firing all my employees?’ The business model is that we can now do something that we couldn’t do before,” he says. “For example, meeting many of the UN’s Sustainable Development Goals.”

Russell studied physics at Oxford University and received his PhD in computer science from Stanford University. He is a professor of computer science at the University of California, Berkeley, but has also worked as an adjunct professor of neurological surgery, the medical field focused on the surgical treatment of brain and spine disorders. In that role he led a research collaboration to collect data from patients in ICUs (intensive care units) and monitor it so that doctors can intervene early and properly to save lives.

His interest in computers goes back to childhood. “I think I wrote my first AI program 47 years ago,” he says. Today, students at more than 1,500 universities use his textbook, Artificial Intelligence: A Modern Approach, to study the theory and practice of artificial intelligence.

Rethinking work and education

As a father of four children, Russell is especially concerned about the future of the labour market. He says AI is already threatening certain white-collar jobs, such as commercial writing.

“All routine mental and physical labour will continuously get replaced,” he says.

I ask what jobs he would recommend to a ten-year-old today. He argues the focus in the future will be on interpersonal roles, such as therapists, which are difficult, and undesirable, to automate.

He says AI will disrupt the current education-work model. With the scarcity of attractive jobs in the future, it might be difficult to maintain the incentive structure for education.

This will mean “making major economic changes”, he says, rethinking the whole educational system, and potentially introducing a basic income for people whose work is no longer needed due to AI.  

“I think we should psychologically and politically be prepared to do that [introduce universal basic income],” he adds. But Russell thinks that should not be an end in itself. The solution would be to make major policy changes, think about a different education system, and create new professions. But, he says, “all of that takes a lot of time, and probably isn’t going to get done soon enough.”

The year that matters

He believes the arrival of ChatGPT earlier this year has been a wakeup call for governments, many of which he thinks are unprepared for the disruptions AI will bring.


“2023 is the year when policy-makers have woken up and they are asking what to do,” says Russell. “Government officials use ChatGPT and see for themselves that AI is going to do a lot of people’s work. Some governments totally get it and some are completely oblivious.” Singapore, for example, has started thinking about planning for the societal implications.

In the United States and Europe, governments are only just starting to regulate the technology’s impact on society. Bipartisan hearings were held in the US Senate in May, during which Sam Altman, CEO of OpenAI, the developer of ChatGPT, urged lawmakers to regulate artificial intelligence. “I think if this technology goes wrong, it can go quite wrong,” he said at the time.

On the other side of the Atlantic, the European Parliament adopted an AI Act on June 14. Once negotiations are finalised with member states, it could become the world’s first comprehensive legislation on AI. “The EU AI Act is not perfect, but it can have a big impact,” says Russell.

Russell expects some regulation of the technology in the US, which may include reserving certain roles, such as caring for children, for humans.

Regarding regulation, Russell says time is running out. “Almost everybody’s timeline for when we will start to have real general-purpose AI has moved closer to the present. Governments are listening [so] we really have to come up with a regulatory framework that makes sense and ensures that the AI systems that are built are safe”.


Last March he was one of the signatories, along with Elon Musk, of an open letter calling on all AI labs to pause the training of AI systems more powerful than GPT-4 for at least six months.

Coincidence or not, he says tech companies have not released this kind of technology since then.

Various ideas related to the future of AI were presented at the AI for Good Summit.

Russell is hopeful about 2023: “Policy-makers are belatedly listening and no longer ignoring the potential arrival of AGI [artificial general intelligence].”

Edited by Virginie Mangin

In compliance with the JTI standards

More: SWI swissinfo.ch certified by the Journalism Trust Initiative


If you want to start a conversation about a topic raised in this article or want to report factual errors, email us at english@swissinfo.ch.

SWI swissinfo.ch - a branch of Swiss Broadcasting Corporation SRG SSR
