
AIs are out of (democratic) control

Lê Nguyên Hoang

We desperately need more attention, staff and funding to set up artificial intelligence (AI) governance systems akin to those introduced in the airline, pharmaceutical and food industries, says scientist and science communicator Lê Nguyên Hoang.

On March 29 an open letter demanding a pause on “giant AI experiments” was published; it has so far been signed by more than 20,000 academics and tech leaders. This call is long overdue.

Over the past decade, impressive algorithms such as ChatGPT and Midjourney have been hastily developed and deployed at massive scale. Similar AIs have been widely commercialised for fraud detection, CV filtering, video surveillance and customer service, often despite known shortcomings and biases. But their main application is arguably marketing. Many of today’s biggest tech giants, such as Google, TikTok and Meta, profit mostly from ad targeting, while ChatGPT’s first public customer was none other than Coca-Cola. This should already be a red flag.

Lê Nguyên Hoang is co-founder and CEO of the cybersecurity start-up Calicarpa, as well as co-founder and president of the non-profit Tournesol Association. Hoang’s YouTube channel “Science4All” has surpassed 18 million views since it was launched in 2016.

Additionally, algorithms have been shown to spread misinformation, recommend pseudo-medicines and endanger mental health, and they have been used to coordinate illegal markets, including slave markets. They have also fuelled hate, helped destabilise democracies and even contributed to genocides, as asserted by the United Nations and Amnesty International. Algorithms are threatening national security.

Yet their development is exceedingly opaque. Hardly any external entity can peek at Google, Meta or OpenAI’s algorithms. Internal opposition has even been removed: Google fired its ethics team, Meta dismantled its responsible innovation team, and Microsoft laid off an ethics team after it raised the alarm about rushed, unethical and insecure deployment. Powerful profit-seeking companies have successfully engineered a world in which their algorithms can be designed with hardly any accountability.


Effective AI governance is urgently needed

The software industry is far from the first out-of-control industry. For decades, the airline, car, pharmaceutical, food, tobacco, construction and energy industries, among many others, commercialised unchecked products. This cost millions of lives. The public eventually opposed this alarming lack of accountability, and in all democracies strict laws and powerful, well-funded regulatory agencies now enforce democratic control over these markets. The software industry needs similar oversight.

We urgently need to favour secure and ethical technologies, rather than demand that our countries lead the race for eye-catching AIs. Concretely, the impressiveness of the algorithms running our smart grids, cars, planes, power stations, banks, data centres, social networks and smartphones should matter a lot less than their cybersecurity. As my colleague and I warned in a 2019 book, if these algorithms are brittle, vulnerable, backdoored or outsourced to an unreliable provider, or if they abuse human rights – which is usually the case – then we will all be in great danger.

Yet the software industry and academia, as well as the current legal and economic incentives, mostly hamper the security mindset. Too often, the most cited, most celebrated and best-funded researchers, the best-paid software positions and the most successful companies are those that neglect cybersecurity and ethics. As a growing number of experts reckon, this must change. Urgently.

Our democracies probably cannot afford the decades that were required to set up laws and inspection agencies in other industries. Given the pace at which fancier algorithms are developed and deployed, we have only a very small window of time to act. The open letter, which I signed along with other AI researchers, aims to extend this window slightly.


What you, your organisations and our institutions can do

Establishing democratic control over today’s most critical algorithms is an urgent, enormous and fabulous endeavour, which will not be achieved in time without the participation of a large number of individuals with diverse talents, expertise and responsibilities.

A first challenge is attention. We must all urgently invest far more time, energy and funding to ensure that our colleagues, organisations and institutions pay far more attention to cybersecurity. Big Tech employees must no longer be invited and celebrated, especially in universities and the media, without being challenged about the security and ethics of the products that fund them. More generally, in every tech discussion, the question “what can go wrong?” must be asked.

A second challenge is institutional. While new laws are needed, today’s large-scale algorithms are likely already violating existing ones, for instance by profiting from ad-based scams. However, the complete lack of external oversight prevents justice from being delivered. We must demand that policymakers set up well-funded regulatory agencies to enforce the law online.

Switzerland has often played an exemplary role in setting democratic norms. This is an opportunity to carry on that noble tradition. Furthermore, the Lemanic arc (the area around Lake Geneva) has recently aimed to become a Trust Valley in the field of digital trust and cybersecurity. Empowering inspection and cybersecurity organisations will arguably be key to earning that recognition worldwide.

A third challenge lies in designing democratically governed, secure alternatives to today’s most impactful algorithms. This is what I have spent most of the past five years working on, as my colleagues and I set up the non-profit Tournesol project. Essentially, Tournesol’s algorithm results from a secure and fair vote on its preferred behaviour by Tournesol’s community of contributors, a community anyone is welcome to join.

The quicker we all prioritise the security of our information ecosystems, the sooner we will have a chance to protect our societies from the current massive cybersecurity vulnerabilities.

Edited by Sabina Weiss.

The views expressed in this article are solely those of the author, and do not necessarily reflect the views of SWI swissinfo.ch.


SWI swissinfo.ch - a branch of Swiss Broadcasting Corporation SRG SSR
