
Where does Switzerland stand on regulating AI?

A symbolic image of a young woman and artificial intelligence, generated with the DALL·E 2 AI.

Everyone seems to agree that there must be limits on what artificial intelligence (AI) is allowed to do. The European Union and the Council of Europe are currently drawing up a set of rules. What about Switzerland? 

From social media, medicine and chatbots to semi-autonomous vehicles, artificial intelligence (AI) has become part of our everyday lives, whether we like it or not. In a recent open letter, leading figures in the industry sounded the alarm about the dangers of AI. Reducing the risks, they wrote, has to be a “global priority”.

The worries revolve around human rights, the rule of law and democracy, explains Angela Müller, head of AlgorithmWatch CH, a civil-society organisation that critically monitors developments in AI. The potentially negative impact of AI can already be seen, she says, pointing to the childcare benefit scandal in the Netherlands involving racial discrimination blamed on an algorithm. In Müller’s view, these kinds of cases make regulating AI “relatively urgent”.

Thomas Schneider, vice-director of the Federal Office of Communications, agrees. “Data is the new oil, and AI systems are the new motors,” he says. People are aware that these are crucial issues and that corresponding solutions have to be found, he adds.

Nevertheless, Schneider, who is also head of the Council of Europe’s Committee on Artificial Intelligence (CAI), thinks a distinction has to be made between a viable solution and an instant one, which is why Switzerland is taking a somewhat wait-and-see approach.

To date, none of the countries in the Organisation for Economic Co-operation and Development (OECD) has introduced AI-specific regulations. The body that has gone furthest in this respect is the European Union: the European Parliament agreed a first draft of proposed AI legislation (the Artificial Intelligence Act) on June 14.

The legislation will also cover high-risk applications, including bans on real-time facial recognition (as used, for instance, in China’s social credit system) and on voice-assisted children’s toys that could encourage unsafe behaviour.

As the “guardian of human rights”, the Council of Europe similarly feels a duty to formulate its own legal mechanisms regarding AI. Müller stresses that the Council of Europe’s framework convention on AI doesn’t clash with the EU regulations; rather, it supplements them, “because they follow different approaches – the EU wants to govern AI via product safety”.

No blanket law 

What seems obvious is that there won’t be one overall law covering AI. Schneider compares AI to engines: specific laws and regulations apply depending on how an engine is used, and the same goes for AI, which spans completely different applications.

This requires mixed safeguards, Schneider says, because “if you need an AI system for a music streaming service, this has different ramifications than when the same algorithm recommends which step a surgeon should take next during a heart operation”.

For this reason one blanket law would be inadequate – a fact the EU is also aware of, he says. “The EU has around 30 proposals dealing just with the digital sector”, Schneider explains. For him, the crucial question is what aspects current laws are unable to cover. 

Müller agrees. “It’s not as though we’re currently floating around in a legal void. There are already laws in place, starting with the constitution and the protection of fundamental rights.” The goal now, she says, is to bridge the gaps that the challenges posed by AI have opened up. 

But for her the situation is similarly clear. “It’s not about having one law and then everything’s fine – that’s not going to work.” She says the problems touch on anti-discrimination rights, fundamental rights, copyright law, competition law, administrative law and much more besides – in other words, a whole range of legal areas.

Where does Switzerland stand? 

AI regulations are currently under discussion in many countries, but what approach should non-EU Switzerland take? First of all, it can be assumed that Switzerland, which has been a member of the Council of Europe since 1963, will follow the organisation’s AI convention.

Furthermore, since the US, Canada, Japan, Israel and Mexico, among others, take part in the Council of Europe’s work as observer states, its regulations are likely to have far-reaching effects.

For the moment, Switzerland’s position is to wait and see and to check the various options, says Thomas Schneider. And it is by no means alone. “Everyone’s looking to see if what the EU is hatching actually works.” But Schneider’s guess is that Switzerland won’t set off in a diametrically opposite direction to what the EU intends. 

In that sense Switzerland is one of “numerous typical countries that aren’t just sitting tight but are also analysing and thinking about their options without having yet committed themselves to anything concrete”, he says. He adds that once the process gets started, it will still take years, if not decades, of adjustments.

A European patchwork? 

Switzerland is not part of the EU, and since Brexit neither is the United Kingdom. Could Europe end up facing a patchwork of AI regulations?

Angela Müller from AlgorithmWatch CH stresses that the aim of the EU laws is to try to prevent this at least within its own boundaries. “Despite this, the regulations laid down by the EU will also apply to external companies whenever they want to sell their products inside the EU,” she says.    

This will presumably also apply to Swiss and British firms. Moreover, the various industries will no doubt also apply a degree of political pressure in order to obtain legal certainty in their sectors.

This in turn raises the question of responsibility. Who is liable if an AI system breaks the rules? This is the topic of a scientific conference on transparency currently being held in the US. 

Computers are never the culprits, Müller points out, because they lack criminal intent. Responsibility always lies with the people who develop the systems or deploy them for specific purposes. These human actors have to remain identifiable.

“If that becomes impossible, a core foundation of the rule of law is undermined,” she concludes. 

The above conversations took place at the Swiss Internet Governance Forum in Bern. 

An annual event, the platform gives experts the opportunity to discuss major digital topics in Switzerland, including the use and regulation of artificial intelligence. 

Two of the principles in the “Messages from Bern” drawn up on June 13 read: 

  • “AI applications that reproduce discrimination such as sexism or racism must now be legally addressed.” 

  • “The Council of Europe’s AI Convention has great potential. Switzerland should have the courage to build on it and go beyond it.” 


Edited by Balz Rigendinger. Translated from German by Thomas Skelton-Robinson  
