
AI-driven discrimination: protection in Switzerland is weak

Ethnic and gender minorities, foreigners and women are among the groups most affected by AI discrimination. Keystone

Algorithms increasingly influence important decisions such as hiring or access to credit and insurance. According to a recent legal opinion, safeguards in Switzerland remain inadequate. The government has promised to take action.

In Switzerland, the population is not sufficiently protected from the risks of discrimination arising from the use of artificial intelligence (AI). This was the conclusion of a legal opinion presented to policymakers on Tuesday by two federal commissions: the Federal Commission against Racism and the Federal Commission for Women’s Issues.

It is no coincidence that these two commissions have confronted the government with the gaps in protection against the risks of AI: foreigners, gender and ethnic minorities, and women are among the groups most affected.

Several studies have already shown how AI penalises these groups in areas such as recruitment, loans and insurance premiums. In job-application screening, for instance, AI tools favour male candidates in 85% of cases and women in only 11%, while Black men are systematically disadvantaged, according to international research.

“Racial and gender discrimination is perpetuated and amplified by artificial intelligence,” warned Ursula Schneider Schüttel, president of the Federal Commission against Racism, during a national conference in Bern. 

She pointed out that some car insurers in Switzerland use AI-based models to calculate premiums. “Young men who do not have a typically Swiss surname end up paying higher premiums,” she said. Swiss media have already reported similar cases.

Compared with residents of other countries, Swiss citizens find themselves in a more vulnerable position. The European Union, for example, adopted a dedicated law in 2024 that obliges providers of high-risk AI systems to ensure transparency and actively mitigate discrimination and bias.

For these reasons, representatives of the two commissions argue that Switzerland must take action as soon as possible to address algorithmic discrimination. 

The holes in Swiss law

For jurist Nadja Braun Binder, professor of public law at the University of Basel, Swiss laws have obvious gaps when it comes to the risks of discrimination posed by AI tools.

“There is no legal vacuum, but protection is fragmented,” she told Swissinfo.  

The Swiss constitution prohibits discrimination, for example on the basis of sex, origin or language. But algorithms can circumvent these protected traits by using seemingly neutral data – so-called ‘proxies’, such as postcode or name – which ultimately reflect social origin or background. 
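
To illustrate the mechanism described above, the sketch below (in Python, using invented numbers and synthetic data rather than anything from the studies cited in this article) shows how a model that never sees a protected trait can still reproduce a biased outcome through a proxy such as postcode:

```python
# Illustrative sketch only: synthetic data, hypothetical postcodes and premiums.
# The model never sees "origin", yet its predictions differ by origin because
# origin is correlated with postcode and with the biased historical premiums.
import random
from statistics import mean

random.seed(0)

def make_person():
    origin = random.choice(["A", "B"])  # protected trait, hidden from the model
    postcode = random.choices(
        ["8001", "8050"],
        weights=[0.8, 0.2] if origin == "A" else [0.2, 0.8],
    )[0]
    # Historical premiums were biased against group B.
    premium = 400 + (100 if origin == "B" else 0) + random.gauss(0, 20)
    return origin, postcode, premium

history = [make_person() for _ in range(10_000)]

# "Model": predict the premium as the historical average for each postcode.
by_postcode = {}
for origin, postcode, premium in history:
    by_postcode.setdefault(postcode, []).append(premium)
model = {pc: mean(values) for pc, values in by_postcode.items()}

# Group the predictions by the hidden trait: group B still pays more on average.
for group in ("A", "B"):
    predictions = [model[pc] for origin, pc, _ in history if origin == group]
    print(f"group {group}: average predicted premium {mean(predictions):.0f}")
```

In this invented example the model only ever receives the postcode, yet the premiums it reproduces differ by origin, which is precisely why such discrimination is so hard to spot case by case.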

For Braun Binder, the problem is that it is virtually impossible to identify such discrimination on a case-by-case basis under current laws.

“Only when discrimination affects a large number of people can we know whether it is a systematic phenomenon,” she explains. 

According to the jurist, action must be taken to correct such shortcomings. Braun Binder argues that AI can multiply existing discrimination in society on a large scale, potentially eroding social progress.

“AI is not progressive. It makes conservative decisions based on the historical data it is trained on,” warns Braun Binder.

This is why, in the legal opinion drafted for the commissions, Braun Binder and her colleague Florent Thouvenin of the University of Zurich call on politicians to work on a general anti-discrimination law that applies across sectors, both public and private.

“AI also gives us an opportunity: to identify discrimination in our society and correct it,” says the jurist. 

Swiss government ‘forced to act’ 

In an interview with Swissinfo, Interior Minister Elisabeth Baume-Schneider said she takes Braun Binder’s and Thouvenin’s recommendations seriously.

“We have really become aware of the need to define more transparently the role of algorithms in decisions that impact the population,” she says. 

According to Baume-Schneider, it is clear that AI is everywhere and that its extensive use can make discrimination more widespread, especially against people who are already disadvantaged, poorer or marginalised. For this reason, transparency in decisions taken at any level – medical, social, educational, financial – with the support of AI must be a priority.  

As a first step in this direction, Switzerland has decided to ratify the Council of Europe’s Convention on AI. This is an international treaty that aims to ensure that AI technologies are used in a manner that respects fundamental rights, including the right to non-discrimination.

>> NGOs and civil society have criticised the Council of Europe’s Convention on AI.


“We must work towards implementing this convention in our country,” says Baume-Schneider.

The interior minister also confirmed that Switzerland is already working on aligning its laws with the convention and on a legal framework to correct existing gaps. It will then be up to the governing Federal Council, and subsequently Parliament, to decide on any legislative changes. 

Baume-Schneider admits that Braun Binder’s and Thouvenin’s legal opinion compels the government to act.

“It is not a question of demonising algorithms but of recognising that decisions made in an opaque manner by AI can have political, legal and economic consequences,” she says. 

Edited by Marc Leutenegger/ac



SWI swissinfo.ch - a branch of Swiss Broadcasting Corporation SRG SSR