Is Swiss AI a powerhouse for democracy?
United States cybersecurity expert Bruce Schneier has high hopes for a Swiss artificial intelligence (AI) model, as other experts express optimism that AI can become an integral part of democratic institutions.
“Is anyone here from Switzerland?” Bruce Schneier called out to the audience at the World Forum for Democracy in Strasbourg, where the US cybersecurity expert was speaking last November. No answer came.
During his talk on AI and democracy, the lecturer from the John F. Kennedy School of Government at Harvard University made repeated references to Switzerland and the notion of an assisted democracy, which originated at the Swiss federal technology institute ETH Zurich. Above all, he praised Apertus, the language model developed by ETH Zurich.
The Swiss AI model, he said, shows that artificial intelligence can benefit public welfare “without economic interests and stolen data”.
“I think we do have a lot of problems with democracy. They’re not problems caused by AI. They’re often problems exacerbated by AI,” Schneier said during a Signal call in January. “The question is: are there ways we can use it for more democracy? I think the answer is yes, but we need to do it.”
In an article published in Time Magazine, Schneier compares AI to the railroads of the 19th century. At the time, new rail routes in the US had the potential to “connect the disconnected” and equalise access to power. Instead they created unprecedented wealth among a few people.
“Railways are like AI today – we all use it for something else. This is the reason why Apertus is powerful,” Schneier explained. “It is a platform that anybody can build on.” This is a prime example of how technology can exist without corporate giants. “Can we get AI models that are not built by a bunch of white male tech billionaires and Silicon Valley on the profit motive?” Schneier asked rhetorically. A tiny country has shown how it can be done. “Costs are dropping, and we will see more of these models,” he said. Individual language models will become “largely interchangeable”, he added. The expert believes many users will turn to open models like Apertus or Sea Lion, developed in Singapore.
Read our article where we explain the facts and myths surrounding Apertus:
Fact and fiction about the Swiss AI model Apertus
Whether AI services are used by institutions or emerge from citizens’ initiatives does not determine their significance for democracy. Typewriters, after all, are used as much within institutions as outside of them. “The writing assistant Grammarly is used to edit things in democracy,” said Schneier.
Schneier rejects the idea that a lack of trust in AI could negatively affect democracy. “Everyone you know uses AI to get step-by-step directions on their mobile phones,” he said. People don’t really think about trust when it comes to AI. “Real trust remains in the background.”
The key question, he said, is which AI you trust: “Public trust in AI linked to certain business models may be low. I wouldn’t trust Facebook with anything. But people do trust AI that analyses X-rays. Doctors use it because it can do the job better.” This, he argues, is the essence of trust. “If AI causes harm, blame the companies! Don’t blame tech!” The root cause, he said, lies in corporate decision-making.
Schneier sounded enthusiastic in Strasbourg, much like in his articles on the future published in Time Magazine. But when he spoke before the Committee on Oversight and Government Reform of the US Congress last year, his tone was different: “The previous four speakers focused on the promises of this technology. I want to talk about the national security implications of the way our country is consolidating data and feeding it to AI models.” Schneier explained how employees of the so-called Department of Government Efficiency (DOGE) under the Trump administration vacuumed up government databases and offered them to “private companies like Palantir”.
“These actions are causing irreparable harm to the security of our country and the safety of everyone, including everyone in this room, regardless of political affiliation,” Schneier explained. When he talks about present-day reality, he is sharply critical. Above all, his optimism is an appeal to the future.
In Switzerland, where Apertus was developed, public trust in AI is mixed. According to the 2025 National e-Government Study, 23% of people think AI should be used in public administration only in exceptional cases, while 40% support its use only where it clearly adds value. In a study on national security by ETH Zurich, also published in 2025, AI ranks last when it comes to public trust. Scoring 4.3 out of ten, it has dropped by a further 0.3 points compared with 2024.
Yet even here, there is forward-looking optimism. Dirk Helbing, a professor of computational social science at the University of Zurich, believes that “the path taken with Apertus should be pursued further.” It could be expanded to include “search engines and democracy-promoting platforms for civil-society projects”, he said.
Apertus could “perhaps even become an export hit”, Helbing added. He believes international partnerships could be beneficial for the model’s further development. More generally, he recommends “cooperation in the AI sector with democratic countries that are committed to human rights”, citing Japan, South Korea, Taiwan and India as examples.
At the same time, AI could also help to stabilise dictatorships built on mass surveillance – with transnational repercussions.
Watch our video from 2025 on the possible consequences of AI for democracy:
The fact that democracies struggle around the world, Helbing said, is also linked to “the path digitalisation and AI have recently taken.”
“Tech companies want the largest possible markets, but many people do not live in democracies. Software developed for autocratic systems also affects the software used here,” he said.
It is well known that language models “can manipulate us far more effectively than humans”, he added. Moreover, he said, systems that “work well today” could be running on a completely different algorithm tomorrow.
Helbing lists many reasons for pessimism, yet calls himself “optimistic because in the end, it must turn out well, otherwise we would have gone badly wrong for a very, very long time.”
“Unfortunately, there is little research” on how digitalisation could contribute to “freedom, human rights and democracy”, Helbing said. “Civil society initiatives like Open Data, Open Source, Open Access, Hackathons, Maker Spaces, Citizen Science as well as Participatory Budgeting” should be supported and “awareness of power abuse and the potential misuse of digital technologies” promoted.
“Anything that helps people take greater control of their own destiny should be supported,” Helbing said, as a way to bring a guiding principle of liberal society into the AI era. Science can make a major contribution, but Helbing is convinced that it is “high time” for politics to act. “We are being turned into data mines, and our human rights are restricted. We have to do something about this.”
Read our article where we explain the idea of an Assisted Democracy, which Bruce Schneier also mentioned:
Digital citizens could shake up democracy in Switzerland and beyond
Political philosopher Laetitia Ramelet is also aware of this risk. She studies the societal impact of technology at the Foundation for Technology Assessment (TA-Swiss) and considers the use of AI to “analyse our behaviour and our preferences” the greatest threat to democracy today.
“Professionals who are skilled in these methods” could use personalised recommendations and large volumes of content to “subtly influence people,” she said.
Ramelet also believes that AI is already directly influencing the design of voting and election campaigns. “Two things are certain because they are well documented. Written AI outputs can be very persuasive, and persuasiveness carries a lot of weight in a democracy,” she said.
AI models, she argues, can amplify biases, distortions and tendencies toward uniformity, at least if no preventive measures are in place. Ramelet, who studies deepfakes extensively, also sees the flood of quickly generated fake and misleading content as a risk for informed decision-making in a democracy.
Apart from the risks, Ramelet expects AI services to become an integral part of democratic institutions. She notes there are “many ongoing projects” and “initiatives to this effect”. In Switzerland’s public sector, she has observed that fundamental rights, data protection and control are being “taken seriously” in this process.
The current US government does not care about these issues. “Yes, the [US] government will continue to use AI to dismantle democracy as this is its goal,” said Schneier. “And those who oppose it will use AI to defend democracy.” AI does not change the balance of power but simply gives both sides more leverage.
Do you think AI can become a force for democracy? Tell us about it:
Edited by Marc Leutenegger. Adapted from German by Billi Bierling/gw
In compliance with the JTI standards: SWI swissinfo.ch is certified by the Journalism Trust Initiative.
If you want to start a conversation about a topic raised in this article or want to report factual errors, email us at english@swissinfo.ch.