As Americans vote in highly anticipated mid-terms, across the Atlantic there is growing concern over disinformation and manipulation derailing democratic processes. Will Switzerland be spared the onslaught of “fake news” campaigns ahead of next year’s general elections?
It’s a scenario that has become increasingly commonplace, surfacing most recently in the lead-up to a referendum in Macedonia on changing the country’s name, a long-standing barrier to its entry into NATO and the European Union. Trolls, fake accounts and bots (automated accounts) peddle divisive narratives and false information on Twitter and Facebook, in this case to convince citizens to boycott the vote when a 50% turnout is needed to validate the result. In the end just 34% of the Macedonian electorate cast their ballot, and the outcome – an overwhelming yes – was null and void.
Ever since the highly contested US presidential election two years ago this month brought the problem into focus, “fake news” and interference during elections have preoccupied political leaders, the media and ordinary citizens alike.
Even in Switzerland, known for its politics of compromise and low levels of polarisation, nationwide votes have not been immune to manipulation, as a recent study showed. But while neighbouring countries are experimenting with legislation and specialised units to combat online disinformation (see infobox), the Swiss are taking a “let’s watch” approach. It’s something they can afford to do – for now.
What’s ‘fake’ and what’s not
Since Donald Trump was elected US president amid accusations that misinformation on social media had influenced voter behaviour, the term “fake news” has become ubiquitous, with one source reporting a 365% increase in its use.
But it’s also been highly misconstrued. In Switzerland, debate about the phenomenon intensified in late 2016 and 2017, according to Linards Udris, a media expert at the University of Zurich’s Research Institute for the Public Sphere and Society (fög). Half of Swiss respondents to a 2018 poll for the Reuters Institute said they were concerned about “fake news”, even though few (13%) had actually encountered it in the previous week.
Udris defines “fake news” as completely made-up stories, yet for many, the definition is much broader: “If politicians get something wrong, if it’s poor journalism or journalists make a mistake or report sloppily – ‘fake news’ is simply what I don’t believe,” he says.
In the US in particular, “fake news” is being used to attack political opponents and journalists whose stories people just don’t agree with, rhetoric that can be contagious and damaging for the media industry.
“When a politician makes a [false] claim, instead of saying ‘you lied’, people say it’s ‘fake news’, making a direct connection between [misinformation] and the news media,” says Udris. “And that’s a problem.”
Despite concerns about disinformation, trust in Swiss media remains high among the population. The reality is that fabricated news sites are still rare for reasons linked to the small size of the Swiss electorate, weak polarisation, a diverse mainstream media that remains the preferred forum for political debates, and relatively limited political discussions on social media.
In 2017 the government decided that no new regulation was needed to combat disinformation. Instead the cabinet has said it will keep a close eye on developments in Switzerland and abroad. For now, there is no coordinated plan at the federal level to prevent disinformation and interference during next year’s general elections, the Federal Chancellery told swissinfo.ch. A spokesperson pointed to a Federal Council statement last spring reiterating that it continues to monitor the situation.
Targeting political campaigns online
This approach is not entirely misplaced, according to experts. “Fake news” in the strict sense of the term has not affected Swiss political campaigns as it has in other countries, says Udris.
Political conversations on social media also happen on a relatively small scale, so fewer voters are exposed to misleading content. And filter bubbles – where people with similar ideas talk only to each other and “fake news” tends to flourish – are not the dominant form of debate. This correlates with an observation that Stefan Gürtler made when he and his research team examined thousands of tweets that circulated around the campaign to scrap the public broadcasting licence fee (Billag), which came to a nationwide vote in March 2018.
“The debate may not always be polite, but [ideological opponents] exchange messages,” says the lecturer at the University of Applied Sciences and Arts Northwestern Switzerland (FHNW). “This is a sign that the digital communication culture is different in Switzerland.”
Nevertheless, the discussion on the “No Billag” initiative was highly polarised by Swiss standards and those who wanted to influence voter views took full advantage. In the two months leading up to the vote, the FHNW team found that cyborgs – people with technical assistance – were sending up to 1,000 messages per day on Twitter. Fifty users – all cyborgs – generated half of the conversation on No Billag. Those in favour of abolishing the licence fee had a 55% share of the conversation.
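The concentration the FHNW team measured – 50 accounts generating half of a campaign’s traffic – is the kind of pattern that is straightforward to surface from raw message data. Below is a minimal sketch of that idea (the data and the function name are hypothetical illustrations, not the researchers’ actual tooling): given a list of (user, message) pairs, it finds the smallest group of accounts responsible for at least half of all messages.

```python
from collections import Counter

def conversation_concentration(tweets, share=0.5):
    """Return the smallest set of (user, count) pairs that together
    account for at least `share` of all messages, heaviest first."""
    counts = Counter(user for user, _ in tweets)
    total = len(tweets)
    top_users = []
    covered = 0
    for user, n in counts.most_common():
        top_users.append((user, n))
        covered += n
        if covered / total >= share:
            break
    return top_users

# Toy data: three heavy posters dominate a debate of many occasional voices.
tweets = ([("heavy_a", "msg")] * 40 + [("heavy_b", "msg")] * 35 +
          [("heavy_c", "msg")] * 25 +
          [(f"user_{i}", "msg") for i in range(100)])

heavy = conversation_concentration(tweets, share=0.5)
print(heavy)  # the few accounts producing half the traffic
```

On real data, a follow-up step would be to inspect the flagged accounts’ posting rates – the FHNW study’s cyborgs sent up to 1,000 messages a day, far beyond plausible manual activity.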
Alarming as the findings are, Gürtler says it’s unlikely that this scale of manipulation happens during every Swiss vote campaign.
At the same time, both he and Udris believe that political parties are likely to make greater use of social media in the run-up to next year’s general elections than they have in the past. Gürtler points out that Facebook recently ran a workshop for Swiss politicians to improve their social media skills. He says to expect “advances in the sheer number of messages” before next October’s elections.
More conversations, more manipulation
As more people go online to talk politics, says Gürtler, the level of manipulation is likely to rise as well.
Today, anyone with basic skills can find a recipe online for programming a bot in 30 minutes, which can be used to quickly spread disruptive content. There are also bot factories on the dark net, he says, that make it easy to buy them in the thousands.
“The technology of manipulation is advancing much faster than the technology of manipulation detection,” Gürtler cautions.
He also claims that “a few parties have bought software […] to organise followers”, software that can also be used to do psychometric targeting, which involves taking users’ social media data to create voter profiles for targeted messaging.
Although it’s unclear if campaigners will go down that route, the case of consultancy firm Cambridge Analytica improperly obtaining millions of Facebook users’ data to sway voters in the US and the UK has raised alarm bells. The Swiss Federal Data Protection and Information Commissioner (FDPIC) has set up an expert working group focused exclusively on safeguarding voters’ privacy during the 2019 election campaign. FDPIC spokesperson Hugo Wyler told swissinfo.ch the group will issue a paper to alert tech companies, political parties and strategists of the relevant aspects of Swiss law they should bear in mind when engaging with voters. It will also inform the public of any possible breaches during the campaign.
Meanwhile, Gürtler and his team are working on setting up real-time monitoring of online discussions, “so people can see which topics or candidates are prone to manipulation.” The idea is to arm citizens with knowledge so they can decide which content to consume. It’s a sign that tech platforms themselves are not doing nearly enough to combat disinformation and interference despite coming under intense pressure to do so in the last two years.
Twitter – which, like other social media sites, has clear user rules – closed only a handful of problematic accounts during the No Billag campaign, the researcher points out, adding that if social media sites “would live up to their own regulations, communications on these platforms could look very different and be controlled in a much better way.”
Safeguarding elections in Europe
With close to 20 major elections planned before 2020, Europe is not fully prepared to fight interference in democratic processes. That’s according to the Transatlantic Commission on Election Integrity, created in 2018 to help governments tackle this growing problem. Here is a selection of measures European governments have taken in recent months.
Believing that European parliamentary elections in May 2019 are the next big target for disinformation campaigns, the European Union has been stepping up efforts. The EU Commission has held public consultations, set up an expert group and outlined a common approach. In September it introduced a code of practice for tech companies as a way to compel them to self-regulate. Many observers, however, are sceptical the code will have any measurable impact.
In France, parliamentarians approved draft legislation this July designed to fight “fake news” by giving courts the ability to rule whether contested news reports and manipulated content should be removed during election campaigns. The law, which some have called useless and unenforceable, would also force social media platforms to reveal the buyers of sponsored content once it goes into force in 2019.
Months before general elections took place in Sweden this autumn, the government announced plans to create a new national body to tackle disinformation and foreign influence campaigns by promoting factually accurate content.
A parliamentary committee in the UK spent 18 months looking into the subject before releasing a set of recommendations this July on how the government can address disinformation. The UK, which believes the Russians were behind efforts to spread false information during the Brexit referendum, has also set up a dedicated unit to fight disinformation “by state actors and others”.