
Long battle ahead to curb fake news

[Image: Facebook headquarters in Menlo Park, California: the tech giant has announced plans to limit the spread of fake news, but whether it'll succeed remains to be seen. Keystone]

Swiss and European researchers are working on algorithms to detect misinformation circulating on social media but caution that training machines to do the work is no easy task.

Misinformation hit international headlines in 2016, peaking with accusations that fake news on Facebook helped win Donald Trump the White House. After initially denying that false information had influenced voters, the world’s most popular social network began testing measures to limit the spread of hoaxes on its site.

From giants like Google to solitary tech nerds, others are also springing into action. Yet those who began studying the growth of misinformation well before the unexpected results of the American presidential election brought the problem to the fore caution that experts face an uphill battle against fake news.

“It’s a race between machines and people (fabricating information) for fun, political agenda or money,” says Kalina Bontcheva, a professor at the University of Sheffield in the United Kingdom.

The work in this area by computer scientists like Bontcheva and by news organisations, including swissinfo.ch, reveals just how difficult it is to limit the spread of lies and distortions on social media.

Detecting false information

Facebook CEO Mark Zuckerberg announced a plan for curbing the spread of fake news on the platform that includes “stronger detection … to improve our ability to classify misinformation.” Bontcheva likens technology that can do this to email spam filters. But its powers would likely be limited.
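To make the spam-filter analogy concrete, the short Python sketch below trains a toy text classifier on a handful of invented headlines. The data, labels and model choice are purely illustrative assumptions, not details of Facebook's actual system.

```python
# A minimal, illustrative bag-of-words classifier in the spirit of a
# spam filter. The headlines and labels below are invented toy data;
# real detection systems train on far larger, curated corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Pope endorses candidate in shock announcement",     # toy "fake" example
    "Parliament passes revised budget after debate",     # toy "real" example
    "Miracle cure suppressed by doctors, insiders say",  # toy "fake" example
    "Scientists publish peer-reviewed climate study",    # toy "real" example
]
labels = ["fake", "real", "fake", "real"]

# TF-IDF features plus logistic regression: the same basic recipe used
# by classic email spam filters, applied here to news headlines.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

print(model.predict(["Secret memo reveals miracle cure cover-up"]))
```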

Fake news made in Switzerland

Fake news sites have cropped up in Switzerland, but they are few in number, and Linards Udris says their following and reach are also limited. One possible reason for this is the size of the country.

“For those who would want to make money (from fake news), it wouldn’t be possible” here, given the relatively small domestic market for news, says the media researcher at the University of Zurich.

Another likely factor, he says, is the comparatively low level of polarisation in Swiss politics, as hyper-partisanship is a characteristic of many fake news sites, particularly in the US.

Still, Udris cautions that polarisation is growing in Switzerland, and as more people get their news from social media, experts will need to keep a close eye on how the fake news landscape evolves.

“Fake news sites popping up for monetising purposes are easy to detect,” says Bontcheva. “The more difficult ones are the claims with hidden agendas, because they’re a lot more subtle” and therefore harder for machines to detect.

A research project she leads is trying to address this challenge. Named Pheme and funded by the European Commission, the project brings together IT experts, universities and swissinfo.ch to devise technologies that could help journalists find and verify online claims.

“We’re trying to use a lot of past rumours as training data for machine learning algorithms,” Bontcheva explains. “We’re training models to spot the opinions of users about a claim, and based on that pick out how likely something is to be true or false.”
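To picture the second half of that sentence, turning user opinions into a truth estimate, here is a deliberately simple Python sketch. The stance categories mirror those used in rumour-stance research; the weights and scoring rule are assumptions for illustration, not Pheme's published method.

```python
# A toy version of the stance-aggregation step: given the predicted
# stance of each reply to a rumour, combine the stances into a rough
# credibility score. The weights and threshold here are invented.
from collections import Counter

STANCE_WEIGHT = {"support": 1.0, "deny": -1.0, "query": -0.25, "comment": 0.0}

def rumour_score(reply_stances):
    """Return a score in [-1, 1]: positive leans true, negative leans false."""
    if not reply_stances:
        return 0.0
    return sum(STANCE_WEIGHT[s] for s in reply_stances) / len(reply_stances)

# Stances for six hypothetical replies, as a stance classifier might label them.
replies = ["support", "deny", "deny", "query", "comment", "deny"]
score = rumour_score(replies)
print(Counter(replies), f"score={score:+.2f}",
      "leans false" if score < 0 else "leans true")
```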

Machines are learning, albeit slowly

It may sound straightforward, but training machines to give a clear indication of whether a text is credible or not is a complex task. Scientists must combine approaches, mining both the history of social networks and the content of individual posts to pick out patterns for credible and questionable content alike, says data scientist Pierre Vandergheynst.

“No one has cracked this nut yet,” says the professor at the Federal Institute of Technology Lausanne (EPFL), who studies how information evolves on platforms like Wikipedia. “You can read a text and decide if you should trust it, but a machine does not have the cognitive reasoning to do this.”
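One hedged sketch of what combining those approaches could look like in code: text features from each post concatenated with account-history features, feeding a single classifier. The specific features and toy data below are assumptions for illustration, not a published design.

```python
# A sketch of mixing content and network-history signals: text features
# are concatenated with features describing the posting account, and
# one classifier sees both. All data points here are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

posts = [
    "BREAKING: shocking secret they don't want you to know",
    "City council approves new tram line after public vote",
]
# Per-author history: [account_age_days, followers, previously_flagged_posts]
network = np.array([[12, 40, 7], [2400, 900, 0]], dtype=float)
labels = [0, 1]  # 0 = questionable, 1 = credible (toy labels)

vectorizer = TfidfVectorizer().fit(posts)
X = np.hstack([vectorizer.transform(posts).toarray(), network])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)

# Score a new post together with its author's (hypothetical) history.
new_text = vectorizer.transform(["Miracle pill melts fat overnight"]).toarray()
new_net = np.array([[5, 10, 3]], dtype=float)
print(clf.predict(np.hstack([new_text, new_net])))
```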

Bontcheva admits the development of this technology is still in its early stages.

“It has been three years of experimentation and it’s still far from the level of reliability that we need.”

But she believes Pheme researchers have moved things forward since the project began.

“The technology is getting better and we’ve pushed the state of the art,” she says, adding that project partners have also contributed a large amount of data to the field. “When we started, there weren’t many social media rumours (to use as training data).”

Indeed, researchers often run into the problem of a lack of access to data held by Facebook and other social networks. But the sheer volume of information these companies handle is also an issue for the tech giants themselves, says Bontcheva: they must develop systems that can pick out suspicious content from the enormous number of posts users share every day.

Not all tools are created equal

In addition to Facebook and Google, which have both announced plans to curb fake news on their sites, tech-savvy users are also trying their hand at fighting online misinformation. Among the solutions that cropped up as fake news became big news in late 2016 is a tool cheekily named “BS Detector”, developed by a technologist in the United States. Daniel Sieradski told the media that he created the web browser plug-in, which detects and flags “questionable” news sources on the basis of a list of fake news sites, “in about an hour”.

This method sounds similar to a spam system, says Vandergheynst. And it has its weaknesses.

“You’d need to have a list of all the potential fake news sites” out there for the plug-in to be effective, he says. Even then, it would fail to detect rumours that originate with social media users unaffiliated with such sites and are then picked up by mainstream news outlets.
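The blocklist approach itself fits in a few lines of code, which is part of why it is both easy to build and easy to evade. A minimal Python sketch, with placeholder domains rather than entries from BS Detector's actual list:

```python
# An illustrative blocklist checker in the style of such plug-ins:
# flag a link when its host matches a curated list of fake news
# domains. The domains below are placeholders. Requires Python 3.9+
# for str.removeprefix.
from urllib.parse import urlparse

FAKE_NEWS_DOMAINS = {"example-hoax-news.com", "totally-real-reports.net"}

def flag_url(url):
    """Return True if the URL's host (or a parent domain) is blocklisted."""
    host = urlparse(url).netloc.lower().removeprefix("www.")
    return any(host == d or host.endswith("." + d) for d in FAKE_NEWS_DOMAINS)

print(flag_url("https://www.example-hoax-news.com/aliens-back-candidate"))  # True
# The weakness noted above: a rumour on an unlisted, otherwise
# legitimate site sails straight through.
print(flag_url("https://legitimate-paper.ch/rumour-from-twitter"))  # False
```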

Censorship

Another issue is how to maintain users’ trust in a system that decides which posts contain false information.

“Tech companies need to be totally transparent about how they decide what makes a fake news site,” says Linards Udris, a Swiss media expert at the University of Zurich.

Bontcheva agrees. To avoid accusations of censorship, she says, Facebook could give users the option of seeing questionable content in a separate feed, similar to how email inboxes contain a spam folder that people can open at will. Facebook is taking a different tack, piloting a system for flagging “disputed” stories and warning users as they share these items.

The risk of censorship also limits the possibility for states to restrict information. Udris sees little point in introducing new legislation, pointing out that current libel laws – at least in Switzerland – are one way to deal with cases of false, incendiary claims targeting specific persons or groups. But governments could focus their attention elsewhere.

“Tech companies have few commercial incentives” to limit fake news, says Udris, deputy director of the Research Institute for the Public Sphere and Society. When such stories go viral, they help to generate revenue for social platforms. So the state could offer tax breaks, for example, to those firms that take steps against misinformation.

The human factor

Other actors need to get involved. Facebook is testing ways for users and third parties, including fact checking organisations, to help identify misleading posts. But journalists too must be part of the solution.

“The problem is when legitimate (news) websites pick up false information and spread it,” says Pierre Vandergheynst. “At that moment, it’s given the seal of authenticity. That cycle has to be broken.”

With media outlets cutting resources to stay afloat, Udris wants to see “a wider debate about how good journalism can be fostered in society.” Public broadcasting is critical, he adds.

“It’s one important pillar where people get high-quality, diverse, verified information.”

The onus is also on online users to become more discriminating news consumers. Udris points to studies showing that fewer than half of those surveyed who get their news on social media pay attention to the source of the information they’re reading.

“Critical thinking is needed,” he says, suggesting there is a need for stronger media education for youth who, according to a recent study by the Reuters Institute, are more likely than other age groups to consume news primarily on social media. He also believes that paying for online news can help people to make more critical choices about which outlets to turn to.

Yet, even with efforts from all sectors, the spread of misinformation cannot be stopped completely, and Udris says not to expect short-term miracles.

“Rumours are part of human nature,” he points out.

It’s a thought echoed by Pierre Vandergheynst.

“In the end, the web did not invent conspiracy theories,” says the EPFL researcher.

“It just made them spread quicker, because instead of the local pub you hear them on Facebook.”

