Drawing the line between freedom of expression and discrimination was difficult enough in the pre-Internet era. Social media and instant communication have made it a nuanced minefield, as a case in Switzerland shows.
Last week in the western Swiss town of Delémont, an altercation between two boys outside the train station was filmed and posted online. The video showed one boy approach the other and throw him to the ground before the two went their separate ways.
Some 50,000 views and 20,000 shares later, the video was taken down by the mother of the assaulted teenager on the advice of local police. The reason? Many of the (hundreds of) comments below the video focused on ethnicity: the aggressor was black, the victim was white, and the discussion veered into a spiralling storm of abuse, much of it anti-immigrant.
Before the boy’s attacker had even been found, the regional prosecutor’s office had warned that any further comments inciting hatred or retribution would be pursued and examined by the justice system; the case was no longer a simple one.
The legal basis
Who and what can be prosecuted in such cases? The example shows both the difficulty of policing the internet and the growing sense that it is necessary to do so, given the rising incidence of online abuse, bullying, and harassment.
Freedom of expression, a similar concept to the American freedom of speech, has been firmly enshrined in Swiss law since the year 2000. Article 16 of the federal constitution guarantees every person the right to form, express, impart, and receive opinions and information.
Several international treaties and covenants to which Switzerland is a signatory also cement freedom of expression as a vital component of civic life – notably article 19 of the Universal Declaration of Human Rights and article 10 of the European Convention on Human Rights.
But the interface between this fundamental right and the obligations of citizens towards each other and the state is complicated.
In Switzerland, the flipside of free speech comes mainly in the form of three legal provisions: article 261 of the Criminal Code, which forbids racist and anti-religious statements; article 173, which outlaws attacks against “personal honour”; and article 28 of the Civil Code, which guarantees “personality rights”.
Under these regulations, cases with specific racist or anti-religious elements, including Holocaust denial, are directly punishable by up to three years in prison or a fine. Other instances, such as attacks on groups not explicitly named in the law – LGBT people, for example – are similarly punishable, but complainants must pursue them under the broader umbrellas of personal honour or personality rights.
Problems of policing the web
But enforcing such provisions poses a challenge.
First, the definitions of what constitutes racism, prejudice, dignity and so on are open to interpretation within the legal system; one man’s fact is often another man’s slur. According to the information platform humanrights.ch, there is no official definition of “hate speech” in Switzerland, though the United Nations has proposed a list of non-binding criteria to help distinguish it.
Next is the question of how to track the millions of comments appearing online each day. Le Temps reports that Switzerland recently got its first “ICoPs” – officers working exclusively online to follow Internet-based debates and intervene when necessary – but it remains to be seen whether such measures can get beyond the tip of the iceberg.
Finally, although police can initiate prosecutions in the case of racial and religious incitements, in other cases it falls upon the individual or group to file a complaint. For those not well-versed in the intricacies of freedom of expression, this may be difficult.
One group, netzcourage.ch, founded by a Zug politician who found herself at the centre of an internet storm following a sexual scandal in 2014, now offers legal and personal advice to those who feel they have been wronged.
For the moment, the onus remains on individuals and the state to enforce existing laws: either through the court system or through education programs focusing on online behaviour (the interior ministry is currently working on such a scheme).
In the wake of fake news scandals, some are also advocating a greater role for technology companies such as Facebook and Twitter in watching what goes on in their spaces. Currently, Facebook has a “report” feature, which allows users to flag inappropriate messages (Facebook then removes content that attacks people on racial, religious, or other grounds). Twitter sometimes freezes offensive accounts, and is currently planning an expansion of its policy to tackle hate speech and intimidation.
But these actions remain voluntary, at least in Switzerland. In May, the cabinet held off on following Germany’s example by imposing stricter laws on social media companies. For the time being, it said, existing law combined with the social media industry’s tendency to self-regulate should be sufficient to protect online users – but it also vowed to keep an eye on the issue.