FILE PHOTO: Silhouettes of mobile users are seen next to a screen projection of Facebook logo in this picture illustration taken March 28, 2018. REUTERS/Dado Ruvic/Illustration/File photo
By Julia Fioretti
BRUSSELS (Reuters) - The European Union is set to demand tech giants like Facebook <FB.O> and Google <GOOGL.O> do more to stop the spread of fake news on their websites by the end of the year to avoid possible regulatory actions, according to a draft document seen by Reuters.
The draft document sets out for the first time the measures the EU would like to see the tech giants take within a certain timeline. The companies have come under fire in Europe for not doing enough to remove misleading or illegal content, including incitement to hatred, extremism and the online sale of counterfeit products.
The European Commission plans to draw up a "Code of Practice" by July that will commit online platforms and advertisers to take a number of measures to prevent fake news being both uploaded and disseminated, "with a view to producing measurable effects by the end of 2018", the draft policy document says.
"Should the results prove unsatisfactory, the Commission may propose further actions, including actions of a regulatory nature, if necessary."
The measures include improving the scrutiny of advertisement placements, stepping up efforts to close fake accounts, ensuring that fighting disinformation is factored in by design when developing online tools and preventing the unauthorised use of users' personal data by third parties - a clear reference to the Cambridge Analytica scandal engulfing Facebook.
The revelations that political consultancy Cambridge Analytica - which worked on U.S. President Donald Trump's campaign - improperly accessed the data of up to 87 million Facebook users have hit the social network's share price and led to 10 hours of questioning for its CEO by U.S. lawmakers.
"So far, platforms have been unable to address the challenge posed by disinformation and some have turned a blind eye to the manipulative use of their infrastructures," the document says.
"The gravity of the threat, however, has become increasingly clear as exemplified by the recent revelations about personal data mined from social media used in an electoral context."
Facebook has stepped up fact-checking in its fight against fake news and is working to make posting such content uneconomical by lowering its ranking and making it less visible.
The world's largest social network is also working on giving its users more context and background about the content they read on the platform.
Some European countries have already moved to tackle the problem: Germany has passed a law requiring social media companies to quickly remove hate speech, and France is considering rules to block fake news.
Facebook disclosed in September that Russians under fake names used the social network to try to influence U.S. voters in the months before and after the 2016 election, writing about inflammatory subjects, setting up events and buying ads.
"Platforms have by and large failed to ensure sufficient transparency on political advertising and sponsored content," the Commission document - which is due to be published at the end of April - says.
The Commission also wants companies and advertisers to "establish clear marking systems and rules for bots and ensure their activities cannot be confused with human interactions."
(Editing by Alexandra Hudson)