The EU is stepping up its fight against disinformation spread by accounts on Facebook, Google, Twitter and other social media with a bolstered code of practice presented Thursday.
The text — a reinforced revision of the code originally launched in 2018 — aims to push the platforms to more urgently and systematically crack down on false advertising, demonetise accounts spreading disinformation and boost fact-checking.
The code, obtained by AFP ahead of publication, remains voluntary.
But officials expect around 30 bodies to sign up, among them some of the biggest social media companies — Facebook parent Meta, Twitter, Microsoft and TikTok — along with big advertising corporations.
They all had input into the drafting of the updated code, along with fact-checking outfits and media watchdogs such as Reporters Without Borders (RSF, by its French initials).
It was to be presented Thursday by the EU commissioner for transparency, Vera Jourova, and internal market commissioner Thierry Breton.
The new code of practice contains some 40 commitments, around double the number in the previous version, along with indicators to measure how well they are being met.
While the previous text relied on self-regulation, the new one holds the biggest platforms — those with at least 45 million users in the EU — to binding measures set out in the bloc’s Digital Services Act (DSA).
The DSA, in the process of being adopted, requires big online companies to reduce risks linked to disinformation or face fines of up to six percent of their global turnover.
By signing up to the code, they can show they are taking “risk-mitigating” measures demanded by the DSA.
One of the main innovations in the code is to stop platforms from earning advertising revenue from ads placed on sites carrying disinformation.
“The platforms shouldn’t be getting even a single euro from spreading disinformation,” Breton said.
“From Brexit to the Russian war in Ukraine — over the past years well-known social networks have allowed disinformation and destabilisation strategies to spread without restraint, even making money out of it,” he said.
– Reinforced fact-checking –
Platforms that serve ads, such as Google parent Alphabet — whose YouTube platform is used each month by nearly a quarter of the world's population — pledge to block advertisements containing conspiratorial content and to verify where they come from.
They also commit to actively counter ads containing disinformation.
The signatories to the code have to give users tools to identify and report false or misleading information, and they need to cooperate more closely with fact-checkers in all EU languages. The fact-checkers also get added support, notably by having access to aggregated, anonymised data.
Unlike illegal content, disinformation won’t be subject to immediate deletion because of the principle of freedom of expression.
Rather, it will trigger prompts directing users to reliable sources of information, notably ones meeting standards set by the Journalism Trust Initiative, of which RSF and AFP are partners.
The platforms also commit to making political advertising more transparent, clearly labelling such ads as political and letting users know why they were targeted.
The signatories commit to cracking down on fake accounts and the amplification of disinformation via bots, as well as identity theft and malicious deepfake videos.