The European Commission is expected to demand that Facebook, Google and Twitter alter their algorithms — and prove that they have done so — to stop the spread of online falsehoods, according to three people briefed on the proposals that will be published Wednesday.
Under the new rules, which must be negotiated with the world’s largest social media companies after they are published, Brussels will also require the firms to disclose how they are responding to the spread of disinformation on their platforms; to explain what measures they are taking to remove or demote specific content or accounts that promote falsehoods; and to provide online users with greater transparency on how they are targeted with digital ads.
The measures would mark the farthest any country or region has gone in forcing tech companies to disclose the inner workings of the algorithms used to populate social media feeds. These machine-learning tools have been criticized for promoting viral hateful or false content, including material associated with the COVID-19 pandemic, over more mainstream sources. The companies deny wrongdoing.
The upcoming announcement, which the three people spoke about on the condition of anonymity because they were not authorized to speak publicly, is part of a revamp of the Commission’s so-called code of practice on disinformation.
The voluntary pact was signed between Brussels and the world’s largest social media players in 2018. The European Court of Auditors, a body that checks how EU funds are spent, is expected to say in a report to be published next week that the current agreement does not hold the platforms accountable for their role in spreading disinformation.
The code of conduct aims to provide greater transparency over how companies combat online falsehoods — first ahead of the 2019 European parliamentary elections and now during the ongoing pandemic — by requiring the firms to publish regular updates on how they are tackling misinformation.
Those rules are now being rewritten ahead of the bloc’s Digital Services Act, a series of separate proposals that will target harmful online content and the sale of illegal goods. They include fines of up to six percent of annual revenue if companies do not stop the spread or sale of such material online.
As part of that structure, the largest social media companies will have to publicly assess vulnerabilities in their online systems, including their algorithms. Planned measures within the Digital Services Act include external auditing of how firms intend to stop the spread of misinformation and a beefed-up role for the Commission and national regulators to police potentially bad behavior.
Bring on the DSA
The revamped code of practice to be published on Wednesday will again be voluntary, until the Digital Services Act becomes law, likely in two years’ time. But it will include measures that will eventually be used to comply with the Digital Services Act, including the disclosure of how online falsehoods spread and how many accounts platforms have removed or demoted.
If social media companies sign up to the code, according to two of the people briefed on Wednesday’s announcement, they will be able to use the standards to prove they are assessing and mitigating the risk of online falsehoods spreading on their platforms, and therefore avoid hefty penalties. If they deviate from these commitments, they added, they would become liable for potentially multi-million euro fines when the Digital Services Act becomes law.
Under the proposals to be published Wednesday, social media companies will face greater limits on how they allow advertisers to target people online via digital ads. That will include a requirement to publish more data on how these paid-for messages can pinpoint people online, as well as to allow advertisers to better understand alongside which online content their ads are displayed.
While the Commission is expected to announce these wide-ranging proposals Wednesday, it still must hammer out the fine details with the companies, many of which have balked at allowing outside groups greater visibility into how their algorithms operate or how falsehoods spread on their platforms.
The Commission and Twitter declined to comment. Google said it was looking forward to discussing the new code of conduct with Brussels. A representative for Facebook was not immediately available to comment.
UPDATED: This article was updated to include information about the European Court of Auditors’ upcoming report.