It was long promised that the Digital Services Act would only be used for taking down illegal content. That scope now appears to be expanding.
If it weren’t for Canada’s open war on the open internet, one of the controversies we would be covering is the one surrounding the Digital Services Act in Europe. After all, we’re based out of Canada and threatened with laws that could shut down core elements of our online operations, all while we report the news with real knowledge of how the internet works, so could you really blame us? Still, we’ve had moments to breathe from time to time, and one of those moments to cover international stories revolved around the Digital Services Act in Europe.
Back in 2021, we managed to cover this very topic. At the time, one of the major concerns was laws requiring the filtering of live streams, ordering platforms to block anything that could be construed as infringing. Live sporting events were often used as an example of this. The problem is that enforcing such requirements opens up the very real risk of false positives, a lack of accountability, and a whole host of other problems. False DMCA takedowns are already bad enough; imagine how much worse it will be when platforms are expected to process takedowns in real time.
Apparently, this debate has evolved since then. Definitions were expanded and illegal speech was thrown into the mix, raising the possibility of censorship along the way. Officials insisted that such provisions would only be used to take down illegal content, not anything related to misinformation. Otherwise, we would be talking about government acting as an arbiter of truth, blowing the situation wide open for suppressing politically inconvenient speech.
It’s that line between taking down illegal content and so-called “misinformation” that is apparently blurring. Since the war between Hamas and Israel broke out, there has been a noticeable uptick in racist content and hate speech. The European Commission took the opportunity to issue letters to various platforms warning of a speech crackdown. From Fast Company:
Misinformation has continued to spread rapidly online in the 10 days since the outbreak of the Israel-Hamas war, with doctored images and mislabeled videos spreading false claims about everything from the nature of the attacks to the extent of U.S. aid to Israel.
Almost immediately after the attack by Hamas, the European Commission responded to the surge in false information by issuing a series of stern warnings to major tech companies, including Meta, X, TikTok, and YouTube, saying their platforms are being used to disseminate “illegal content and disinformation” and urging “mitigation measures” to prevent any further harms caused by such content. The goal, according to the letters—which European Commissioner Thierry Breton posted on X—is to ensure social media companies are complying with their duties under Europe’s newly enacted Digital Services Act (DSA), a sweeping piece of legislation that imposes new content moderation obligations on social media platforms operating in the EU.
But experts on the DSA argue that by publicly pressuring platforms to remove what it deems to be “disinformation,” the Commission risks crossing a line into the very kind of censorship that the legislation was crafted to avoid.
“This is a huge PR fail for the Commission,” says Daphne Keller, director of the Program on Platform Regulation at Stanford’s Cyber Policy Center. “They want to evangelize the DSA as a model, and Breton is instead showing the world what looks like huge potential for abuse.”
Mike Masnick of Techdirt has been noticing this dangerous trend for a while now. Back in October, he wrote the following:
Some of us have been warning about the dangers of the Digital Services Act (DSA) in the EU for quite some time, and pointed out that Elon Musk was effectively endorsing censorship in May of 2022 (after announcing his plans to purchase then-Twitter) by meeting with the EU’s Thierry Breton and saying that the DSA was “exactly aligned” with his thinking about his plans for Twitter content moderation. As we pointed out at the time, this was crazy, because the DSA is set up to position the EU government as ultimate censors.
Nearly a year ago, I got to moderate a panel at the EU’s brand new offices in San Francisco (set up for the new EU censors to be closer to the internet platforms), where I was told repeatedly by the top EU official in that office, Gerard de Graaf, that there was no way that the DSA would be used for censorship, and that it was only about “best practices,” (while then admitting that if bad content was still online, they’d have to crack down on companies). It was clear that the EU officials were doing a nonsense two-step in these discussions. They will insist up and down that the DSA isn’t about censorship, but then immediately point out that if you leave up content they don’t want, it will violate the DSA.
Indeed, as the DSA has now gone into effect, last month EU officials released a document that reveals the DSA is very much about censorship. The boring sounding “Application of the risk management framework to Russian disinformation campaigns” basically says that failing to delete Kremlin disinformation likely violates the DSA.
No matter what you think of Russian disinformation tactics, we should be very, very concerned when governments step in and tell companies how they must moderate, with threats of massive fines. That never ends well. And the EU is already making it clear that they view the DSA as a weapon to hold over the heads of websites.
On Tuesday, the very same Thierry Breton who Elon Musk insisted he was “aligned” with tweeted a letter addressed to Musk (notably not company “CEO” Linda Yaccarino) basically telling him that exTwitter needs to remove disinformation about the Hamas attacks in Israel.
Now, there’s no doubt that there have been tremendous amounts of disinformation about the attacks flooding across exTwitter (and if I can find the time to finish it, I have another article about it coming). But no matter what you think of that, it should never be the job of the government to step in and threaten websites over their moderation practices. That never leads to good results, and always (always, always) leads to abuse of power by the governments to silence dissent and marginalized voices.
Recently, Masnick wrote a followup article talking about these growing concerns:
I noted that the framers of the DSA have insisted up, down, left, right, and center that the DSA was carefully designed such that it couldn’t possibly be used for censorship. I’ve highlighted throughout the DSA process how this didn’t seem accurate at all, and a year ago when I was able to interview an EU official, he kept doing a kind of “of course it’s not for censorship, but if there’s bad stuff online, then we’ll have to do something, but it’s not censorship” dance.
Some people (especially on social media and especially in the EU) got mad about my post regarding Breton’s letters, either saying that he was just talking about illegal content (he clearly is not!) or defending the censorship of disinformation as necessary (one person even told me that censorship means something different in the EU).
However, it appears I’m not the only one alarmed by how Breton has taken the DSA and presented it as a tool for him to crack down on legal information that he personally finds problematic.
Masnick noted not only the Fast Company article, but also another article that raised similar concerns:
Firstly, the letters establish a false equivalence between the DSA’s treatment of illegal content and “disinformation.” “Disinformation” is a broad concept and encompasses varied content which can carry significant risk to human rights and public discourse. It does not automatically qualify as illegal and is not per se prohibited by either European or international human rights law. While the DSA contains targeted measures addressing illegal content online, it more appropriately applies a different regulatory approach with respect to other systemic risks, primarily consisting of VLOPs’ due diligence obligations and legally mandated transparency. However, the letters strongly focus on the swift removal of content rather than highlighting the importance of due diligence obligations for VLOPs that regulate their systems and processes. We call on the European Commission to strictly respect the DSA’s provisions and international human rights law, and avoid any future conflation of these two categories of expression.
Secondly, the DSA does not contain deadlines for content removals or time periods under which service providers need to respond to notifications of illegal content online. It states that providers have to respond in a timely, diligent, non-arbitrary, and objective manner. There is also no legal basis in the DSA that would justify the request to respond to you or your team within 24 hours. Furthermore, by issuing such public letters in the name of DSA enforcement, you risk undermining the authority and independence of DG Connect’s DSA Enforcement Team.
Thirdly, the DSA does not impose an obligation on service providers to “consistently and diligently enforce [their] own policies.” Instead, it requires all service providers to act in a diligent, objective, and proportionate manner when applying and enforcing the restrictions based on their terms and conditions and for VLOPs to adequately address significant negative effects on fundamental rights stemming from the enforcement of their terms and conditions. Terms and conditions often go beyond restrictions permitted under international human rights standards. State pressure to remove content swiftly based on platforms’ terms and conditions leads to more preventive over-blocking of entirely legal content.
Fourthly, while the DSA obliges service providers to promptly inform law enforcement or judicial authorities if they have knowledge or suspicion of a criminal offence involving a threat to people’s life or safety, the law does not mention a fixed time period for doing so, let alone one of 24 hours. The letters also call on Meta and X to be in contact with relevant law enforcement authorities and EUROPOL, without specifying serious crimes occurring in the EU that would provide sufficient legal and procedural ground for such a request.
It should come as no surprise that a hard line people draw is government dictating how rules surrounding speech should be enforced on private third party platforms. It would be similarly unacceptable for a government to dictate exactly what kind of content news organizations publish, so why should social media be any different? Modern societies have adopted freedom of expression rights for very good reasons. You don’t exactly have a free society when a government is dictating what can and cannot be said. Yes, there are guardrails surrounding obviously illegal content like CSAM, but when you move beyond that to speech that is otherwise legal, you start running into problems in a very real hurry.
I don’t like what’s going on with this recent war. Given the long history between the two sides, I remain unconvinced that there are any real good guys in any of this, and it is a war that has already cost many lives on both sides, which I don’t like seeing at all. However, when we start talking about policing legal speech online, that’s where I start having problems that require me to speak out here, because now you are treading on my territory of coverage. It looks like the European Commission is taking advantage of the situation to gain even more power over what people can and cannot say online, and that, in my opinion, is a problem.
Drew Wilson on Twitter: @icecube85 and Facebook.
Maybe the Digital Services Act is Europe looking at the uptick in global instability and recognizing it needed to be proactive to inoculate itself against the kind of conflicts and conflagrations that happened about 80 or so years ago? Or get more proactive after seeing what’s going on in the U.S. where lies are allowed to spread unchecked and we see how that all turned out?
Regarding “Legal Online Speech”: What’s legal elsewhere isn’t necessarily legal in Europe. Multiple countries have laws against Hate Speech on the books, among other restrictions.
I think that Masnick and others are just worked up over how the EU is not taking the same Free Speech Free-For-All approach that’s popular in techno-libertarian circles.