There’s already been a lot of talk about a possible web filter in the EU, and now the Computer & Communications Industry Association (CCIA), a group representing a number of IT-related companies, has come out against it.
Note: This is an article I wrote that was published elsewhere first. It has been republished here for archival purposes.
There’s been a lot of discussion in Europe about the possibility of an EU-wide filter. Topics such as a mandatory blacklist have been under discussion among EU nations for some time, and the consensus among those who know a thing or two about the internet has largely remained that such a filter would never work in the long run.
Now that there is word of a potential EU-wide filter, the discussion can only escalate from here. Proponents argue that it’s supposed to stop child abuse and child pornography: if websites were blocked, access could not be gained, and such illegal content would therefore be stemmed. It sounds very simple, particularly if you don’t know exactly how the internet works.
“There is a real danger that this proposal will have unintended consequences,” Ed Black, president of the CCIA, said in an interview.
“We oppose this idea partly because it is an inefficient way to combat online child abuse, but also because it builds on efforts by governments around the world to block what they don’t like on the Net,” he said.
Computerworld is reporting that the European Commission has already poured €300,000 into lobbying in favour of proposed laws that would put an EU-wide filter in place.
Whether Black knows it or not, there are already unintended consequences. During a talk in Sweden, an anti-piracy organization made the very disturbing comment that “child pornography is great” while hoping to incorporate copyrighted material into the filter.
Believe it or not, there is already precedent for mass-scale filtering systems, and every such filter has essentially ended in disaster one way or another. Perhaps the most famous example is the Great Firewall of China. While China has been trying for some time to crack down on what is discussed on the internet in its own country, with, ironically enough, aid from technology sold by US firms, the filter has yet to be completely successful: dissenting voices still make it out of China through programs like Adopt a Blog and heavy encryption. The US government has been a vocal critic of China’s human rights record.
In Australia, there have been numerous attempts to filter the internet. The last time Australia’s government successfully put a porn filter in place, it ended in total disaster, sparking the now-famous headline, “Teen cracks AU $84 million porn filter in 30 minutes”. Since then, the idea of an Australia-wide filter, which would capture far more than child abuse material (contrary to what advocates for the filter so often claimed), has been fiercely debated. In 2009, amidst a renewed effort to implement such a filter, the Australian blacklist leaked, with definite evidence that abuse had occurred: sites as innocent as a dental clinic wound up on that blacklist.
Thailand also had its attempt at filtering the internet. You’d think a filter might be more successful in a country like Thailand, considering how much governmental control there is. That sense of power came crashing down in 2008, when the entire Thai blacklist leaked.
The reasons why web filtering will never work became very apparent when it was debated in Australia. The scope, as one filtering company found, was particularly difficult. It may be possible to get some measure of success on websites through direct keyword matching, but throw in any other protocol, such as p2p traffic or even some messaging systems, and the filter runs into serious trouble trying to catch and block everything. Throw in a little encryption and it’s pretty much game over for the filter. From a technical standpoint, it’s effectively impossible to filter anything online, because there will always be a way to circumvent it one way or another. The internet was initially designed to be a communication system that would survive a nuclear strike; short of every ISP in the world shutting down, nothing is likely to significantly impede the ability to transfer data from one point to another.
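To illustrate the keyword point concretely, here is a minimal, hypothetical sketch in Python (not any real filter’s implementation, and the blocklist entries are made up) of what keyword-based blocking amounts to, and how even trivial encoding, let alone real encryption, slips right past it:

```python
import base64

# Hypothetical blocklist a naive keyword filter might scan traffic for.
BLOCKED_KEYWORDS = {"forbidden-topic", "banned-site.example"}

def naive_filter(payload: str) -> bool:
    """Return True if the payload should be blocked (simple substring match)."""
    lowered = payload.lower()
    return any(keyword in lowered for keyword in BLOCKED_KEYWORDS)

plaintext = "Please visit banned-site.example for more information."
print(naive_filter(plaintext))  # True: the keyword is visible in the clear

# The same content, trivially obscured with base64. Real traffic would use
# TLS encryption, which a keyword filter cannot inspect at all.
encoded = base64.b64encode(plaintext.encode()).decode()
print(naive_filter(encoded))    # False: the filter now sees only gibberish
```

The point is not that base64 is clever; it’s that any transformation of the bytes, from casual encoding to proper encryption, blinds a content-matching filter entirely.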
It’s also politically bad news. If child abuse can be filtered, what about political speech? That was an issue for debate in every known web filtering case. Someone from Electronic Frontiers Australia once pointed out that even if the current government wouldn’t do anything wrong with the filtering, what about the next government, or the government after that? Do you really trust every single subsequent government to be ethical with a web filtering system, if it were ever possible to construct an effective one? It’s the duty of the government to protect its citizens both today and in the future, even from potentially worse future governments.
It is bad for business as well. What would happen if one business could ruin another through the filtering? It wouldn’t necessarily require government interference, just a shady employee with a willingness to profit.
It is socially unsound because it places an overwhelming amount of trust in a small set of individuals, and it’s impossible to find anyone, or any group, that can be completely trusted with such a large burden of responsibility. The British population found the issue of trust thrust into the spotlight in 2007, when the tax arm of the British government lost the identities of nearly half the population of Britain. To say the least, there was horror and anger over that fiasco.
The bottom line: from whatever angle you look at this, if you think about it carefully enough, such a system fails, whether philosophically, practically or technically. The only thing an internet filtering system is really good for is political suicide.
Drew Wilson on Twitter: @icecube85 and Google+.