In 2014, a top EU court ruled that Google must amend some search results at the request of ordinary people in a test of the so-called “right to be forgotten”.
The court held that Google must comply with requests to remove irrelevant or outdated data. The ruling currently applies only in Europe and has not spread to the US. Critics in the EU have rightly called the rule tantamount to censorship, while others have argued in favor of the ruling, feeling it is unfair for outdated data to remain present in search results.
Who says it’s outdated?
The rule has extended far beyond “irrelevant or outdated” data: it has allowed ordinary people to remove results they would simply rather not have turn up in internet searches. Worse, pedophiles have successfully had unfavorable results removed from Google as well.
So if Kanye doesn’t like stuff that’s said about him – for which Google probably has a couple terabytes of data reserved just for holding indexed results relating to negative-Kanye-speak – he can make them go poof.
Questions abound about whether the law is necessary, or whether it’s a precedent-setting step toward increased censorship. Indeed, the internet can be a cruel place.
But is censoring results in a search engine the best way to remove unwanted entries on the world wide web? I’d argue no.
Search engines index content.
But hosting providers host that content. The buck doesn’t stop with Google (nor Bing, nor Yahoo, nor the others that are even less important than Yahoo…like, um, Infoseek or Netscape?). If I am Facebook, Twitter, or my-shameless-Tumblr-blog-of-filth, and I’m hosting terrible info about you – shouldn’t the onus be on the content-provider to comply with removal requests? If the EU government hosts a website that lists, say, information for worried parents about pedophiles in their neighborhood, shouldn’t I be able to search a suspected pedophile’s name in Google to see if it comes up?
I’d argue yes.
Google is showing us content we seek. But the provider is hosting that content for Google to index in the first place. Some would argue it’s inevitable that hosts will have to field removal requests of their own, and to me that seems the more appropriate route for deleting content you don’t want.
Getting around the law
The BBC has indicated it will publish a list of removed results. Awesome. But does that mean someone (or many-ones) will ask Google to remove that result too? Will we get an infinite regression, like a black hole of short-sighted search censorship? Will new players like DuckDuckGo not care and pave their own way?
Ultimately, I pray the lawmakers and deciders in the US don’t follow the EU’s suit (pun totally but not really intended). If I didn’t like what a San Francisco Chronicle reporter said about me in print (pretend there was a day when the internet wasn’t yet made. Crazy scary, yes. But just pretend.), I’d probably have to contact them and ask them to run a correction in the next paper. They’d stuff it at the bottom of some ad-filled page in the back, but whatever.
Point is, just because digital things can more easily be tracked, searched, and indexed, does that mean courts should decide we can just as easily delete results we don’t like? There isn’t an easy answer, but with the EU’s ruling, I’m afraid it’s obvious what a bad solution looks like.