Search and Destroy
In an attempt to get back at the DOJ for dragging them into court, Google went ahead and deleted all of the pages of a government website. Just kidding!
Well, about it being Google’s fault anyway. Barry of Search Engine Roundtable has started a thread on the SER forums commenting on a post by Alex Papadimoulis entitled The Spider of Doom. The post explains how webmaster Josh Breckman was contracted to develop a content management system for a ‘fairly large government’ site. Breckman was asked to design a system that would allow employees to log in and make changes to the content on a continuous basis. Because there was already an active website, the client also wanted to be able to ‘reorganize and upload’ the old content onto the new site before it went live. According to Papadimoulis’ post, things seemed to be going quite well until, one day, all of the content vanished.
Was it Google out to settle the score? An international hijacker hoping to learn government secrets? A touch of corporate espionage? Jeeves out for world domination? No, none of that. After some searching and scrambling, Breckman was able to locate the cause of the wipeout: the dastardly wicked Googlebot! How did it happen?
“A user copied and pasted some content from one page to another, including an ‘edit’ hyperlink to edit the content on the page. Normally, this wouldn’t be an issue, since an outside user would need to enter a name and password. But, the CMS authentication subsystem didn’t take into account the sophisticated hacking techniques of Google’s spider.”
Yes, that’s right. The Googlebot went in and hit delete on every single page of the government website. But you can’t blame the Googlebot – it was just doing its job. That’s what you get for leaving open edit and delete links on the front end of your site with nothing but client-side authentication behind them.
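The failure mode is easy to reproduce in miniature. Here’s a toy Python sketch (the CMS, page names, and handlers are all invented for illustration, not taken from the original post) showing why a ‘delete’ action exposed as a plain GET link, with authentication enforced only in the UI, gets wiped out by any crawler that dutifully follows every link it finds – and how a server-side session check stops it:

```python
# Toy model of a CMS crawled by a link-following spider.
# All names here are hypothetical, invented for this sketch.

def make_site():
    """A fresh five-page site."""
    return {f"/page/{i}": f"content {i}" for i in range(5)}

def links_on(pages, path):
    """Each page links to the other pages and to its own bare 'delete' link."""
    return [p for p in pages if p != path] + [f"/delete?target={path}"]

def broken_handler(pages, path, session=None):
    """The flaw in the story: no server-side auth check, so any GET
    to /delete removes the page -- crawler or not."""
    if path.startswith("/delete?target="):
        pages.pop(path.split("=", 1)[1], None)
        return []
    return links_on(pages, path)

def fixed_handler(pages, path, session=None):
    """The fix: destructive actions require a logged-in session,
    checked on the server, not just hidden behind a login form."""
    if path.startswith("/delete?target="):
        if session != "authenticated":
            return []  # refused: the spider carries no session
        pages.pop(path.split("=", 1)[1], None)
        return []
    return links_on(pages, path)

def crawl(pages, handler, start="/page/0"):
    """A Googlebot-style spider: fetch a page, follow every link, repeat.
    It never logs in, so it passes no session."""
    queue, seen = [start], set()
    while queue:
        path = queue.pop()
        if path not in seen:
            seen.add(path)
            queue.extend(handler(pages, path))
    return pages

print(len(crawl(make_site(), broken_handler)))  # 0 -- the whole site is gone
print(len(crawl(make_site(), fixed_handler)))   # 5 -- nothing deleted
```

The same point is why the HTTP spec treats GET as a “safe” method: crawlers, prefetchers, and proxies assume a GET has no side effects, so anything destructive belongs behind a POST plus a real server-side permission check.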
One thing is for sure: Beware of Googlebot…and client-side security.