NYT To Lower The Gates, Google, Wikipedia & Robots.txt
The New York Times Unlocks Premium Content
I’m-sure-they’re-trustworthy anonymous sources are telling the New York Post that the New York Times will be doing away with the TimesSelect subscription-only content on its Web site (you still with me?). Rumor is NYT publisher Arthur Sulzberger Jr. has already made the decision but is waiting for software issues to be resolved before announcing it. Oh Father, please let this be true.
Personally, I think this makes total sense for the New York Times (and not just because I want to read their op-ed pieces). The TimesSelect wall is keeping the users unwilling to pay $49.95 a year (represent!) away from some of the site’s biggest draws, including its rich archives and, yes, its op-ed columns. The paltry $11 million they make from users dumb enough to cough up the money isn’t worth the traffic they’re missing out on. Getting rid of this gate would be a very, very good decision for the New York Times.
Let’s all keep our fingers crossed that this is real. Try your toes too.
Google SERPS Indexed In Real Time?
A funny thing happened yesterday. I wrote a post entitled Building Communities Within Your Community and hit publish. I read the post over, saw I had not linked Ciaran Norris' name to anything (still haven't. Help me, Ciaran.) and decided to do a quick Google search to see if I could find a good site to link to. I did my search and saw that the entry I had just posted was already appearing in Google's index. See it? Right there toward the bottom of the second page? Good heavens, I thought, Google sure is on top of things.
And, yes, they are. We've been hearing about Google's super speedy indexing for a few weeks now, but today Matt Cutts chimed in talking about the minty fresh indexing Google is doing. You have to admit you're impressed. I know I was. It couldn't have been more than 10 minutes (and it was probably considerably less than that) after I hit the publish button that my post slightly mocking Ciaran's penchant for starting silly Facebook groups was already in the index (note: indexed doesn't equal ranking). That's the mark of a great search engine. Nice work, Google.
Color-Coded Wikipedia Entries?
This struck me as an interesting idea: ResourceShelf comments on a program developed at the University of California, Santa Cruz that aims to color-code the text found in Wikipedia entries in order to signify which phrases are trustworthy and which may be questionable, based on each editor's past performance.
"The program analyzes Wikipedia’s entire editing history (nearly two million pages and some 40 million edits for the English-language site alone) to estimate the trustworthiness of each page. It then shades the text in deepening hues of orange to signal dubious content. A 1,000-page demonstration version is already available on a web page operated by the program’s creator, Luca de Alfaro, associate professor of computer engineering at UCSC."
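To make the shading idea concrete, here's a minimal sketch of mapping an editor-trust score to a background color. The score range, the palette, and the function itself are my own assumptions for illustration; de Alfaro's actual algorithm (computing trust from the edit history) is far more involved.

```python
def trust_to_color(trust):
    """Map a trust score in [0, 1] to a hex background color.

    Fully trusted text (trust = 1.0) stays white; dubious text
    (trust = 0.0) gets the deepest orange. The scale is a guess,
    not the UCSC demo's real palette.
    """
    trust = max(0.0, min(1.0, trust))  # clamp out-of-range scores
    # Blend from deep orange (255, 165, 0) toward white (255, 255, 255).
    g = int(165 + (255 - 165) * trust)
    b = int(255 * trust)
    return f"#ff{g:02x}{b:02x}"

print(trust_to_color(1.0))  # fully trusted -> #ffffff (white)
print(trust_to_color(0.0))  # dubious -> #ffa500 (deep orange)
```

Untrusted phrases would then be wrapped in markup using the returned color, with deeper orange flagging text from editors whose past contributions were frequently reverted or corrected.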
Would that actually work, or is it just going to cause more accounts to get hijacked? I also don't like the implication that because John's information was accurate on the Wikipedia page about elephants, he's also an expert on doll collecting. And as RS points out, how does one measure "quality" anyway? Is it how it's written? The sources used? I think we just need to accept that when you open something to the masses, quality will always suffer. It's up to individual users to track back the sources and judge the material's quality for themselves.
Exclude By Keyword?
Russ Jones proposes a new exclude-by-keyword directive for the standard robots.txt that would tell the search engines to ignore the pages of your site that have been spammed with inappropriate and unrelated words. It would look a little something like this:
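Roughly, the idea is a directive along these lines. The directive name and syntax below are my own illustration of the concept, not necessarily Jones's exact proposal:

```
User-agent: *
# Hypothetical directive: ask engines to drop from the index any page
# on this site containing these spam terms until it's cleaned up.
Exclude-keyword: viagra
Exclude-keyword: casino
Exclude-keyword: cheap-pills
```

The appeal is that a hacked or comment-spammed page would drop out of the index automatically instead of tarnishing the whole site while the owner cleans it up.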
What do you think? Would the engines go for it?
Search Engine Roundtable found that Microsoft adCenter is simply getting too popular for its own good. Aw, it’s hard being pretty.