FEATURE: Google Panda Algorithm 2.2
by Jessica Lee, June 16, 2011
This month at the Search Marketing Expo (SMX) Advanced conference in Seattle, the Farmer/Panda Google algorithm update (now shortened to just “Panda”) was once again the topic of conversation. This time, however, the discussion centered on Panda version 2.2.
Matt Cutts, head of Google’s Web spam team, confirmed that a new iteration of Panda (version 2.2) had been approved internally but had not yet been rolled out. There is some speculation that it has been rolling out this week, but Google had not confirmed it as of press time.
One of the things we learned from the talk with Cutts at SMX Advanced is that a coming iteration of Panda aims to address the problem of scraper sites outranking original authors. Scraper sites are sites that take content from other publishers and republish it on their own domains.
It also looks like a new iteration could address the “harshness” of the original rollout, as some felt Panda was a bit aggressive right out of the gate and that not all of the affected sites deserved to be hit. Cutts also confirmed that there have not been any manual exceptions for sites that believe they’ve been wrongly affected by Panda.
Sites hit by Panda that have since made the proper corrections will not see immediate improvements in Google, because the Panda algorithm is run manually and infrequently.
To further explain: some of Google’s algorithms run automatically and continuously, while others are rerun only when Google decides they should be. Panda falls into the latter category.
Cutts also told the SMX audience that there is no official date for when Panda will be applied fully to queries outside the English language.
Previous versions of Panda include the launch (1.0), the international rollout to all English-language sites (2.0) and, more recently, version 2.1, which tweaked 2.0.
Since the next iteration is also a tweak to 2.0, we can guess that it’s unlikely to have the impact the launch and 2.0 had on sites. With all the iterations forecast for Panda, counting version numbers could eventually fall by the wayside.
What Is the “Low Quality” That Panda Looks For?
While not everyone may agree on the definition of low-quality content, people tend to recognize it and agree on it when they see it. That is essentially the test Google performed when it conducted research to incorporate human feedback into its Panda algorithm.
You may remember the March SEO Newsletter’s Hot Topic, which focused on the Panda update and on an interview in which Cutts and Google engineer Amit Singhal explained to Wired.com the process for determining low-quality sites.
In the interview, Cutts and Singhal said certain sites were shown to a test group, and then participants were asked a series of questions such as:
- Would you be comfortable giving this site your credit card?
- Would you be comfortable giving medicine prescribed by this site to your kids?
Google confirmed that its algorithmic definition of low-quality sites was working when it saw an 84 percent overlap between the sites users blocked through the site-blocking functionality in its Google Chrome extension and the sites affected by Panda.
In April, Google began using data from the site-blocking functionality, by then available directly from the Web, as a determinant in “high confidence situations.” The assumption is that when a particular site is in question but leaning toward low quality, users blocking that site confirms its status.
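To illustrate what an overlap figure like that means, here is a minimal Python sketch; the domain lists are entirely made up for the example, not Google data.

```python
# Illustrative sketch only: hypothetical domain lists, not Google data.
# Shows one way an "overlap" figure between user-blocked sites and
# Panda-affected sites could be computed.

user_blocked = {
    "example-scraper.com",
    "thin-content.net",
    "adwall.org",
    "spun-articles.info",
    "listicle-farm.biz",
}
panda_affected = {
    "example-scraper.com",
    "thin-content.net",
    "adwall.org",
    "spun-articles.info",
    "some-other-site.com",
}

# Fraction of user-blocked domains that were also demoted by Panda.
overlap = len(user_blocked & panda_affected) / len(user_blocked)
print(f"Overlap: {overlap:.0%}")  # prints "Overlap: 80%"
```

The higher that fraction, the more closely the algorithm’s judgment matches what real users choose to block.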
In an article at Search Engine Land, Vanessa Fox offered a checklist site owners can use when assessing the quality of their sites for Panda. It includes the following questions:
- Can visitors easily find their way around?
- Is it obvious what topic each page is about?
- Is the content original or is it aggregated from other sources?
- Do the number and placement of the ads obscure the visitor’s ability to quickly access the content?
- When looking objectively at the site, is the primary focus the user need or the business goal?
- Is the content on the page authoritative and valuable? Does it answer the query better than other pages on the web?
- If some pages on the site are very high quality and engaging, are other pages on the site not as high quality? (Google has stated that enough low-quality content on a site can reduce the rankings of the entire site, not just the low-quality pages.)
Fox also noted that Google has said its quality guidelines are a good place to start for improving the quality of a site.
Specifically for content, site owners make huge strides in quality when they aim for content that covers a subject in enough depth that the user doesn’t have to go elsewhere to find the information they’re looking for. This is key.
This kind of depth can be supported in search engine optimization efforts through siloing, organizing a site’s content into tightly themed sections, as sketched below.
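As a rough illustration of the idea (the themes and URLs below are invented for the example, not taken from the article), a siloed structure groups in-depth supporting pages under a tightly themed landing page:

```python
# Hypothetical example of a siloed site structure: each silo is a tightly
# themed section in which a landing page is supported by in-depth pages
# that all stay on the same topic. The themes and URLs are invented.

silos = {
    "/coffee-grinders/": [
        "/coffee-grinders/burr-vs-blade/",
        "/coffee-grinders/grind-size-guide/",
        "/coffee-grinders/cleaning-and-maintenance/",
    ],
    "/espresso-machines/": [
        "/espresso-machines/choosing-a-machine/",
        "/espresso-machines/pressure-and-temperature/",
        "/espresso-machines/descaling-how-to/",
    ],
}

# Print the outline: each landing page followed by its supporting pages.
for landing_page, supporting_pages in silos.items():
    print(landing_page)
    for page in supporting_pages:
        print("    " + page)
```

Keeping each section focused on one theme helps a site cover that subject deeply enough that visitors don’t need to look elsewhere.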
What’s Next for the Panda Algorithm?
According to Cutts at SMX Advanced, Panda is trying to ensure that users have a good experience. It will therefore continue to see iterations, each refining its purpose of ridding the search engine results pages (SERPs) of low-quality sites and of content irrelevant to a person’s query.
What does that mean for the average site owner? Keep paying attention to the quality of your content in preparation for continued rollouts of Panda.
In the SMX Advanced session, Cutts acknowledged the pain felt by the many site owners whose sites have lost rankings due to the Panda update; in the same breath, he also reaffirmed that complaints about low-quality results in the SERPs persist.
Cutts’ piece of advice? Don’t chase after the Google algorithm; chase after what you feel users are going to love, because that’s what Google is after, too.