SMX London Day 2 — Diagnosing Search Problems
Guest Author Marie Howell of Bruce Clay Europe continues her excellent recap coverage of SMX London.
I wanted to dip in and out of the different tracks available, so I attended the Fundamentals & Tactics track, moderated by Andy Atkins-Kruger of WebCertain, which brought together a truly excellent panel of experts to discuss diagnosing search problems.
Dixon Jones from Receptional began the presentations (whilst wearing a white tie and a white hat, as a counter-offensive to a recent comment in a bar about the black trilby he was sporting) by discussing the most fundamental issue affecting many sites: duplicate content. He explained that lots of people think they haven't got duplicate content issues on their site when, in actual fact, they have. He discussed the perception of being penalised for duplicate content, and the problem of your brand or domain being split between different versions of the same URL. Dixon covered some of the causes of duplicate content:
- Tracking parameters appended to URLs (e.g. a source= parameter)
- Alternative domain names: more than one domain pointing to the same website
- www vs. non-www
- Session IDs – e.g. when an attempt to make an SEO-friendly version of a URL goes wrong
- Scraped or syndicated sites
- When the Search Engines mess up – e.g. 302s on Microsoft right now
He went on to talk about the different types of redirect – 301 vs. 302 – 'named and shamed' the Royal Mail, and demonstrated how a series of incorrect redirects was causing numerous difficulties for its domains.
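To make the www vs. non-www point concrete: the standard fix is a permanent (301) redirect to whichever version you have chosen as canonical. A minimal sketch for an Apache server with mod_rewrite enabled might look like the following (example.com is a placeholder domain; adapt to your own setup):

```apache
# Redirect www.example.com permanently to the canonical example.com.
# Requires mod_rewrite; place in the vhost config or .htaccess.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]
```

The R=301 flag is the important part: a 302 here would leave the two versions of the domain competing with each other.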
Matt Paines from XSEO picked up the baton by looking at the three main hindrances to ranking – because, at the end of the day, it is all about ranking, and anything that gets in the way of ranking is perceived as a problem:
- Crawler accessibility – does the site have a cache? (i.e. has it been crawled?) Ensure that the site is linked to in some way; the search engines need to be led there.
- Page relevance
- Site credibility
Matthew “Chewy” Trewhella, Customer Solutions Engineer at Google, was up next, looking at Google Webmaster Console and sitemaps. With a beautifully clear presentation style, Chewy showed how the two tools can help you spot simple configuration errors, and how the robots.txt analysis tool can help – e.g. if there is a wildcard in the wrong place, Google may stop indexing your site. Within the Webmaster Console, he demonstrated features such as being able to inform Google which canonical domain you prefer – www or non-www – which helps you get started whilst you fix your issues.
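Chewy's wildcard warning is worth illustrating. The fragment below is an invented example (not from his talk) showing how one misplaced character in robots.txt changes its meaning entirely, under Google's wildcard matching rules:

```text
User-agent: *
# Intended: block only URLs carrying query strings
Disallow: /*?

# Dangerous: this pattern matches every URL on the site,
# so the whole site drops out of the index
# Disallow: /*
```

Running such rules through the robots.txt analysis tool in Webmaster Console before deploying them is exactly the safety net Chewy was describing.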
Jake Bailie of STN Labs didn't do a formal presentation but picked up on the issues raised by the other panellists. He listed six easy ways to generate duplicate content problems, including tracking URLs, printer-friendly pages and core architecture.
His Golden Rule is to have unique content supported by one corresponding URL. Do not use this URL to track visitors, toggle displays, etc. – the URL is solely there to identify the location of the content. He continued his advice by encouraging attendees to do the fundamental housekeeping: make sure Title tags are unique, try to get an <h1> on every page (although don't be excessive with it), have a good link structure, use simple anchor text, and don't use Flash for primary navigation.
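Checks like "is the Title unique?" and "is there one <h1>?" are easy to automate. As a rough illustration (my own sketch, not something shown in the session), a short standard-library Python script can report a page's title and count its <h1> tags; the HTML string here is an invented example:

```python
# Sketch of a housekeeping check: parse a page and report its <title>
# text and the number of <h1> tags. Standard library only.
from html.parser import HTMLParser

class TitleH1Checker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.title = None
        self.h1_count = 0
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "h1":
            self.h1_count += 1

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title = (self.title or "") + data

checker = TitleH1Checker()
checker.feed("<html><head><title>Widgets</title></head>"
             "<body><h1>Widgets</h1><h1>Also widgets</h1></body></html>")
print(checker.title)     # Widgets
print(checker.h1_count)  # 2 – the panel's advice suggests one per page
```

Run over a crawl of your own pages, a script like this quickly surfaces duplicated Titles and pages with too many (or no) <h1> headings.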
He also picked up on the earlier point about crawler accessibility and discussed it in terms of large corporate sites. He suggested that you only want to expose a sample of each level of your site tree to the search engines via your sitemap and let them find the rest. Never use 302 redirects, he advised. Other people do; let them. And fire your host if they won't let you do 301s.
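Jake's "never 302" rule is straightforward to audit on your own pages. As a hedged sketch (again mine, not the panel's), the standard-library Python function below reports the first status code a URL returns without following redirects, so a host quietly issuing 302s for canonicalisation stands out:

```python
# Sketch: fetch a URL and return the status of the *initial* response,
# without following redirects, so 301 vs. 302 is visible.
import http.client
from urllib.parse import urlparse

def first_status(url):
    """Return the HTTP status code of the initial response."""
    parts = urlparse(url)
    conn_cls = (http.client.HTTPSConnection if parts.scheme == "https"
                else http.client.HTTPConnection)
    conn = conn_cls(parts.netloc)
    try:
        conn.request("HEAD", parts.path or "/")
        return conn.getresponse().status
    finally:
        conn.close()

# e.g. first_status("http://www.example.com/") on a well-configured
# site should report 301 for the redirect to the canonical domain.
```

A 301 tells the engines the move is permanent and consolidates the old URL's value on the new one; a 302 ("found") does neither, which is precisely why the panel warned against it.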
A superb, informative and lively session, with frank, helpful suggestions from the panel, plenty of interaction with the audience, and expert moderation from WebCertain's Andy A-K. An enjoyable session that was well worth attending!