June 28, 2010

SEO Sharing Session – 25th June 2010



Welcome back to our most popular segment of the blog: our bi-weekly Sharing Session *drumroll*. (Well, within the Bruce Clay office at least. Anyone else who enjoys this segment, please do let us know, and drop a comment if you have some free time).

Alright, it’s time to see what’s been happening in the world of Search Engine Optimisation and Social Media for the past two weeks.

SEO in Large Organisations

Hosted by Bruce Clay, along with Bruce Clay, Inc. staff Susan Esparza and Virginia Nussey, this particular episode of the popular radio show discusses how agencies should alter their SEO strategy and approach when working with big versus small companies.

Large organisations usually have a competent SEO team that is continually seeking to gather resources to improve the current in-house SEO process. So, in addition to providing SEO work, the agency's approach is more centred on which tools are most effective at getting the job done, how to improve advanced analytical reporting, and so on.

With small businesses, however, the approach is more hands-on, educational and implementation oriented. Usually the person in charge is the owner, and probably the marketer and accountant as well! They might not even know what Google Analytics is! With this in mind, it's important to educate your clients on the benefits of SEO in addition to providing your services.

In the latter part, the highlight was an interview with Brent D. Payne, SEO Director of Tribune Company. Tribune is a news organisation that harnesses the power of SEO and relevant, unique content to build up its online readership. Brent also goes on to talk about the trials and tribulations of getting a traditional media company on board with SEO. I believe it's quite insightful if you have the chance to listen to it.

8 Canonicalisation Best Practices in Plain English

This is a post on how to avoid canonicalisation issues on your website. It is a good read, especially if you run an e-commerce or directory-focussed website, where many canonicalisation challenges may occur.

Thanks go to Ian Lurie from Search Engine Land for this helpful post. In addition to Ian's great tips below, we recommend that, if possible, you build your website with canonicalisation in mind from the ground up. This alleviates the need to retroactively fix canonicalisation issues at a later date.

  1. Use 301 redirection to ensure that your home page is only found at one URL. If you don’t know how, read Stephan Spencer’s column about rewrites and redirects.
  2. Link consistently to your home page from within your own site. Use a single URL for your home page, and don't mix different variations of that URL in your links. If you aren't doing this properly right now, a quick change may have a big impact on SEO.
  3. Don’t use tracking IDs in internal site navigation. A lot of sites add stuff like ‘?source=blog’ in their navigation. That lets them use their analytics reports to track user movement within, to and from their site. Instead, learn to use your web analytics referrer and navigation path reports. If you must use tracking IDs, change your software to use a hash mark (a ‘#’ sign) instead of a question mark. Search engines ignore everything after the hash, so you’ll avoid confusion.
  4. Don’t use tracking IDs in organic links from other sites. If you get a link on another site, and want it to help with your SEO, don’t put a tracking ID in that, either.
  5. Be careful with pagination. Many sites have pagination, where visitors can click a 1, 2, 3 etc. to jump to later pages in search results, product lists or articles. That’s fine, but make sure that each page has a single URL. For example, if page 1 of an article has one URL when I click the article link from the home page, make sure that the ‘1’ in the pagination takes me there too, rather than to a second URL for the same page.
  6. Set up preventative redirects. Make sure alternate versions of a URL 301 redirect to the canonical version.
  7. Exclude ‘e-mail a friend’ pages. Most content management systems that have ‘e-mail a friend’ options direct the user to a unique page that has the same form and content. But every instance of that page has a unique URL like ‘ID=123’, to tell the server which product or article to forward. Use robots.txt and the meta robots tag to exclude these from search engine crawls.
  8. Use common sense when building your site. If you need to change the header, footer or other page element based on where on your site the visitor came from, do it with cookies, or by sniffing out the referring URL. Design to do this ahead of time.
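Tips 3 and 4 above can be sketched in code. Here is a minimal, hypothetical URL canonicaliser in Python that strips tracking parameters and fragments; the parameter names in `TRACKING_PARAMS` are assumptions for illustration, so adjust them to whatever tracking IDs your own analytics uses.

```python
# Minimal sketch of URL canonicalisation per the tips above.
# The tracking-parameter names are illustrative assumptions.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKING_PARAMS = {"source", "utm_source", "utm_medium", "utm_campaign"}

def canonicalise(url):
    """Return one canonical form of url: lowercase scheme and host,
    no tracking parameters (tip 3/4), no fragment, and a trailing
    slash when the path is empty."""
    parts = urlsplit(url)
    # Drop tracking parameters but keep any meaningful query values.
    query = urlencode([(k, v) for k, v in parse_qsl(parts.query)
                       if k not in TRACKING_PARAMS])
    path = parts.path or "/"
    # Search engines ignore everything after '#', so the fragment
    # can always be dropped safely.
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(),
                       path, query, ""))

print(canonicalise("HTTP://Example.com?source=blog#top"))
print(canonicalise("http://example.com/page?id=3&source=blog"))
```

As noted in tip 3, a better long-term fix is to keep tracking IDs out of your internal links entirely, or to move them behind a hash mark so search engines never see them.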

Google Webmaster Tools – Crawl Errors now reports Soft 404s

For those who don’t know, a 404 error is a ‘Page Not Found’ error. These occur when a user or a search engine accesses a URL that does not exist on the domain. This can easily be picked up using add-ons such as Live HTTP Headers or through Google Webmaster Tools, if your website is verified.

Generally, some 404 errors will appear on any website. However, in some circumstances a server may return a soft 404 error instead of a true 404 error. Soft 404 errors occur when a custom Page Not Found page does not return the 404 header status. Sometimes, webmasters create custom not-found pages but forget to return the server status code of 404. The ramification of soft 404s is that they can limit a site's crawl coverage by search engines, because these duplicate URLs may be crawled instead of pages with unique content.
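To make the idea concrete, here is a crude, hypothetical heuristic for spotting a soft 404: a page that says "not found" while the server reports 200 OK. The phrase list is an assumption for illustration; Google's own detection is certainly more sophisticated.

```python
# Hedged sketch: flag a likely soft 404, i.e. an error page served
# with a 200 status. The phrase list is an illustrative assumption.
NOT_FOUND_PHRASES = ("page not found", "does not exist",
                     "no longer available")

def looks_like_soft_404(status_code, body_text):
    """True when the server said 200 OK but the body reads like a
    Page Not Found page -- the situation described above."""
    if status_code != 200:
        # A real 404 or 410 is a hard error, not a soft one.
        return False
    text = body_text.lower()
    return any(phrase in text for phrase in NOT_FOUND_PHRASES)

print(looks_like_soft_404(200, "Sorry, Page Not Found"))  # True
print(looks_like_soft_404(404, "Sorry, Page Not Found"))  # False
```

The fix, of course, is not better detection but making the server return a genuine 404 status alongside the custom error page.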

Google is now helping out by showing a report of any soft 404s in the Crawl errors section. Furthermore, Google has also provided the following tips on soft 404 errors and how to make them less of a pain on your road to SEO glory.

Google’s tips for 404 pages:

  1. Check whether you have soft 404s listed in Webmaster Tools
  2. For the soft 404s, determine whether the URL:
    • Contains the correct content and properly returns a 200 response (not actually a soft 404)
    • Should 301 redirect to a more accurate URL
    • Doesn’t exist and should return a 404 or 410 response
  3. Confirm that you’ve configured the proper HTTP Response by using Fetch as Googlebot in Webmaster Tools
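Step 3 can also be checked yourself with any HTTP client that reports the raw status code, in the same spirit as Fetch as Googlebot. Below is a self-contained Python sketch; the tiny local server is a stand-in for your own site, and its URLs and responses are purely illustrative.

```python
# Hedged sketch: verify that a missing URL really returns HTTP 404.
# The local demo server stands in for your own website.
import http.server
import threading
import urllib.request
import urllib.error

class DemoHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/exists":
            body = b"real content"
            self.send_response(200)
        else:
            # A proper hard 404: custom body AND a 404 status line.
            body = b"Sorry, page not found"
            self.send_response(404)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def fetch_status(url):
    """Return the HTTP status code for url, even on error responses."""
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

server = http.server.HTTPServer(("127.0.0.1", 0), DemoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"

print(fetch_status(base + "/exists"))        # 200
print(fetch_status(base + "/no-such-page"))  # 404
```

If the second check came back 200 on your own site, you would have a soft 404 on your hands and should fix the server configuration.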

Twitter to start tweeting your location

Since late last year, Twitter has included location as a key part of its API, and earlier this year the feature was rolled out more widely. But those locations have been abstract cities or areas. Starting now, Twitter is adding actual venues into the mix as well.

You'll now be able to tag tweets to specific places (such as venues). Twitter says this is perfect for the World Cup matches currently going on in South Africa. Below are picture examples of the location-based tweets in action.

[Screenshot: Twitter location search]

[Screenshot: Twitter location map]

Facebook to Strike a Deal with Localeze

Facebook is now taking their social network platform one step further by striking a deal with Localeze to roll out its own “Places” page in the future.

The implication of this change is that Facebook will come into direct competition with existing social networks that also assign locations when you update your status or tweet. It will be interesting to see how this new relationship impacts Facebook's already dominant social media position.

Localeze is a business listings identity manager for local search. The company maintains direct, authorized relationships with local search platforms, national and regional brands, channel partners and local businesses. The company provides businesses with tools to verify, manage and enhance the identity of their local listings across the Web. Localeze is a privately held company headquartered in Vienna, Virginia.

[Screenshot: Facebook Places]
