7 SEO Fails Seen in the Wild (And How You Can Avoid Them)

We often get questions from people wondering why their site isn’t ranking, or why it isn’t indexed by the search engines.

Recently, I’ve come across several sites with major errors that could be easily fixed, if only the owners knew to look. While some SEO mistakes are quite complex, here are a few of the often overlooked “head slamming” errors.

So check out these SEO blunders — and how you can avoid making them yourself.

SEO Fail #1: Robots.txt Problems

The robots.txt file has a lot of power. It tells search engine bots which parts of a site they may not crawl.

In the past, I’ve seen sites forget to remove one single line of code from that file after a site redesign, and sink their entire site in the search results.

So when a flower delivery site showed signs of trouble, I started with one of the first checks I always do on a site — look at the robots.txt file.

I wanted to know whether the site’s robots.txt was blocking the search engines from crawling its content. But instead of the expected text file, I saw a page offering to deliver flowers to Robots.Txt.

SEO fail on a flower site

The site had no robots.txt file, which is the first thing a bot requests when it crawls a site. That was their first mistake. But to treat that file name as a delivery destination … really?

SEO Fail #2: Autogeneration Gone Wild

Secondly, the site was automatically generating nonsense content. It would probably have offered to deliver to Santa Claus or whatever other text I put in the URL.

I ran a Check Server Page tool to see what status the auto-generated page was returning. If it were a 404 (not found), then bots would ignore the page, as they should. However, the page’s server header gave a 200 (OK) status. As a result, the fake pages were giving the search engines a green light to index them.
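
If you don’t have a header-checking tool handy, you can run the same check yourself. Here is a minimal sketch in Python using only the standard library (the URL is a made-up example; substitute the page you want to test):

  # See what HTTP status code a URL really returns.
  import urllib.request
  import urllib.error

  url = "https://www.example.com/deliver-flowers-to/santa-claus"  # placeholder URL
  request = urllib.request.Request(url, method="HEAD")
  try:
      with urllib.request.urlopen(request) as response:
          print(response.status)  # 200 means the server presents this as a real page
  except urllib.error.HTTPError as error:
      print(error.code)  # a proper error page reports 404 (not found)

If a URL that shouldn’t exist prints 200 instead of 404, you have the same problem this flower site had.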

Search engines want to see unique and meaningful content per page. So indexing these non-pages could hurt the site’s SEO.

SEO Fail #3: Canonical Errors

Next, I checked to see what the search engines thought of this site. Could they crawl and index the pages?

Looking at the source code of various pages, I noticed another major error.

Every single page had a canonical link element pointing back to the homepage:

<link rel="canonical" href="https://www.domain.com/" />

In other words, search engines were being told that every page was actually a copy of the homepage. Based on this tag, the bots should ignore the rest of the pages on that domain.

Fortunately, Google is smart enough to figure out when these tags are likely used in error. So it was still indexing some of the site’s pages. But that universal canonical request was not helping the site’s SEO.

How to Avoid These SEO Fails

For the flower site’s multiple mistakes, here are the fixes:

  • Have a valid robots.txt file to tell search engines how to crawl and index the site. Even if it’s a blank file, it should exist at the root of your domain. (See the sketch after this list.)
  • Generate a proper canonical link element for each page. And don’t point away from a page you want indexed.
  • Display a custom 404 page when a page URL doesn’t exist. Make sure it returns a 404 server code to give the search engines a clear message.
  • Be careful with autogenerated pages. Avoid producing nonsense or duplicate pages for search engines and users.
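
To make the first two of those fixes concrete, here is a minimal sketch. The domain and paths are placeholders (example.com and /checkout/ are stand-ins, not recommendations for your own site). A bare-bones robots.txt at the root of the domain might look like this:

  User-agent: *
  Disallow: /checkout/
  Sitemap: https://www.example.com/sitemap.xml

And each page you want indexed would carry a canonical link element in its <head> pointing to its own preferred URL, for example:

  <link rel="canonical" href="https://www.example.com/roses/red-roses/" />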

Even if you’re not experiencing a site problem, these are good points to review periodically, just to be on the safe side.

Oh, and never put a canonical tag on your 404 page, especially pointing to your homepage … just don’t.

SEO Fail #4: Overnight Rankings Freefall

Sometimes a simple change can be a costly mistake. This story comes from an experience with one of our SEO clients.

When the .org extension of their domain name became available, they scooped it up. So far, so good. But their next move led to disaster.

They immediately set up a 301 redirect pointing the newly acquired .org to their main .com website. Their reasoning made sense — to capture wayward visitors who might type in the wrong extension.

But the next day, they called us, frantic. Their site traffic was nonexistent. They had no idea why.

A few quick checks revealed that their search rankings had disappeared from Google overnight. It didn’t take too much Q&A to figure out what had happened.

They put the redirect in place without considering the risk. We did some digging and discovered that the .org had a sordid past.

The previous owner of the .org site had used it for spam. With the redirect, Google was assigning all of that poison to the company’s main site! It took us only two days to restore the site’s standing in Google.

How to Avoid This SEO Fail

Always research the link profile and history of any domain name you register, before you point it at your site.

A qualified SEO consultant can do this. There are also tools you can run to see what skeletons may be lying in the site’s closet.

Whenever I pick up a new domain, I like to let it lie dormant for six months to a year at least before trying to make anything of it. I want the search engines to clearly differentiate my site’s new incarnation from its past life. It’s an extra precaution to protect your investment.

SEO Fail #5: Pages That Won’t Go Away

Sometimes sites can have a different problem — too many pages in the search index.

Search engines sometimes retain pages that are no longer valid. If people land on error pages when they come from the search results, it’s a bad user experience.

Some site owners, out of frustration, list the individual URLs in the robots.txt file. They’re hoping that Google will take the hint and stop indexing them.

But this approach fails! If Google respects the robots.txt, then it won’t crawl those pages. So, Google will never see the 404 status and won’t find out that the pages are invalid.

How to Avoid This SEO Mistake

The first part of the fix is to not disallow these URLs in robots.txt. You WANT the bots to crawl around and know what URLs should be dropped from the search index.

After that, set up a 301 redirect on the old URL. Send the visitor (and search engines) to the closest replacement page on the site. This takes care of your visitors whether they come from search or from a direct link.
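
How you implement the redirect depends on your server. As one example (an assumption about your setup, not a universal recipe), on an Apache server you could add a single line like this to the site’s .htaccess file, using your own URLs in place of these placeholders:

  Redirect 301 /discontinued-roses/ https://www.example.com/roses/

The same one-line approach also fixes the lost homepage redirect described in Fail #6 below.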

SEO Fail #6: Missed Link Equity

I followed a link from a university website and was greeted with a 404 (not found) error.

This is not uncommon, except that the link was to /home.html — the site’s former homepage URL.

At some point, they must have changed their website architecture and deleted the old-style /home.html, losing the redirect in the shuffle.

Ironically, their 404 page says you can start over from the homepage, which is what I was trying to reach in the first place.

404 error message example

It’s a pretty safe bet that this site would love to have a nice link from a respected university going to their homepage. And accomplishing this is entirely within their control. They don’t even have to contact the linking site.

How to Fix This Fail

To fix this link, they just need to put a 301 redirect pointing /home.html to the current homepage. (See our article on how to set up a 301 redirect for instructions.)

For extra credit, go to Google Search Console and review the Index Coverage Status Report. Look at all of the pages that are reported as returning a 404 error, and work on fixing as many errors here as possible.

SEO Fail #7: The Copy/Paste Fail

The site redesign launches, the canonical tags are in place, and the new Google Tag Manager is installed. Yet there are still ranking problems. In fact, one new landing page isn’t showing any visitors in Google Analytics.

The development team responds that they’ve done everything by the book and have followed the examples to the letter.

They are exactly right. They followed the examples — including leaving in the example code! After copying and pasting, the developers forgot to enter their own target site information.

Here are three examples our analysts have run across in website code:

  1. <link rel="canonical" href="http://example.com/">
  2. 'analyticsAccountNumber': 'UA-123456-1'
  3. _gaq.push(['_setAccount', 'UA-000000-1']);

How to Avoid This SEO Fail

When things don’t work right, look beyond just “is this element in the source code?” It may be that the proper validation codes, account numbers, and URLs were never specified in your HTML code.
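
For instance, the snippets above only start working once the sample values are replaced with your own. The corrected versions might look like this (the URL and account ID below are still placeholders, shown only to illustrate the shape of the fix):

  <link rel="canonical" href="https://www.yourdomain.com/landing-page/">
  _gaq.push(['_setAccount', 'UA-XXXXXXXX-1']);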

Mistakes happen, and people are only human. I hope that these examples will help you avoid similar SEO blunders of your own. To your benefit, we have created an in-depth SEO Guide outlining SEO tips and best practices.

But some SEO issues are more complex than you think. If you have indexing problems, then we are here to help. Call us or fill out our request form and we’ll get in touch.

Like this post? Please subscribe to our blog to have new posts delivered to your inbox.

Bruce Clay is founder and president of Bruce Clay Inc., a global digital marketing firm providing search engine optimization, pay-per-click, social media marketing, SEO-friendly web architecture, and SEO tools and education. Connect with him on LinkedIn or through the BruceClay.com website.

See Bruce's author page for links to connect on social media.


35 Replies to “7 SEO Fails Seen in the Wild (And How You Can Avoid Them)”

Thank you for sharing these valuable insights on SEO fails. It’s crucial to learn from real-life examples to avoid common pitfalls and ensure a successful SEO strategy. Your tips provide a clear roadmap for steering clear of these mistakes and achieving better results. Looking forward to implementing these strategies to improve my own SEO efforts!

I am an SEO person, and you are exactly right! Once my client made a mistake in the robots.txt file and enabled a disallow line for the whole website. After 2 days, none of the website’s pages appeared in Google, and he was worried. Then I came into the picture and just fixed one line of code, and after a day the issue was solved. So don’t be too smart!!! Just kidding. But your blogs matter to us. Thanks a lot

Hello everyone. I am going through an issue and I think people here can help me out. Basically, I have moved my site from one domain to another and placed a redirect on my old domain. Now the major issue is that Google is not able to crawl any of my pages other than the homepage. It’s been more than 2 months, and Google has only crawled and indexed my homepage. I have checked my robots.txt, and I have submitted a sitemap as well. In the console, the Coverage tab is not detecting any page at all other than the homepage. It only shows 3 excluded errors of (http) redirect. That’s all. Looking forward to a reply.

Robert Stefanski

Hi Dota2 Boost,

Thanks for your comment! It’s hard to speculate what may be happening — we would need more info to see if the redirects were all implemented properly, the new site doesn’t have any major technical issues, the new site has crawlable links, etc.

With that said, you could start by checking to see if the old site is still indexed in Google. If it is, try to see the last time the pages were crawled. If they were crawled recently, then Google may not be getting the redirect.

If only the old homepage was redirected, and not the rest of the site, that may be a problem. If other measures like robots.txt were used to try and get Google to not go to the old site, then Google may not be getting the redirects either.

Hope this helps!

Thanks for the informative article. I will be careful while making an SEO strategy for my site and try to avoid these mistakes.

Thanks for sharing the useful information. We usually make these mistakes. We will remember this while doing SEO for websites.

Hi, thanks for sharing such information. Actually, I was not getting high rankings due to some mistakes that I found in this article. Now I will surely overcome these mistakes to get higher rankings on Google.

I was almost making 3 of these mistakes, so thanks for your article.

Very helpful and informative.

Yes, we have to be very careful when setting up redirects for new domain addresses. If the old domain had gotten a penalty, then this will negatively impact the new website as well. Canonical errors are another important thing to take care of. Many times they can arise without our knowledge, especially when we are creating a lot of articles. Adding the canonical tag for duplicate pages is crucial.

Thank you for sharing this valuable piece of information. As an SEO executive, I’ll keep all these blunders in mind before doing SEO for my website.

SEO is one hell of a delicate operation. One needs to be very careful when making certain kinds of decisions. It is very important to check every domain’s quality and spam score before using it on your already established website. But nevertheless, great job BruceClay for restoring the website back to normal.

Hi, I think you are right, SEO sometimes fails… In my case it fails so often. :)

These little things can have a deep impact on the search appearance and ranking of any business. So everyone should double-check while working on these basic things to avoid mistakes. Thank you, Bruce, for sharing this precious information.

I’ll make sure to avoid all these fails. Thank you, Paula

Useful examples to avoid similar SEO blunders!

I was almost making 2 of these mistakes, so thanks for your article.

Very helpful and informative.

Thanks bruceclay.com

Paula Allen

Yogesh: We’re glad you were able to correct your course in time! Thanks for letting us know.

Thanks, Bruce, for sharing the main problems of SEO. Most people face canonical errors and have no knowledge about how to tackle these kinds of problems. You clearly define every SEO problem along with its solution.
Keep it up, Bruce

Yeah, that’s true, the most common mistake is that people do not focus on the robots.txt file and how to manage it properly.
Most of us only focus on off-page SEO.

Thanks, Bruce, for sharing the common mistakes with us

I agree with all the above points. But still, we need to review the website every day to make sure everything is in place.

These are the mistakes everyone should remember while doing SEO and after SEO.

Really, these minor fails can lead to unexpected ranking results. One should always keep such micro things in mind while optimizing any website, because these are basics but play an important role in website ranking. I agree with the fact that indexing of non-pages should be revoked for better crawling.

Thanks for sharing the main problems people face in SEO. I will avoid all these mistakes; kindly give more tips regarding SEO.
Thanks in advance

I’ll make sure to avoid these fails, lol. Thank you for the amazing write-up.


Hi there, great source of information. I think these are some of the crucial factors in SEO as well. People usually fail to check these mistakes. Thanks :)

The robots.txt fail is by far the most common issue I see in the agency world. So many don’t use robots.txt properly and use it to block all these pages…

and if they only knew, they wouldn’t have anything really in it.

Use meta tags or use nothing…

Also know, many of these COMMON mistakes are SEARCHABLE! lol

Thanks for sharing the main problems of SEO.
I want to know: what is the difference between the Google index and robots.txt? And if one puts a link only in the Google index, will it rank or not?

Paula Allen

Rishi: Google’s index refers to its database of the internet’s contents, constantly being refreshed, from which it pulls search results. A robots.txt file is a text file that you, as the website owner, put at the root of your website if you want to “disallow” certain pages or directories from being indexed. Search engine bots politely check it each time they crawl your site.

Hi Bruce, we see these types of fails happening all the time.

Over at ContentKing we’re running a real-time SEO auditing platform that continuously checks sites for all sorts of issues and changes, including the ones you describe. And I can tell you, we catch so many of these every day.

It’s interesting that at first everyone thinks “This won’t happen to me,” and then it does. Because Murphy’s law.

Very in-depth and well-researched write-up. I really appreciate the effort put in. Things like robots.txt are basic but really helpful, as SEOs sometimes mistakenly miss checking these things.

It is good to know that these mistakes may happen while doing SEO for a site, and they might be happening with us, so we need to look at them. Nice sharing, Bruce!!!

Copy and paste and forgetting to change your website’s info. You have to be distracted if you’re making these errors, lol! Will add some of this to our checklist, thank you, Bruce.

Most of us have made one of these mistakes before – good to share them – thanks Bruce :-)

Paula Allen

Andy: These mistakes are all too common. Thanks for being a faithful BCI reader and commenter! You always add something to the conversation. :)
