How Do I Get Rid of Extra Pages in the Google Index?


Let’s say you have an ecommerce website with thousands of products, each with variations in sizes and colors. You use the Index Coverage report in Google Search Console to see a list of the pages on your website that Google has indexed.

To your surprise, you see way more pages than the website should have. Why does that happen, and how do you get rid of them?

I answer this question in our “Ask Us Anything” series on YouTube. Here’s the video, and then you can read more about this common problem and its solution below.

Why Do These “Extra” Webpages Show Up in Google’s Index?

This issue is common for ecommerce websites. “Extra” webpages can show up in Google’s index because extra URLs are being generated on your ecommerce website.

Here’s how: When people use search parameters on a website to specify certain sizes or colors of a product, it is typical that a new URL is automatically generated for that size or color choice.

Each of those URLs results in a separate webpage. Even though it’s not a separate product, that webpage can be indexed just like the main product page if Google discovers it via a link.

When this happens and you have a lot of size and color combinations, you can end up with many different URLs for one product. If Google discovers those URLs, you may end up with multiple webpages in the Google index for a single product.

How Do I Get Rid of “Extra” Webpages in Google’s Index?

Using the canonical tag, you can get all of those product variation URLs to point to the same original product page. That is the right way to handle near-duplicate content, such as color changes.
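
For example, on a color variation URL, the canonical link element in the page’s <head> might look something like this (the URLs shown are placeholders for illustration):

  <!-- In the <head> of https://www.example.com/dresses/1234?color=red -->
  <link rel="canonical" href="https://www.example.com/dresses/1234">

Each variation URL points back to the one product page you want indexed, so Google can consolidate the duplicates onto that URL.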

Here’s what Google has to say about using the canonical tag to resolve this issue:

A canonical URL is the URL of the page that Google thinks is most representative from a set of duplicate pages on your site. For example, if you have URLs for the same page (example.com?dress=1234 and example.com/dresses/1234), Google chooses one as canonical. The pages don’t need to be absolutely identical; minor changes in sorting or filtering of list pages don’t make the page unique (for example, sorting by price or filtering by item color).

Google goes on to say that:

If you have a single page that’s accessible by multiple URLs, or different pages with similar content … Google sees these as duplicate versions of the same page. Google will choose one URL as the canonical version and crawl that, and all other URLs will be considered duplicate URLs and crawled less often.

If you don’t explicitly tell Google which URL is canonical, Google will make the choice for you or might consider them both of equal weight, which might lead to unwanted behavior …

But what if you don’t want those “extra” pages indexed at all? In my opinion, the canonical solution is the way to go in this situation.

But there are two other solutions that people have used in the past to get the pages out of the index:

  1. Block pages with robots.txt (not recommended, and I’ll explain why in a moment)
  2. Use a robots meta tag to block individual pages

Robots.txt Option

The problem with using robots.txt to block webpages is that blocking a page from crawling does not mean Google will drop it from the index.

According to Google Search Central:

A robots.txt file tells search engine crawlers which URLs the crawler can access on your site. This is used mainly to avoid overloading your site with requests; it is not a mechanism for keeping a web page out of Google.

Also, a disallow directive in robots.txt does not guarantee that a bot will not crawl the page, because robots.txt is a voluntary protocol. However, it would be rare for the major search engine bots not to adhere to your directives.
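
For reference, a robots.txt rule that blocks crawling of parameterized variation URLs might look something like this (the parameter names are hypothetical; Google supports the * wildcard in robots.txt paths):

  # Block crawling of URLs generated by color and size filters
  User-agent: *
  Disallow: /*?color=
  Disallow: /*?size=

Keep in mind this only blocks crawling. URLs that are already indexed, or that Google discovers through links elsewhere, can still appear in the index, often without a description.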

Either way, this is not an optimal first choice, and Google recommends against using robots.txt for this purpose.

Robots Meta Tag Option

Here’s what Google says about the robots meta tag:

The robots meta tag lets you utilize a granular, page-specific approach to controlling how an individual page should be indexed and served to users in Google Search results.

Place the robots meta tag in the <head> section of the webpage in question. Then either encourage the bots to recrawl that page by submitting an XML sitemap, or wait for them to find it naturally (which could take up to 90 days).
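
A minimal example, assuming the goal is to keep a specific variation page out of search results entirely:

  <!-- In the <head> of the variation page you want excluded from the index -->
  <meta name="robots" content="noindex">

One caveat: for the noindex directive to work, the page must not also be blocked in robots.txt; if the crawler cannot fetch the page, it never sees the tag.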

When the bots come back to crawl the page, they will encounter the robots meta tag and follow the directive not to show the page in the search results.

Summary

So, to recap:

  • Using the canonical tag is the best and most common solution to the problem of “extra” pages being indexed in Google, a common issue for ecommerce websites.
  • If you don’t want pages to be indexed at all, consider using the robots meta tag to tell the search engine bots how you want those pages handled.

Still confused, or want someone to take care of this problem for you? We can help you clean up those extra pages and get them removed from the Google index. Schedule a free consultation here.

FAQ: How can I eliminate extra pages from my website’s Google index?

The issue of extra pages in your website’s Google index can be a significant roadblock. These surplus pages often stem from dynamic content generation, such as product variations on ecommerce sites, creating a cluttered index that affects your site’s performance.

Understanding the root cause is crucial. Ecommerce websites, in particular, face challenges when various product attributes trigger the generation of multiple URLs for a single product. This can lead to many indexed pages, impacting your site’s SEO and user experience.

Employing the canonical tag is the most reliable solution to tackle this. The canonical tag signals to Google the preferred version of a page, consolidating the indexing power onto a single, representative URL. Google itself recommends this method, emphasizing its effectiveness in handling near-duplicate content.

While some may consider using robots.txt to block webpages, it’s not optimal. Google interprets robots.txt as a directive to control crawler access, not as a tool for removal from the index. In contrast, the robots meta tag offers a more targeted approach, allowing precise control over individual page indexing.

The canonical tag remains the go-to solution. However, if there’s a strong preference for total removal from the index, the robots meta tag can be a strategic ally. Balancing the desire for a streamlined index with SEO best practices is the key to optimizing your online presence effectively.

Mastering the elimination of extra pages from your website’s Google index involves a strategic combination of understanding the issue, implementing best practices like the canonical tag, and considering alternatives for specific scenarios. By adopting these strategies, webmasters can enhance their site’s SEO, improve user experience, and maintain a clean and efficient online presence.

Step-by-Step Procedure:

  1. Identify Extra Pages: Conduct a thorough audit to pinpoint all surplus pages in your website’s Google index.
  2. Determine Root Cause: Understand why these pages are generated, focusing on dynamic content elements.
  3. Prioritize Canonical Tag: Emphasize the use of the canonical tag as the primary solution for near-duplicate content.
  4. Implement Canonical Tags: Apply canonical tags to all relevant pages, specifying the preferred version for consolidation.
  5. Check Google Recommendations: Align strategies with Google’s guidelines, ensuring compatibility and adherence.
  6. Evaluate Robots.txt Option: Understand the limitations and potential drawbacks before considering robots.txt.
  7. Deploy Robots Meta Tag: Use robots meta tags strategically to control indexing on specific pages if necessary.
  8. Balance SEO Impact: Consider the impact of each solution on SEO and user experience for informed decision-making.
  9. Regular Monitoring: Establish a routine to monitor index changes and assess the effectiveness of implemented strategies.
  10. Iterative Optimization: Continuously refine and optimize strategies based on evolving site dynamics and Google algorithms.

Continue refining and adapting these steps based on your website’s unique characteristics and changing SEO landscapes.

Bruce Clay is founder and president of Bruce Clay Inc., a global digital marketing firm providing search engine optimization, pay-per-click, social media marketing, SEO-friendly web architecture, and SEO tools and education. Connect with him on LinkedIn or through the BruceClay.com website.
