What Is robots.txt? A Beginner’s Guide to Nailing It with Examples

Ah, robots.txt — one teeny tiny file with big implications. This is one technical SEO element you don’t want to get wrong, folks.

In this article, I will explain why every website needs a robots.txt and how to create one (without causing problems for SEO). I’ll answer common FAQs and include examples of how to execute it properly for your website. I’ll also give you a downloadable guide that covers all the details.

What Is robots.txt?

Robots.txt is a text file that website publishers create and save at the root of their website. Its purpose is to tell automated web crawlers, such as search engine bots, which pages not to crawl on the website. This is also known as the robots exclusion protocol.

Robots.txt does not guarantee that excluded URLs won’t be indexed for search. That’s because search engine spiders can still discover that those pages exist through other webpages that link to them. Or, the pages may remain indexed from the past (more on that later).

Robots.txt also does not absolutely guarantee that a bot won’t crawl an excluded page, since compliance is voluntary. It would be rare for major search engine bots not to adhere to your directives. But bad web robots, like spambots, malware, and spyware, often do not follow orders.

Remember, the robots.txt file is publicly accessible. You can just add /robots.txt to the end of a domain URL to see its robots.txt file (like ours here). So do not list any files or folders that contain business-critical information. And do not rely on the robots.txt file to protect private or sensitive data from search engines.

OK, with those caveats out of the way, let’s go on…

Why Is robots.txt Important?

Search engine bots are designed to crawl and index webpages. With a robots.txt file, you can selectively exclude pages, directories, or the entire site from being crawled.

This can be handy in many different situations. Here are some cases where you’ll want to use robots.txt:

  • To block certain pages or files that should not be crawled/indexed (such as unimportant or similar pages)
  • To stop crawling certain parts of the website while you’re updating them
  • To tell the search engines the location of your sitemap
  • To tell the search engines to ignore certain files on the site, like videos, audio files, images, PDFs, etc., and not have them show up in the search results
  • To help ensure your server is not overwhelmed with requests*

*Using robots.txt to block unnecessary crawling is one way to reduce the strain on your server and help bots find your good content more efficiently. Google provides a handy chart here. Also, Bing supports the crawl-delay directive, which can help prevent the server from being overwhelmed with too many requests.
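For example, if your site generates crawlable internal search result pages (a common source of wasted crawl budget), a minimal sketch like the one below would keep bots out of them. The /search/ path is only an assumption for illustration; substitute whatever path your site actually uses.

# Hypothetical example: keep bots out of internal search results
User-agent: *
Disallow: /search/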

Of course, there are many applications of robots.txt, and I’ll outline more of them in this article.

But, Is robots.txt Necessary?

Every website should have a robots.txt file, even if it is blank. When search engine bots come to your website, the first thing they look for is a robots.txt file.

If none exists, then the spiders are served a 404 (not found) error. Although Google says that Googlebot can go on and crawl the site even if there’s no robots.txt file, we believe it is better for the first file a bot requests to load successfully than to return a 404 error.

What Problems Can Occur with robots.txt?

This simple little file can cause problems for SEO if you’re not careful. Here are a couple of situations to watch out for.

1. Blocking your whole site by accident

This gotcha happens more often than you’d think. Developers can use robots.txt to hide a new or redesigned section of the site while they’re developing it, but then forget to unblock it after launch. If it’s an existing site, this mistake can cause search engine rankings to suddenly tank.

It’s handy to be able to turn off crawling while you’re preparing a new site or site section for launch. Just remember to remove that directive from your robots.txt when the site goes live.
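For instance, a pre-launch block for a site section under development might look like the hypothetical sketch below; the /new-section/ path is a placeholder, and the comment is there as a reminder to delete the rule at launch.

# Remove this rule when /new-section/ goes live
User-agent: *
Disallow: /new-section/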

2. Excluding pages that are already indexed

Blocking pages in robots.txt that are already indexed causes them to be stuck in Google’s index.

If you exclude pages that are already in the search engine’s index, they’ll stay there. In order to actually remove them from the index, you should set a meta robots “noindex” tag on the pages themselves and let Google crawl and process that. Once the pages are dropped from the index, then block them in robots.txt to prevent Google from requesting them in the future.

How Does robots.txt Work?

To create a robots.txt file, you can use a simple application like Notepad or TextEdit. Save it with the filename robots.txt and upload it to the root of your website as www.domain.com/robots.txt; this is where spiders will look for it.

A simple robots.txt file would look something like this:

User-agent: *
Disallow: /directory-name/

In its help file on creating robots.txt, Google gives a good explanation of what the different lines in a group mean within the robots.txt file:

Each group consists of multiple rules or directives (instructions), one directive per line.

A group gives the following information:

  • Who the group applies to (the user agent)
  • Which directories or files that agent can access
  • Which directories or files that agent cannot access

I’ll explain more about the different directives in a robots.txt file next.

Robots.txt Directives

Common syntax used within robots.txt includes the following:

User-agent

User-agent refers to the bot to which you are giving the commands (for example, Googlebot or Bingbot). You can have multiple directives for different user agents. But when you use the * character (as shown in the previous section), that is a catch-all that means all user agents. You can see a list of user agents here.
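To illustrate, here is a hypothetical file with two groups: one set of rules just for Googlebot and a catch-all group for every other bot. The directory names are placeholders for illustration only.

User-agent: Googlebot
Disallow: /not-for-google/

User-agent: *
Disallow: /not-for-other-bots/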

Disallow

The Disallow rule specifies the folder, file, or even an entire directory to exclude from web robot access. Examples include the following:

Allow robots to spider the entire website:

User-agent: *
Disallow:

Disallow all robots from the entire website:

User-agent: *
Disallow: /

Disallow all robots from “/myfolder/” and all subdirectories of “myfolder”:

User-agent: *
Disallow: /myfolder/

Disallow all robots from accessing any file beginning with “myfile.html”:

User-agent: *
Disallow: /myfile.html

Disallow Googlebot from accessing files and folders beginning with “my”:

User-agent: googlebot
Disallow: /my

Allow

This command tells a bot that it can access a subdirectory or webpage even when its parent directory or webpage is disallowed. Google supports it, as do other major crawlers such as Bingbot.

Take the following example: disallow all robots from the /scripts/ folder except page.php:

User-agent: *
Disallow: /scripts/
Allow: /scripts/page.php

Crawl-delay

This tells bots how many seconds to wait before crawling another page. Websites might use it to preserve server bandwidth. Googlebot does not recognize this command; Google asks that you change the crawl rate via Search Console instead. Avoid Crawl-delay if possible, or use it with care, as it can significantly slow the timely and effective crawling of a website.
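As a hypothetical illustration for crawlers that do support the directive (Bingbot, for example), the following group asks the bot to wait 10 seconds between requests; the value is an arbitrary example, not a recommendation.

User-agent: Bingbot
Crawl-delay: 10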

Sitemap

Tell search engine bots where to find your XML sitemap in your robots.txt file. Example:

User-agent: *
Disallow: /directory-name/
Sitemap: https://www.domain.com/sitemap.xml

To learn more about creating XML sitemaps, see this: What Is an XML Sitemap and How do I Make One?

Wildcard Characters

There are two characters that can help direct robots on how to handle specific URL types:

The * character. As mentioned earlier, in a User-agent line it applies directives to all robots with one set of rules. In a URL path, it matches any sequence of characters, which lets you disallow whole groups of URLs.

For example, the following rule would disallow Googlebot from accessing any URL containing “page”:

User-agent: googlebot
Disallow: /*page

The $ character. The $ tells robots that the pattern must match all the way to the end of a URL. For example, you might want to block the crawling of all PDFs on the website:

User-agent: *
Disallow: /*.pdf$

Note that you can combine the $ and * wildcard characters, and that they work in both Allow and Disallow directives.

For example, disallow all URLs ending in “asp”:

User-agent: *
Disallow: /*asp$

  • Because of the trailing $, this rule does not exclude URLs where more characters, such as a query string or a folder path, follow “asp”
  • /pretty-wasp is excluded, because the * wildcard matches the characters preceding “asp”
  • /login.asp is excluded for the same reason
  • /login.asp?forgotten-password=1 is not excluded, because the query string (?forgotten-password=1) means the URL does not end in “asp”
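To combine the wildcards in both directive types, a hypothetical pair of rules like the following would block every PDF on the site except those in a /whitepapers/ folder (both the extension and the folder name are placeholders):

User-agent: *
Disallow: /*.pdf$
Allow: /whitepapers/*.pdf$

Because the Allow rule is the longer, more specific match, it wins for PDFs inside /whitepapers/.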

Not Crawling vs. Not Indexing

If you do not want Google to index a page, there are remedies for that other than the robots.txt file. As Google points out here:

Which method should I use to block crawlers?

  • robots.txt: Use it if crawling of your content is causing issues on your server. For example, you may want to disallow crawling of infinite calendar scripts. You should not use the robots.txt to block private content (use server-side authentication instead), or handle canonicalization. To make sure that a URL is not indexed, use the robots meta tag or X-Robots-Tag HTTP header instead.
  • robots meta tag: Use it if you need to control how an individual HTML page is shown in search results (or to make sure that it’s not shown).
  • X-Robots-Tag HTTP header: Use it if you need to control how non-HTML content is shown in search results (or to make sure that it’s not shown).

And here is more guidance from Google:

Blocking Google from crawling a page is likely to remove the page from Google’s index.
However, robots.txt Disallow does not guarantee that a page will not appear in results: Google may still decide, based on external information such as incoming links, that it is relevant. If you wish to explicitly block a page from being indexed, you should instead use the noindex robots meta tag or X-Robots-Tag HTTP header. In this case, you should not disallow the page in robots.txt, because the page must be crawled in order for the tag to be seen and obeyed.

Tips for Creating a robots.txt without Errors

Here are some tips to keep in mind as you create your robots.txt file (a combined example follows the list):

  • Paths are case-sensitive: Disallow: /Folder/ and Disallow: /folder/ are different rules, and the file itself must be named robots.txt in all lowercase. (Directive names such as Disallow are conventionally capitalized, though crawlers treat them case-insensitively.)
  • Always include a space after the colon in the command.
  • When excluding an entire directory, put a forward slash before and after the directory name, like so: /directory-name/
  • All files not specifically excluded will be included for bots to crawl.
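Putting those tips together, a small hypothetical robots.txt that excludes one directory and one file and points to the sitemap might look like this (all paths and the sitemap URL are placeholders):

User-agent: *
Disallow: /private-directory/
Disallow: /old-page.html
Sitemap: https://www.domain.com/sitemap.xml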

The robots.txt Tester

Always test your robots.txt file. It is more common than you might think for website publishers to get this wrong, which can destroy your SEO strategy (for example, if you disallow the crawling of important pages or the entire website).

Use Google’s robots.txt Tester tool. You can find information about that here.

Robots Exclusion Protocol Guide

If you need a deeper dive than this article, download our Robots Exclusion Protocol Guide. It’s a free PDF that you can save and print for reference to give you lots of specifics on how to build your robots.txt.

Closing Thoughts

The robots.txt file is a seemingly simple file, but it allows website publishers to give complex directives on how they want bots to crawl a website. Getting this file right is critical, as it could obliterate your SEO program if done wrong.

Because there are so many nuances on how to use robots.txt, be sure to read Google’s introduction to robots.txt.

Do you have indexing problems or other issues that need technical SEO expertise? If you’d like a free consultation and services quote, contact us today.

FAQ: How can I optimize my website’s performance with an effective robots.txt file?

Ensuring your website’s optimal performance is paramount to success. A key aspect often overlooked is the strategic use of a robots.txt file. This unassuming text document wields the power to significantly impact your site’s search engine optimization (SEO) and overall performance.

At its core, a robots.txt file is a gatekeeper for search engine bots, guiding them on which parts of your website to crawl and index. By skillfully crafting this file, you can strategically control how search engines interact with your content. This optimization technique is vital for preventing unnecessary strain on your server, ensuring that valuable resources are allocated efficiently.

One essential application of robots.txt optimization is the ability to exclude specific pages or directories from being crawled. This is particularly useful for hiding unimportant or redundant pages, preventing search engines from wasting resources on irrelevant content. For instance, you can prevent video or audio files from being crawled, preserving your server’s bandwidth for more critical components.
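As a hypothetical sketch, media files could be excluded with extension wildcards like these; the extensions shown are only examples and should match the file types you actually want to keep out of the crawl.

User-agent: *
Disallow: /*.mp4$
Disallow: /*.mp3$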

Updating your website can be delicate, often requiring temporary withdrawal of specific pages. By utilizing robots.txt optimization, you can gracefully handle this situation without affecting SEO rankings. Temporarily blocking crawling on pages undergoing updates ensures that search engines won’t index incomplete or inconsistent content, maintaining your site’s credibility.

Moreover, robots.txt optimization empowers you to guide search engines toward your sitemap’s location. This simple step helps search engine bots navigate your site’s structure efficiently, ensuring no valuable content is overlooked. Strategically placing your sitemap in robots.txt enhances the discoverability of your most important pages.

While the benefits of robots.txt optimization are substantial, it’s crucial to proceed cautiously. Improper configuration can inadvertently block important pages, leading to declining search engine rankings. Therefore, seeking the guidance of SEO experts or referring to reputable resources, such as Google’s guidelines, is highly recommended before implementing changes.

A well-crafted robots.txt file is a powerful tool in your SEO arsenal. By optimizing this seemingly unassuming element, you can exert control over how search engines interact with your website, ultimately enhancing performance, resource allocation, and overall user experience.

Step-by-Step Procedure for robots.txt Optimization:

  1. Understand the role of robots.txt in SEO and website performance.
  2. Identify any pages or directories you would like to exclude from crawling.
  3. Create a robots.txt file using any plain-text editor like Notepad or TextEdit.
  4. Specify user-agent directives to target search engine bots (e.g., User-agent: Googlebot).
  5. Utilize the Disallow directive to block access to pages or directories you want to exclude (e.g., Disallow: /videos/).
  6. Implement the Allow directive for specific pages within blocked directories (e.g., Allow: /videos/index.html).
  7. Use the Crawl-delay directive to control the rate at which bots crawl your site, if necessary.
  8. Include the Sitemap directive to guide search engines to your XML sitemap (e.g., Sitemap: https://www.domain.com/sitemap.xml); a combined example follows this list.
  9. Test your robots.txt file using Google’s robots.txt Tester tool to identify any issues or errors.
  10. Upload the robots.txt file to the root directory of your website via FTP or your content management system (CMS).
  11. Monitor your website’s performance and search engine rankings after implementing robots.txt optimization.
  12. Regularly update and refine your robots.txt file as your website’s structure and content evolve.
  13. Consult SEO experts or reputable resources for guidance on best practices and advanced optimization techniques.
  14. Review and analyze your website’s crawl and index statistics to ensure effective robots.txt optimization.
  15. Adjust directives as needed based on changes in your website’s content and goals.
  16. Avoid blocking critical pages that are essential for search engine visibility and user experience.
  17. Continuously stay informed about updates and changes to search engine algorithms that may impact robots.txt optimization.
  18. Prioritize user experience and ensure that any exclusions align with your website’s content strategy.
  19. Regularly audit and maintain your robots.txt file to ensure ongoing optimization and performance.
  20. Keep abreast of emerging trends and best practices in SEO and robots.txt optimization for sustained success.
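Steps 4 through 8 might come together in a file like the hypothetical one below; every path, the crawl-delay value, and the sitemap URL are placeholders to adapt to your own site.

User-agent: Googlebot
Disallow: /videos/
Allow: /videos/index.html

User-agent: Bingbot
Crawl-delay: 10

Sitemap: https://www.domain.com/sitemap.xml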

Bruce Clay is founder and president of Bruce Clay Inc., a global digital marketing firm providing search engine optimization, pay-per-click, social media marketing, SEO-friendly web architecture, and SEO tools and education. Connect with him on LinkedIn or through the BruceClay.com website.
