What Is robots.txt? A Beginner’s Guide to Nailing It with Examples
In this article, I will explain why every website needs a robots.txt and how to create one (without causing problems for SEO). I’ll answer common FAQs and include examples of how to execute it properly for your website. I’ll also give you a downloadable guide that covers all the details.
- What is robots.txt?
- Why is robots.txt important?
- But, is robots.txt necessary?
- What problems can occur with robots.txt?
- How does robots.txt work?
- Tips for creating a robots.txt without errors
- The robots.txt Tester
- Robots Exclusion Protocol Guide (free download)
What Is robots.txt?
Robots.txt is a text file that website publishers create and save at the root of their website. Its purpose is to tell automated web crawlers, such as search engine bots, which pages not to crawl on the website. This is also known as the Robots Exclusion Protocol.
Robots.txt does not guarantee that excluded URLs won’t be indexed for search. That’s because search engine spiders can still find out those pages exist via other webpages that are linking to them. Or, the pages may still be indexed from the past (more on that later).
Robots.txt also does not absolutely guarantee that a bot won’t crawl an excluded page, since compliance is voluntary. It would be rare for the major search engine bots not to adhere to your directives. But bad web robots, such as spambots and malware or spyware crawlers, often ignore them.
Remember, the robots.txt file is publicly accessible. You can just add /robots.txt to the end of a domain URL to see its robots.txt file (like ours here). So do not include any files or folders that may include business-critical information. And do not rely on the robots.txt file to protect private or sensitive data from search engines.
Why Is robots.txt Important?
Search engine bots have the directive to crawl and index webpages. With a robots.txt file, you can selectively exclude pages, directories or the entire site from being crawled.
This can be handy in many different situations. Here are some situations you’ll want to use your robots.txt:
- To block certain pages or files that should not be crawled/indexed (such as unimportant or similar pages)
- To stop crawling certain parts of the website while you’re updating them
- To tell the search engines the location of your sitemap
- To tell the search engines to ignore certain files on the site like videos, audio files, images, PDFs, etc., and not have them show up in the search results
- To help ensure your server is not overwhelmed with requests*
*Using robots.txt to block off unnecessary crawling is one way to reduce the strain on your server and help bots more efficiently find your good content. Google provides a handy chart here. Also, Bing supports the crawl-delay directive, which can help to prevent too many requests and avoid overwhelming the server.
But, Is robots.txt Necessary?
Every website should have a robots.txt file even if it is blank. When search engine bots come to your website, the first thing they look for is a robots.txt file.
If none exists, then the spiders are served a 404 (not found) error. Although Google says that Googlebot can go on and crawl the site even if there’s no robots.txt file, we believe that it is better to have the first file that a bot requests load rather than produce a 404 error.
What Problems Can Occur with robots.txt?
This simple little file can cause problems for SEO if you’re not careful. Here are a couple of situations to watch out for.
1. Blocking your whole site by accident
This gotcha happens more often than you’d think. Developers can use robots.txt to hide a new or redesigned section of the site while they’re developing it, but then forget to unblock it after launch. If it’s an existing site, this mistake can cause search engine rankings to suddenly tank.
It’s handy to be able to turn off crawling while you’re preparing a new site or site section for launch. Just remember to change that command in your robots.txt when the site goes live.
2. Excluding pages that are already indexed
Blocking pages in robots.txt after they have already been indexed leaves them stuck in Google’s index.
If you exclude pages that are already in the search engine’s index, they’ll stay there. In order to actually remove them from the index, set a meta robots “noindex” tag on the pages themselves and let Google crawl and process it. Once the pages have dropped from the index, then block them in robots.txt to prevent Google from requesting them in the future.
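For reference, the meta robots tag is a single line placed in the <head> of each page you want dropped from the index:

```
<meta name="robots" content="noindex">
```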
How Does robots.txt Work?
To create a robots.txt file, you can use a simple text editor like Notepad or TextEdit. Save it with the filename robots.txt and upload it to the root of your website as www.domain.com/robots.txt, since this is where spiders will look for it.
A simple robots.txt file would look something like this:
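```
# Example only – the blocked directory and sitemap URL are placeholders
User-agent: *
Disallow: /cgi-bin/
Sitemap: https://www.domain.com/sitemap.xml
```

The first line names which bots the rules apply to (* means all of them), the second blocks one directory, and the last tells crawlers where to find the sitemap.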
Google gives a good explanation of what the different lines in a group mean within the robots.txt file in its help file on creating robots.txt:
Each group consists of multiple rules or directives (instructions), one directive per line.
A group gives the following information:
- Who the group applies to (the user agent)
- Which directories or files that agent can access
- Which directories or files that agent cannot access
I’ll explain more about the different directives in a robots.txt file next.
Common syntax used within robots.txt includes the following:
User-agent refers to the bot to which you are giving the commands (for example, Googlebot or Bingbot). You can have multiple directives for different user agents. But when you use the * character (as shown in the previous section), that is a catch-all that means all user agents. You can see a list of user agents here.
The Disallow rule specifies the folder, file or even an entire directory to exclude from web robot access. Examples include the following:
Allow robots to spider the entire website:
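```
User-agent: *
Disallow:
```

An empty Disallow value excludes nothing, so everything can be crawled.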
Disallow all robots from the entire website:
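```
User-agent: *
Disallow: /
```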
Disallow all robots from “/myfolder/” and all subdirectories of “myfolder”:
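```
User-agent: *
Disallow: /myfolder/
```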
Disallow all robots from accessing any file beginning with “myfile.html”:
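```
User-agent: *
Disallow: /myfile.html
```

Because robots.txt rules are prefix matches, this blocks /myfile.html as well as URLs that begin with it, such as /myfile.html?page=2.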
Disallow Googlebot from accessing files and folders beginning with “my”:
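```
User-agent: Googlebot
Disallow: /my
```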
The Allow rule tells a bot that it can access a subdirectory or webpage even when its parent directory or webpage is disallowed. Google and the other major search engines support it.
Take the following example: Disallow all robots from the /scripts/folder except page.php:
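```
User-agent: *
Allow: /scripts/page.php
Disallow: /scripts/
```

The more specific Allow rule takes precedence over the broader Disallow.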
The Crawl-delay directive tells bots how long to wait before crawling the next page. Websites might use this to preserve server bandwidth. Googlebot does not recognize this command, and Google asks that you change the crawl rate via Search Console. Avoid Crawl-delay if possible or use it with care, as it can significantly impact the timely and effective crawling of a website.
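For bots that do support it (such as Bingbot), the value is the number of seconds to wait between requests:

```
User-agent: Bingbot
Crawl-delay: 10
```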
The Sitemap directive tells search engine bots where to find your XML sitemap. Example:
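```
Sitemap: https://www.domain.com/sitemap.xml
```

Use the full, absolute URL of your sitemap (the domain above is a placeholder).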
To learn more about creating XML sitemaps, see this: What Is an XML Sitemap and How do I Make One?
There are two characters that can help direct robots on how to handle specific URL types:
The * character. As mentioned earlier, it can apply directives to multiple robots with one set of rules. The other use is to match a sequence of characters in a URL to disallow those URLs.
For example, the following rule would disallow Googlebot from accessing any URL containing “page”:
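```
User-agent: Googlebot
Disallow: /*page
```

The * matches any sequence of characters, so this rule covers URLs such as /page/, /mypage.html and /category/pages.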
The $ character. The $ tells robots to match any sequence at the end of a URL. For example, you might want to block the crawling of all PDFs on the website:
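```
User-agent: *
Disallow: /*.pdf$
```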
Note that the $ and * wildcard characters can be combined in one rule, and they work in both Allow and Disallow directives.
For example, Disallow all asp files:
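```
User-agent: *
Disallow: /*asp$
```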
- This will not exclude files with query strings or folders, because the $ designates the end of the URL
- /pretty-wasp – excluded due to the * wildcard preceding “asp”
- /login.asp – excluded due to the * wildcard preceding “asp”
- /login.asp?forgotten-password=1 – not excluded, because the $ requires the URL to end in “asp” and this URL includes a query string
Not Crawling vs. Not Indexing
If you do not want Google to index a page, there are remedies other than the robots.txt file. As Google points out here:
Which method should I use to block crawlers?
- robots.txt: Use it if crawling of your content is causing issues on your server. For example, you may want to disallow crawling of infinite calendar scripts. You should not use the robots.txt to block private content (use server-side authentication instead), or handle canonicalization. To make sure that a URL is not indexed, use the robots meta tag or X-Robots-Tag HTTP header instead.
- robots meta tag: Use it if you need to control how an individual HTML page is shown in search results (or to make sure that it’s not shown).
- X-Robots-Tag HTTP header: Use it if you need to control how non-HTML content is shown in search results (or to make sure that it’s not shown).
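As a sketch, the noindex signal for a non-HTML file (such as a PDF) is delivered as a header in the HTTP response:

```
X-Robots-Tag: noindex
```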
And here is more guidance from Google:
Blocking Google from crawling a page is likely to remove the page from Google’s index.
However, robots.txt Disallow does not guarantee that a page will not appear in results: Google may still decide, based on external information such as incoming links, that it is relevant. If you wish to explicitly block a page from being indexed, you should instead use the noindex robots meta tag or X-Robots-Tag HTTP header. In this case, you should not disallow the page in robots.txt, because the page must be crawled in order for the tag to be seen and obeyed.
Tips for Creating a robots.txt without Errors
Here are some tips to keep in mind as you create your robots.txt file:
- Values are case sensitive: /Directory/ and /directory/ are different paths. By convention, write directives with a capital first letter, as in Disallow.
- Always include a space after the colon in the command.
- When excluding an entire directory, put a forward slash before and after the directory name, like so: /directory-name/
- All files not specifically excluded will be included for bots to crawl.
The robots.txt Tester
Always test your robots.txt file. It is more common than you might think for website publishers to get this wrong, which can destroy your SEO strategy (for example, if you disallow the crawling of important pages or the entire website).
Use Google’s robots.txt Tester tool. You can find information about that here.
Robots Exclusion Protocol Guide
If you need a deeper dive than this article, download our Robots Exclusion Protocol Guide. It’s a free PDF that you can save and print for reference to give you lots of specifics on how to build your robots.txt.
The robots.txt file is a seemingly simple file, but it allows website publishers to give complex directives on how they want bots to crawl a website. Getting this file right is critical, as it could obliterate your SEO program if done wrong.
Because there are so many nuances on how to use robots.txt, be sure to read Google’s introduction to robots.txt.
Do you have indexing problems or other issues that need technical SEO expertise? If you’d like a free consultation and services quote, contact us today.