What Is a Robots.txt File?

Robots.txt is a text file with instructions for search engine robots that tells them which pages they should and shouldn't crawl.

These instructions are specified by “allowing” or “disallowing” the behavior of certain (or all) bots.

A robots.txt file looks like this:

robots.txt example
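The screenshot isn't reproduced in this text version. A minimal robots.txt file illustrating the format (the paths and sitemap URL here are placeholders, not the contents of the original image) might look like this:

```
User-agent: *
Disallow: /admin/

Sitemap: https://www.example.com/sitemap.xml
```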

Robots.txt files may seem complicated, but the syntax (computer language) is straightforward. We'll get into those details later.

In this article we'll cover:

  • Why robots.txt files are important
  • How robots.txt files work
  • How to create a robots.txt file
  • Robots.txt best practices

Why Is Robots.txt Important?

A robots.txt file helps manage web crawler activities so they don't overwork your website or index pages not meant for public view.

Below are a few reasons to use a robots.txt file:

1. Optimize Crawl Budget

Crawl budget refers to the number of pages Google will crawl on your site within a given time frame. The number can vary based on your site's size, health, and number of backlinks.

If your site's number of pages exceeds your site's crawl budget, you could have unindexed pages on your site.

Unindexed pages won't rank, and ultimately, you'll waste time creating pages users won't see.

Blocking unnecessary pages with robots.txt allows Googlebot (Google's web crawler) to spend more crawl budget on pages that matter.

Note: Most website owners don't need to worry too much about crawl budget, according to Google. This is primarily a concern for larger sites with thousands of URLs.

2. Block Duplicate and Non-Public Pages

Crawl bots don’t have to sift by each web page in your web site.As a result of not all of them have been created to be served within the search engine outcomes pages (SERPs).

Like staging websites, inner search outcomes pages, duplicate pages, or login pages. Some content material administration methods deal with these inner pages for you. 

WordPress, for instance, robotically disallows the login web page /wp-admin/ for all crawlers. 

Robots.txt permits you to block these pages from crawlers.

3. Hide Resources

Sometimes you want to exclude resources such as PDFs, videos, and images from search results, either to keep them private or to have Google focus on more important content.

In either case, robots.txt keeps them from being crawled (and therefore indexed).

How Does a Robots.txt File Work?

Robots.txt files tell search engine bots which URLs they can crawl and, more importantly, which ones to ignore.

Search engines serve two main purposes:

  1. Crawling the web to discover content
  2. Indexing and delivering content to searchers looking for information

As they crawl webpages, search engine bots discover and follow links. This process takes them from site A to site B to site C across millions of links, pages, and websites.

But if a bot finds a robots.txt file, it will read it before doing anything else.

The syntax is straightforward.

Assign rules by identifying the user-agent (the search engine bot), followed by the directives (the rules).

You can also use the asterisk (*) wildcard to assign directives to every user-agent, which applies the rule to all bots.

For example, the instruction below allows all bots except DuckDuckGo to crawl your site:

all bots except DuckDuckGo instructed to crawl the site
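The pictured file isn't reproduced here. A rule set matching that description could look like this (DuckDuckBot is DuckDuckGo's crawler name; this is a sketch, not the original screenshot):

```
User-agent: DuckDuckBot
Disallow: /

User-agent: *
Allow: /
```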

Note: Although a robots.txt file provides instructions, it can't enforce them. Think of it as a code of conduct. Good bots (like search engine bots) will follow the rules, but bad bots (like spam bots) will ignore them.
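You can see this rule-following in action with Python's built-in robots.txt parser, which well-behaved crawlers (or your own scripts) can use to check a URL before fetching it. The rules and URLs below are made-up examples:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules, as they might be served at example.com/robots.txt
rules = """\
User-agent: *
Disallow: /admin/
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# A polite bot consults the rules before every request
print(parser.can_fetch("MyBot", "https://example.com/admin/login"))  # False
print(parser.can_fetch("MyBot", "https://example.com/blog/post"))    # True
```

Nothing stops a misbehaving bot from simply skipping this check, which is exactly why robots.txt is a code of conduct rather than an access control.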

Semrush bots crawl the web to gather insights for our website optimization tools, such as Site Audit, Backlink Audit, On Page SEO Checker, and more.

Our bots respect the rules outlined in your robots.txt file.

If you block our bots from crawling your website, they won't crawl it.

But it also means you can't use some of our tools to their full potential.

For example, if you blocked our SiteAuditBot from crawling your website, you couldn't audit your site with our Site Audit tool to analyze and fix technical issues.

Errors section in Site Audit tool

If you blocked our SemrushBot-SI from crawling your site, you couldn't use the On Page SEO Checker tool effectively.

And you'd lose out on generating optimization ideas to improve your webpage rankings.

On Page SEO Checker tool

How to Find a Robots.txt File

The robots.txt file is hosted on your server, just like any other file on your website.

View the robots.txt file for any given website by typing the full URL for the homepage and adding “/robots.txt” at the end, like this: https://semrush.com/robots.txt.

https://semrush.com/robots.txt example

Note: A robots.txt file should always live at the root domain level. For www.example.com, the robots.txt file lives at www.example.com/robots.txt. Place it anywhere else, and crawlers may assume you don't have one.

Before learning how to create a robots.txt file, let's look at the syntax they use.

Robots.txt Syntax

A robots.txt file is made up of:

  • One or more blocks of “directives” (rules)
  • Each with a specified “user-agent” (search engine bot)
  • And an “allow” or “disallow” instruction

A simple block can look like this:

User-agent: Googlebot
Disallow: /not-for-google
User-agent: DuckDuckBot
Disallow: /not-for-duckduckgo
Sitemap: https://www.yourwebsite.com/sitemap.xml

The User-Agent Directive

The first line of every directives block is the user-agent, which identifies the crawler.

If you want to tell Googlebot not to crawl your WordPress admin page, for example, your directive will start with:

User-agent: Googlebot
Disallow: /wp-admin/

Note: Most search engines have multiple crawlers. They use different crawlers for standard indexing, images, videos, etc.

When multiple directives are present, the bot may choose the most specific block of directives available.

Let's say you have three sets of directives: one for *, one for Googlebot, and one for Googlebot-Image.

If the Googlebot-News user agent crawls your site, it will follow the Googlebot directives.

On the other hand, the Googlebot-Image user agent will follow the more specific Googlebot-Image directives.
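As a sketch, such a three-group file might look like this (the disallowed paths are placeholders):

```
User-agent: *
Disallow: /private/

User-agent: Googlebot
Disallow: /not-for-google/

User-agent: Googlebot-Image
Disallow: /images/
```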

The Disallow Robots.txt Directive

The second line of a robots.txt directive is the “Disallow” line.

You can have multiple disallow directives that specify which parts of your site the crawler can't access.

An empty “Disallow” line means you're not disallowing anything: a crawler can access all sections of your site.

For example, if you wanted to allow all search engines to crawl your entire site, your block would look like this:

User-agent: *
Allow: /

If you wanted to block all search engines from crawling your site, your block would look like this:

User-agent: *
Disallow: /

Note: Directives such as “Allow” and “Disallow” aren't case-sensitive. But the values within each directive are.

For example, /photo/ is not the same as /Photo/.

However, you often find “Allow” and “Disallow” directives capitalized to make the file easier for humans to read.

The Allow Directive

The “Allow” directive permits search engines to crawl a subdirectory or specific page, even in an otherwise disallowed directory.

For example, if you want to prevent Googlebot from accessing every post on your blog except for one, your directive might look like this:

User-agent: Googlebot
Disallow: /blog
Allow: /blog/example-post

Note: Not all search engines recognize this command. But Google and Bing do support this directive.

The Sitemap Directive

The Sitemap directive tells search engines (specifically Bing, Yandex, and Google) where to find your XML sitemap.

Sitemaps generally include the pages you want search engines to crawl and index.

This directive lives at the top or bottom of a robots.txt file and looks like this:

The Sitemap directive example
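The screenshot isn't shown here; the directive itself is a single line pointing at the sitemap's full URL, for example:

```
Sitemap: https://www.yourwebsite.com/sitemap.xml
```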

Adding a Sitemap directive to your robots.txt file is a quick alternative. But you can (and should) also submit your XML sitemap to each search engine using their webmaster tools.

Search engines will crawl your site eventually, but submitting a sitemap speeds up the crawling process.

Crawl-Delay Directive

The crawl-delay directive instructs crawlers to delay their crawl rates to avoid overtaxing a server (and slowing down your site).

Google no longer supports the crawl-delay directive. If you want to set your crawl rate for Googlebot, you'll have to do it in Search Console.

Bing and Yandex, on the other hand, do support the crawl-delay directive. Here's how to use it.

Let's say you want a crawler to wait 10 seconds after each crawl action. Set the delay to 10, like so:

User-agent: *
Crawl-delay: 10

Noindex Directive

The robots.txt file tells a bot what it can or can't crawl, but it can't tell a search engine which URLs not to index and show in search results.

The page will still show up in search results, but the bot won't know what's on it, so your page will look like this:

Example of page indexed and blocked by robots.txt in search results

Google never officially supported this directive, but on September 1, 2019, Google announced that this directive is not supported.

If you want to reliably exclude a page or file from appearing in search results, avoid this directive altogether and use a meta robots noindex tag.
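The noindex rule goes in the HTML head of the page you want excluded. Note that the page must remain crawlable (not blocked in robots.txt), or the crawler will never see the tag:

```html
<!-- In the <head> of the page to exclude from search results -->
<meta name="robots" content="noindex">
```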

How to Create a Robots.txt File

Use a robots.txt generator tool or create one yourself.

Here's how:

1. Create a File and Name It Robots.txt

Start by opening a .txt document within a text editor or web browser.

Note: Don't use a word processor, as they often save files in a proprietary format that can add random characters.

Next, name the document robots.txt.

Now you're ready to start typing directives.

2. Add Directives to the Robots.txt File

A robots.txt file consists of one or more groups of directives, and each group consists of multiple lines of instructions.

Each group begins with a “user-agent” and has the following information:

  • Who the group applies to (the user-agent)
  • Which directories (pages) or files the agent can access
  • Which directories (pages) or files the agent can't access
  • A sitemap (optional) to tell search engines which pages and files you deem important

Crawlers ignore lines that don't match these directives.

For instance, let’s say you don’t need Google crawling your /purchasers/ listing as a result of it’s only for inner use.

The primary group would look one thing like this: 

Person-agent: Googlebot
Disallow: /purchasers/

Extra directions may be added in a separate line under, like so:

Person-agent: Googlebot
Disallow: /purchasers/
Disallow: /not-for-google

When you’re accomplished with Google’s particular directions, hit enter twice to create a brand new group of directives. 

Let’s make this one for all engines like google and stop them from crawling your /archive/ and /help/ directories as a result of they’re for inner use solely. 

It will seem like this:

Person-agent: Googlebot
Disallow: /purchasers/
Disallow: /not-for-google
Person-agent: *
Disallow: /archive/
Disallow: /help/

When you’re completed, add your sitemap.

Your completed robots.txt file would look one thing like this:

Person-agent: Googlebot
Disallow: /purchasers/
Disallow: /not-for-google
Person-agent: *
Disallow: /archive/
Disallow: /help/
Sitemap: https://www.yourwebsite.com/sitemap.xml

Save your robots.txt file. Bear in mind, it should be named robots.txt.

Be aware: Crawlers learn from prime to backside and match with the primary most particular group of guidelines. So, begin your robots.txt file with particular consumer brokers first, after which transfer on to the extra basic wildcard (*) that matches all crawlers.

3. Upload the Robots.txt File

After you've saved the robots.txt file to your computer, upload it to your website and make it available for search engines to crawl.

Unfortunately, there's no universal tool for this step.

Uploading the robots.txt file depends on your site's file structure and hosting.

Search online or reach out to your hosting provider for help with uploading your robots.txt file.

For example, you can search for “upload robots.txt file to WordPress.”

Below are some articles explaining how to upload your robots.txt file to the most popular platforms:

After uploading, check if anyone can see it and if Google can read it.

Here's how.

4. Test Your Robots.txt

First, test whether your robots.txt file is publicly accessible (i.e., whether it was uploaded correctly).

Open a private window in your browser and search for your robots.txt file.

For example, https://semrush.com/robots.txt

Example of https://semrush.com/robots.txt search

If you see your robots.txt file with the content you added, you're ready to test the markup.

Google offers two options for testing robots.txt markup:

  1. The robots.txt Tester in Search Console
  2. Google's open-source robots.txt library (advanced)

Because the second option is geared toward advanced developers, let's test your robots.txt file in Search Console.

Note: You must have a Search Console account set up to test your robots.txt file.

Go to the robots.txt Tester and click on “Open robots.txt Tester.”

Open robots.txt Tester

For those who haven’t linked your web site to your Google Search Console account, you’ll want so as to add a property first.

Add a property option shown

Then, confirm you’re the web site’s actual proprietor.

Verification step recommended method

Be aware: Google is planning to close down this setup wizard. So sooner or later, you’ll must immediately confirm your property within the Search Console. Learn our full guide to Google Search Console to learn the way.

You probably have present verified properties, choose one from the drop-down listing on the Tester’s homepage.

Select property from the drop-down list

The Tester will identify syntax warnings or logic errors and display the total number of warnings and errors below the editor.

Display of warnings and errors

You can edit errors or warnings directly on the page and retest as you go.

Any changes made on the page aren't saved to your site. The tool doesn't change the actual file on your site. It only tests against the copy hosted in the tool.

To implement any changes, copy and paste the edited test copy into the robots.txt file on your site.

Semrush’s Site Audit software can verify for points concerning your robots.txt file. 

First, set up a project in the tool and audit your web site.

As soon as full, navigate to the “Points” tab and seek for “robots.txt.”

Search for “robots.txt” in Site Audit

Click on on the “Robots.txt file has format errors” hyperlink if it seems that your file has format errors.

“Robots.txt file has format errors”

You’ll see an inventory of particular invalid strains.

List of invalid lines

You possibly can click on “Why and find out how to repair it” to get particular directions on find out how to repair the error.

“Why and how to fix it” section

Checking your robots.txt file for points is essential, as even minor errors can negatively have an effect on your web site’s indexability.

Robots.txt Best Practices

Use New Lines for Each Directive

Each directive should sit on a new line.

Otherwise, search engines won't be able to read them, and your instructions will be ignored.

Incorrect:

User-agent: * Disallow: /admin/
Disallow: /directory/

Correct:

User-agent: *
Disallow: /admin/
Disallow: /directory/

Use Each User-Agent Once

Bots don't mind if you enter the same user-agent multiple times.

But referencing it only once keeps things neat and simple, and reduces the chance of human error.

Confusing:

User-agent: Googlebot
Disallow: /example-page
User-agent: Googlebot
Disallow: /example-page-2

Notice how the Googlebot user-agent is listed twice.

Clear:

User-agent: Googlebot
Disallow: /example-page
Disallow: /example-page-2

In the first example, Google would still follow the instructions and not crawl either page.

But writing all directives under the same user-agent is cleaner and helps you stay organized.

Use Wildcards to Clarify Directions

You can use wildcards (*) to apply a directive to all user-agents and match URL patterns.

For example, to prevent search engines from accessing URLs with parameters, you could technically list them out one by one.

But that's inefficient. You can simplify your directions with a wildcard.

Inefficient:

User-agent: *
Disallow: /shoes/vans?
Disallow: /shoes/nike?
Disallow: /shoes/adidas?

Efficient:

User-agent: *
Disallow: /shoes/*?

The above example blocks all search engine bots from crawling all URLs under the /shoes/ subfolder that contain a question mark.

Use ‘$’ to Indicate the End of a URL

Adding “$” indicates the end of a URL.

For example, if you want to block search engines from crawling all .jpg files on your site, you can list them individually.

But that would be inefficient.

Inefficient:

User-agent: *
Disallow: /photo-a.jpg
Disallow: /photo-b.jpg
Disallow: /photo-c.jpg

Instead, add the “$” feature, like so:

Efficient:

User-agent: *
Disallow: /*.jpg$

Note: In this example, /dog.jpg can't be crawled, but /dog.jpg?p=32414 can be, because it doesn't end with “.jpg”.

The “$” expression is a helpful feature in specific circumstances such as the above. But it can also be dangerous.

You can easily unblock things you didn't mean to, so be prudent in its application.
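To make the * and $ semantics concrete, here is a small illustrative matcher (a sketch, not any search engine's actual implementation) that translates a robots.txt path pattern into a regular expression:

```python
import re

def robots_match(pattern: str, path: str) -> bool:
    """Illustrative robots.txt pattern matcher: '*' matches any run of
    characters, and a trailing '$' anchors the match at the URL's end."""
    regex = ""
    for ch in pattern:
        if ch == "*":
            regex += ".*"   # wildcard: any sequence of characters
        elif ch == "$":
            regex += "$"    # anchor: the URL must end here
        else:
            regex += re.escape(ch)
    return re.match(regex, path) is not None

print(robots_match("/shoes/*?", "/shoes/nike?color=red"))  # True
print(robots_match("/*.jpg$", "/dog.jpg"))                 # True
print(robots_match("/*.jpg$", "/dog.jpg?p=32414"))         # False
```

The last call shows the note above in action: /dog.jpg?p=32414 doesn't end in “.jpg”, so the $-anchored rule doesn't match it.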

Use the Hash (#) to Add Comments

Crawlers ignore everything that starts with a hash (#).

So, developers often use a hash to add a comment in the robots.txt file. It helps keep the file organized and easy to read.

To add a comment, begin the line with a hash (#).

Like this:

User-agent: *
#Landing Pages
Disallow: /landing/
Disallow: /lp/
Disallow: /files/
Disallow: /private-files/
#Websites
Allow: /website/*
Disallow: /website/search/*

Developers sometimes include funny messages in robots.txt files because they know users rarely see them.

For example, YouTube's robots.txt file reads: “Created in the distant future (the year 2000) after the robot uprising of the mid 90's which wiped out all humans.”

YouTube’s robots.txt file example

And Nike's robots.txt reads “just crawl it” (a nod to its “just do it” tagline) and includes its logo.

Nike’s robots.txt example

Use Separate Robots.txt Files for Different Subdomains

Robots.txt files control crawling behavior only on the subdomain where they're hosted.

To control crawling on a different subdomain, you'll need a separate robots.txt file.

So, if your main site lives on domain.com and your blog lives on the subdomain blog.domain.com, you'd need two robots.txt files.

One goes in the main domain's root directory, and the other in your blog's root directory.

Keep Learning

Now that you have a good understanding of how robots.txt files work, here are a few additional resources to continue learning:

