Everything You Need To Know About The X-Robots-Tag HTTP Header

SEO, in its most basic sense, relies on one thing above all others: search engine spiders crawling and indexing your site.

But almost every website has pages that you don’t want included in this exploration.

For example, do you really want your privacy policy or internal search pages showing up in Google results?

In a best-case scenario, these pages do nothing to actively drive traffic to your site, and in a worst-case, they could be diverting traffic from more important pages.

Luckily, Google allows webmasters to tell search engine bots what pages and content to crawl and what to ignore. There are several ways to do this, the most common being a robots.txt file or the meta robots tag.

We have an excellent and detailed explanation of the ins and outs of robots.txt, which you should definitely read.

But in high-level terms, it’s a plain text file that lives in your site’s root and follows the Robots Exclusion Protocol (REP).

Robots.txt provides crawlers with instructions about the site as a whole, while meta robots tags include directions for specific pages.

Some meta robots tags you might use include index, which tells search engines to add the page to their index; noindex, which tells them not to add a page to the index or include it in search results; follow, which instructs a search engine to follow the links on a page; nofollow, which tells it not to follow links; and a whole host of others.

Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there’s also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.

What Is The X-Robots-Tag?

The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by spiders. Sent as part of the HTTP header response for a URL, it controls indexing for an entire page, as well as for specific elements on that page.
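For example, the response headers for a PDF that shouldn’t be indexed might look something like this (a simplified, hypothetical response):

HTTP/1.1 200 OK
Content-Type: application/pdf
X-Robots-Tag: noindex, nofollow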

And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complicated.

But this, of course, raises the question:

When Should You Use The X-Robots-Tag?

According to Google, “Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag.”

While you can apply the same directives with both the meta robots tag and the X-Robots-Tag, there are certain situations where you would want to use the X-Robots-Tag – the two most common being when:

  • You want to control how your non-HTML files are being crawled and indexed.
  • You want to serve directives site-wide instead of on a page level.

For example, if you want to block a specific image or video from being crawled, the HTTP response method makes this easy.
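As a quick sketch (the filename is a hypothetical placeholder), blocking a single video file in an Apache .htaccess file might look like this:

<Files "product-demo.mp4">
  Header set X-Robots-Tag "noindex"
</Files>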

The X-Robots-Tag header is also useful because it allows you to combine multiple tags within an HTTP response, or use a comma-separated list of directives to specify them.

Perhaps you don’t want a certain page to be cached and want it to be unavailable after a certain date. You can use a combination of the “noarchive” and “unavailable_after” directives to instruct search engine bots to follow these instructions.
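A sketch of how that combination might look in an Apache .htaccess file (the page name and date are placeholders):

<Files "spring-sale.html">
  Header set X-Robots-Tag "noarchive, unavailable_after: 25 Jun 2023 15:00:00 PST"
</Files>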

Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.

The advantage of using an X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML files, as well as apply directives on a larger, global level.

To help you understand the difference between these directives, it’s helpful to categorize them by type. That is, are they crawler directives or indexer directives?

Here’s a helpful cheat sheet to explain:

Crawler directives:

  • Robots.txt – uses the user-agent, allow, disallow, and sitemap directives to specify where on-site search engine bots are allowed to crawl and not allowed to crawl.

Indexer directives:

  • Meta robots tag – allows you to specify and prevent search engines from showing particular pages of a site in search results.
  • Nofollow – allows you to specify links that should not pass on authority or PageRank.
  • X-Robots-Tag – allows you to control how specified file types are indexed.

Where Do You Put The X-Robots-Tag?

Let’s say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to an Apache configuration or a .htaccess file.

The X-Robots-Tag can be added to a site’s HTTP responses in an Apache server configuration via a .htaccess file.

Real-World Examples And Uses Of The X-Robots-Tag

That sounds great in theory, but what does it look like in the real world? Let’s take a look.

Let’s say we wanted search engines not to index .pdf file types. This configuration on Apache servers would look something like the below:

<Files ~ "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</Files>

In Nginx, it would look like the below:

location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}

Now, let’s look at a different scenario. Let’s say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:

<Files ~ "\.(png|jpe?g|gif)$">
  Header set X-Robots-Tag "noindex"
</Files>
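The Nginx equivalent, mirroring the earlier PDF example (included here as a sketch), would be:

location ~* \.(png|jpe?g|gif)$ {
  add_header X-Robots-Tag "noindex";
}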

Please note that understanding how these directives work and the impact they have on one another is crucial.

For example, what happens if both the X-Robots-Tag and a meta robots tag are present when crawler bots discover a URL?

If that URL is blocked from crawling by robots.txt, then certain indexing and serving directives cannot be discovered and will not be followed.

If directives are to be followed, then the URLs containing them cannot be disallowed from crawling.
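To illustrate with a hypothetical path: if robots.txt contains the rule below, crawlers that respect it will never request anything under /downloads/, so an X-Robots-Tag noindex header on those files would never be seen.

User-agent: *
Disallow: /downloads/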

Check For An X-Robots-Tag

There are a few different methods you can use to check for an X-Robots-Tag on a site.

The easiest way to check is to install a browser extension that will show you X-Robots-Tag information about the URL.

Screenshot of Robots Exclusion Checker, December 2022

Another plugin you can use to determine whether an X-Robots-Tag is being used, for example, is the Web Developer plugin.

By clicking on the plugin in your browser and navigating to “View Response Headers,” you can see the various HTTP headers being used.
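If you prefer the command line, you can also fetch the response headers directly and look for an X-Robots-Tag line in the output (the URL here is just a placeholder):

curl -I https://example.com/whitepaper.pdf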

Another method that can be used at scale, to identify issues on sites with millions of pages, is Screaming Frog.

After running a site through Screaming Frog, you can navigate to the “X-Robots-Tag” column.

This will show you which sections of the site are using the tag, along with which specific directives.

Screenshot of Screaming Frog report showing the X-Robots-Tag column, December 2022

Using X-Robots-Tags On Your Site

Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do exactly that.

Just be aware: it’s not without its risks. It is very easy to make a mistake and deindex your entire site.

That said, if you’re reading this piece, you’re probably not an SEO beginner. So long as you use it wisely, take your time, and check your work, you’ll find the X-Robots-Tag to be a useful addition to your arsenal.

Featured Image: Song_about_summer