SEO, in its most fundamental sense, relies on one thing above all others: search engine spiders crawling and indexing your site.
But nearly every website has pages you don't want included in this crawl. In a best-case scenario, these pages aren't actively doing anything to drive traffic to your site, and in a worst-case, they may be diverting traffic away from more important pages.
Thankfully, Google lets webmasters tell search engine bots which pages and content to crawl and which to ignore. There are several ways to do this, the most common being a robots.txt file or the meta robots tag.
We have an excellent and in-depth explanation of the ins and outs of robots.txt, which you should absolutely read.
But in high-level terms, it's a plain text file that lives in your website's root and follows the Robots Exclusion Protocol (REP).
Robots.txt provides crawlers with instructions about the site as a whole, while meta robots tags carry directives for specific pages.
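To make that concrete, a minimal robots.txt file might look like the sketch below. The paths and sitemap URL here are hypothetical placeholders, not recommendations for any particular site:

```
# Applies to all crawlers
User-agent: *
# Keep bots out of the (hypothetical) admin area
Disallow: /admin/
# Everything else is crawlable
Allow: /

# Point crawlers at the sitemap (placeholder URL)
Sitemap: https://www.example.com/sitemap.xml
```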
Some meta robots tags you might use include:

- index, which tells search engines to add the page to their index.
- noindex, which tells them not to add a page to the index or include it in search results.
- follow, which instructs a search engine to follow the links on a page.
- nofollow, which tells it not to follow links.

And there's a whole host of others.
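A meta robots tag combining two of these directives sits in a page's head; for example, to keep a page out of the index and stop its links from being followed:

```html
<meta name="robots" content="noindex, nofollow">
```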
Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there's also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.
What Is The X-Robots-Tag?
The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by spiders. Sent as part of the HTTP header response for a URL, it can control indexing for an entire page, as well as for specific elements on that page.
And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complicated.

But this, of course, raises the question:
When Should You Use The X-Robots-Tag?
According to Google, “Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag.”
While both the meta robots tag and the X-Robots-Tag can set crawling and indexing directives, there are certain situations where you'd want to use the X-Robots-Tag, the two most common being when:
- You want to control how your non-HTML files are crawled and indexed.
- You want to serve directives site-wide instead of at the page level.
For example, if you want to block a specific image or video from being crawled, the HTTP response method makes this easy.
The X-Robots-Tag header is also useful because it allows you to combine multiple tags within an HTTP response, or to use a comma-separated list of directives within one.
Maybe you don't want a certain page to be cached, and you want it to be unavailable after a specific date. You can use a combination of the "noarchive" and "unavailable_after" directives to instruct search engine bots to follow these instructions.
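As a sketch, the HTTP response headers for such a page might include the following (the date shown is a hypothetical example):

```
X-Robots-Tag: noarchive
X-Robots-Tag: unavailable_after: 25 Jun 2025 15:00:00 PST
```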
Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.
The advantage of using the X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML files, as well as to apply directives on a larger, global level.
To help you understand the difference between these directives, it's helpful to categorize them by type. That is, are they crawler directives or indexer directives?
Here's a handy cheat sheet:
| Crawler Directives | Indexer Directives |
| --- | --- |
| Robots.txt: uses the user-agent, allow, disallow, and sitemap directives to specify where on-site search engine bots are allowed and not allowed to crawl. | Meta robots tag: lets you specify and prevent search engines from showing particular pages of a site in search results. |
|  | Nofollow: lets you specify links that should not pass on authority or PageRank. |
|  | X-Robots-Tag: lets you control how specified file types are indexed. |
Where Do You Put The X-Robots-Tag?
Let's say you want to block specific file types. An ideal approach is to add the X-Robots-Tag to your Apache server configuration or to a .htaccess file, which is how the header gets attached to the site's HTTP responses.
Real-World Examples And Uses Of The X-Robots-Tag
So that sounds great in theory, but what does it look like in the real world? Let's take a look.
Let's say we wanted search engines not to index .pdf file types. This configuration on Apache servers would look something like the below:
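A minimal .htaccess sketch for this, assuming Apache's mod_headers module is enabled:

```apache
# Match any file ending in .pdf
<Files ~ "\.pdf$">
  # Attach the X-Robots-Tag header to the response
  Header set X-Robots-Tag "noindex, nofollow"
</Files>
```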
In Nginx, it would look like the below:
```nginx
# Match any URL ending in .pdf (case-insensitive)
location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}
```
Now, let's look at a different scenario. Let's say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:
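In Apache, a sketch of this might look like the following (again assuming mod_headers is enabled; adjust the extension list to match your site):

```apache
# Match common image file extensions
<Files ~ "\.(png|jpe?g|gif)$">
  # Keep these files out of the index
  Header set X-Robots-Tag "noindex"
</Files>
```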
Please note that understanding how these directives work, and the impact they have on one another, is crucial.

For example, what happens if both the X-Robots-Tag and a meta robots tag are in place when crawler bots discover a URL?
If that URL is blocked by robots.txt, then certain indexing and serving directives cannot be discovered and will not be followed.

If directives are to be followed, then the URLs containing those directives cannot be disallowed from crawling.
Check For An X-Robots-Tag
There are a few different methods you can use to check for an X-Robots-Tag on a site.
The easiest way to check is to install a browser extension that will show you X-Robots-Tag information for the URL you're visiting.
Screenshot of Robots Exclusion Checker, December 2022
Another plugin you can use to determine whether an X-Robots-Tag is being used is the Web Developer plugin.
By clicking on the plugin in your browser and navigating to “View Response Headers,” you can see the various HTTP headers being used.
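If you prefer the command line, you can also inspect response headers directly with curl; the URL below is a placeholder, so substitute a page or file from your own site:

```shell
# Fetch only the response headers and filter for the X-Robots-Tag
curl -sI "https://www.example.com/some-file.pdf" | grep -i "x-robots-tag"
```

If the tag is set, this prints the header line and its directives; if nothing is printed, no X-Robots-Tag was returned for that URL.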
Another method that scales well for pinpointing issues on websites with millions of pages is Screaming Frog.

After running a site through Screaming Frog, you can navigate to the “X-Robots-Tag” column.

This will show you which sections of the site are using the tag, along with which specific directives.
Screenshot of Screaming Frog Report. X-Robots-Tag, December 2022

Using X-Robots-Tags On Your Site

Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do just that.

Just be aware: It's not without its risks. It is very easy to make a mistake and deindex your entire site.

That said, if you're reading this piece, you're probably not an SEO beginner. So long as you use it wisely, take your time, and check your work, you'll find the X-Robots-Tag to be a useful addition to your arsenal.

More Resources:

Featured Image: Song_about_summer/