SEO, in its most basic sense, relies upon one thing above all others: search engine spiders crawling and indexing your site.
But almost every website has pages that you don't want included in this exploration.
In a best-case scenario, these pages do nothing to actively drive traffic to your site, and in a worst-case, they could divert traffic away from more important pages.
Luckily, Google allows webmasters to tell search engine bots which pages and content to crawl and which to ignore. There are several ways to do this, the most common being a robots.txt file or the meta robots tag.
We have an outstanding and comprehensive description of the ins and outs of robots.txt, which you should definitely check out.
But in top-level terms, it's a plain text file that lives in your site's root and follows the Robots Exclusion Protocol (REP).
Robots.txt gives crawlers instructions about the site as a whole, while meta robots tags carry directions for specific pages.
Some meta robots tags you might use include index, which tells search engines to add the page to their index; noindex, which tells them not to add a page to the index or include it in search results; follow, which instructs a search engine to follow the links on a page; nofollow, which tells it not to follow links; and a whole host of others.
Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there's also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.
What Is The X-Robots-Tag?
The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by spiders. Sent as part of the HTTP header response for a URL, it controls indexing for an entire page, as well as for specific elements on that page.
And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complex.
But this, of course, raises the question:
When Should You Use The X-Robots-Tag?
According to Google, "Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag."
While the meta robots tag and the X-Robots-Tag can express the same directives, there are certain situations where you would want to use the X-Robots-Tag, the two most common being when:
- You want to control how your non-HTML files are crawled and indexed.
- You want to serve directives site-wide rather than at the page level.
For example, if you want to block a specific image or video from being crawled, the HTTP response method makes this easy.
The X-Robots-Tag header is also useful because it allows you to combine multiple tags within an HTTP response, using a comma-separated list to specify directives.
Maybe you don't want a certain page to be cached and also want it to be unavailable after a specific date. You can use a combination of the "noarchive" and "unavailable_after" directives to instruct search engine bots to follow these instructions.
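As a sketch of what that combination could look like in an Apache configuration (assuming mod_headers is enabled; the date below is purely illustrative):

```
# Serve a combined X-Robots-Tag for every response in this scope.
# "noarchive" blocks the cached copy; "unavailable_after" drops the
# page from results after the given (illustrative) date.
Header set X-Robots-Tag "noarchive, unavailable_after: 25 Jun 2025 15:00:00 GMT"
```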
Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.
The advantage of using an X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML content, as well as apply directives on a larger, global level.
To help you understand the difference between these directives, it's helpful to categorize them by type. That is, are they crawler directives or indexer directives?
Here's a handy cheat sheet to explain:
Crawler directives:

- Robots.txt – uses the user-agent, allow, disallow, and sitemap directives to specify where on the site search engine bots are and are not allowed to crawl.

Indexer directives:

- Meta robots tag – allows you to specify which pages search engines should show in search results, and to prevent them from showing specific pages.
- Nofollow – allows you to specify links that should not pass on authority or PageRank.
- X-Robots-Tag – allows you to control how specified file types are indexed.
Where Do You Put The X-Robots-Tag?
Let's say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to an Apache configuration or a .htaccess file.
The X-Robots-Tag can be added to a site's HTTP responses in an Apache server configuration via the .htaccess file.
Real-World Examples And Uses Of The X-Robots-Tag
So that sounds great in theory, but what does it look like in the real world? Let's take a look.
Let's say we wanted search engines not to index .pdf file types. This configuration on Apache servers would look something like the below:
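A minimal sketch of such a rule for Apache/.htaccess, assuming mod_headers is enabled:

```
# Match every URL ending in .pdf and attach the header to its response.
<Files ~ "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</Files>
```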
In Nginx, it would look like the below:

```
location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}
```
Now, let's look at a different scenario. Let's say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:
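A sketch for Apache, again assuming mod_headers; the regular expression is one way to match the common image extensions:

```
# Match .png, .jpg/.jpeg, and .gif files and keep them out of the index.
<Files ~ "\.(png|jpe?g|gif)$">
  Header set X-Robots-Tag "noindex"
</Files>
```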
Please note that understanding how these directives work, and the impact they have on one another, is crucial.
For example, what happens if both an X-Robots-Tag and a meta robots tag are present when crawler bots discover a URL?
If that URL is blocked from crawling via robots.txt, then the crawler never sees your indexing and serving directives, and so they cannot be followed.
If you need indexing and serving directives to be followed, the URLs containing them cannot be disallowed from crawling.
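As a sketch of this interplay (the paths here are hypothetical), a robots.txt like the following would prevent any X-Robots-Tag on those URLs from ever being seen:

```
# /private/ is never crawled, so an X-Robots-Tag served on those
# URLs would never be read by the bot. To have a "noindex"
# X-Robots-Tag honored on a path, it must remain crawlable.
User-agent: *
Disallow: /private/
```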
Check For An X-Robots-Tag
There are a few different methods that can be used to check for an X-Robots-Tag on a site.
The easiest way to check is to install a browser extension that will show you X-Robots-Tag information about the URL.
Screenshot of Robots Exclusion Checker, December 2022
Another plugin you can use to determine whether an X-Robots-Tag is being used is the Web Developer plugin.
By clicking on the plugin in your browser and navigating to "View Response Headers," you can see the various HTTP headers being used.
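If you prefer checking outside the browser, a short script can pull the directives out of raw response headers. This is an illustrative sketch; the `parse_x_robots` helper and the sample headers are our own, not part of any library:

```python
def parse_x_robots(raw_headers: str) -> list[str]:
    """Collect X-Robots-Tag directives from a raw HTTP header blob."""
    directives = []
    for line in raw_headers.splitlines():
        name, _, value = line.partition(":")
        if name.strip().lower() == "x-robots-tag":
            # A header may carry a comma-separated list of directives.
            directives.extend(d.strip() for d in value.split(","))
    return directives

sample = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: application/pdf\r\n"
    "X-Robots-Tag: noindex, nofollow\r\n"
)
print(parse_x_robots(sample))  # ['noindex', 'nofollow']
```

In practice you would feed this the headers returned for a URL (for example, from `curl -I`) rather than a hard-coded sample.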
Another method, which scales well for pinpointing issues on websites with millions of pages, is Screaming Frog.
After running a site through Screaming Frog, you can navigate to the "X-Robots-Tag" column.
This will show you which sections of the site are using the tag, along with which specific directives.
Screenshot of Screaming Frog report, X-Robots-Tag, December 2022

Using X-Robots-Tags On Your Site
Understanding and controlling how search engines interact with your site is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do exactly that.
Just be aware: it's not without its risks. It is very easy to make a mistake and deindex your entire site.
That said, if you're reading this piece, you're probably not an SEO newbie.
So long as you use it wisely, take your time, and check your work, you'll find the X-Robots-Tag to be a useful addition to your arsenal.