If you haven’t heard of Mr. Robots, don’t blame yourself. It wasn’t even on the SEO map until just a couple of years ago. Most of you, however, know what it is but don’t know exactly how to dominate the robots.
Robots.txt files are no secret. You can spy on literally anyone’s robots file simply by typing “www.domain.com/robots.txt” into your browser. The robots.txt should always (and only) live in the root of the domain, and EVERY website should have one, even if it’s generic. I’ll tell you why.
There’s mixed communication about the robots file. Use it. Don’t use it. Use meta-robots instead. You may also have heard advice to abandon the robots.txt altogether. Who is right?
Here’s the secret sauce. Check it out.
First things first: understand that the robots.txt file was not designed for human consumption. It was designed to tell search ‘bots’ exactly how they may behave on your site. It sets parameters that the bots are expected to obey and spells out what information they can and cannot access.
This is critical to your site’s SEO success. You don’t want the bots looking through your dirty closets, so to speak.
What is a Robots.txt File?
The robots.txt is nothing more than a simple text file that should always sit in the root directory of your site. Once you understand the proper formats it’s a piece of cake to create. This system is called the Robots Exclusion Standard.
Always be sure to create the file in a basic text editor like Notepad or TextEdit and NOT in an HTML editor like Dreamweaver or FrontPage. That’s critically important. The robots.txt is NOT an HTML file; it has its own simple format that is completely different from any markup language. Lucky for us, it’s extremely easy once you know how to use it.
The robots file is simple. It consists of two main directives: User-agent and Disallow.
Every item in the robots.txt file is specified by what is called a ‘user agent.’ The user agent line specifies the robot that the command refers to.
On the user-agent line you can also use what is called a ‘wildcard character,’ the asterisk (*), which specifies ALL robots at once.
If you don’t know what the user-agent names are, you can easily find them in your own site logs by checking for requests to the robots.txt file. The cool thing is that most major search engines have names for their spiders. Like pet names. I’m not kidding. Slurp.
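As a quick sketch of that log check (the log lines below are made up for illustration; real entries would come from your server’s access log in the standard combined format), you can pull out the user-agent of every bot that requested your robots.txt:

```python
# Hypothetical access-log entries in Apache's combined log format.
# The IPs and dates are invented; the format itself is standard.
log_lines = [
    '66.249.66.1 - - [10/May/2024] "GET /robots.txt HTTP/1.1" 200 123 "-" "Mozilla/5.0 (compatible; Googlebot/2.1)"',
    '72.30.0.1 - - [10/May/2024] "GET /robots.txt HTTP/1.1" 200 123 "-" "Mozilla/5.0 (compatible; Yahoo! Slurp)"',
    '10.0.0.5 - - [10/May/2024] "GET /index.html HTTP/1.1" 200 456 "-" "Mozilla/5.0"',
]

# Keep only requests for robots.txt, then grab the user-agent,
# which is the last quoted field in the combined log format.
bots = [line.rsplit('"', 2)[-2] for line in log_lines if "/robots.txt" in line]
print(bots)
```

Anything that shows up in that list is a bot that came knocking and asked for your rules.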
Here are some major bots:
Googlebot (Google’s web-search robot)
Mediapartners-Google (Google AdSense robot)
Slurp (Yahoo!’s robot)
Teoma (Ask’s robot)
Xenu Link Sleuth (a desktop link checker, not a search engine)
The second most important part of your robots.txt file is the ‘disallow’ directive line, which is usually written right below the user agent. Remember, just because a disallow directive is present does not mean that the specified bots are completely banned from your site; you can pick and choose what they can and can’t index or download.
The disallow directives can specify files and directories. For example, to block a single file:
User-agent: *
Disallow: /privacy.html
(Note the leading slash: disallow paths should start with / so they match from the root of the site.)
You can also specify entire directories with a directive like this:
User-agent: *
Disallow: /cgi-bin/
This directive blocks all spiders from your cgi-bin directory. Again, if you only want a certain bot to be disallowed from a file or directory, put its name in place of the *.
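Before you deploy a rule like that, you can sanity-check it with Python’s standard-library urllib.robotparser, which obeys the same exclusion rules the major bots do. A minimal sketch (the bot name and paths are just examples):

```python
from urllib import robotparser

# The same rules as the example above, fed in as text.
rules = """\
User-agent: *
Disallow: /cgi-bin/
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# A compliant bot may not fetch anything under /cgi-bin/ ...
print(rp.can_fetch("AnyBot", "/cgi-bin/script.cgi"))
# ... but the rest of the site is fair game.
print(rp.can_fetch("AnyBot", "/index.html"))
```

If can_fetch comes back False for a page you wanted indexed, your directive is too greedy.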
Super Ninja Robots.txt Trick
Security is a huge issue online. Naturally, some webmasters are nervous about listing the directories they want to keep private, thinking they’ll be handing hackers and black-hat-ness-doers a roadmap to their most secret stuff.
But we’re smarter than that aren’t we?
Here’s what you do: if the directory you want to exclude or block is “secret,” all you need to do is abbreviate it and add an asterisk to the end, making sure the abbreviation is unique. Say you name the directory you want protected ‘/secretsizzlesauce/’; you’d just add this line to your robots.txt:
User-agent: *
Disallow: /sec*
This directive will disallow spiders from indexing directories that begin with “sec.” Double-check your directory structure first to make sure you won’t be disallowing any directories you’d rather keep indexed; for example, this directive would also disallow the directory “secondary” if you had that directory on your server. (One caveat: the * wildcard in disallow paths isn’t part of the original Robots Exclusion Standard, though the major engines support it. A plain prefix like Disallow: /sec behaves the same way under the standard.)
To make things easier, the disallow directive matches by prefix, so you often don’t even need the wildcard. If you disallow /tos, then by default it blocks any file whose path begins with /tos, such as tos.html, as well as any file inside the /tos directory, such as /tos/terms.html.
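That prefix behavior is easy to confirm with the same standard-library parser (a sketch; /tos is just the example path from above):

```python
from urllib import robotparser

rules = """\
User-agent: *
Disallow: /tos
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# Disallow matches by prefix, so all of these are blocked:
for path in ("/tos", "/tos.html", "/tos/terms.html"):
    print(path, rp.can_fetch("AnyBot", path))

# ...but an unrelated path is still allowed:
print("/about.html", rp.can_fetch("AnyBot", "/about.html"))
```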
Important Tactics For Robot Domination
Killer Robot Tactics
• This setup allows the bots to visit everything on your site, so use it carefully. The * specifies ALL robots, and the empty disallow directive applies no restrictions to ANY bot.
User-agent: *
Disallow:
• This setup prevents your entire site from being indexed or downloaded. In theory, this will keep ALL bots out.
User-agent: *
Disallow: /
• This setup keeps out just one bot. In this case, we’re denying the heck out of Ask’s bot, Teoma.
User-agent: Teoma
Disallow: /
• This setup keeps ALL bots out of your cgi-bin and images directories:
User-agent: *
Disallow: /cgi-bin/
Disallow: /images/
• If you want to disallow Google from indexing your images in its image search engine but allow all other bots, do this:
User-agent: Googlebot-Image
Disallow: /images/
• If you create a page that is perfect for Yahoo!, but you don’t want Google to see it:
User-agent: Googlebot
Disallow: /yahoo-page.html
# Don’t use user agents or robots.txt for cloaking. That’s SEO suicide.
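Agent-specific rules like the ones above can be verified the same way as the earlier examples. This sketch (bot names from the list above; the image path is invented) blocks only Googlebot-Image from /images/ while every other bot stays welcome:

```python
from urllib import robotparser

# A rule group aimed at a single bot; there is no User-agent: * group,
# so all other bots are unrestricted.
rules = """\
User-agent: Googlebot-Image
Disallow: /images/
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("Googlebot-Image", "/images/logo.png"))  # the targeted bot is blocked
print(rp.can_fetch("Slurp", "/images/logo.png"))            # everyone else gets through
```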
If You Don’t Use a Robots.txt File…
A well-written robots.txt file can help your site get indexed deeper, up to 15% for many sites. It also lets you control your content so that your site’s SEO footprint is clean, indexable, and genuine fodder for search engines. That is worth the effort.
Everyone should have and employ a solid robots.txt file. It is critical to the long term success of your site.
Get it done.