Create a robots.txt file

by Mark — 30 comments — reading time: under 1 minute


The robots.txt file is used to instruct search engine robots about which pages on your website should be crawled and, consequently, indexed. Most websites have files and folders that are not relevant for search engines (like images or admin files), so creating a robots.txt file can actually improve your website's indexation.

A robots.txt is a plain text file that can be created with any text editor, such as Notepad. If you are using WordPress, a sample robots.txt file would be:

User-agent: *
Disallow: /wp-
Disallow: /feed/
Disallow: /trackback/

“User-agent: *” means that all search bots (from Google, Yahoo, MSN and so on) should use those instructions to crawl your website. Unless your website is complex, you will not need to set different instructions for different spiders.

“Disallow: /wp-” will make sure that the search engines do not crawl the WordPress files. This line excludes all files and folders starting with “wp-” from indexation, avoiding duplicated content and admin files.
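To sanity-check what those rules actually block, you can feed them to Python's standard-library `urllib.robotparser` and test a few URLs (this is just a local sketch; `example.com` and the paths below are placeholders, not from the original post):

```python
from urllib import robotparser

# The sample WordPress rules from above, as a list of lines
rules = """\
User-agent: *
Disallow: /wp-
Disallow: /feed/
Disallow: /trackback/
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# Anything whose path starts with /wp- (or /feed/, /trackback/) is blocked...
print(rp.can_fetch("Googlebot", "https://example.com/wp-admin/"))      # False
print(rp.can_fetch("Googlebot", "https://example.com/feed/"))          # False
# ...while ordinary post URLs remain crawlable
print(rp.can_fetch("Googlebot", "https://example.com/2007/05/post/"))  # True
```

Because the group is declared for `User-agent: *`, the same answers come back whichever bot name you pass to `can_fetch`.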

If you are not using WordPress, just substitute the Disallow lines with files or folders on your website that should not be crawled, for instance:

User-agent: *
Disallow: /images/
Disallow: /cgi-bin/
Disallow: /any other folder to be excluded/

After you have created the robots.txt file, just upload it to your root directory and you are done!
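Since it is only a plain text file, the creation step can even be scripted. A minimal sketch (the file name robots.txt is fixed by the convention; the rules are simply the WordPress sample from above):

```python
from pathlib import Path

# The exact rules from the WordPress sample above
ROBOTS_RULES = """\
User-agent: *
Disallow: /wp-
Disallow: /feed/
Disallow: /trackback/
"""

# Write robots.txt to the current directory; on the server it must sit
# in the web root (e.g. /var/www/html/robots.txt) so bots can fetch it
# at https://yoursite.example/robots.txt
target = Path("robots.txt")
target.write_text(ROBOTS_RULES, encoding="ascii")

print(target.read_text(encoding="ascii"))
```

ASCII encoding is used deliberately: robots.txt is meant to be a simple text file, and plain ASCII keeps it readable for every crawler.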


30 thoughts on “Create a robots.txt file”

  1. I’m confused; can someone just give me the correct text for a WordPress blog? Will uploading it from Notepad work then?

    Reply
  2. Although this post didn’t discuss why and how a certain robots.txt file with certain entries is best, you made sure people understand what those directives mean. This is something I have never seen on any other site on my quest to find the best robots.txt file for better SEO. Thank you very much for that.

    Reply
  3. It’s nice that people are beginning to think about controlling robots with robots.txt. You may want to look at my updated WordPress robots.txt file on AskApache, especially regarding the Digg mirror, Wayback archiver, etc.

    Reply
  4. I think there is no harm in allowing the bots to access your images folders. It can bring traffic through the Google image search.

    Reply
  5. So many people with so many ideas!!! I like the idea in general of blocking robots to avoid duplicate content. But in my opinion duplicate content should be avoided at the URL level. Don’t allow your CMS to generate more than one URL for the same post.

    Reply
  6. Thanks Daniel, I know that the robots.txt file should still go in the root, I just want to know if I have to change the robots file to look inside my new blog installation directory,
    like
    disallow: /blogfolder/wp-admin

    Reply
  7. If I am not wrong, even if the blog is in a sub-folder, the robots.txt file should still go in the root directory since it is the first thing a search bot will look for.

    Reply
  8. The particular robots.txt file is an important choice from the SEO point of view. Thanks for the original approach to the problem!

    Reply
  9. You can’t upload robots.txt to the root directory using WordPress – it doesn’t have an FTP function. The only way it would be possible is if someone associated “robots.txt” as a theme-associated file, so you could edit and save it in “theme editor” under the “presentation” tab. You wouldn’t think it would be too difficult to associate a file with a theme by modifying a little code, but presently I don’t know how to do it.

    Reply
  10. John, regarding the first question: not all search bots recognize the * attribute. Some people argue that the Google Bot specifically does not interpret the * as a joker, it just ignores it. That is why I avoid using it. Plus, it should not be necessary to add it before a folder like */feed/.

    Reply
  11. Thanks for the post, very helpful. Is there any problem with doing it like this:
    Disallow: */feed/
    Disallow: */trackback/

    Reply
  12. Great article mate. I’m still unsure what needs to be excluded though. But there is some stuff that needs to be blocked to avoid duplicate content. Cheers

    Reply
  13. Ajay, /trackback/ will disallow all the trackback pages, and I also think we should disallow comments, but I am not sure if /comment/ is the right attribute for that.

    Reply
  14. One other thing to consider blocking is any duplicate content on your side. WordPress gives you about three thousand ways to access content (/page, /tag, direct links, etc). Blocking some of them might be a good idea.

    Reply
  15. Bes, if I am not wrong the Google Image bot will not need to crawl your image folder at all. It will crawl your pages, and it will index all the images on those pages (i.e. posts).

    Reply
  16. Thilak, unless I am mistaken, wouldn’t the Google Image Bot be following the rules of the robots.txt file regardless of where it starts crawling from?

    Reply
  17. I guess disallowing “wp-” will not stop the Google Image bot from crawling your images, because it crawls them from the post and not from the directory.

    Reply
  18. I am not sure if the Google Image bot tracks down images from the /images/ folder or directly from the posts where the images were inserted.

    Reply
  19. Same with WordPress. If you disallow “/wp-” then it’s not going to index any of your uploads like images since they are in your wp-content folder. I get quite a bit of traffic from Google Image Search.

    Reply
  20. I agree that robots.txt files are important; however, I disagree with blocking the images folder. Some sites achieve pretty good traffic numbers from Google Image Search, and blocking this directory will prevent your images from showing up in those results.

    Reply
