The Little Mistake That Could Cost You Loads of Free SEO Traffic

By Ali Luke

This guest post is by SEO expert Joe Williams.

I can’t tell you how many times I’ve seen this mistake. You might be making it without realising.

Sometimes it can cost you a couple of days’ traffic, sometimes a couple of weeks.

I even saw it cost one unfortunate business owner two months of free traffic, and several thousand pounds in revenue loss.

I’ve been an SEO consultant for seven years. I’ve SEO’d big brands and small one-man shops, and this little (but really harmful) mistake happens to the best of them. Thankfully, it’s very simple to fix.

When It Happens and How to Avoid It

Websites are most vulnerable to this traffic killer just after the release of a new design. That’s because the web designer will want to please the client by showing progress and getting feedback on different iterations.

Often, the web designer will create a subdomain for the new website, such as newdesign.example.com. This creates a bit of an SEO problem: newdesign.example.com may get indexed by search engines, and that creates duplicate content, which isn’t any good for SEO.

So, if the web designer is savvy, he’ll block access to newdesign.example.com by adding a robots.txt file. This is a two-minute job and will prevent search engines from accessing the new subdomain.

It’s a regular plain text file and will look like this:

User-agent: *
Disallow: /

So far, so good.

Google’s web crawler is known as Googlebot, and its job is to discover and index pages. In robots.txt terms, it’s a user-agent. Before it crawls any page on a site, it checks the site’s robots.txt file to learn which areas it can and can’t access, and it follows those instructions to the letter.

In User-agent: *, the * acts as a wildcard, which means the rule below it applies to all user-agents (including Googlebot).

In this case, the forward slash in Disallow: / indicates that none of the content on the new subdomain should be crawled or indexed.
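
If you want to see just how literally crawlers apply these two lines, here’s a small sketch using Python’s standard-library robots.txt parser (the newdesign.example.com URLs are simply the illustrative subdomain from above, not a real site):

from urllib.robotparser import RobotFileParser

# The exact two rules from the development subdomain's robots.txt.
rules = [
    "User-agent: *",
    "Disallow: /",
]

parser = RobotFileParser()
parser.parse(rules)

# Every path is blocked for every user-agent, Googlebot included.
print(parser.can_fetch("Googlebot", "https://newdesign.example.com/"))        # False
print(parser.can_fetch("Googlebot", "https://newdesign.example.com/about/"))  # False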

Now for the Little Mistake That Has Big Consequences

When new designs get signed off, they’re often behind schedule, so it’s usually a rush to get the new design live on the main website (e.g. example.com).

The designer will then copy all the files from the development subdomain (e.g. newdesign.example.com) over to the live site, and typically this includes the robots.txt file:

User-agent: *
Disallow: /

If the robots.txt file remains unchanged and goes live on the main site, it’s like a traffic workman holding up a big red stop sign: during this time, no (SEO) traffic is allowed through. The sign only changes to green and welcomes Google back when the robots.txt file returns to normal and the forward slash is removed, like this:

User-agent: *
Disallow:
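
For contrast, here’s the same sketch as above with the corrected file. With the Disallow value left empty, the parser allows everything again:

from urllib.robotparser import RobotFileParser

# The corrected robots.txt: an empty Disallow value blocks nothing.
rules = [
    "User-agent: *",
    "Disallow:",
]

parser = RobotFileParser()
parser.parse(rules)

# Googlebot (and every other crawler) is welcome again.
print(parser.can_fetch("Googlebot", "https://example.com/"))  # True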

Really easy to fix but a really easy mistake to make too, wouldn’t you say?

In my experience this happens 5 to 10% of the time, but it usually isn’t a big problem because it’s discovered within a day or so.

But for one client of mine, this had been going on for two months, meaning they missed out on bucket loads of cash. Needless to say, they were pretty mad with their web designer!

Why don’t you check your site right now? The robots.txt file always lives at the root of your website, like this:

[Screenshot: the DailyBlogTips robots.txt file]

The DailyBlogTips robots.txt blocks search engines from accessing two folders. This is intentional, to ensure search engines don’t find and index technical pages within those folders.

And always remember, if you have a new website and you’ve used a designer or developer to help you, check the robots.txt file when the new design goes live.
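
If you’d rather automate that check, here’s a minimal sketch that fetches a live robots.txt and warns you if Googlebot is blocked from the homepage (example.com is a placeholder; swap in your own domain):

from urllib.robotparser import RobotFileParser

# Fetch and parse the live robots.txt (requires network access).
parser = RobotFileParser("https://example.com/robots.txt")
parser.read()

if parser.can_fetch("Googlebot", "https://example.com/"):
    print("OK: Googlebot can crawl the homepage.")
else:
    print("Warning: robots.txt is blocking Googlebot - check for a stray Disallow: /")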

 

This guest post is by Joe Williams who is an SEO trainer over at Zen Optimize.

 





12 Responses to “The Little Mistake That Could Cost You Loads of Free SEO Traffic”

  • Karleen

    Thanks for this info, Joe! So far I’ve just set up my own WP site and I just checked. My robots.txt page says the same as the one you showed here, so I guess it’s all good.

    I can see how that one little mistake could cause problems, so it’s good to know what to look for when we have someone else build a site for us. Thanks!

  • Raghava digitalchallenger

    I made this mistake myself and got busted… There was so much duplicate content, but my friend showed me how to use robots.txt and we were able to get back into Google’s good books again.

  • Pat Coakley

    I’m confused! (What’s new?) as to what the green flag message is after typing in /robot.txt. If I want to rule out that it is NOT a problem, what should I see? When I do this to dailyblogtips.com/robot.txt it takes me to a 404 no such page. But, when I type in the same to my site, I get the final screen shot in this post. So, if you clarify what the green flag looks like I’d appreciate it. Thanks.

  • Joe Williams

    Hi Karleen,

    I had a quick check and your robots.txt looks fine.

    It’s also possible to change the robots.txt file through some SEO plugins. We use the WordPress SEO plugin, which is great. One of its magic powers is the ability to edit the robots.txt file, so that’s another place to be mindful.

    Good luck with the site!

  • Abhishek

    Well, you are absolutely correct here, as everything is read by Google and even a single comma can cost you a lot. After reading this post I checked the robots.txt of my site and everything was fine. Thanks for this post.

  • Shawn Gossman

    Great post, Joe! 🙂

    I have to admit, I made this mistake years ago, and it’s one of those “Oops, dah!” type of mistakes we all can easily make.

  • zimmy

    Thanks for the information. We recently moved M&P to a new host and the traffic took a dramatic dip for a couple of months. We could never figure out what we did wrong. Perhaps, this could be the reason.

  • Rafaqat

    Robots.txt acts as a filter for organic traffic: use it correctly and it will boost your traffic, while a mistake as small as a single slash can take your blog down. So be careful when setting up this file. Thanks for your guidance.

  • Mike Howg

    Great post! I was thinking about creating a few subdomains on my blog but I had no idea it could count as duplicate content. I might hold off for now on adding a subdomain but if I do I’m definitely going to follow your guide. Thanks.

  • Robin

    Hi Joe,
    I just checked my robots.txt and it also includes the sitemap URL; is that okay?

    I also have an important SEO question which I can’t seem to find an answer for. I recently did a WP import from my old blog to my new website. But for some reason, when I do a site: or normal search for my new website’s homepage on Google, it shows the site title and meta of my other blog, even though the URL is for my new website. I’ve indexed it and it’s been 3 weeks since this started happening. I’d really appreciate the help, thanks!

  • Marian

    This tip is very useful indeed. I must admit we made the same mistake before, and our rankings dropped because of it. But we recovered after the site was finished; all we did was create good content and share it on social media.
