Tuesday, October 29, 2013

How to Protect Your Gated Content from Google Search

Here’s a familiar inbound marketing scenario:
You’ve spent months on research, graphic design, and content development to create an awesome eBook/whitepaper/guide for your target audience. The inbound marketing plan is to ‘gate’ the eBook PDF behind an email list opt-in form so that you can grow a relevant lead list for your business. Just as you’re smiling wide watching download forms roll in, you notice that a simple Google search for your eBook returns a direct link to the PDF! Now Google searchers can bypass your opt-in gate and download your content directly.
For organizations that choose to gate their free content resources behind an opt-in information form, it can be frustrating to see those same resources show up in Google search results (Hello, Marketo).
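If you want to check whether this has happened on your own site, a quick search using Google’s site: and filetype: operators will list every PDF that Google has indexed from your domain (substitute your own domain; example.com here is just a placeholder):

site:example.com filetype:pdf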
However, there is a very simple process for solving this problem: blocking the URL from Google search results. Here’s how to do it using Google Webmaster Tools:

Step 1: Copy the exact PDF URL

Make sure you have the exact URL of your resource, which should end in .pdf if it is a PDF. Be careful not to copy the landing page URL by accident! This is the most sensitive part of the process.
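For example (these URLs are made up for illustration), the landing page and the file behind it might look like this:

Landing page (not this one): http://www.example.com/resources/awesome-ebook
Direct PDF (this one): http://www.example.com/downloads/awesome-ebook.pdf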

Step 2: Log into Google Webmaster Tools

Sign in to Google Webmaster Tools, or create an account if you don’t already have one. Upon logging in, you will see a dashboard similar to the screenshot below. Select the ‘Google Index’ option.
[Screenshot: Google Webmaster Tools dashboard with the ‘Google Index’ option]

Step 3: Select ‘Remove URLs’

[Screenshot: the ‘Remove URLs’ option]

Step 4: Select ‘Create a new removal request’

[Screenshot: the ‘Create a new removal request’ button]

Step 5: Input the URL that you would like to remove

Again, this is the most sensitive step. There are very few things on your website that you actually want to block from Google, so be very careful about the URL you enter here.
Complete this process, give Google some time to process the request, and your resource will disappear from Google search results. There are other ways to go about this using a robots.txt file. That process is a bit trickier, and I only suggest going that route if your organization has a dedicated development staff.
I don’t suggest going the robots.txt route because of the huge downside risk if something goes wrong. Furthermore, if search engines change their guidelines or preferences on the way these files are read, your website could face major problems down the road. I have also seen in-house developers put an entire website under a 403 (Forbidden) status and then wonder why it can’t be found in Google search! So no matter what a developer tells you, these seemingly simple actions are open to mistakes.
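For the curious, here is roughly what a robots.txt rule blocking a single PDF looks like (the path is hypothetical, and this is only a sketch of the alternative described above). The danger is how compact it is: a stray ‘Disallow: /’ with nothing after the slash would block your entire site from being crawled.

User-agent: *
Disallow: /downloads/awesome-ebook.pdf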