At the very least you should be checking in on your account(s) once a week: to review the data, to see how the changes you made last time have fared, to see how different versions of ad copy have performed, and just to get a basic feel for the current lay of the land. In a perfect world you would check in much more often. For example, on Monday you could add new keywords (or variations of existing keywords that are performing well) and craft some new ad copy; on Friday you could review the data and make any adjustments required, then check back in on Monday to see how those adjustments have gone.

When it comes to constant testing, evaluating, tweaking and testing again, there is little argument: it's something every successful PPCer does, and does as often as possible.

Which brings up something I've thought about for a while now: no experiment is a failure. Every experiment, even one whose results are negative or not at all what you expected, is a success, because it tells you something you did not know for certain until the results came in. For example, suppose your hypothesis, based on some keyword research you have done, is that adding a certain set of keywords will attract more clicks, generate a better click-through rate (CTR) and produce more conversions. You add those keywords to test. You do see more clicks and you do see a better CTR, but you do not generate more conversions. In fact, despite every other metric being very positive, none of the new keywords generates a single conversion. The extra spend with no extra conversions drives your cost per acquisition (CPA) up.
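To make the arithmetic concrete, here is a minimal sketch of how keywords that win clicks but never convert push CPA up. It isn't tied to any platform's reporting; every number and field name below is hypothetical.

```typescript
// Hedged sketch: test keywords that click well but never convert inflate CPA.
// All figures are hypothetical.

interface KeywordStats {
  impressions: number;
  clicks: number;
  cost: number;        // total spend, in account currency
  conversions: number;
}

const ctr = (s: KeywordStats): number => s.clicks / s.impressions;

const cpa = (cost: number, conversions: number): number =>
  conversions > 0 ? cost / conversions : Infinity;

// The account before the test...
const existing: KeywordStats = { impressions: 10000, clicks: 300, cost: 450, conversions: 15 };
// ...and the newly added keywords: better CTR, zero conversions.
const added: KeywordStats = { impressions: 4000, clicks: 200, cost: 300, conversions: 0 };

console.log(ctr(existing)); // 0.03
console.log(ctr(added));    // 0.05, so the new keywords do click better

console.log(cpa(existing.cost, existing.conversions));
// 30 before the test
console.log(cpa(existing.cost + added.cost, existing.conversions + added.conversions));
// 50 after: more spend, the same conversions, a higher CPA
```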

A disaster? A failure? Not at all.

By testing, you showed beyond doubt that these keywords, which research suggested should have done well for you, did not. But hold on. You did see more clicks. You did see a stronger CTR. How did the user experience metrics (bounce rate, time spent on site, pages viewed) look? A higher than normal bounce rate? Less time spent on the site and fewer than average page views? Yet the improvements in clicks and CTR (and let's say all the tested keywords had strong Quality Scores) suggest a strong connection between these search queries and the ad copy. So could the issue be the landing page itself? Is the page somehow failing to live up to the expectations set by the search query and ad copy? Yet this is a page that normally converts quite well.
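That chain of reasoning can be written down as a rough screening rule. The sketch below is just one way to encode it; the metric names, thresholds and numbers are illustrative assumptions, not a standard diagnostic.

```typescript
// Rough screening rule: if a keyword segment's CTR holds up but every
// engagement metric is worse than the account baseline, suspect the
// landing page rather than the keywords.

interface EngagementMetrics {
  ctr: number;              // click-through rate, 0..1
  bounceRate: number;       // 0..1
  avgTimeOnSiteSec: number; // seconds per visit
  pagesPerVisit: number;
}

function suggestsLandingPageMismatch(
  segment: EngagementMetrics,
  baseline: EngagementMetrics
): boolean {
  // Query and ad copy connect: the segment clicks at least as well as normal.
  const adRelevanceHoldsUp = segment.ctr >= baseline.ctr;
  // ...but visitors leave faster and see less of the site than usual.
  const engagementIsWeak =
    segment.bounceRate > baseline.bounceRate &&
    segment.avgTimeOnSiteSec < baseline.avgTimeOnSiteSec &&
    segment.pagesPerVisit < baseline.pagesPerVisit;
  return adRelevanceHoldsUp && engagementIsWeak;
}

// Hypothetical numbers matching the scenario above.
const baseline = { ctr: 0.03, bounceRate: 0.45, avgTimeOnSiteSec: 95, pagesPerVisit: 3.2 };
const testKeywords = { ctr: 0.05, bounceRate: 0.78, avgTimeOnSiteSec: 20, pagesPerVisit: 1.1 };

console.log(suggestsLandingPageMismatch(testKeywords, baseline)); // true
```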

So what’s the answer?

You’ll need to test again to find out.

Maybe create a standalone ad group or campaign that contains these test keywords and sends their traffic to a different or new landing page that is more strongly related to the keywords and ad copy. Or possibly create a dynamic landing page using SpeedPPC's dynamic insertion code.
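I won't reproduce SpeedPPC's own insertion code here, but the general technique behind a dynamic landing page is easy to sketch: the ad's destination URL carries the keyword, and the page swaps it into the headline on load. The `kw` parameter, the `#headline` element and the fallback copy below are all assumptions for illustration, not SpeedPPC's actual implementation.

```typescript
// General idea behind dynamic keyword insertion on a landing page: the
// destination URL carries the keyword, e.g.
// https://example.com/widgets?kw=blue+widgets (hypothetical URL and
// parameter name), and the page swaps it into the headline on load.

function insertKeywordHeadline(fallback: string): void {
  const keyword = new URLSearchParams(window.location.search).get("kw");
  const headline = document.querySelector<HTMLElement>("#headline");
  if (!headline) return;
  // Use textContent (not innerHTML) so a crafted URL can't inject markup.
  headline.textContent =
    keyword && keyword.trim() !== "" ? keyword.trim() : fallback;
}

document.addEventListener("DOMContentLoaded", () =>
  insertKeywordHeadline("Find the right widget for you")
);
```

Either approach aims at the same thing: closing the gap between what the search query and ad copy promised and what the page actually shows.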

Regardless of what you do, there is no best practice more important than testing, evaluating, tweaking and testing again.