One of the most frequent requests we receive on the Analytics team is for campaign performance benchmarks.  This request requires making sense of the billions of impressions that we’ve served over a given time period and teeing the results up for future campaigns.

The following looks at how benchmarks can enhance a campaign, as well as why benchmarks don’t always paint the full picture of campaign success.

First of all, why bother with benchmarks at all?  It’s important to realize that campaign performance is all relative.  When we want to know how much to buy or sell a house for, we look at what similar homes sold for.  Benchmarks let us compare like-for-like and give us a line of sight into what we can reasonably expect from a campaign or a given media plan, and into the areas we know we want to optimize.

Further, benchmarks confirm to us that each screen behaves differently. 

  • Is your goal to optimize clicks and site visits?  Focus on Desktop and Mobile/Tablet. 
  • Do you want to drive brand awareness?  Advanced TV is the emperor of video completes. 
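To make the per-screen idea concrete, here is a minimal Python sketch of how benchmarks like these might be derived: aggregate delivery counts by screen, then compute CTR (click-through rate) and VCR (video completion rate).  The screen names echo the list above, but every number is hypothetical, not a real benchmark:

```python
# Toy aggregated delivery data per screen (all counts are hypothetical).
delivery = {
    "Desktop":       {"impressions": 1_000_000, "clicks": 1_500,
                      "video_starts": 200_000, "video_completes": 120_000},
    "Mobile/Tablet": {"impressions": 2_000_000, "clicks": 4_000,
                      "video_starts": 500_000, "video_completes": 325_000},
    "Advanced TV":   {"impressions": 500_000, "clicks": 100,
                      "video_starts": 480_000, "video_completes": 460_000},
}

def benchmarks(data):
    """Compute CTR and VCR per screen from aggregated counts."""
    out = {}
    for screen, d in data.items():
        out[screen] = {
            "ctr": d["clicks"] / d["impressions"],            # click-through rate
            "vcr": d["video_completes"] / d["video_starts"],  # video completion rate
        }
    return out

for screen, m in benchmarks(delivery).items():
    print(f"{screen}: CTR={m['ctr']:.2%}  VCR={m['vcr']:.1%}")
```

Even with made-up numbers, the pattern in the bullets shows up: the desktop and mobile screens carry the clicks, while Advanced TV dominates video completes.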

We also need to remember that each industry vertical is unique – we had one client, a cable service provider that doesn’t enjoy the most favorable public opinion.  They knew clicks and video completes would be harder to come by for them compared to campaigns in other sectors.  Knowing this, we created custom client-specific benchmarks that better lined up with their needs and realities.  The relative success of those campaigns brought in incremental revenue that helped anchor our own bottom line, and gave our client a sense of what to expect from their campaign performance going forward.

Another caveat is that per-screen benchmarks will decline over time.  New media types might be all the rage today, but VCR (video completion rate) and CTR (click-through rate) performance will inevitably slip once the newest and shiniest media hits the market, or once new types of media units pick up steam for a particular client.

Now, despite the utility of benchmarks, we need to remember that at the end of the day each campaign is unique, and benchmarks don’t always paint the full picture of a campaign’s success.  Remember the real estate pricing analogy from earlier?  Sometimes a fully renovated kitchen adds just enough “added value” to lift a home’s price above a comparable neighbor’s.

So what happens when a benchmark fed a campaign KPI, something doesn’t work out, and the campaign misses the performance goal that benchmark set?  We are constantly learning and adjusting based on campaigns, and unfortunately benchmark misses do sometimes happen.
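A simple way to frame such a check is to compare delivered performance against the benchmark that fed the KPI.  Here’s a hypothetical Python sketch – the tolerance and the numbers are invented for illustration, not real policy:

```python
def check_kpi(delivered_rate, benchmark_rate, tolerance=0.10):
    """Return (hit, gap), where gap is the relative shortfall vs. the benchmark.

    A campaign 'hits' if it lands within `tolerance` (10% here) of benchmark.
    The threshold is illustrative only.
    """
    gap = (benchmark_rate - delivered_rate) / benchmark_rate
    return gap <= tolerance, gap

# Hypothetical campaign: CTR benchmark of 0.15%, delivered 0.11%.
hit, gap = check_kpi(delivered_rate=0.0011, benchmark_rate=0.0015)
print(f"hit={hit}, shortfall={gap:.0%}")  # a ~27% shortfall – a miss worth investigating
```

A flag like this doesn’t explain a miss, of course; it just tells you which campaigns deserve the kind of digging described below.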

What to make of a miss?  As mentioned earlier, each campaign is unique, and sometimes the messaging gets lost in the content that’s served around it.  There are also nuances that sometimes result in “oh, duh” moments: I once saw a maker of dog flea and tick medicine take an inexplicable, sharp drop in CTR – it turned out the Northeast had already had its first frost, diminishing the need for flea and tick medicine.

At the end of the day, benchmarks are a resource.  Some clients take them very seriously, while others focus on their own goals and KPIs.  Either way, performance from both types of campaigns will be aggregated with the thousands of others we run, and those aggregates are the building blocks that give benchmarks their credibility.

RhythmOne was recently acquired by Taptica.