3 reasons why digital research is backwards

As digital advertising has evolved, our processes for measuring its effectiveness have stayed relatively the same. Columnist Peter Minnium explains why it’s time to rethink our approach and adopt a new learning system.

“Say that again?” I asked, startled.

“We don’t pre-test our advertising,” repeated the Fortune 50 marketer.

I was convinced I was hearing things. Twenty years ago, I worked on several of this marketer’s iconic brands, and they were rigorous adherents to a dogmatic advertising testing process. I often felt paralyzed, unable to make even the smallest decision, like changing the color of a minor character’s trousers, without first deferring to the opinions of 150 consumers. Test and learn was the mantra, the mighty focus group their stronghold. An endless cycle of “learn-do.”

My marketer friend further explained that the pendulum had since swung from that rigid learn-do doctrine to the alternate extreme, “do-learn.” They now “move fast” using their “gut” to make creative decisions and then rely on digital technology to facilitate rapid feedback, which they use to course-correct. Marketers and agencies long held hostage by consumer testing were finally free from their former constraints.

From small beginnings

In the late 1990s, digital advertising was in its infancy; banners were modular and cost a few hundred dollars to create, media costs were low, and the medium was responsible for marginal reach at best. Crucially, this iteration of digital also came with a handy built-in metric, the click-through rate, which provided almost instantaneous feedback on consumer response to an ad.

In this low-risk environment, it’s easy to see why pre-testing ads and assessing big ideas before running them were readily abandoned.

But in the industry’s collective haste to embrace this seemingly rosy new paradigm, a new challenge has quietly emerged. As the medium has grown and evolved, the practices for evaluating the advertising we put into it haven’t.

What started as a habit has been embraced as a virtue. The modern approach to digital — going with your gut, moving fast, listening to technology-enabled signals and course-correcting in real time — has become the de facto approach to digital advertising measurement. In short, “do-learn,” in contrast to the old-school “learn-do.”

Ripe for recalibration

As we all know, many of the fundamental truths that made “do-learn” an appropriate approach to digital in 2000 are no longer valid:

  • Reach has exploded. Digital is now a must-buy component of any integrated media plan.
  • The web is increasingly premium. Digital advertising can be expensive to make and expensive to run. (Case in point: Some digital video inventory is more expensive than traditional prime-time TV.)
  • We haven’t solved measurement. Click-through rates have been roundly discredited as a gauge of consumer sentiment (save for direct marketing, where they remain vital).

Most importantly, “digital” no longer means banners; it means some dynamic combination of visual, audio, video, copy, oh and by the way, let’s make sure it works on Facebook, Twitter and Snapchat.

Digital is no longer a low-risk, low-reward environment; like TV, it’s now one in which the rigors of early learning should be applied before entering the market.

The problem remains that an entire generation of ad folks has grown up embracing a do-learn process in digital and still needs to be convinced that the added layer of rigor required by learn-do is well worth the effort. Here are three reasons why this is essential:

1. Insufficient scale and speed of learning in the do-learn process.

When done in isolation, in-market optimization doesn’t tell you how your campaign is affecting consumers’ long-term perceptions of your brand, whether you need to scrap an idea entirely and start over, or which changes are indispensable to the core concept of a campaign.

It will tell you how to incrementally enhance the performance of individual pieces of creative. And it will do so over the course of weeks or months, the time it takes to gain a sufficient sample size.
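To put rough numbers on that timescale, here is a minimal sketch in Python (the baseline click-through rate, lift and daily traffic are assumptions for illustration, not figures from this article) of how many impressions a single variant needs before a standard two-proportion test can reliably detect a modest CTR lift:

    # Back-of-envelope sample-size arithmetic for an in-market CTR read.
    # All numbers below are illustrative assumptions.
    from math import sqrt, ceil
    from statistics import NormalDist

    def impressions_per_variant(p_base, lift, alpha=0.05, power=0.8):
        """Impressions one variant needs for a two-proportion z-test to
        detect a relative CTR lift at the given significance and power."""
        p_test = p_base * (1 + lift)
        z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
        z_b = NormalDist().inv_cdf(power)
        p_bar = (p_base + p_test) / 2
        num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
               + z_b * sqrt(p_base * (1 - p_base) + p_test * (1 - p_test))) ** 2
        return ceil(num / (p_test - p_base) ** 2)

    n = impressions_per_variant(p_base=0.001, lift=0.10)  # 0.1% CTR, 10% lift
    print(f"~{n:,} impressions per variant")              # ~1.65 million
    print(f"~{n / 100_000:.0f} days at 100,000 impressions/day")  # ~16 days

At a 0.1 percent baseline CTR, a 10 percent relative lift takes roughly 1.65 million impressions per variant to detect; that is more than two weeks of traffic for a single comparison, before any real-world noise.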

In all campaigns, early ad impact is crucial: during the initial days and weeks, media spend tends to be higher and audience attention is most attuned to a new message. The adage is true: “You never get a second chance to make a first impression.”

2. Variety of creative executions needed for an integrated digital media plan.

To make great, effective work, you need to have confidence in your core big idea and the range to then adapt it succinctly but meaningfully for the myriad creative executions available to you in digital (though this goes for non-digital media as well).

Testing two dozen pieces of creative work effectively, particularly when you’re working in real time with a live campaign, is not a feasible undertaking. And what happens if/when you do need to make major changes — like re-shooting video or repositioning the product entirely? You need to start over, but you’ve already invested heavily.
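A quick extension of the sketch above (again, the impression volumes are assumptions for illustration) shows what happens when one campaign’s traffic is split across two dozen live executions:

    # What splitting a live campaign across 24 creative variants does
    # to the time-to-read. Figures are illustrative assumptions.
    daily_impressions = 2_000_000   # total campaign impressions per day
    variants = 24                   # pieces of creative in the plan
    n_per_variant = 1_650_000       # impressions each variant needs (see above)

    per_variant_daily = daily_impressions / variants
    days = n_per_variant / per_variant_daily
    print(f"{per_variant_daily:,.0f} impressions per variant per day")
    print(f"~{days:.0f} days before every variant has a reliable read")  # ~20

Even at two million impressions a day, each execution waits nearly three weeks for a trustworthy read, and that is before correcting for the statistical hazards of comparing 24 variants at once; a single major change resets the clock for that asset.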

Digital production is expensive: high quality is imperative to break through, and numerous assets are needed to complete a plan. Don’t make it even more costly.

3. People are fallible.  

As much as do-learn purports to rely on third-party, unbiased technology for answers, it is ultimately reliant on people. Optimization tech only measures what a person has concepted, created and put into the world.

And people are not rational. They defend things they’ve created more than things other people have created. They judge things based on personal preference instead of from a consumer’s perspective.

It’s dangerous to rely on the opinions of a handful of folks to determine how a campaign will affect (potentially) millions of people. Testing is a filter for learning and a safeguard against failure. The value of testing and learning early in the process goes well beyond validating an ad’s effectiveness — it is a catalyst for thoughtful strategic thinking that provides immeasurable value.

Well begun is half done

I am lucky to have experienced an overly dogmatic learn-do approach in the 1990s as well as the overly enthusiastic do-learn attitude of the 2010s. I’ve ridden the pendulum up both sides of its swing — and enjoyed neither. I’m happy to see a swing toward the center once again.

The center? Yes, you guessed it, taking the best of both approaches and adopting a continual learning system, with “learn-do-learn” becoming the new mantra.

Digital advertising requires that brands create a series of assets that come together to achieve the desired effect — video commercials, native advertising, display and the rest each play a role in the overall campaign. For some assets, a full-on pre-test will be necessary, especially video that is meant to carry a significant branding load.

For others, such as real-time social programs, do-learn adjustments are optimal. Of course, in all cases, validating the strength of the core big idea is non-negotiable.

Getting digital advertising right is hard, and it’s no wonder that it takes twice the effort of the past. Smart marketers and agencies, together with their technology and media partners, are combining the core strategic discipline they have always used to understand and improve the effectiveness of their communications efforts with the opportunities to learn and optimize in-market in real time.

