Ad tech providers are watching closely as Google fights concerns over YouTube brand safety
How will Google’s effort to keep detergent ads from appearing on Al Qaeda videos affect ad tech? The industry is taking notes.
As one of the 800-pound gorillas in ad tech, Google’s actions to assure brand safety on YouTube affect the rest of the advertising jungle.
To get some sense of these effects, we pinged two ad tech providers.
Earlier this week, the tech giant sent an emailed statement to news media that offered a few more details on the steps it’s taking so brands can avoid ads near objectionable video content.
This is in response to a growing number of major advertisers — including HSBC, Johnson & Johnson, AT&T, Lyft and L’Oréal — that have pulled their ads from YouTube because the ads were appearing alongside hate-promoting videos.
In all, more than 250 brands have reportedly pulled ads from YouTube, and some projections estimate Google could lose as much as three-quarters of a billion dollars in revenue.
In the emailed statement, Google said:
We are working with companies that are MRC (Media Rating Council)-accredited for ad verification on this initiative and will begin integrating these technologies shortly.
The email also pointed to new machine learning systems that will learn which videos promote hate; a rapid response time of several hours for flagged videos; new settings that, by default, set a higher level of brand safety for ad locations; account-level controls to exclude specific sites, channels or videos; and more subject classifiers to fine-tune content exclusion.
In addition to the emailed statement, Chief Business Officer Philipp Schindler did interviews earlier this week with such media outlets as The New York Times, Bloomberg and Recode to emphasize the company’s efforts. Although Schindler downplayed the size of the problem, he noted that new machine learning algorithms can now find five times more non-brand-safe videos than before.
Expectations and standards
All of these efforts are raising the bar for expectations about ad context, Unified CEO Jason Beckerman told me. His company provides business intelligence for social ads.
And this expectation will be reinforced by third parties providing feedback after the fact, he said. Although Google didn’t specifically say so, Beckerman said these third-party observers will undoubtedly have access to ad placement metadata, providing another measure of limited transparency.
YouTube, of course, is a special case for advertisers. It is a walled garden, but it doesn’t offer the kind of safety and control that other walled gardens, like Facebook, do.
One difference, Beckerman pointed out, is that ads on YouTube are “inescapably linked to the content.” That’s not how ads relate to content on, say, CNN, he added: advertisers there similarly don’t know the surrounding content, but “there’s editorial discretion” governing what appears on that news site.
Another ripple effect of Google’s new steps, then, is that content supplied by users and countless organizational or programming sources must undergo new forms of editorial review, probably based on machine vision and machine learning.
But editorial control needs boundaries, and Mikkel If Hansen, partner at Blackwood Seven, said the YouTube situation highlights the fact that “we don’t have a standard for what is ok.” Blackwood Seven provides analysis of media campaigns.
Is Google setting itself and the industry up for the next freak-out over ad context, when the standards it sets for AI and third-party oversight raise questions about its editorial judgment?
Hansen thinks it’s time for advertisers and publishers to take the next step and generate some guidelines about what is not acceptable content for brand safety. The goal, he said, is that “everyone should agree on what it is we’re buying and selling.”