Advertisers are pulling their ads from YouTube following the discovery that ads were being run alongside extremist videos containing offensive content and hate speech without advertisers’ knowledge or permission. Over 250 advertisers have frozen ad campaigns on YouTube, including AT&T, Verizon, McDonald’s and Toyota.
This stems from an investigation by The Times, which found that ads for big brands were being served on sites with offensive, extremist, or hateful content and were running before inappropriate videos (like a pro-ISIS video and a video promoting an East African jihadist group with Al-Qaeda affiliations) on YouTube.
The appearance of tolerating hate speech (let alone endorsing or condoning it) could be detrimental to a brand’s image, and concerns over brand reputation and staunch opposition to the possibility of generating revenue for hate groups have advertisers avoiding YouTube — at least until ad policies and content review practices are firmly ironed out.
YouTube’s major mistake was in not properly regulating and tracking offensive content. According to YouTube’s existing ad policies, this shouldn’t have happened. But something went wrong. Marketers are understandably wary of moving forward with expensive ad campaigns that could end up damaging the reputation of their brands through no fault of their own.
Advertisers are vital to YouTube’s monetization (and, ergo, survival) and jeopardizing the health of its advertiser market by failing to strictly monitor where ads were ending up is cause for concern. YouTube’s total projected ad revenue for the year is over $10 billion, and this particular misstep could carve a $750 million chunk out of that bottom line (7.5%).
YouTube’s in the thick of this controversy right now, but it’s important to note that this problem isn’t unique to YouTube. There’s more content created every day than ever before, and it should be no secret to anyone that much of that content isn’t what one might call “brand-safe.” Any platform that runs ads alongside unregulated content created by users runs the risk of displaying ads next to offensive content, be it discriminatory, extremist, pornographic, or otherwise unsavory in nature.
Strict ad policies and the ability to regulate content and control where ads are displayed are pivotal when content creation happens at light speed.
YouTube is taking steps to fix the problem. It’s enacting ad policies meant to prevent ads from appearing with offensive content and going on a hiring spree, building a team to help enforce those policies and regulate content. But it’s also creating new controls for advertisers.
A blog post from Google’s Chief Business Officer, Philipp Schindler, lays out these changes, promising that stricter guidelines are on the way not only for ads, but for the platform as a whole. It also looks toward a higher threshold for brand safety with customization options for brands where content is concerned.
This isn’t YouTube’s only attempt to make the platform safer and “nicer,” and the advertising boycott also isn’t YouTube’s only recent controversy. The last two weeks have brought plenty of criticism to YouTube’s door as users discovered that its “Restricted Mode” (intended to create a safer viewing experience) was filtering out inoffensive LGBTQ content — things like coming out videos, videos discussing issues affecting the LGBTQ community, and even music videos from Tegan and Sara, who are icons in the LGBTQ community.
Restricted Mode isn’t a feature used by the vast majority of YouTube users, so it wasn’t as if thousands of users were suddenly missing out on all of this content. The major concern over its filtering wasn’t that people weren’t seeing the content; it was the implication: that LGBTQ content is somehow inherently offensive. To its credit, YouTube responded, admitting that its filtering of LGBTQ content was misguided and that it was working to fix it.
The YouTube incident is, ultimately, just a symptom of the wider problem facing traditional advertising online. Content moderation is a big job that requires a lot of work, sophisticated regulation protocols, and fine-tuned controls. YouTube had safeguards in place prior to these incidents, but with over 400 hours of video being uploaded to YouTube every minute, total regulation is an impossible ask.
Handing over the reins of a campaign to an automated system that places ads doesn’t offer total protection to brands. Control over brand messaging and the places in which ads appear is vital for brands. More “human” approaches to advertising like influencer marketing offer the kind of control and protection that traditional advertising on online platforms can’t, no matter how sophisticated the regulation practices. Influencers can be vetted.
Brands know what they’re getting into when they embark on influencer campaigns. Perhaps most importantly, they can control where their brand messaging will be seen and in what context.
The content regulation and ad policy discussions won’t end with the current YouTube controversy. As the content boom continues to grow, so too will the problem of ads appearing next to user-generated content that hasn’t been vetted. This is very much still a developing issue, and as best practices and thinking around it change and evolve, online advertising will change, too.