News Plus 18 Feb 2025 - 8 min read
AMI CPD: 0.5

Brand safety fail: Allegations platforms served top brand, government ads on sites sharing child abuse images trigger 'know your customer' compliance call

By Ricky Sutton - Founder | Future Media

Ricky Sutton and Arielle Garcia: The brand safety problem is commercial, not technical, says Garcia.

Some of the world's largest advertisers – including PepsiCo, Unilever, Adobe, L’Oreal, Nestlé, Honda, Samsung, Paramount+ and the US Department of Homeland Security – are among those mired in allegations that their ads were served on websites containing child sexual abuse material. Those accused of failing to spot the warning signs include supply chain players Google, Amazon, Microsoft and Outbrain, along with the ad verification services monitoring brand safety and ad industry standards bodies, including the globally influential, US-based Media Rating Council and TAG. Now US Senators have waded in and want answers. Former UM privacy chief turned watchdog at Check My Ads, Arielle Garcia, thinks the fallout is only just starting.

When some of the world’s biggest advertisers and the US government stand accused of inadvertently funding distribution of child sexual abuse imagery, it’s a strong signal the $700bn digital ad supply chain remains a mirage of patchworked providers watching their P&Ls over product. Prepare for regulation, says former UM privacy lead Arielle Garcia, with direct consequences for how and by whom that money is handled, spent and tracked.

Now at US watchdog Check My Ads, Garcia sees a tipping point rapidly approaching for open web advertising. Allowing those making all the money to hold sway over industry standards has proven not to work.

“This should really be the inflection point where everyone recognises that self-regulation has quite simply failed,” per Garcia.

The “cosiness” of advertiser trade associations and their adtech counterparts “and how they interplay with the standard setting groups is all very tied together”, she says.

“All of those entities are making brands think that when these things happen, they're edge cases. They're simply not. There's no reason to believe that most brands appreciate just how bad this is.”

She hopes the Adalytics report will shift that level of appreciation.

“Hopefully what comes out of this, first and foremost, is brands realise that when they run with TAG-certified and MRC-accredited vendors, that means having their ads delivered on a site that hosts child sexual abuse material. It means having their ads delivered on sanctioned websites.

“Hopefully this is the wake up call.”

Lifting ‘know your customer’ rules from finance industries might be the logical step for regulation of digital media’s supply chain – alongside forcing platforms to show advertisers exactly where their ads are placed. But there’s also “the elephant in the room” of “perverse incentives” between big tech, agents and the brands they are supposed to serve.

Get the full download via this podcast with Ricky Sutton and Arielle Garcia

The investigation

Ad research body Adalytics was running an investigation to see whether ads from Fortune 500 companies and the taxpayer-funded US Government were being served to bots.

The goal was to reveal ad fraud and expose failures in the tracking and policing of ad delivery. What it uncovered was much more sinister.

The Adalytics report revealed that industry-certified ad tech companies transacted ads on a website known to host child sexual abuse material – known in the industry by the acronym CSAM.

Adalytics notified the FBI, Department of Homeland Security, the National Center for Missing and Exploited Children, and the Canadian Centre for Child Protection.

It named two ad-funded image-sharing sites where the ads were served; both had previously been censured multiple times for facilitating child sex abuse content.

The sites have more than 40 million page views per month – more than the Financial Times, The LA Times, Politico, or the website of the Library of Congress.

Adalytics says its research showed ads were delivered to the site by vendors including Amazon, Google, Criteo, Quantcast, Microsoft, Outbrain, TripleLift, Zeta Global, Nexxen, and more.

Major advertisers who unknowingly had their ads placed there included the US Department of Homeland Security, MasterCard, Starbucks, PepsiCo, Honda, Uber Eats, Sony Electronics, Unilever, Adobe, L’Oreal, Nestlé, Adidas, Domino’s, Samsung, Paramount+, HBO Max, Dyson, the Wall Street Journal, and savethechildren.org.

All absolutely awful, but perhaps most disturbing of all was that several market-leading brand safety verification vendors failed to stop it happening.

Ad tracking tags from the leading vendors DoubleVerify and IAS were found measuring and monitoring the ads on the abuse pages.

Yet both reported the campaigns were 100 per cent brand safe.

Adalytics reported:

“Video ads showing the National Football League were served on a site showing a photo from an online sex game. The ads appeared to use DoubleVerify tags.”

“It’s unclear to what degree those policies are enforced if US government ads can be seen on a website that’s been known to host CSAM for more than three years.”

Checks showed the websites were hosting porn, and an ad delivered by Google DV360 for the US Department of Homeland Security.

When child sexual abuse was discovered, the evidence was handed to the FBI and US law enforcement, as well as the National Center for Missing and Exploited Children (NCMEC).

NCMEC was established by the US Congress in 1984 to prevent child victimisation. It publishes annual reports naming platforms where exploitative material is found.

Checks on its records found that the abuse sites serving these ads had been flagged in its reports 27 times in the past three years.

Adalytics said it used a bot to crawl historic webpages to look for ads and to track who served them:

“On September 19, 2023, (the) bot crawled the site and whilst screenshotting, it was served a digital ad by Google DV360. The ad was for the US Department of Homeland Security.”

“This means the US government may have inadvertently helped finance a website known to host and distribute child sexual abuse material via its ad spend in Google DV360.”

Senators move

Arielle Garcia and the Check My Ads Institute have since filed a complaint with the Trustworthy Accountability Group.

TAG is an ad industry initiative designed to combat fraudulent digital advertising, improve transparency, and enhance brand safety.

It was founded by major advertising trade groups, including the Association of National Advertisers, the American Association of Advertising Agencies, and the Interactive Advertising Bureau.

It is funded by its members, which include the same organisations identified in the investigation, including Google, Amazon and Outbrain, among others.

The complaint seeks transparency on how TAG sets its standards as well as its enforcement practices.

“Certification without enforcement is a scourge on the industry that undermines trust and cuts against TAG’s very mission,” Garcia says in the complaint.

The complaint echoes concerns raised by prominent US Senators Marsha Blackburn (R) and Richard Blumenthal (D).

The pair, who have worked together on increasing safety for young people on the web, have written to the CEOs of Google, Amazon, DoubleVerify, IAS, the Media Rating Council and TAG demanding accountability.

The Check My Ads Institute has urged TAG to take decisive action, suspending the certification of non-compliant vendors for six months while conducting a public and independent audit.

The senators want answers from the CEOs by Friday.

“We'll see what happens after the responses get submitted by these companies. But the fact alone that Congress is looking at the actual standard setters is an important milestone,” says Garcia.

She says the problem should never have got this far in the first place – and without regulation, is about to become much, much worse as generative AI floods the web.

Systemic fail

The website hosting child sex abuse images highlighted by the Adalytics report should have raised plenty of red flags.

“It was a peer-to-peer photo sharing website where you could upload photos anonymously. It had a bunch of settings that help people prevent things from being found by accident. For example, you can set photos to auto delete. You can make it not indexed by search engines and I think Reddit blocks links from this site. So … [it was] inherently high risk at a website level,” says Garcia.

How has the industry come to a point where advertisers shun news sites citing ‘brand safety’ concerns, yet those paid to ensure brand safety are, albeit inadvertently, channelling major brand dollars into sites enabling distribution of child sex abuse images?

Garcia blames “meticulously-crafted, loophole-ridden, very specific standards – mostly with no teeth”. She’s picked through them as part of Check My Ads’ complaint.

“Shouldn't it be on the vendors to do basic diligence of the websites that they're monetising? … This was a website that was on a list. Technically it was more than feasible to stop it.”
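Garcia's point – that the site was already on a list, so blocking it was "more than feasible" – amounts to a simple pre-transaction gate. A minimal sketch, assuming a vendor maintains an exclusion set built from feeds like NCMEC flags or a pirate domain list (the domain names below are invented for illustration):

```python
# Hypothetical pre-bid gate: refuse to transact ads on flagged domains.
FLAGGED_DOMAINS = {            # illustrative; built from threat-intel feeds
    "bad-image-host.example",  # e.g. flagged repeatedly in NCMEC reports
    "pirate-site.example",     # e.g. on a pirate domain exclusion list
}

def may_monetise(domain: str) -> bool:
    """Basic diligence: only transact ads if the domain is not flagged."""
    return domain.lower() not in FLAGGED_DOMAINS
```

The entire check is a set lookup before any bid is placed, which is why Garcia frames the failure as commercial rather than technical: the list existed, but nothing obliged anyone to consult it.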

Greed or incompetence?

Garcia suggests the problem is not technical, but commercial.

“The way that these standards get created tends to be to the appetite of the largest participants in the working groups. Obviously it's beneficial to adtech companies to be able to monetise more inventory,” she says.

“So I don't have an answer for how we got here, other than the fact that there's a commercial interest in maximising what you monetise, and there's a commercial interest in not having to spend that much to do any compliance-related work.”

Garcia believes even basic due diligence between adtech’s main players would have prevented the latest fiasco, stating Google had flagged the peer-to-peer site within its systems as hosting pirated content. And yet, “the other side of Google didn't stop ads from serving on this website. It doesn't make any sense,” she says.

Meanwhile, TAG operates an industry intelligence-sharing platform called Threat Exchange which includes a pirate domain exclusion list.

“They [TAG] say they work with ‘the industry’s leading adtech companies’. Ostensibly, one would think Google would be part of this Threat Exchange. Why, even on the piracy issue, was that not shared?

“There's a whole lot of things that show a breakdown at every level. Yet instead of addressing why this website was being monetised for so long, a lot of the responses that you see from the players involved – especially DoubleVerify’s responses – are focused on the minutiae of their technology,” says Garcia.

“For example, ‘it would not be effectively realistic for them to scan every site on the internet’. No, you're an ad verification firm. We're talking about monitoring where the ads are served.

“Everyone wants to come up with a technical excuse for why this happened. Instead of focusing on ‘why did we not just vet the websites that we were monetising?’”

KYC incoming?

Garcia and others – including the likes of AppNexus founder turned Scope3 CEO Brian O’Kelley – think imposing a ‘know your customer’ obligation on the adtech supply chain may be the next logical step.

Especially with the “money making machine” of generative AI about to swamp publishing.

“You have to imagine that we're reaching a tipping point here. Open web programmatic would become so overrun with AI-generated spam sites that it would become synonymous with garbage.”

Which threatens an existential crisis even for some of those firms now in the firing line of the Senators’ inquiry.

“We've already established self-regulation has failed. So we need actual regulation. We need to have something like know your customer rules for ad tech to be codified,” says Garcia.

“We need the ad platforms to have a responsibility to vet who they're allowing to monetise via their platform so that they aren't monetising illegal activity, child abuse etcetera.”
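A 'know your customer' rule, by analogy with finance, would shift vetting from the individual domain to the entity being paid. One hedged sketch of what such onboarding checks might cover – the fields and criteria here are invented for illustration, not drawn from any proposed rule:

```python
# Hypothetical KYC-style onboarding check for a publisher seeking to monetise.
from dataclasses import dataclass, field

@dataclass
class Publisher:
    legal_name: str
    verified_identity: bool    # e.g. business registration confirmed
    domains: list[str]         # sites the publisher wants to monetise
    flagged_domains: set[str] = field(default_factory=set)  # threat-intel hits

def passes_kyc(p: Publisher) -> bool:
    """Admit a publisher only if its identity is verified and none of
    its declared domains appear on a flag list."""
    return (
        p.verified_identity
        and bool(p.domains)
        and not any(d in p.flagged_domains for d in p.domains)
    )
```

The key difference from a per-domain blocklist is accountability: the platform knows who it is paying, so a flagged domain implicates a named, verified counterparty rather than an anonymous seller.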

Follow the money

A further regulatory step may be less welcomed by the adtech majors.

“You need brands to be able to have readily accessible data on where exactly the campaign is aired,” per Garcia. “We need to start with brands having the right to be able to check their own ads.”

She thinks the changed nature of client-agency relationships and “perverse incentive structures” that come with principal trading and “dodgy enterprise-level deals with vendors” is another “elephant in the room” that could be exorcised by lifting regulation from financial markets.

“They should have a duty of care; they should be agents on behalf of their client. These are not new; these are things that exist in other markets, certainly financial services.

“So there are precedents that should give us optimism that it's a solvable problem. And with the engagement of Congress on this issue, hopefully we're at a time now where we're going to see some real movement.”

