Google has a crucial stake in a healthy and sustainable digital advertising ecosystem—something we’ve worked to enable for nearly 20 years. Every day, we invest significant team hours and technological resources in protecting the users, advertisers and publishers that make the internet so useful. And every year, we share key actions and data about our efforts to keep the ecosystem safe by enforcing our policies across platforms.

Dozens of new ads policies to take down billions of bad ads

In 2018, we faced new challenges in areas where online advertising could be used to scam or defraud users offline. For example, we created a new policy banning ads from for-profit bail bond providers because we saw evidence that this sector was taking advantage of vulnerable communities. Similarly, when we saw a rise in ads promoting deceptive experiences to users seeking addiction treatment services, we consulted with experts and restricted advertising to certified organizations. In all, we introduced 31 new ads policies in 2018 to address abuses in areas including third-party tech support, ticket resellers, cryptocurrency and local services such as garage door repairmen, bail bonds and addiction treatment facilities.

We took down 2.3 billion bad ads in 2018 for violations of both new and existing policies, including nearly 207,000 ads for ticket resellers, over 531,000 ads for bail bonds and approximately 58.8 million phishing ads. Overall, that’s more than six million bad ads, every day.

As we continue to protect users from bad ads, we’re also working to make it easier for advertisers to ensure their creatives are policy compliant. Next month we’ll launch a new Policy manager in Google Ads, similar to our AdSense Policy Center, that will give tips on common policy mistakes to help well-meaning advertisers and make it easier to create and launch compliant ads.

Taking on bad actors with improved technology

Last year, we also made a concerted effort to go after the bad actors behind numerous bad ads, not just the ads themselves. Using improved machine learning technology, we were able to identify and terminate almost one million bad advertiser accounts, nearly double the number we terminated in 2017. When we take action at the account level, it helps to address the root cause of bad ads and better protect our users.
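The models themselves aren’t something we detail publicly, but the basic idea of acting at the account level rather than on each individual ad can be sketched in a few lines. Every signal, threshold and name below is purely illustrative, not a description of our production systems:

```python
# Purely illustrative: roll per-ad risk scores up into an account-level
# decision, so enforcement targets the bad actor rather than chasing each
# individual ad. All signals and thresholds here are invented for the example.
from dataclasses import dataclass
from typing import List

@dataclass
class Ad:
    ad_id: str
    risk_score: float  # 0.0 (benign) to 1.0 (almost certainly violating)

def account_action(ads: List[Ad],
                   ad_threshold: float = 0.8,
                   account_threshold: float = 0.5) -> str:
    """Return an enforcement decision for a single advertiser account."""
    if not ads:
        return "no_action"
    violating = [ad for ad in ads if ad.risk_score >= ad_threshold]
    # If a large share of the account's ads look bad, act on the account
    # itself instead of disapproving creatives one by one.
    if len(violating) / len(ads) >= account_threshold:
        return "terminate_account"
    if violating:
        return "disapprove_ads"
    return "no_action"

if __name__ == "__main__":
    ads = [Ad("a1", 0.95), Ad("a2", 0.91), Ad("a3", 0.20)]
    print(account_action(ads))  # -> terminate_account
```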

In 2017, we launched new technology that allows for more granular removal of ads from websites when only a small number of pages on a site are violating our policies. In 2018, we launched 330 detection classifiers to help us better detect “badness” at the page level, nearly three times the number of classifiers we launched in 2017. So while we terminated nearly 734,000 publishers and app developers from our ad network, and removed ads completely from nearly 1.5 million apps, we were also able to take more granular action by taking ads off nearly 28 million pages that violated our publisher policies. We use a combination of manual reviews and machine learning to catch these kinds of violations.
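The classifiers themselves aren’t public, but the idea of granular, page-level enforcement can be sketched in simplified form. The keyword heuristic below is a stand-in for a real classifier, and every name and threshold is illustrative rather than a description of how our systems actually work:

```python
# Purely illustrative sketch of page-level enforcement: score each page on a
# site and remove ads only from the violating pages, escalating to a
# site-wide action only when most of the site is in violation.
from typing import Dict, List

POLICY_TERMS = {"fake tickets", "guaranteed cure", "free crypto giveaway"}

def page_violates(page_text: str) -> bool:
    """Stand-in page-level classifier (keyword heuristic for the example)."""
    text = page_text.lower()
    return any(term in text for term in POLICY_TERMS)

def enforce(site_pages: Dict[str, str], site_threshold: float = 0.7) -> dict:
    flagged: List[str] = [url for url, text in site_pages.items()
                          if page_violates(text)]
    if site_pages and len(flagged) / len(site_pages) >= site_threshold:
        return {"action": "remove_ads_sitewide", "pages": flagged}
    return {"action": "remove_ads_on_pages", "pages": flagged}

if __name__ == "__main__":
    site = {
        "/news/article-1": "Local garage door repair prices compared.",
        "/deals/tickets": "Fake tickets to sold-out shows, guaranteed entry!",
    }
    # Only /deals/tickets loses ads; the rest of the site keeps serving.
    print(enforce(site))
```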

Addressing key challenges within the digital ads ecosystem

From reports of “fake news” sites, to questions about who is purchasing political ads, to massive ad fraud operations, there are fundamental concerns about the role of online advertising in society. Last year, we launched a new policy for election ads in the U.S. ahead of the 2018 midterm elections. We verified nearly 143,000 election ads in the U.S. and launched a new political ads transparency report that gives more information about who bought election ads. And in 2019, we’re launching similar tools ahead of elections in the EU and India.

We also continued to tackle the challenge of misinformation and low-quality sites, using several different policies to ensure our ads are supporting legitimate, high-quality publishers. In 2018, we removed ads from approximately 1.2 million pages, more than 22,000 apps, and nearly 15,000 sites across our ad network for violations of policies directed at misrepresentative, hateful or other low-quality content. More specifically, we removed ads from almost 74,000 pages for violating our “dangerous or derogatory” content policy, and took down approximately 190,000 ads for violating this policy. This policy includes a prohibition on hate speech and protects our users, advertisers and publishers from hateful content across platforms.  


How we took down one of the biggest ad fraud operations ever in 2018

In 2018, we worked closely with cybersecurity firm White Ops, the FBI, and others in the industry to take down one of the largest and most complex international ad fraud operations we’ve ever seen. Codenamed “3ve,” the operation used sophisticated tactics to exploit data centers, malware-infected computers, spoofed domains and fake websites. In aggregate, 3ve produced more than 10,000 counterfeit domains and generated over 3 billion daily bid requests at its peak.

3ve tried to evade our enforcement, but we conducted a coordinated takedown of its infrastructure. We referred the case to the FBI, and late last year charges were announced against eight individuals for crimes including aggravated identity theft and money laundering. Learn more about 3ve and our work to take it down on our Security Blog, as well as through the white paper we co-authored with White Ops.


We will continue to tackle these issues because as new trends and online experiences emerge, so do new scams and bad actors. In 2019, our work to protect users and enable a safe advertising ecosystem that works well for legitimate advertisers and publishers continues to be a top priority.
