Moat calls itself the “Nielsen of digital.” It’s a service advertisers use to make sure the right people are seeing and clicking on their ads. And those advertisers today have a problem: Because of the automated nature of so much online advertising, cash is increasingly flowing to sites that peddle fake news, often without the knowledge of the advertisers themselves. That’s not the kind of news brands want to be seen paying for. But Moat says it’s got a fake-news fix that could dry up ad dollars that keep fake news sites in business.
Company executives say Moat's particular window on the internet, tracking billions of ads every day, gives it the kind of data needed to spot fake news stories as they go viral. As it develops its anti-fake news analytics, it's also coordinating with journalists and fact-checkers to put human eyes on flagged stories to develop a consensus on which sites deserve the "fake" label. After they make a decision, cutting off the ad dollars is an easy engineering problem.
“The ad and publishing ecosystem has a responsibility to make it harder for creators of purposely fake news to make money,” says Dan Fichter, Moat’s chief technology officer. Now, Moat is developing a fake news “metric” he says will integrate with automated ad-buying systems online to show the company is taking that responsibility seriously.
Because Moat's tech touches so many ads, Fichter says the company can quickly identify spikes in web traffic. Moat's filter would flag an unknown site suddenly seeing a traffic surge. This visibility across so much of the web is important in the whack-a-mole enterprise of cutting off fake news sites. A fake news writer might publish a story, get caught, and get shut down, then copy the same story to 10 other sites and start the cycle all over again.
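Moat hasn't published its detection logic, but the spike-flagging idea Fichter describes can be sketched as a simple baseline comparison. Everything below is illustrative: the class, thresholds, and field names are hypothetical, not Moat's actual system.

```python
from collections import deque

class SpikeDetector:
    """Flag sites whose hourly ad-request volume jumps far above their
    recent baseline. A minimal illustrative sketch, not Moat's real system."""

    def __init__(self, window_hours=72, spike_factor=10.0, min_requests=1000):
        self.history = {}                  # site -> deque of recent hourly counts
        self.window_hours = window_hours
        self.spike_factor = spike_factor   # hypothetical surge threshold
        self.min_requests = min_requests   # ignore noise from tiny sites

    def observe(self, site, hourly_count):
        """Record one hour of traffic; return True if the site should be
        flagged for human review (not automatically blacklisted)."""
        counts = self.history.setdefault(site, deque(maxlen=self.window_hours))
        baseline = sum(counts) / len(counts) if counts else 0.0
        counts.append(hourly_count)
        # An unknown or quiet site suddenly seeing a surge gets flagged.
        return (hourly_count >= self.min_requests
                and hourly_count > self.spike_factor * max(baseline, 1.0))
```

A copied-and-republished story would trip this kind of check on each new site it lands on, since those sites start with little or no baseline traffic.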
The window on fake news would open even wider, Fichter says, if online ad exchanges would join in. Ad exchanges facilitate buying and selling ad inventory from multiple ad networks. They could see whether a particular story is taking off via a sudden jump in demand for ads on the site and cut off funding before ads ever get served, which would suit advertisers just fine.
Fake news is a relatively new type of brand un-safe environment, or at least one that advertisers are newly conscious of, says Brian Wieser, a media analyst at Pivotal Research.
After Moat's system flags a potentially fake news story or site, things get more subjective. Moat's plan is to work with journalists, editors, and fact-checkers to come up with a blacklist: a consensus decision about which sites and stories should count as fake news. Moat has discussed its efforts with organizations like Poynter's International Fact-Checking Network, and Fichter says he hopes Moat can convince them and others to formally collaborate. "An entirely automated solution is nowhere on the horizon," says Alexios Mantzarlis, who leads Poynter's fact-checking effort. "You just cannot lose the human element for the moment."
But creating a list is tricky. If a tally of liberal-leaning and conservative-leaning fake news sources showed one side outnumbering the other, critics could, and likely will, call the effort biased. Moat hopes to push back against this likely criticism through transparency: showing that Moat's methodology ensures its tech, paired with the skills of human professionals, will catch, say, fake liberal news even if it's not drawing as much traffic as fake conservative news.
"It's important to distinguish between content you don't like, and what is actually not real," says Moat CEO Jonah Goodhart.
Moat's plan is to plug the blacklist into its dashboard. If a certain site were publishing fake news, an advertiser using Moat's analytics would see a new metric showing Moat determined the content on that site was fake. The fake news metric would join Moat's other measurements, such as whether an ad comes into full view of a user, and whether the visitor to a site was really a human or just a bot. Advertisers use Moat's metrics to figure out whether an ad should count as a payable impression, that is, whether it meets the advertisers' standards of "brand safety" and "viewability." If Moat determines that a site that served this ad was fake news, its system would measure the ad impression as invalid. "It would be as if the website had served a non-viewable ad," says Fichter. Since an advertiser only pays a site or ad network if an ad impression counts, they would ideally see a big fat zero on their bills for ads served on sites that traffic in fake news.
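The billing mechanics Fichter describes amount to one more filter over measured impressions. A hypothetical sketch of how a fake-news flag could zero out an impression the same way a non-viewable ad does (all field names and the blacklist are assumptions for illustration, not Moat's actual API):

```python
from dataclasses import dataclass

@dataclass
class Impression:
    site: str
    viewable: bool   # did the ad come into full view of a user?
    human: bool      # real visitor, or just a bot?

# Hypothetical blacklist produced by human fact-checkers.
FAKE_NEWS_SITES = {"totally-real-news.example"}

def is_payable(imp: Impression) -> bool:
    """An impression counts only if it clears every check. A fake-news
    site is treated exactly like a non-viewable ad: the impression is invalid."""
    if imp.site in FAKE_NEWS_SITES:
        return False
    return imp.viewable and imp.human

def billable_count(impressions: list) -> int:
    """Total impressions the advertiser actually pays for."""
    return sum(is_payable(i) for i in impressions)
```

The design choice worth noting is that the fake-news signal is just another boolean gate alongside viewability and bot detection, which is why plugging it into existing automated ad-buying systems is, as the article puts it, an easy engineering problem once humans have made the blacklist decision.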
Moat says it wants to move quickly on shipping its fake news spotter to have it in place for other elections around the world, such as the upcoming French election. And it's not the only ad-tech company working to cut off the dollars platforms are funneling to fake news. AppNexus banned Breitbart News for violating its hate speech policies. Integral Ad Science uses a rating system that shows clients how risky putting an ad up against specific content might be. DoubleVerify launched a filter for "Inflammatory News and Politics" that includes both fake news and heavily partisan sites like Breitbart and Rawstory. Adding Moat's fake news metric to the mix adds one more tool to undermine the infrastructure that makes fake news possible.
Yes, Moat's tech alone won't stop fake news altogether. Nor will anyone else's. Some number of junky advertisers only care about making a sale, not defunding fake news. But Moat hopes it can make a dent. "If the origin of the problem is programmatic, the solution can be programmatic too," Fichter says. Or to translate from online ad-speak: if tech created the problem, it can also fix it.