For the last few weeks, it’s looked like fake news has spawned its own opposition industry. Money to fight the now-apparent-to-all evil has poured out of foundation coffers faster than Sean Spicer clarifications.
In February, Craig Newmark made his big individual pledge to supporting trust in news, committing $6 million. Then we heard that five funders will create the News Integrity Initiative with $14 million, with Facebook itself participating. Then, just a day later, entrepreneur/journalism funder Pierre Omidyar publicly committed a cool $100 million “to support investigative journalism, fight misinformation and counteract hate speech around the world.” (First up for that fund are a $4.5 million grant to the International Consortium of Investigative Journalists, the Washington-based group behind last year’s Panama Papers investigation, and money for the Anti-Defamation League to fight hate speech.)
Just as the Omidyar largesse was hitting the press, I visited with digital pundit and CUNY professor Jeff Jarvis in his office, just a block away from The New York Times. CUNY had just been named the administrator of the $14 million, with lead money provided by Facebook, the Craig Newmark Philanthropic Fund, the John S. and James L. Knight Foundation, the Tow Foundation, and the Democracy Fund. All told, 19 organizations had signed on to the initiative, including advertising exchange AppNexus and PR companies Edelman and Weber Shandwick.
I wanted to know how Jarvis and CUNY would fight the scourge that had become the media’s bête noire, even as the term “fake news” had spiraled out of all sense, the epithet of choice for anyone, including the White House’s current occupant, who may disagree with a news company’s reports.
I entered the conversation skeptically. I’ve believed, conservatively, that actually providing money to pay more experienced journalists to do journalism — as some of Omidyar’s and other funding has stepped up to do — may be the best tonic for what ails fact-challenged America in 2017. After all, it’s been The New York Times’ and The Washington Post’s reporting, mildly beefed up and abetted by the surge of digital subscriptions, that has ended several careers at Fox News and helped bring down the prevaricating Michael Flynn.
When I asked Jarvis (who directs CUNY’s Tow-Knight Center and has been focused on the trust issue) how he expects to use the money, he ticked off more than a half dozen ideas — “new metrics of impact,” news literacy, better “public listening,” and the more abstract “How do we rethink the informed conversation?” — and noted the unusual speed with which $14 million fell into CUNY’s lap.
It did seem a little squishy. Then, Jarvis turned to his first funded project — one that got formally announced this morning — and began to enthuse about its prospects.
That’s the Open Brand Safety (OBS) framework. If neither the words nor the acronym tell the story, it’s the companies behind this project and their theory of the marketplace that raise hopes that something positive and impactful could come out of it. The good news is that this first News Integrity Initiative funding — amount as yet undisclosed — is going to smart people who have a sensible, well-connected plan to diminish fake news where it hurts most: in the pocketbook.
“The simplest way to describe it is we’re going to try and build the largest list, bringing together all of the lists of all of the fake news domains as well as extreme content,” Storyful CEO Rahul Chopra told me. Storyful is one of two key partners in OBS. “Not only are we finding them, but partners of ours, from First Draft to academia to NGOs to Jeff Jarvis’ team, are too. We’re going to bring together everybody’s lists and give those to the networks, the ad tech firms, the agencies, etc., to try in some ways to choke off money supply coming into these domains and sites.
“The best way to do that is to bring the various parties to the table to help solve it — whether that is the ad agencies, the ad tech firms, the publishers, or the platforms to help define what those signals are — and then work on how we can build technology to detect them faster. Again, use that human skill to consolidate and verify. What I’m after is how do we choke off money supply to this entire bracket.”
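The consolidation step Chopra describes — pooling fake-news domain lists from many partners into one master blocklist — can be sketched in a few lines. This is a hypothetical illustration, not Storyful’s actual system; the function names, partner list variables, and example domains are all invented:

```python
# Hypothetical sketch of merging partners' fake-news domain lists into
# one deduplicated master blocklist. All names and domains are invented.

from urllib.parse import urlparse

def normalize_domain(entry: str) -> str:
    """Reduce a raw list entry (full URL or bare domain) to a lowercase domain."""
    entry = entry.strip().lower()
    if "://" in entry:
        entry = urlparse(entry).netloc
    # Strip a leading "www." so variants collapse to one entry.
    return entry.removeprefix("www.")

def merge_blocklists(*partner_lists):
    """Union the partners' lists into a single sorted master list."""
    master = set()
    for lst in partner_lists:
        master.update(normalize_domain(e) for e in lst if e.strip())
    return sorted(master)

# Entries arrive in different formats from different (invented) partners.
partner_a = ["http://www.totally-real-news.example/story", "hoax-daily.example"]
partner_b = ["Hoax-Daily.example", "fabricated-times.example"]

print(merge_blocklists(partner_a, partner_b))
# → ['fabricated-times.example', 'hoax-daily.example', 'totally-real-news.example']
```

The real work, of course, is in the human verification behind each entry; the merge itself is the easy part.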
Storyful has the story to back up its words. After all, it’s Storyful’s fundamental business to tell what’s accurate from what’s fake, misattributed, made-up, or masquerading in one form or another. As I’ve talked to some of the nearly 200 companies (including The Wall Street Journal, the BBC, The New York Times, YouTube, U.K.’s ITN and Channel 4 News) that it serves as high-end video-plus fact-checker, I’ve only heard positive reviews. Both CEO Chopra and Mandy Jenkins, its head of news, are well respected in the craft. News Corp bought Storyful in December 2013 for about $25 million and brought in Chopra a year later to head it up.
If you remember back half a decade, as the Arab Spring and other uprisings occupied the news and filled social media with video, we heard a fair amount about inaccuracy. Old footage of a demonstration might be touted as current, and a slew of similar concerns arose. As top news companies have turned to Storyful and its peers for professional, real-time vetting, we’ve heard far fewer of those issues. So it makes a lot of sense to turn to professional fact chasers to figure out a protocol that could work in this elusive world of misinformation.
Identifying fakers — along with “extremist” sites, the ISISes of the world, Chopra says — is one thing. Curtailing their power is another.
That’s where Storyful’s partner Moat comes in. Moat focuses on ad analytics for top brand advertisers and “premium” publishers. Its founders — Jonah Goodhart, Noah Goodhart, and Michael Walrath — came out of Right Media (bought by Yahoo in 2007) and they are well connected in the industry.
So they can serve as the conduit for getting Storyful’s blacklist of fake sites — which Chopra says will first be ready within months, and then added to over time — to ad buyers. Then, the idea goes, those ad buyers, in order to both protect their brands and do the right thing, can avoid placing buys on suspect sites. While direct buying is a part of the puzzle, it’s the application of this “Open Brand Safety framework” within programmatic, machine-driven buying that will make the difference in whether the plan succeeds.
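To make the programmatic piece concrete: in machine-driven buying, a buyer’s system receives bid requests and decides in milliseconds whether to bid. A blocklist slots in as a simple pre-bid check. This is a hypothetical sketch (the field layout loosely mirrors an OpenRTB-style request, but the domains and names are invented, and real ad tech integrations are far more involved):

```python
# Hypothetical pre-bid filter: decline any bid request whose publisher
# domain appears on an OBS-style blocklist. Domains are invented.

BLOCKLIST = {"hoax-daily.example", "fabricated-times.example"}

def should_bid(bid_request: dict) -> bool:
    """Return False for inventory on a blocklisted domain."""
    domain = bid_request.get("site", {}).get("domain", "").lower()
    return domain not in BLOCKLIST

requests = [
    {"id": "1", "site": {"domain": "reputable-news.example"}},
    {"id": "2", "site": {"domain": "hoax-daily.example"}},
]
accepted = [r["id"] for r in requests if should_bid(r)]
print(accepted)  # → ['1']
```

The point of the framework is that this check happens on the buy side, so money never reaches the blocklisted site in the first place.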
At the outset, both ad/public relations giants GroupM (“the world’s leading media investment firm”) and Weber Shandwick are on board. Chopra says many conversations with advertisers and their agencies have already taken place, and he expects good participation.
How about Google and Facebook? “We’ve had conversations and we’re continuing to have conversations on what role they play where it’s still to be defined,” Chopra told me. “Look, we’re a company that has very close relationships with both. We’ve always been able to figure out ways to partner and work with them. I’m hoping that this is another example of one where we’ll figure it out as well.”
Since Google and Facebook aren’t doing the actual buying — instead acting as the greatest middlemen the ad industry has ever seen — their exact role in this initiative may not yet be clear. Chopra agrees: “I think that’s why we’re going to the programmatic shops in ad tech.”
Chopra considers the blacklist Phase One of the process. That may be gnarly enough, as the idea of such a list will no doubt draw plenty of contention. Storyful’s history of fair decision-making helps, but it could get bogged down in controversy.
What would Phase Two be about? “We really want to get to understanding the spread of information from closed networks to the open web. That’s the other [project] that we’re working on, that we’re going to start working on with Jeff. If you look at how content is spreading today, whether it be extreme content or fake news, very often it’s starting in subgroups on closed networks before it ever hits the social world or open web. We’re trying to understand that spread of information and that spread of content.”
In other words, coming to grips with the impacts, good and otherwise, of the Dark Social that Alexis Madrigal identified five years ago.
Chopra says Storyful has long had an interest in both better identifying fake news sites and understanding closed networks. Neither, though, is core to its business, so the funding allows it to put resources toward the tasks.
Anyone in the business knows that industry protocols and standards usually take years to develop and to be well accepted. In this case, the participants all feel the pressure of their fast-tracked approach, given the wider societal stakes in voters making their decisions more on facts than fictions. In early 2017, this may seem like an uphill battle, but it looks like the right combatants are on the field.
“When Jeff and us started talking about this initiative, we started to think, ‘Okay, what are the things that we can immediately greenlight or work together on?’ We are big fans of Jeff at Storyful, and of CUNY and the School of Journalism. We take interns from there and recruits from there, and it’s a phenomenal program. We do quite a bit with them.”
As part of the initiative, CUNY is establishing an internal journalism team dedicated to identifying web domains that knowingly present fiction as news.
Says Jarvis: “My long-term hope is Storyful and Moat will support a flight to quality, helping advertisers and platforms not only avoid fraudulent content but support credible and trustworthy media.”