Communications measurement in the age of Fake News
“Lying is Bad”
“Well, Congresswoman, I think lying is bad,” offered Facebook CEO Mark Zuckerberg in a testy exchange with Rep. Alexandria Ocasio-Cortez recently, as he testified before the House Financial Services Committee in Washington, D.C.
The subject being addressed was Facebook’s position on fact-checking political advertising. It doesn’t do this currently, though the following week hundreds of the company’s employees wrote to Zuckerberg suggesting that this needs to change.
The Facebook issue – which is unlikely to disappear quickly, particularly given Twitter’s recent developments in this regard – stands alongside deepfakes and purported “fake news” as an example of the kinds of technology-driven issues that challenge our society and our notions of free speech and censorship in new ways.
Why is this issue a concern for marketing and communications executives, and what can we do about it?
In 2014, the Association of National Advertisers (ANA) partnered with the cybersecurity company White Ops to determine the scale of the problem of bots and fraudulent impressions (namely, advertising ‘seen’ not by humans but by bots); the key finding was that in 2015 an estimated $6.3 billion would be wasted on fraudulent impressions.
This important study and its subsequent iterations, in tandem with advertiser efforts such as Procter & Gamble’s declarations on how it is rationalizing its digital ad spend, have helped reduce the estimated spend lost to bots to $5.8 billion in 2019, despite double-digit year-on-year growth in digital ad spend.
In summary, progress is being made in the advertising space; how is the issue affecting the world of communications?
Bringing the Outside In
One of the key roles of today’s communication leader is to bring the “outside-in” view of the company.
Specifically, this means determining what is being said about the company, its competitors, and its sector, in order that the communications function can determine the narrative for the company and its executives, to enable the most productive connections both internally and externally, and to mitigate risks.
Now there’s a new challenge in the mix: what happens when some of what is being said isn’t actually being said by real people, but by bots?
While Twitter appears to be tackling this issue, a San Francisco-based startup calculated that for certain issues the proportion of bot-driven content on Twitter can be as high as 60%. Granted, this figure comes from a ‘hot-button’ political and social issue, namely the extent of migration from Central America to the United States. The figure is likely to be lower for less polarizing subjects, and lower still for brand- and company-specific commentary.
That said, as companies evolve their communications and risk functions to reflect not just the concerns of their shareholders but a broader group of stakeholders including their customers – seen most prominently in the latest pledge of the Business Roundtable – this will necessarily expand their communications sphere of operations into territories that are also the province of attitudes, values, opinions…and bots.
Consider just two issues: the extent to which diversity and inclusion now feature as key tenets of many organizations’ objectives; and how innovation is an important metric by which organizations assess their performance.
These are both areas where opinions can be polarized and are politically charged; as such, they are the very types of issue where bots are likely to be active, and where this fraudulent activity can distort communications measurement. Note how this differs from the ad fraud situation: fraud in advertising costs a corporation money because its advertising is not seen by the people its marketing team wants to reach; fraud in communications, by contrast, means that bogus content is seen by the very people you are trying to persuade and influence.
Make the Real Stakeholders Count
The central question for the communications leader, then, is how to deal with this bogus data. Ultimately, this content is out there in the public domain, potentially being seen by your stakeholders. So how should you treat it?
The right approach – a “forcing function” to validate your data – is to make sure you’re only tracking the opinions of your actual stakeholders, whether these are customers, employees, the media, politicians, regulators, or investors. There are a number of ways to do this, including manual validation and a range of automated techniques such as down-weighting overly ‘prolific’ sources.
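To make the idea concrete, here is a minimal sketch of one of the automated techniques mentioned above – down-weighting overly prolific sources. The function name, the posting-volume cap, and the inverse-scaling rule are illustrative assumptions for this example, not a description of any particular vendor’s method.

```python
from collections import Counter

def weight_posts(posts, cap=5):
    """Down-weight authors who post far more often than is typical.

    posts: list of (author, text) tuples from a social listening feed.
    Returns (author, text, weight) triples, where weight shrinks once
    an author's volume exceeds the cap. The cap of 5 and the inverse
    scaling are illustrative assumptions, not an industry standard.
    """
    counts = Counter(author for author, _ in posts)
    weighted = []
    for author, text in posts:
        n = counts[author]
        # Full weight up to the cap, then inverse scaling: an author
        # with 50 posts contributes 5/50 = 0.1 weight per post, so the
        # account's total influence is capped at 5 posts' worth.
        weight = 1.0 if n <= cap else cap / n
        weighted.append((author, text, weight))
    return weighted
```

The effect of a scheme like this is that a suspected bot firing off dozens of messages carries no more aggregate weight in your measurement than a handful of posts from a genuine stakeholder, without requiring you to delete any content outright.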
The perils of not doing so are clear: suppose you are using online content to inform a broad communications strategy; without filtering for your actual stakeholders, you may be inferring ‘successful’ strategies from an unfiltered dataset that includes bot-driven content.
Similarly, if you’re looking to establish the white space in your sector by identifying the themes that are underserved by your peers and competitors, not focusing on stakeholders could lead you to believe that a given theme is more important than it actually is.
Finally, if you’re not tracking how different stakeholder groups feel about different aspects of your company’s activities, you’re unlikely – independent of the question of bots – to be unpacking your reputation to the degree that you need to in order to make the most effective and compelling connections both internally and externally.
In conclusion, the fundamental question to ask your measurement supplier is this: are you able to tell me what my key stakeholder groups think about my company? The very act of focusing on stakeholders and validating them makes sure that you’re only making decisions based on real opinions.