

Historically, the emergence of new modes of communication, such as radio and television, initially led to social upheaval before society learned to adapt. In our own time, the rise of social media has given way to a “post-truth” era, where the lines between opinion and fact are blurred, and misinformation, disinformation, and half-truths prevail. We are now in the adaptation phase, but much work remains. What is needed is a scalable means of sorting truth from untruth on the information superhighway.
Why is Misinformation So Prevalent?
In a sense, the emergence of the internet and social media has “democratized” news and information sharing. Anyone, anywhere in the world, so long as they possess a smartphone, can be a “journalist.” Along with that democratization, however, has come an erosion in the quality of news and information disseminated.
In a bygone era, journalists had to protect their reputations and were thus incentivized to meet at least minimum standards of integrity when delivering information to the public. Today’s online masses, who can change their “handles” on a whim and fear no loss of career, have no such binding incentives. Further, the internet, as an open platform, is vulnerable to exploitation by malicious actors who purposely seed disinformation and half-truths to further their nefarious agendas.
As I illustrated here, the core problem lies in Brandolini’s Law, which notes that it is an order of magnitude more difficult to disprove a false claim than it is to make it. Brandolini’s Law suggests an inherent asymmetry, an imbalance, between truth and lies. Countering misinformation online, therefore, requires that we tilt that balance back in the other direction.
The Limitations of Fact-Checking
In recent years, “fact-checkers” have emerged as one of the premier methods of countering online misinformation. On popular social networks, users and/or AI flag articles or posts they suspect are false or misleading; a select few are then referred to professionals for analysis. Fact-checkers are usually trained specialists who review articles for their veracity. Their assessment may result in the removal or downranking of a posting, or the addition of context and warnings, measures that have been shown to dramatically reduce impressions.
The challenge, however, remains that aforementioned asymmetry. A handful of fact-checkers can review only a tiny fraction of online postings. Counterintuitively, this may actually worsen misinformation via an “implied truth effect”: articles that are not flagged are implied to be “true” when, in actuality, they were merely never reviewed.
In addition, fact-checkers face accusations of bias, as some contend that these professionals seek to push an agenda. Together, the lack of scalability and general distrust greatly limit the usefulness of fact-checkers as a means of countering misinformation.
The Wisdom of Crowds
There has long been a bias against laypeople in the context of making complex decisions. You can see this bias in modern politics, for example. People can vote for their representatives, but they are not trusted with direct self-rule. It is the elected representatives who make the actual decisions, ostensibly on behalf of their constituents.
It has long been held that the “experts” would be less biased and more competent when tasked with these difficult decisions. Recent research has suggested that this thinking is, at least partly, flawed. In fact, a “crowd” of laypeople has a certain “wisdom” of its own that can match or exceed expert judgment.
In the classic game of guessing the number of jelly beans in a jar, for example, individual answers are often noisy and very far from the correct total. But a large number of guesses from a large number of individuals, when averaged out, is often fairly close to the actual total. This suggests a kind of “aggregate” wisdom, known as the “wisdom of the crowd.”
Each individual brings some knowledge to the table. That knowledge is often messy and imperfect on its own, mostly data “noise.” But when properly aggregated alongside the knowledge of many others, the signal emerges from that noise, and the Wisdom of the Crowd can often outperform the “experts.”
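A toy simulation makes this concrete. In the Python sketch below, the jar size, the noise model, and the crowd sizes are all invented for illustration; each individual guess is wildly off, yet the average closes in on the true count as the crowd grows:

```python
import random
import statistics

random.seed(42)
TRUE_COUNT = 1_000  # hypothetical number of jelly beans in the jar

def individual_guess() -> int:
    # Model each person as noisy: their guess is the true count scaled
    # by a random factor between 0.4 and 1.6 (an assumed noise model).
    return round(TRUE_COUNT * random.uniform(0.4, 1.6))

for crowd_size in (1, 10, 100, 10_000):
    average = statistics.mean(individual_guess() for _ in range(crowd_size))
    error = abs(average - TRUE_COUNT) / TRUE_COUNT
    print(f"crowd of {crowd_size:>6}: average guess = {average:,.0f} (error {error:.1%})")
```

Because each person’s error is independent, the errors tend to cancel out in the average; that cancellation is the entire trick behind the wisdom of the crowd.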
A Scalable Solution
Therein lies the potential of scalable fact-checking to counter misinformation. Instead of relying solely on professionals, perhaps laypeople themselves can rate the veracity of online postings. At first, this idea may sound mad, but perhaps it isn’t.
To see if this concept is workable, researchers Jennifer Allen, Antonio Arechar, Gordon Pennycook, and David Rand gathered over 1,100 laypeople via Amazon’s Mechanical Turk. They sought to examine how well laypeople categorized news articles, for example as “false,” “misleading,” or “true,” relative to professional fact-checkers.
Crucially, they were to do this with only two pieces of information: the headline and the lede. This is ideal for three reasons. One, it makes for more rapid analysis, in the spirit of scalability. Two, it protects news articles that may have “sensationalized” headlines but accurate ledes. Three, since the majority of people read only headlines before sharing online, headline accuracy is uniquely important.
The study found that even with a relatively small “crowd” of laypeople (fewer than 26), the correlation between the crowd’s averaged ratings and the fact-checkers’ ratings was not significantly lower than the correlation among the three professional fact-checkers themselves. The results demonstrate that laypeople can be just as consistent in their categorization of news as professionals. Notably, however, the study did not evaluate whether fact-checkers or laypeople truly delineate what is true from what is not; it only examined the relative consistency of their categorizations.
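For intuition, here is a rough sketch of the kind of consistency comparison the study describes. The data are entirely made up: the 1–7 accuracy scale, the participant counts, and the noise levels are assumptions for illustration, not the study’s numbers. The point is that individually noisy lay ratings, once averaged per article, can track the fact-checkers’ averages closely:

```python
import random
import statistics

random.seed(0)
N_ARTICLES, N_LAYPEOPLE, N_CHECKERS = 200, 26, 3

# Hypothetical underlying accuracy of each article on a 1-7 scale.
truth = [random.uniform(1, 7) for _ in range(N_ARTICLES)]

def rating(accuracy: float, noise: float) -> float:
    # An individual rating: the underlying accuracy plus personal noise,
    # clipped to the 1-7 scale. Laypeople get a larger noise term below.
    return min(7.0, max(1.0, accuracy + random.gauss(0, noise)))

crowd_avg = [statistics.mean(rating(t, noise=2.0) for _ in range(N_LAYPEOPLE))
             for t in truth]
checker_avg = [statistics.mean(rating(t, noise=0.8) for _ in range(N_CHECKERS))
               for t in truth]

# Pearson correlation between the crowd's averages and the checkers' averages
# (statistics.correlation requires Python 3.10+).
r = statistics.correlation(crowd_avg, checker_avg)
print(f"crowd average vs. fact-checker average: r = {r:.2f}")
```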
The Future of Online Media
This research suggests that it may be possible for social media companies to engage in crowdsourcing to mitigate misinformation. False or misleading stories could be proportionally downranked based on those crowdsourced scores. Downranking reduces the total number of impressions that a story gets, preserving the “freedom of speech” element, but restricting the “reach” of false information.
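A minimal sketch of what proportional downranking might look like, assuming each post carries a crowdsourced credibility score between 0 and 1 (the weighting rule and the minimum-ratings threshold below are hypothetical, not any platform’s actual algorithm):

```python
def downranked_weight(base_weight: float, credibility: float,
                      ratings_count: int, min_ratings: int = 25) -> float:
    """Scale a post's ranking weight by its crowd credibility score (0-1).

    Posts with too few ratings keep their base weight, so an unrated post
    is never silently treated as "true" or "false."
    """
    if ratings_count < min_ratings:
        return base_weight  # not enough crowd data: leave ranking unchanged
    return base_weight * credibility

# A dubious story scored 0.2 by the crowd keeps only 20% of its usual reach.
print(downranked_weight(base_weight=1.0, credibility=0.2, ratings_count=40))  # 0.2
```

Note the minimum-ratings guard: leaving lightly rated posts at their base weight avoids implying that a post is accurate merely because no one has scored it yet.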
Additionally, because platforms would no longer be bound by the time constraints of a few professional fact-checkers, many more online postings could be scored. Crowdsourcing could therefore greatly mitigate the “implied truth” effect by enabling a much broader swath of postings to be reviewed.
Crowdsourced truth is neither a perfect nor a complete answer to today’s misinformation challenge. Instead, it could be one piece of a puzzle that likely also includes professional fact-checkers and artificial intelligence/machine learning. Nevertheless, the sooner we slow the onslaught of misinformation, the more prosperous the future will be.