US-based companies categorize information through biased lenses
When confronted with a basic statement such as “Boston is in Massachusetts,” for instance, humans readily classify it as “true” or “false.” This binary framework is simple and intuitive, but the world is far more complex, and humans require more nuanced categorical frameworks to determine the truth.
When technology companies and researchers construct frameworks for categorizing information, they are often prone to bias. Yet companies obscure these decisions from the ordinary citizens who consume this information every day and ultimately shape their lives around it.
In Africa, where mis- and disinformation campaigns often go viral online, especially during politically charged periods, US-based tech and social media companies become arbiters of the “truth.” Millions of Africans use search engines and platforms like Google, Twitter and Facebook, which filter information through their own biased lenses.
To complicate matters, these tech and social media platforms — which act as the main gateways for news and opinions — often adopt dissimilar frameworks to categorize information.
For example, Google, the giant of the search industry, has developed a feature to check viral claims by categorizing news on a scale of False, Mostly False, Half True, Mostly True, and True. Facebook uses a similar approach but focuses on applying False or Partly False ratings to content reported to it. Meanwhile, Twitter recently announced a framework with three categories: Misleading Information, Disputed Claims, and Unverified Claims.
Fact-checking organizations differ radically in their categorizations.
For example, PolitiFact invented the Truth-O-Meter. While it follows the standard “true, mostly true, mostly false, false” rating, it also uses a “pants on fire” category for statements that are “not accurate and makes a ridiculous claim.” Snopes uses a very different method with 14 categories.
By contrast, Africa Check uses eight categories. To guide their decisions, they rate statements of fact, leave the burden of proof with the speaker, focus on significance, use the best evidence available at the time, and commit to updating information and clarifying mistakes when “new or better evidence appears.”
Meanwhile, Full Fact, a UK-based fact-checking organization, uses no categorization at all — leaving it for the reader to judge.
However, tech platforms that launch partnership programs with fact-checking organizations usually demand that their partners adhere to the platform's own categorization framework. For example, Facebook expects its fact-checking partners to rate reviewed content according to Facebook's published framework. Over time, this results in the dominance of a single worldview, making Facebook a central arbiter of truth.
Many other categorizations are published without the rationale behind them, yet everyday users are asked to adopt and adhere to them. When platforms change these categorizations, based on the views of a few experts or engineers, society is expected, again, to adapt and update its worldview accordingly.
Researchers have attempted to find solutions to the convoluted problems of information categorization. Claire Wardle, a known scholar in the field, created a categorization of seven types: satire or parody, misleading content, imposter content, fabricated content, false connection, false context and manipulated content.
And Bill Adair, a professor at Duke University who leads the Duke Reporters' Lab, has taken a unique approach to information categorization with his MediaReview Taxonomy, which engages a democratic deliberation process to categorize information through public feedback.
But categorization is not a mere technical issue — it deeply influences how citizens think and reason about the world. Cognitive scientists have shown the importance of categorization, with some arguing that “to cognize is to categorize.”
Others warn of the dangers of categorization. Derek Cabrera, a professor at Cornell University, wrote an essay called “The dark sides of categorical thinking.” Bart de Langhe and Philip Fernbach penned a Harvard Business Review article titled “The Dangers of Categorical Thinking.”
The categorization of information is the real authority — but in disguise. Those who hold the power to categorize information can, unwittingly, impose their own perceptions of reality onto citizens.
Categorization of information is an irrefutable necessity, but citizens must remain vigilant about who holds the power to do it. The process of categorization must be transparent and, indeed, subject to constant rational and open scrutiny by society.