Creating Information Is Easier Than Validating It
It’s just gotten several times easier

Believe none of what you hear
And only half of what you see
— Edgar Allan Poe
Garrett Hardin never seems to rest.
Even after death, he continues to preach to the masses. In Filters Against Folly, he introduces us to three filters. Literacy is one of them, and he speaks of it passionately, because he knows that words can not only convey thought but prevent it.
One good example is the signs outside buildings housing supposedly respectable institutions. They read: This is a Corruption-Free Zone. Is it really? That is my question. Planting such an announcement where anyone entering the building can see it supposedly clears the institution of whatever corruption festers within.
The best example I can think of from a local perspective, however, is the water tankers: blue metallic containers bearing a white inscription that confirms their contents to be clean. Allegedly. The writing reads “Clean Water”. Nothing could be further from the truth.
We lived in Syokimau for some time, where access to water can be an issue; worse still, access to clean water. My hunch is that it was all planned: local goons cut the water supply, then make money off the demand they have artificially created. I never had the time or the resources to investigate whether my guess was true. That, too, is how easy it is to create information without validating it.
Information manipulation has long been used by political leaders to sway the masses during times of crisis. During the Kenyan elections and the recent nationwide uprisings, propaganda was rampant. Anyone and everyone could make an inflammatory post, and, as algorithms tend to do, they distributed it widely. Social media spreads whatever piques curiosity, but it leans hardest on two emotions: sympathy and rage. Corrective content gets buried in the heap of comments, reshares, and false information. Some of it even found its way to my mother’s feed; my innocent mother, who doesn’t know how to screen herself from the fake online world.
Now let’s zoom out of Kenya and consider the whole world. Jesus is a perfect example. Nobody knows what he looked like. Perhaps the best sample we have is the Shroud of Turin, which may itself be suspect, but nobody, least of all myself, would want to attack a religion my mother is heavily invested in. What do we do instead? We create an image for ourselves, embellished with the traits we think Christ would have had. The sign of benediction becomes popular. Movies replicate it. Jesus, who would likely have had dark skin, does not appear that way in Hollywood’s rendering. Who can confirm that the image was not generated as a manipulation gimmick?
Making informed decisions
Leadership, especially in large corporations, requires the making of informed decisions. Rash actions can decimate years of hard work. But how can one make informed decisions if information is easily created with little incentive to validate it?
Take a trusted profession, say, medicine. Doctors are paragons of trust. An oath taken at the completion of medical school binds their actions. There’s also the idea that nobody would wish to be treated by a quack. Critically ill patients trust doctors to pull them from the strong tide of death.
Enter AI.
Creative internet users can employ deepfakes, assume the identity of a popular doctor, and create promotional videos. Naïve consumers will purchase the product. Trust is an easy badge of approval to exploit. Doctors have been used this way for decades because they ought to embody trust. Toothpaste ads have long cited four out of five doctors, but I have yet to meet a doctor who was included in their survey. Maybe I don’t have enough dental surgeons as friends. And the same promotional teams never tell us why, among the five, one disagreed. That dissenter was likely dismissed. Wouldn’t their reason prove more valuable to the public, if that person existed at all?
Here’s a video doing just that: taking a doctor’s face, making deepfake videos, and selling ‘alternative medicine’.
Too focused on the effects social media has on my mother, I forgot the effects AI could have on children. If information can be created so easily without validation, whom will children trust? Some conversations are difficult to have with a parent. AI thus intervenes.
One author writes: “Children may be especially vulnerable to the harms of AI, of course. This is something that really worries me: what happens when the socialization process occurs largely through sycophantic AI? What happens when the point of reference for interpersonal communication is AI rather than real humans? And what happens when kids ask AI questions that they wouldn’t ask their parents — e.g., about sex, suicidal ideations, and racism?”
I recently attended a quiz event where I found myself pleasantly arguing with one of the teams. And what was their argument? How could I possibly argue with AI? I laughed. Politely. AI had already convinced these young adults that they were correct. The debate concerned freshwater lakes, and their evidence was the first result on Google: the AI-generated summary. I was arguing with people who didn’t understand the concept of AI hallucination.
AI hallucination may not be common knowledge among ordinary folk, who are AI’s most frequent users. Whenever I move from department to department confirming patient results, I encounter students making notes straight from ChatGPT’s outputs. My advice hardly lands. The incentives are mixed: I want them to get the right information, but verifying it is taxing, so they ask, why not simply make a request and have the answer presented in simplified, point form?
AI has eroded trust among its most savvy and technical users, yet won it from its most casual ones.
I have previously argued that automation will worsen our human interactions. Already, you can attend a physical event and find everyone on their phones. They may even be reading; admittedly, I keep all my books on my phone and laptop, but I can control the urge to read constantly. Automation, however, will deepen fragmented communication. If it’s easier to make friends by clicking a button, people will choose that over approaching someone in person and striking up a conversation.
Increasing reliance on AI results in increased dependence, and all known forms of dependence work against users. Users, notably, is the term reserved for consumers of drugs and of content. Dependence makes it costly to switch. Convert most of the world into dependent consumers, and you’ve created a fresh market. Those wishing to detach need rehabilitation. Ready market. GPT-4o was better at therapy-speak than GPT-5. After dependence on GPT-4o had grown, GPT-5 was launched and disappointed, and the previous version is now accessible only through a paid subscription. Ready market.
Those who yield to the pressure become easily predictable and are converted into a target market for ads. Social media has been doing that for a while; nothing new there. But AI will make user accounts far more predictable, because it holds a recorded history of your prompts and interests. AI systems can tap not just individual sympathy or hatred, à la social media, but curiosity, a powerful leverage point. Silos form one individual at a time, and one person becomes pitted against another. Trust in AI skyrockets, while trust in individuals erodes.
Trust is the glue of civilizations, and it is already under attack from AI slop. The hailed system of validation in peer-reviewed journals never had a control group. Ideas that could upend fields are dismissed, while those that only slightly nudge them are accepted. Making significant contributions to a field does not demand the highly specific objectives we’re always taught to state when pursuing our scientific interests. Grants ask the same of their applicants. Scientists spend less time exploring nature’s secrets and more time securing tenure and conforming to journals’ instructions to authors.
Science’s hailed process is tarnished. Even paying the reviewers wouldn’t take away the problem.
Now let’s combine the two.
The trust the world had in various professions has been eroded. The COVID-19 crisis was a propaganda fest. Where did the virus arise? What are the effects of the vaccines? Why did they want everyone to key in their personal details, confirming that they had received their shots? How do we ascertain that every step was taken with good intent? Incentives don’t align. Suspicion mounts. Trust fades.
The world has always wondered why we no longer have Einsteins and Russells. The leading reason has always been the inability, or more accurately, the lack of incentive, to scale aristocratic tutoring. The other is that we’re flooded with content and have little incentive to validate it. We can’t keep up. There’s no victory award for validating content; mostly shame. Acclaim comes through the generation of information, of content. Eventually, as a recent article highlights, we become the slop.

Human content may continue to get produced, but it gets buried under the AI sloppageddon. The army of generals against AI may not easily win the battle, because validating information would take them back to content they may yet again grow suspicious of. The cycle continues. My only hope is that we discover this in good time and pull ourselves from the capture of AI and, by extension, social media. It is possible, but presently only in theory.
Just like that, I have done the very thing I have advocated against in this article: I have generated content without evidence to validate my claim, because the claim exists only in the possible futures of our kind.
Welcome to the validation crisis.
What I’m trying to say is…
Creation of information is easier than validating it.
Biological evolution is slow. Cultural evolution is fast. AI slop is about to bury us in unreliable, manipulable, and hallucinated information, and it has emerged so fast that our brains cannot keep up.
J. Cole raps:
I seen babies turn fiends, addicted to the screen
Their dad shares cashiers replaced by machines
Don’t buy, subscribe so you can just stream
Your content like rent, you won’t own a thing
Before long, all the songs the whole world sings’ll
Be generated by latest of AI regimes
As all of our favorite artists erased by it scream
From the wayside, “Ay, whatever happened to human beings?”
We may see a time when making informed decisions demands that we don’t get informed.
The irony.
This song inspired some of the lines used in this article. Source: YouTube.