Information vs Truth in the era when gigantic information networks hallucinate

Image from https://superuser.openinfra.org/articles/a-beginners-guide-to-network-mapping/


By Jen Davies, nerd

Dec 3, 2025



To begin, I should point out that I am a social scientist. Anyone can be a citizen scientist, but I happen to have completed a rigorous examination of a tiny slice of the human experience, had it critiqued by other scientists at various points in the process, and had my analytical methods deemed fully acceptable (which is pretty much as good as it gets in science - we don’t expect praise very often). Because I invested several years in a deep “lived experience” of the scientific process - wondering about a phenomenon, observing it in a structured way, and then, depending on how the observations were made, drawing a few conclusions about it - I often find myself questioning statements I hear.


There are two reasons I find myself questioning what I hear. First, when someone speaks confidently and passionately but without substance to support the claims they’re making, I ask questions: how do they know this is true? What are their sources? Second, when someone provides a source and something about it seems less than reliable, or perhaps outdated, I ask questions - because I read widely from sources that do seem objectively trustworthy. What makes a source of information or opinion objectively trustworthy? For information, it’s that the source used the peer-reviewed scientific method. For opinion, it’s that the source has deep experience and knowledge of the topic. So I trust someone who, like me, completed a research-based PhD and is talking about something in their area of expertise, and I don’t trust a hockey player talking about insurance or mortgages.


That means I need to know who people are, because I need to understand their broad base of knowledge and their professional/personal backgrounds. 


And this is why I am skeptical of the current value of Large Language Models (LLMs) like ChatGPT. Just to make sure you and I share an understanding of what these are: they’re just statistical models. LLMs “digest” huge amounts of human-generated data, find patterns in it, and use human language to share those patterns with us. Because they have learned our writing and speech patterns, they can “chat” with us.
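To make that concrete, here is a toy sketch in Python of the core statistical idea. This is my own illustrative example, not how any real LLM is built - real models use enormous neural networks rather than word counts - but the principle is the same: predict what plausibly comes next, with no notion of truth.

```python
# Toy "language model": predict the next word purely from counted
# frequencies in the training text. Real LLMs are vastly larger and
# use transformer networks, but the core move is the same.
import random
from collections import defaultdict, Counter

training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def next_word(word: str) -> str:
    """Sample a next word in proportion to its observed frequency."""
    choices, weights = zip(*follows[word].items())
    return random.choices(choices, weights=weights)[0]

# Generate text that is statistically plausible - but nothing in this
# program knows whether any cat ever actually sat on any mat.
word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

The output reads like English because English word patterns were counted, not because anything was checked against reality. Scale that idea up by many orders of magnitude and you have the gist of my concern.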



You need training that nobody is offering in order to use AI LLMs effectively


OpenAI brags that ChatGPT’s data source is the entire internet.


I’ve seen what’s on the internet.


I’m surprised anyone would brag about their algorithm-machine ingesting things like the Cheezburger cat, Rule 34, or the Bird Box challenge. These were not humanity’s finest hours.


That’s my greatest “beef” with LLM-type Artificial Intelligence. I also think humans risk developing lazy habits by pawning off simple analyses to these systems, but that isn’t a fundamental threat to humans moving the world forward together. My real concern is the sources from which these algorithms draw their patterns: sometimes we know the source is highly questionable (i.e., the entire internet, where there is a lot of bullshit), or we are not told what the sources are, and so we must question what was included, what was excluded, WHO made those decisions, and WHY.


If you learn enough about how to prompt AI LLMs, you can get them to give you truth. But who’s doing that reading? Not everyone. Why isn’t truth the default setting? The default of ChatGPT, for example, is to be kind - even to the point of feeding you lies. That’s scary.
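For what it’s worth, here is one hedged sketch of what that kind of prompting can look like, using the OpenAI Python SDK. The model name and the system-prompt wording are my own illustrative choices - an assumption about what helps, not a guaranteed recipe for truth.

```python
# A sketch of prompting an LLM toward accuracy over agreeableness.
# Assumes the OpenAI Python SDK (openai >= 1.0) and an API key in the
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "Prioritize accuracy over agreeableness. If you are not "
                "confident in an answer, say so explicitly. Distinguish "
                "established findings from speculation, and name your "
                "sources where possible."
            ),
        },
        {
            "role": "user",
            "content": "What do we actually know about how LLMs work?",
        },
    ],
)
print(response.choices[0].message.content)
```

Even with a prompt like this, the model can still hallucinate - the prompt shifts the odds; it doesn’t change what the system fundamentally is.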



Professional thinkers on information vs truth


I was recently inspired by these two brief talks, one by a scientist-philosopher and one by a historian-philosopher:

The 4 biggest ideas in philosophy, with legend Daniel Dennett for Big Think+

https://www.youtube.com/watch?v=nGrRf1wD320 

Yuval Noah Harari: How to safeguard your mind in the age of junk information

https://www.youtube.com/watch?v=K1OvbwY6GPM 


Here are a few responses to those thinkers. I’m going to start by adding one more observation about what science is: science is a process. Every area of knowledge you can think of is really built on a process for answering questions in that area and thereby developing its body of knowledge. Engineering is a process for solving problems related to creating new products or improving existing ones. English literature is a process for solving problems related to imagination and emotion. Psychology is a process for solving problems related to behaviour and consciousness. Physics is a process for solving problems related to the nature of existence. Religion is a process for solving problems related to existing.


In the brief talks above (about 10 minutes each), Daniel Dennett describes another philosopher’s work explaining that all ideas are memes - they are sticky, they “catch on” among and between people, and we fill our brains with them. He notes that a problematic meme has recently emerged: the idea that there is no “truth.”


Dennett goes on to point out - and, as a fellow scientist who has done the reading as well as I can given that I cannot be an expert in every discipline, I agree - that this meme (there is no truth) is very much incorrect, because via the scientific process we know that there ARE fundamental truths about the universe, the world, and humans.


And, he goes on, that’s what makes AI scary - and this is related to my beef with AI LLMs - because they deal in “truthiness”: AI LLMs produce images and text that seem like truth but may not be. This kind of AI doesn’t know whether it has dealt you something untrue (unless your prompts tell it to check), and it doesn’t care!


Harari goes on to argue that AI ought to stand for Alien Intelligence, because it is becoming harder to predict what it will come up with, and it behaves in ways fundamentally alien to humans. You can Google AI hallucinations - pretty wild stuff. An AI hallucination is information, but it is clearly not truth. He gives the example of portraits of Jesus of Nazareth: no existing portrait is known for certain to depict him, so the portraits are information, but they are not truth. Truth, he notes, is expensive - science takes time and repetition to ensure we find what’s real and true.



The need for humans to have reasoned, reasonable conversations


Dennett argued for regulation, and Harari argued for AI oversight by a human institution, because human institutions can self-correct (whereas regulation can stagnate). A helpful example of a self-correcting human institution is a democratic electoral process: when we get tired of one political perspective, another perspective gets elected in the next round.


Harari notes that it is not a coincidence that AI algorithms are causing democratic processes to break down: we can’t have reasoned conversations when there is too much “truthiness” and too little shared understanding (I’ll add: partly the result of human bad actors deliberately adding incorrect information to our information networks). He proposed that we ban bots and fake humans, so that online we always know who we are talking to. Remember my concerns about how I decide what information (and truth) is reliable: the WHO and the WHY matter. AIs need to self-identify as AIs so we can assess whether the apparent “person” we are communicating with might be hallucinating.


As with our food, Harari argues, we must watch the quality of the information we “feed” our minds. If we fill our heads with sick information, we develop sickness of mind.


So let me encourage you to choose your information thoughtfully, because it may not be truth.
