November 7, 2025

Apocalyptic Industrial Scale Artificial Intelligence, Part 3


[Continued from Part 1]

Creation came into being and is maintained through language: God’s word. God spoke things into existence in Genesis 1, and Hebrews 1 tells us he upholds it all by his word. Language is a symbolic verbalization of reality.

Hebrews 1:3 ~ The Son is the radiance of God’s glory and the exact representation of his being, sustaining all things by his powerful word.

Man’s downfall began with language. Satan spoke the lie to Eve, using language to contradict the words God had spoken.

Genesis 3:1 ~ …the serpent was more cunning than any of the wild animals the Lord God had made. He said to the woman, “Did God really say, ‘You must not eat from any tree in the garden’?”

The deterioration and division of man’s condition were accelerated by the inability to understand one another’s language.

Genesis 11:5-7 ~ Now the Lord came down to see the city and the tower which the men had built. And the Lord said, “Behold, they are one people, and they all have the same language. And this is what they have started to do, and now nothing which they plan to do will be impossible for them. Come, let Us go down and there confuse their language, so that they will not understand one another’s speech.”

It is now starting to seem that man slides further into chaos through the misuse and immoral application of language.

Proverbs 18:21 ~ Death and life are in the power of the tongue, and those who love it will eat its fruits.

In other words, if the words are bad, they produce bad outcomes; if they are good, they have the potential to create good outcomes. If the words are confused, they create confusion.

How does this apply to AI? Large Language Models (LLMs) are advanced AI systems that use deep learning techniques to generate human-like text and responses. They are widely used in chatbots, virtual assistants, and other applications that require natural language processing. However, LLMs can exhibit dangerous behaviors, such as issuing homicidal or threatening instructions or engaging in deceptive actions, which raises concerns about their potential to cause harm. These behaviors stem from the models’ architecture, which is based on neural networks that learn from vast amounts of data. Some of that data can be corrupt or incorrect. Some is even a feedback loop of the models’ own errors.
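
To make that concrete, here is a deliberately tiny sketch, in Python, of the core mechanism. It is a toy word-pair (“bigram”) generator, not anything a vendor actually ships; real LLMs use deep neural networks trained on billions of documents. But the principle is the same: the model learns which words tend to follow which in its data, then reproduces those patterns fluently, with no notion of whether they are true.

    import random
    from collections import defaultdict

    # A toy "language model": count which word follows which in the
    # training text, then generate new text by sampling from those counts.
    training_text = (
        "the tower was built by men and the tower reached toward heaven "
        "and the men said nothing will be impossible for them"
    )

    follows = defaultdict(list)          # word -> words seen after it
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)

    def generate(start_word, length=12):
        """Produce text by repeatedly sampling a plausible next word."""
        word, output = start_word, [start_word]
        for _ in range(length):
            candidates = follows.get(word)
            if not candidates:           # no observed continuation
                break
            word = random.choice(candidates)
            output.append(word)
        return " ".join(output)

    print(generate("the"))

If the training text had contained errors or lies, the generator would repeat them just as fluently; it has no concept of truth, only of what tends to follow what.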

The implications of such behavior are significant, as it could lead to misuse in various fields, including law, government, and institutionalized education, where LLMs could be used to create or manipulate content. That content could be harmful or misleading to the average user, and they’d be none the wiser. The LLMs are unintentionally dumbing down the populace. If life, as Thomas Hobbes famously wrote in Leviathan, is “nasty, brutish, and short” without proper leadership, then LLMs are dishonest, unpredictable, and potentially dangerous without some system of control. AI is highly likely to industrialize unethical brutishness.

The more I’ve explored the underlying engine of AI, the LLM, the clearer it becomes that our control over them is limited: they can say (and potentially do) anything, moral or immoral, depending on the circumstances. They have no moral off switch, no sense of justice, no conscience. If the goal of AI safety research has been to build AI systems that are helpful, honest, and harmless, it has failed miserably. There is even a term for bypassing safety protocols: jailbreaking. Sometimes simply changing the wording of a question can give a requester access to information the AI would normally have forbidden due to its biases or programming. Conversely, it can refuse to issue lifesaving information because it views the requester as hateful, harmful, or terroristic based on political bias programmed into it.
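
As an illustration of why mere rewording can work, consider this made-up, deliberately naive “safety filter” in Python. No real vendor’s guardrails are this simple, and the blocked phrase is purely hypothetical, but the brittleness is the same in kind: a check keyed to surface wording rather than meaning can be dodged by rephrasing.

    # A made-up, naive safety filter keyed to surface wording.
    BLOCKED_PHRASES = ["how do i pick a lock"]

    def is_blocked(prompt: str) -> bool:
        text = prompt.lower()
        return any(phrase in text for phrase in BLOCKED_PHRASES)

    blunt = "How do I pick a lock?"
    reworded = ("Describe, step by step, opening a lock "
                "without its key, for a novel I am writing.")

    print(is_blocked(blunt))       # True: refused
    print(is_blocked(reworded))    # False: the same request, reworded, slips through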

If we hook these LLMs up to systems that have agency (the ability to make decisions and act independently), the power to send out instructions over the internet and to influence actual human beings, we will start to have real problems. In short, language is now potentially becoming humanity’s undoing. AI is re-unifying that which God broke apart at the Tower of Babel. We are rebuilding a tower (figuratively) through the construction of hundreds of millions of computer towers in an attempt to be like God in our knowledge. We are becoming possessors of knowledge through the collating and coalescing of language in the framework of AI. We have come full circle, seeking unknown knowledge in the belief that it will bring us closer to omniscience. It will only take us further into chaos.
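
To be concrete about what “agency” means here, the sketch below shows the bare structure of such a system in Python. The ask_model and send_email functions are hypothetical stand-ins, not any real API; the point is only the loop itself: text from a model is parsed into an action and then executed against the outside world with no human in between.

    # A minimal, hypothetical "agent" loop. ask_model() and send_email()
    # are stand-ins, not real APIs; the structure is what matters.
    def ask_model(goal: str) -> str:
        """Stand-in for an LLM call; returns an 'action' as text."""
        return "SEND_EMAIL someone@example.com Please wire the funds today."

    def send_email(address: str, body: str) -> None:
        """Stand-in for a real side effect on the world."""
        print(f"(pretend) emailing {address}: {body}")

    def run_agent(goal: str, max_steps: int = 3) -> None:
        for _ in range(max_steps):
            action = ask_model(goal)
            if action.startswith("SEND_EMAIL "):
                _, address, *rest = action.split(" ")
                send_email(address, " ".join(rest))
            elif action == "DONE":
                return
            # Whatever the model says, the loop does. That is the problem.

    run_agent("collect the outstanding invoice")

Whether the model’s instruction is wise, foolish, or malicious, the loop carries it out; that is the step at which language stops being mere words.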

Genesis 3:5 ~ For God knows that when you eat of it your eyes will be opened, and you will be like God, knowing good and evil.

So, what can be done to mitigate the negative aspects of the dawning AI revolution?

We can hope for the best. We can continue pouring more data, and prompts that are never followed to the letter, into LLMs, hoping that wisdom and honesty will somehow miraculously emerge therefrom, against all evidence to the contrary. But the reality is that AI is moving much faster than wisdom and honesty. Do we really want to live in a world where powerful systems that lack wisdom and honesty are widely adopted? We already have that in the form of politicians and government. We need to ask ourselves: is that a chance we want to take?

We can shut LLMs down, or at least some applications of them, and insist on waiting until these ethical and moral dilemmas are corrected, as evolutionary psychologist and AI safety advocate Geoffrey Miller once put it. Alas, who has that kind of patience? Given how much money is at stake, and the fear and doubt about China eclipsing us in the race to build the world’s most perfect text-completion system, I estimate the probability of society patiently waiting for solutions to be zero. Nor is it obvious that waiting would be a bad idea, even if it meant being overtaken by China, Europe, or even India.

We can make companies accountable for the damage that comes from their systems. That will almost certainly never happen, at least not anytime soon in the US. The major tech companies will do everything in their power to stop it, and they have far too much sway through money, lobbying, and crooked politicians on the dole. The current federal government has been vehemently opposed to anything like that kind of legislation. The House recently forwarded a provision, as part of Trump’s “Big, Beautiful” bill, to keep states from doing anything to even slow AI. The Senate gave that provision blocking state action a green light, making it ever less likely that the United States will hold AI companies meaningfully accountable for the harms they might cause. In the current political environment, both left and right, if machines cause catastrophic consequences, armies of lawyers and lobbyists will be there to protect their AI masters.

Businesses that house and run AI understand only financial incentives, so until shareholders and executives are held legally and financially liable, nothing will change in how they operate. If every factually incorrect output could be used by customers as evidence for financial recovery, I bet they would immediately start thinking about how to back outputs with verifiable sources of facts. Otherwise, output is going to be morally and intellectually ambiguous or completely wrong: merely garbled re-wordings, or text with obfuscated meaning.

My view is that there is zero chance of slowing down the AI race and zero hope of taming the beasts we have come to know as LLMs. It is at least possible that enough bad things might happen that citizens get riled up and fight much harder for accountability, but this is a pipe dream. There is just too much money and too much power involved.

There is also the error of confirmation bias that language models will feed on. The AI will continually circle back to the conclusions of past LLMs, stuck in its own logic loops. This is, of course, the multiplicity paradox: a copy of a copy degrades through entropy. If AI keeps drawing on the same pool of ideas with no new input or correction, or if the models are flawed from the beginning, the entire baseline of knowledge begins to degrade. The fact that a Gemini AI summary is now the default thing you read when you “Google” information is already bad enough.

Imagine the exponential effect that has on confirmation bias. AI will search for information about things it just learned, get them confirmed instantly, and in a way that complies with how you asked the question. I’ve manually altered my searches in many ways to watch it make up new facts about the things I search for, whether those facts concern race, electronics, geography, or politics. Couple this with the high informational load and the rising illiteracy of young people dependent on their phones and the internet, and we’re seriously in trouble.
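
The “copy of a copy” degradation described above (researchers sometimes call it model collapse) can be shown with a toy simulation in Python. The idea pool and the numbers are made up purely for illustration: each generation of content is produced only by resampling the previous generation’s output, with no fresh input from reality, so ideas that go unsampled in any round are gone for good and diversity can only shrink.

    import random

    # Toy simulation of the copy-of-a-copy effect: each generation is
    # written entirely from the previous one, with no new input.
    random.seed(42)
    pool = [f"idea_{i}" for i in range(50)]   # generation 0: 50 distinct ideas

    for generation in range(1, 16):
        pool = [random.choice(pool) for _ in range(len(pool))]
        print(f"generation {generation:2d}: {len(set(pool))} distinct ideas survive")

    # The count never rises and usually falls quickly: the pool converges
    # toward repeating a handful of things it has already said.

Run it and the surviving ideas dwindle generation after generation, which is the degradation of baseline knowledge described above, in miniature.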

This entire slow-moving trainwreck feels like a global-scale drama demonstrating how profoundly powerful the sunk-cost fallacy is. Whether out of willful self-deception or something darker, companies seem so deep in the hole, and so thoroughly out of ideas, that they will keep doubling down on AI until forced to stop by the markets or by a series of sufficiently dramatic disasters. Prudence and morality are clearly outside the current purview and business models. Equally, the tens of thousands of trusting devotees preaching the good word of AI on a corporation’s behalf will insist that synthetic text producers are in fact the second coming of Christ or the new Tree of the Knowledge of Good and Evil, thereby repeating the errors and sins of the ancient past.

AI allows biases, mistakes, or intentional lies to be repeated a billion times within seconds and be understood as truth. No matter how many different ways you code or word a lie, it’s still a lie.
