October 6, 2025

The Socialist Effects of the AI Revolution

The Age of Artificial Intelligence is upon us like a storm. As we all know, storms can be beneficial, bringing forth life-giving water, but they can also bring havoc and destruction. I try to temper the posts I write about AI with this paradox in mind. In and of itself AI is amoral. It is a tool…but a tool can quickly become a weapon. We currently see large parcels of land being bought up by enormous corporations like Meta and Google to build data farms, or AI factories, on them. Wherever they build, these facilities are a considerable drain on local water sources for cooling, and if the grid goes down they become massive polluters because of the hundreds of diesel generators required to keep them running 24/7.

Even as I write this post, I can unequivocally tell you, the reader, that AI is already beginning to change how I write. Even at these earliest stages it is making a profound impact. You are probably asking how. In a word: research. How I complete research has already completely changed. If I type in a series of disparate thoughts as words and phrases, AI combs the entire compendium of human knowledge on the internet for sources to cite, or even paraphrases large portions into succinct ideas. Sometimes the results are wrong, but often they are coherent and usable in raw form with few edits. Not always, though…

When using AI for writing, I enter a variety of separate ideas as words or phrases by typing them into a search bar. It intuitively senses the direction I am seeking if I give it enough descriptors, nouns, verbs or imperatives. It is a form of primitive pattern recognition, like identifying someone’s face from a collection of data points. That is what makes it easier for me to glean the information needed to write blog posts and to formulate complex ideas into word pictures more quickly. I should note that the paraphrased summaries are not always as reliable. There are conceptual glitches in the matrix. From a theological viewpoint they are often wrong because they pull information from all professedly Christian sources (and sometimes non-Christian ones), and some of those sources are heretical or not biblically accurate (Mormon, Jehovah’s Witness).
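To give non-technical readers a feel for what I mean by primitive pattern recognition, here is a minimal sketch in Python. It simply counts how many of the entered terms appear in each source snippet and ranks the sources accordingly; real AI systems are vastly more sophisticated, and every source name and snippet below is invented purely for illustration.

```python
import re

# A toy illustration only: this is NOT how any commercial AI system works.
# It scores a few made-up source snippets by how many of my disparate search
# terms they contain, the way a very primitive pattern matcher might.

def score(terms, text):
    """Count how many of the entered terms appear as words in a source's text."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return sum(1 for term in terms if term.lower() in words)

# Invented source names and snippets, purely for illustration.
sources = {
    "Commentary A (1880s)": "The narrow gate and the way of obedience to Christ.",
    "Blog post B (2020s)": "Social justice, community organizing, and modern faith.",
    "Systematic theology C": "Grace, obedience, and the narrow way of discipleship.",
}

terms = ["narrow", "gate", "obedience"]

# Rank the sources by how many of my terms each one touches on.
ranked = sorted(sources.items(), key=lambda item: score(terms, item[1]), reverse=True)
for name, text in ranked:
    print(score(terms, text), name)
```

The point of the toy example is only that matching my scattered words and phrases against many sources at once is mechanical work, which is exactly the legwork the AI now does for me.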

I should state that the AI isn’t writing my posts; it is consolidating my research and pinpointing correlations and associations I may not have seen at first glance. The AI is acting like the center column in our Bibles used to: it cross-references applicable information from other locations. This was part of my gift when I first started writing in earnest 30 years ago after I left high school. When writing I was able to drift above the sea of words and see ‘the big picture’ based on many inputs. They call it a holistic view. AI is now doing much of the legwork that used to require hours of my time, allowing me to process even more and drift even higher. The good news is that it is still me doing the final draft from the research. I’m still creating something new from old sources; the AI simply lets me find needed citations more quickly. I would never let the AI write for me; it’s unethical, and frankly the prose is too stilted and robotic.

Regardless of how accurate AI is, it still requires me to edit and correct the output before I even begin writing nascent ideas from the research. What I’ve noticed is that the newer the sources, the more often they’re inaccurate. There are some rules to assume about AI, and the problems they describe need to be looked for and rooted out if you have any chance of stopping bad information from proliferating due to socialist ideas being piggybacked into your AI research. The truth is I don’t think most people realize how much underlying Marxism is in the end product of AI, or how much of the tech industry is driven by a similar philosophy.

Rule One: In AI research it seems that the newer the theological/philosophical writing, the higher the probability of errant theology/philosophy or socialist thinking. Most orthodox theologians know this, so it's not much of a surprise to me. Newer text sources tend to be polluted with more salacious or sensational ideas, like the Social Gospel or social-justice causes that have only entered the societal picture in the last century. Usually this is driven by a desire to drum up attention and popularity, or to exploit the phenomenon of clickbait for monetary reasons. In the end it’s always about ideology first, then greed, fame or honor.

Rule Two: The more items you try to condense into a single paraphrase or single idea in AI, the higher the probability of error as well. AI seems to seek to merge all information into a single synopsis, which isn’t always ideal or helpful. Therein lies one of the dangers of AI for theological/spiritual and non-theological ideas alike: it inadvertently normalizes errant, unorthodox ideas into orthodoxy, rather like our society’s desire to universalize religion, sexuality, behavior and other social issues. It ends up becoming a form of socialist AI. Whether this is by design or by accident I am unclear; my guess is that it is intentional, as programmers are by nature often socialist.

Rule Three: Many in the tech industry believe that because today's large language models (LLMs) are produced from data generated by the public, they can't ethically be owned by any one individual, and that socialism is the only fair way to govern this technology. In this view LLMs are a new social and economic entity, chimeras of speech, art, and culture; a chatbot is a collective. Because it’s produced by everyone, socialists claim it can’t ethically be owned by any single individual, and thereby they lay claim to it themselves: socialism is a collective, therefore socialist democratic ownership is the only reasonable way to govern AI.

The algorithms would follow accordingly. If this logic is followed in the AI programming, then the amount of theological and philosophical error would be large and the amount of Christian spiritual or theological influence would be minimized, since Christianity properly understood and obeyed is more often the exception than the rule in society. As the Bible says, the gate is narrow. Computer programs written by godless programmers will produce godless output. It is still up to the individual, though, to be the final arbiter, comparing the output to Scripture directly.

The real advantage for me, though, is that because I am no longer burdened with all the research minutiae, I can dedicate more time to pondering much larger issues and reaching more profound insights than I was able to before. The irony is that when it comes to biblical writing and trying to draw contemporary parallels, I find myself more frequently going to sources that are over a hundred years old. This would appear counterintuitive, but it’s not. The question I get asked most often is why I do this.

It’s because many of the modern theological and philosophical sources have been tainted with bad thinking, bad logic or errant theology. Much of the writing since the Liberal theology boom of the Victorian era is in error, and these errors are absent in the older writings, which reflect sound thinking and rigorous theology. This also helps me avoid the socialist commentary so common in modern theological writing.

It really is the old adage: “Consider your sources.” Right now, the AI is not differentiating between sources; it is merely compiling the data and information. I imagine that at some point in the future, as AI gets smarter, this will change, but right now it is still “junk in, junk out; socialism in, socialism out.” The value I add to the process is taking out the junk and the Marxist taint, ensuring alignment with Scripture, and then putting what is paraphrased into readable form. So I guess this post is a warning to readers. At some point quite soon the stilted nature of the writing will go away and it will appear to be completely human prose. This may go a long way toward fooling people into thinking the content is human and not a uniform aggregate of errant sources. Many will fall victim to false teaching not even directly created by human hands. You’ve been warned. Always validate the sources the AI is working from.
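For readers who want to make that habit concrete, here is a minimal sketch, assuming (purely for illustration) that the AI tool can return its citations as titles with publication years. The cutoff year and the sample entries are invented, and the final check against Scripture is still a human responsibility.

```python
# A minimal sketch of the "consider your sources" habit. It assumes, purely for
# illustration, that the AI tool can hand back its citations as (title, year)
# pairs; the cutoff year and the sample entries below are invented. The final
# comparison against Scripture still has to be done by a person.

CUTOFF_YEAR = 1900  # per Rule One: older sources get the benefit of the doubt

citations = [
    ("Commentary on Romans", 1886),
    ("Faith and Social Action Today", 2021),
    ("Exposition of the Psalms", 1871),
]

keep, flag = [], []
for title, year in citations:
    (keep if year < CUTOFF_YEAR else flag).append((title, year))

print("Keep as starting points:", keep)
print("Flag for extra scrutiny:", flag)
```

A simple sort of this kind doesn’t replace discernment; it only makes sure the newer, more error-prone material gets the extra scrutiny Rule One says it needs.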

