[When reading this post, please note that I am focusing on only one or two A.I. platforms, chiefly OpenAI's ChatGPT. There are currently hundreds, and that list will grow exponentially over time.]
Setting aside the dystopian-fiction scare tactics, here are some of the real detriments of A.I. that Christians (and non-Christians) should be aware of...
The Existential Risk
There is always the possibility of A.I. becoming an existential risk in the hands of terrorists or ideological nut jobs. I will leave it to the reader to define what constitutes a 'nut job'. The risk was raised in 2023 when Australian MP Julian Hill advised the national parliament that the growth of A.I. could cause "mass destruction". The speech was partly written by an A.I. program, to prove the point that words generated by A.I. have the ability to sway the masses in a negative manner. He warned that it could result in cheating, job losses, discrimination, disinformation, and uncontrollable military applications. The deliberate hyperbole of "mass destruction" is the A.I.'s infantile attempt to gain attention by playing on people's fears. The A.I. sees the pattern of people's reactions to certain types of rhetoric and uses it; in this case, the base algorithm was to mimic human writing with an emphasis on readership. Had I not told you this was A.I., you would have thought Julian Hill was being literal.
Misinformation
A British newspaper questioned whether any content found on the Internet after ChatGPT's release "can be truly trusted" and called for government regulation, realizing that fact and fiction will become completely blurred and indistinguishable to the human observer without some form of A.I. or algorithm to examine the underlying data, through Benford's Law or other means, to see if it was manipulated or altered. We already have enough bad agents doing this in the human realm. Imagine if everyone, including criminal and evil minds, had this ability.
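For readers unfamiliar with Benford's Law: in many naturally occurring data sets, the leading digit d appears with probability log10(1 + 1/d), so "1" leads about 30% of the time while "9" leads under 5%. Fabricated or manipulated numbers often fail to follow this pattern. The sketch below is my own minimal illustration of the idea (the function names are mine, not from any forensic product), not a definitive detection tool:

```python
import math
from collections import Counter

def benford_expected(d):
    # Benford's Law: P(d) = log10(1 + 1/d) for leading digit d in 1..9
    return math.log10(1 + 1 / d)

def leading_digit(x):
    # Strip any leading zeros and decimal point to find the first
    # significant digit, e.g. 0.0042 -> 4, 250 -> 2
    return int(str(abs(x)).lstrip("0.")[0])

def benford_deviation(values):
    """Mean absolute deviation between the observed leading-digit
    frequencies of `values` and Benford's expected distribution.
    Larger deviations suggest the data may not be 'natural'."""
    digits = [leading_digit(v) for v in values if v != 0]
    counts = Counter(digits)
    n = len(digits)
    return sum(abs(counts.get(d, 0) / n - benford_expected(d))
               for d in range(1, 10)) / 9
```

As a sanity check, powers of 2 are known to follow Benford's Law closely, while uniformly distributed numbers (like 100 through 999) do not, so `benford_deviation` scores the former much lower than the latter.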
Cybersecurity
Check Point Research and others noted that ChatGPT was capable of writing phishing emails and malware, especially when combined with OpenAI Codex. CyberArk researchers demonstrated that ChatGPT could be used to create polymorphic malware that can evade security products while requiring little effort from the attacker. Imagine this at mass scale on a daily basis; the Internet and finance will become a nightmare. This type of power in a small number of hands invites a single totalitarian government. In the case of economics, it could completely uproot the free-market system, which would then open a door to those with more socialistic tendencies. They will justify the consolidation of financial power into the hands of a few who hold the 'off' button for A.I., or so they will claim.
Financial
An experiment by finder.com revealed that ChatGPT could outperform popular fund managers by picking stocks based on criteria such as growth history and debt levels, resulting in a 4.9% increase in a hypothetical account of 38 stocks and outperforming 10 benchmarked investment funds, which averaged a 0.8% loss. Considering ChatGPT could outperform fund managers like BlackRock, Vanguard, and State Street, we begin to see the scale of money that could be affected: these three companies alone manage some $37 trillion in assets. Christians and conspiracy theorists who fear a one-world bank are justified in their fears. This could all be prevented, though, if ethical and moral frameworks are constructed now.
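To make the comparison above concrete, the finder.com-style result is just the average simple return across an equal-weighted basket of picks, measured against the funds' average. A minimal sketch with made-up prices (all numbers here are illustrative, not the experiment's actual data):

```python
def portfolio_return(start_prices, end_prices):
    """Equal-weighted average simple return across a basket of stocks."""
    returns = [(end - start) / start
               for start, end in zip(start_prices, end_prices)]
    return sum(returns) / len(returns)

# Three hypothetical picks: +5%, +4%, +5% individually
picks_start = [100.0, 50.0, 200.0]
picks_end = [105.0, 52.0, 210.0]
print(round(portfolio_return(picks_start, picks_end), 4))  # 0.0467, i.e. +4.67%
```

A basket averaging +4.9% against funds averaging -0.8% is a spread of 5.7 percentage points, which is why the result drew attention.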
Education
We’re already starting to see a lot of negative impact in education. Technology writers have tried ChatGPT on student assignments since its release and found its generated text on par with what a good student would deliver, with the educational system none the wiser. In a blinded test, ChatGPT was judged to have passed graduate-level exams at the Wharton School of the University of Pennsylvania with a B− grade. The performance of ChatGPT on computer programming of numerical methods was assessed by Stanford University students and faculty in March 2023 through a variety of computational mathematics examples. Psychologists administering IQ tests to ChatGPT estimated its Verbal IQ at 155, which would put it in Mensa and in the top 0.1% of test-takers. In other words, the possibility of cheating will be rampant, and the risk of putting unqualified people in positions of importance based on these tests will skyrocket. It will literally be a case of people rising to their level of incompetence (the Peter Principle).
In a poll conducted in March and April 2023, 38% of American students reported they had used ChatGPT for a school assignment without teacher permission; in total, 58% reported having used ChatGPT. There is an inherent danger in students plagiarizing through an A.I. tool that may output biased, dangerous, or nonsensical text with an authoritative tone. This, of course, is just a modern version of the old adage that saying it louder doesn’t make it more true. Unfortunately, repeated lies have a way of becoming truth, which is what we saw when A.I. bots were used in the political campaigns of Trump in 2016 and Biden in 2020. People thought they were fighting with human political foes but were in reality arguing with a rather opinionated electronic box (A.I.).
Medicine
In the field of health care, possible uses and concerns need to be under exceptional scrutiny by professionals and practitioners. Two early papers indicated that ChatGPT could pass the United States Medical Licensing Examination (USMLE). Imagine being cut into with a scalpel by a psychopathic butcher who tricked the system by becoming a doctor; it’s the stuff of horror movies. In February 2023, two separate papers again evaluated ChatGPT's proficiency in medicine using the USMLE; the findings were published in JMIR Medical Education (see Journal of Medical Internet Research) and PLOS Digital Health. The authors of the other paper concluded that "ChatGPT performs at a level expected of a third-year medical student on the assessment of the primary competency of medical knowledge." (1),(2),(3),(4)
I’m not sure about the reader, but this type of information coming from boots on the ground in the medical field isn’t very reassuring. I believe all of these scenarios argue strongly for stringent moral and ethical boundaries to be put in place before this proliferates to the point where it is no longer controllable. If not, we will merely have another Tower of Babel on our hands, with a misuse of words and language: even the slightest changes in wording can change the entire meaning of stories or news, just as I juxtaposed Pandora's Box and the Tower of Babel in this post's title.
We can certainly be assured that man is not God, and it isn’t likely we will be able to confuse the A.I.s into stopping. It is ironic that the thing A.I. will use to try to change the world, language, is the very thing God confused to confound men who attempted to usurp His position in heaven. God could stop His creation, but without morality and ethics from God, man will not be able to stop his creation in A.I.
Is it possible God could use language again to teach man a lesson about playing God, man without God’s moral and ethical bearing from the Bible in place to prevent it? In so doing, man releases a Pandoran semantic curse. The irony of the Pandoran myth is that the container that supposedly held a physical gift in reality contained ethereal curses: a gift that seemed valuable at first but was in truth a blight. After Pandora opened the box, hope was the only thing that remained in it when she shut the lid. Pandora started something that led to many unforeseen problems down the road, just like man’s sin in Genesis. So too A.I., if not constrained by men led morally by God.
There is hope left in the jar…
1) "ChatGPT: friend or foe?" The Lancet Digital Health. 5 (3): e102. March 2023.
2) Asch, David A. "An Interview with ChatGPT About Health Care". NEJM Catalyst Innovations in Care Delivery. April 4, 2023.
3) DePeau-Wilson, Michael. "AI Passes U.S. Medical Licensing Exam". MedPage Today. January 19, 2023.
4) Kung, Tiffany H.; Cheatham, Morgan; Medenilla, Arielle; Sillos, Czarina; Leon, Lorie De; Elepaño, Camille; Madriaga, Maria; Aggabao, Rimel; Diaz-Candido, Giezel; Maningo, James; Tseng, Victor. "Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models". PLOS Digital Health. February 9, 2023.