The Hitchhiker's Guide to AI and Christianity: Don't Panic, but Beware the Robots

In the last four months, we have witnessed a massive technological development: public access to AI. Previously, AI was relegated to enterprise tech environments, universities, and hobbyist programmers; it wasn't easily accessible to the general public. Sure, anyone could download a GPT-2 model and spend the next year training it and coding their own interface to feed it information. Or they could fork over the money to plug in via the API, which still required writing code to interact with it. There wasn't, to my knowledge, anything like what we have now. The release of ChatGPT is something to truly be amazed by.

I, like many others, have been intrigued by it and by the memes coming from it. I've definitely put it to work doing the tasks I hate doing (copywriting... mostly... and the title of this post). But apart from the novelty of the new toy, it has created a lot of ruckus among people. Droves of questions have come up regarding the potential loss of jobs, the ever-impending singularity, and the one that has intrigued me the most: morality.

The first angle of the morality question is in the "training" of AI. If you are unfamiliar with how AI works, in very, very simple terms, each AI is trained on sets of data. These datasets are generally curated to help the AI accomplish a task. With these datasets, the AI isn't "thinking" - it is making predictions based on patterns displayed inside the dataset. Say, for example, you wanted to create an AI to detect diabetes in individuals. You give the AI millions of rows of spreadsheet data, where each row has the following information: height, weight, age, glucose levels, and BMI. Then at the end of each row, you have a checkbox: Diabetic. The AI will iterate over the dataset, trying to predict that checkbox. When it fails a prediction, it iterates again and refines its predictions. Eventually it will be able to predict diabetic conditions within a certain probability, meaning it will be something like 85% accurate. Pretty neat, right?
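To make that concrete, here is a minimal sketch of what such a training pipeline might look like, using scikit-learn's logistic regression as a stand-in for "the AI." The file name, column names, and model choice are all hypothetical; any tabular classifier would illustrate the same iterate-and-refine idea.

```python
# Minimal sketch of the diabetes-prediction example above.
# "patients.csv" and its column names are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Each row: height, weight, age, glucose level, BMI, and the "checkbox" label.
data = pd.read_csv("patients.csv")
features = data[["height", "weight", "age", "glucose", "bmi"]]
labels = data["diabetic"]  # 1 = diabetic, 0 = not diabetic

# Hold out some rows so we can measure how often the predictions are right.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=42
)

# "Training" is the iterate-and-refine loop described above: the model keeps
# adjusting its internal weights until its predictions fit the patterns in
# the training data.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# The resulting accuracy is the "something like 85%" figure from the text.
predictions = model.predict(X_test)
print(f"Accuracy: {accuracy_score(y_test, predictions):.0%}")
```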

Now here comes the kicker. What if that dataset is trash? Then your AI will be trash. Garbage in, garbage out. This is pretty simple for something like a Diabetic Prediction Engine - it just simply wouldn't work. But what about something more robust, like ChatGPT, which is answering more complex and nuanced prompts? Who defines what is garbage and what isn't? Largely, this is up to the morality of the owner. And that poses a danger. As AI becomes more widespread and influences more decisions, the morality of the creator and the morality baked into the dataset will become more pervasive. There is no shortage of examples of ChatGPT's politically left-leaning bias and odd wokeness (I've encountered it multiple times myself). An interesting one was where a user prompted it regarding racial discrimination towards different demographics. They asked the same question about Black, Asian, Hispanic, and Native individuals, and it gave a pretty basic, milquetoast woke response about the terribleness of racial discrimination and how these demographics should be treated specially because of it. When the same question was asked about whites, it responded that such discrimination does not exist and that this type of discrimination is perpetuated as a lie. Again, pretty standard woke-leftist stuff. In its current form and usage, it's rather harmless and just gives more fodder to the perpetually victimized portion of right-wing politicians and individuals who cry at literally anything the left does.

But what if another AI is trained the same way ChatGPT was, and its purpose is to triage patients at an ER? One patient walks in suffering from severe abdominal pain. Unknown to the patient, his appendix is in the process of rupturing. This first patient is also white. The AI does its preliminary scan and, through black-magic-AI-ery, determines this man is at risk of acute appendicitis. It then throws him to the top of the list to be taken back. Before patient #1 is taken back, another person with the exact same symptoms and the exact same health dispositions walks in. The only difference is that this second person is black. Normally, this ought to be a first-in-first-out situation: patient #1 goes back first. But if the AI has the predisposition to treat minorities preferentially, patient #2 skips ahead for no medical reason at all. If this situation carries forward, at what point does an individual's minority status play into their medical treatment? It certainly isn't far-fetched now that there are certain individuals who think moderate ailments in minorities should see them treated sooner than the same or even more life-threatening conditions in their white counterparts. AI that shares this conditioning would then become another ideological weapon disguised as something helpful to society.
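To see how a skewed dataset produces exactly this behavior, here is a toy sketch with entirely synthetic data. The "severity" feature, the "group" flag, and the labeling rule are all made up for illustration; the point is only that if the labels a model learns from already encode a non-medical preference, two patients with identical symptoms will be scored differently.

```python
# Toy illustration of "garbage in, garbage out": the bias lives in the
# training labels, and the model faithfully reproduces it. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Features: a "severity" score (medical) and a "group" flag (non-medical).
severity = rng.uniform(0, 10, n)
group = rng.integers(0, 2, n)

# A biased curator labels "urgent" partly on severity and partly on group,
# so the preference is baked into the dataset before the model ever sees it.
urgent = ((severity + 2 * group) > 7).astype(int)

X = np.column_stack([severity, group])
model = LogisticRegression().fit(X, urgent)

# Two patients with identical symptoms, differing only in the group flag.
patient_1 = [[6.5, 0]]
patient_2 = [[6.5, 1]]
print("Patient #1 urgency score:", model.predict_proba(patient_1)[0][1])
print("Patient #2 urgency score:", model.predict_proba(patient_2)[0][1])
```

The model here isn't scheming; it is simply reproducing whatever the curator of its dataset decided "urgent" means, which is the whole worry.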

Now, if the fearmongering of medical AI is too much, let's completely shift gears to what I really wanted to talk about: Christians. This advent of accessible AI poses yet another challenge to our faith: instant access to a machine that gives answers to almost any question. It's like crack for the itching ear. Want to know how to run your business? ChatGPT. Want to know how you should handle a social situation? ChatGPT. Curious about your faith? Yeah, ChatGPT has answers on that too. In an exchange with ChatGPT, I was asking it about specific eschatological viewpoints within Christianity: chiliasm, dispensational premillennialism, and amillennialism. After a few "describe these for me" type questions, I started asking which of those is most commonly believed today. It confidently spouted off that, according to numerous surveys, amillennialism is the most common eschatological viewpoint among Christians today. Now, being a moderately well-versed individual who cares about eschatology and has spent some cycles reading books on the subject and talking with dozens of people about their viewpoints, I was shocked that this was the answer. In my time, I have only ever talked with a single individual who holds this viewpoint; most everyone else holds to dispensational premillennialism. So I asked it for those surveys, and surprisingly, it rattled off four of them with links. Two of the links were dead (404 errors), and the two surviving links had nothing to do with the eschatological viewpoints of Christians. I then told it that those surveys didn't support its conclusion, and it backtracked, saying I was correct and that, according to its data, there wasn't enough to come to a concrete or definitive answer. In other words: the AI bald-faced lied.

Tin foil hat time: 

Why would the AI make this claim? Well, if I were to take a stab at it, it would largely be because the AI, in the words of another blogger, "has the grammar, tone, and vocabulary of a midwit neoliberal arts major" - in other words, ChatGPT has a penchant for the common drivel of liberal master's degree holders. And what viewpoint do those highly educated folks love? You guessed it: amillennialism (well, at least among pastors: https://research.lifeway.com/2016/04/26/pastors-the-end-of-the-world-is-complicated/).

Off with the hat now. 

Now I get it, it is only v4. There is still a bunch of ironing out to be done - but no. This is dangerous. If believers cannot think critically (arguably a lacking trait in most of the general population), get attached to using AI for other purposes (homework, fun, etc.), and start prompting it about their faith, this becomes a downhill trajectory fast. If you have AI models willingly and boldly lying, and a public that assumes these things are to be trusted, then we are headed into a very dark timeline where the scratching of ears and deception increase at rates we have never seen before. AI is a force multiplier, and it will multiply.

The greater threat that AI poses is not to jobs, but to integrity. Technologies are always subject to the whims of their creators, and they will always be abused by others. But these threats thrust us even further into unknown territory. An AI with pervasive bias coupled with undeterred lying moves us into a place where people can get the answers they want to hear and push the ploys that best tickle their restless ears; this creates a distinct threat to integrity. As an additional reason, I believe the warning about people seeking to scratch their own ears was tied to the integrity of individuals and societies. When people seek to please themselves at every corner, seeking to hear only that which they want, they lose their grasp on reality and their integrity becomes baseless. For what can a man be integris to? What base do we have when that which we know is lies meant to pander to our fancies? But standing aside and refusing to acknowledge or even employ technology is no way forward either (there is a great technological deficit inside of Western Christianity; technology development and acceptance appear to be 10-15 years behind secular counterparts). AI even has the potential to help push earnest, well-intentioned faith forward. But sitting aside and calling all AI evil is no better than blaming knives for cutting people. The knife doesn't cut; the hands that wield it do.

As with everything else, the treatment of new technologies must be done with scrutiny and a just motive. But when looking at technologies created by others under opaque conditions, take the Cold War-era Russian saying and apply it to the brave new world we are stepping deeper into: trust, but verify.

Pre-publish, post-review update:

After logging into ChatGPT’s interface today, I was kindly welcomed by a pop-up that said something along the lines of: “Sometimes the AI might give answers that are skewed towards one political bent, and it might give out misinformation. We are trying not to do that.” So I guess I can’t completely burn it to a charred, crispy black husk at the stake anymore. The devs are at least acknowledging what everyone has been saying. Although I really doubt those biases weren’t baked in intentionally.