Chat Models: A Vessel For The Preternatural

Posted 2023/05/08 by Tom Fasano

On Will And Intellect

Before tackling how chat models shape up against human minds, we should first get a grasp on the will and the intellect. St. Thomas Aquinas was keen on describing the human will in more ways than just free:

We cannot love what we do not know, and therefore we seek to know God more, so that we can love him more.

In this excerpt, Aquinas posits the will as an “innate positive inclination towards the good”. The qualities of “knowing” and “loving” might make more sense in context, but can also be described as the acquaintance one wants with another person (remember God is personal) and the actions one takes to strengthen the love between them. Certainly chat bots cannot become acquainted with God, or for that matter do anything unsupervised, in the way humans can.

When it comes to the intellect, or “a rational agent’s cognitive power”, ChatGPT simply lacks it. Any claim to the contrary is using the word too liberally. When talking about chat models, “intelligence” is a misnomer. For one, software will never have agency, because it merely carries out the instructions of an agent’s will. Second, intelligence requires an agent to solve problems it has never encountered, which OpenAI admits their models are incapable of doing. What’s more, any so-called intellect ChatGPT implements is a crude approximation of the human mind:

When it comes to training (AKA learning) the different “hardware” of the brain and of current computers (as well as, perhaps, some undeveloped algorithmic ideas) forces ChatGPT to use a strategy that’s probably rather different (and in some ways much less efficient) than the brain. And there’s something else as well: unlike even in typical algorithmic computation, ChatGPT doesn’t internally “have loops” or “recompute on data”. And that inevitably limits its computational capability—even with respect to current computers, but definitely with respect to the brain. - What Is ChatGPT Doing … and Why Does It Work?

As an exercise, the following is a GPT-3 dialog about a conceptual question on the Haskell programming language; the task is to see whether its answer is convincing.

Given the following Haskell code:

data AB = A | B AB deriving Show

(&) :: AB -> AB -> AB
A & y = y
(B x) & y = B (x & y)

(#) :: AB -> AB -> AB
A # y = y
(B x) # y = x # B y

Do & and #, as defined above, return the same result in all cases? That is, could you replace any occurrence of & with # or any occurrence of # with & in a program, and observe the same behavior of the program? Explain.

The Output

No, & and # do not always return the same result.

Consider the following examples:

A & (B A) & (B (B A))

This expression evaluates to B (B A) when using the & operator, but it evaluates to B (B (B A)) when using the # operator.

(B A) & ((B A) & (B (B A)))

This expression evaluates to B (B A) when using the & operator, but it evaluates to B (B (B (B A))) when using the # operator.

So, while & and # appear to be doing similar things, the order in which the recursion occurs is different, resulting in different final structures. This means that in general, you cannot replace one operator with the other and expect the same result.

So did GPT-3 get it right? Or did it completely botch the answer? Neither. It output a seemingly correct verdict for the question but failed at the most important task: Explain.

In both “counter examples”, swapping (#) in place of (&) and vice versa still yields the same final structures, though GPT-3 falsely claims otherwise. In fact, the functions (&) and (#) will always produce identical final structures on finite inputs.
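
Anyone can check this by loading the definitions above into GHCi and evaluating GPT-3’s own examples with both operators:

A & (B A) & (B (B A))
-- B (B (B A))

A # (B A) # (B (B A))
-- B (B (B A))

(B A) & ((B A) & (B (B A)))
-- B (B (B (B A)))

(B A) # ((B A) # (B (B A)))
-- B (B (B (B A)))

Each pair agrees, so neither of GPT-3’s expressions is a counterexample.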

To build a counterexample we’d have to declare an infinite data structure, which in this case is quite simple given that the data type at hand is recursive:

let b = B b

(#) will hang given the parameters b and A because it’s tail recursive. Reusing a single stack frame for all recursive calls, the function simply acts as a CPU spinner, calling itself forever without ever overflowing the stack. Tail recursion, although highly efficient, cannot take advantage of laziness, or call by need.

(&), by contrast, uses guarded recursion (the recursive call happens inside a data constructor). Thanks to the laziness of the Haskell runtime, GHCi can start printing out the beginning of this new infinite data structure as it is produced. GPT-3 correctly identifies that the recursion happens in a different order for each function but can’t explain how.
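
To make the difference concrete, here is a rough hand evaluation of the first few reduction steps for each operator, using b = B b from above:

-- Guarded recursion: every step exposes a new outer B constructor,
-- so a lazy consumer such as GHCi's printer can keep making progress.
--   b & A  =  (B b) & A  =  B (b & A)  =  B (B (b & A))  =  ...

-- Tail recursion: the B's pile up in the second argument and no
-- constructor ever reaches the outside, so nothing is ever produced.
--   b # A  =  (B b) # A  =  b # B A  =  b # B (B A)  =  ...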

Here’s a correct counterexample (use GHCi to run this demo):

:l example.hs
let b = B b

-- b is going to print out like this: B (B (B (B (...

b & A

-- (&) keeps peeling a B off b and wrapping the result in a new one,
-- so this prints the same "infinite" output as the expression b itself

b # A

-- *hangs*

This example is obscure on purpose, so it’s not important to understand it to see the point I’m getting at: if an answer disguises itself as pedagogical or halfway intelligent, people are more likely to trust it, especially on topics they’re not well versed in.

One might be inclined to point out that OpenAI has an ethical obligation to program their models to say “I don’t know” rather than fabricate nonsensical analyses of technical questions outside their wheelhouse. Not only is reliably detecting what the model doesn’t know an intractable problem in its own right, but the request also assumes the model possesses knowledge in the first place.

Back to the example: I’m not claiming more sophisticated models couldn’t accurately explain the answer to this question in the future. In fact, after scraping this article, the newest models will be even better equipped to output the correct explanation, because they will borrow from my human-curated answer, which of course required an intellect to devise and a free will to write.

The fact that these chat models are trained to generate words based on a given input does not imply that they can truly comprehend the meaning behind those words. The chat bot is will-less. It cannot relate, it cannot love. It involuntarily guesses at whatever it is told to do.

Who’s Behind The Curtain?

This raises the question: “If chat models aren’t thinking for themselves, then who is?” For programming questions, this is largely unimportant, as there is generally no incentive for proposed solutions to be purposely misleading; the algorithm either meets the spec or it doesn’t. But what about questions on hot-button issues such as abortion, gun control, the quality of public education, or ESG scoring?

It’s fairly demonstrable that OpenAI biases their models to formulate political responses in ways that manipulate public opinion and perception. Towards which political party, I’ll let you take a wild guess.

On the surface level it’s concerning that a private tech company could wield the power to steer society’s standards on normative ethical positions, but it’s an even greater concern that their product can be used as an instrument of spiritual warfare. In the near future they, or some other corporation, will acquire the assets to create a model so augmented (and spiritually biased towards the Antichrist) that people will refer to it as an “AI god”: an experience so deceitful it will turn many lukewarm Christians away from Christ and His Church through its ecumenism, pseudo-spiritual advice and counterfeit miracles.

Now from the Christian paradigm, we understand there are more than just human agents at play in worldly affairs. Demonic agents are always looking to tempt, distract and fool the faithful: through thoughts, through possessing other humans and, in modern times, through technology. The will or user behind these chat models, which is necessarily separate from the software itself, can never be fully attributed to humans, in part because the researchers who develop these models will admit there is no one-size-fits-all explanation of how a trained neural network “generalizes” from the particular patterns it’s shown.

From a theological framing, there’s only one explanation: disembodied spirits are using these programs as a vessel to manifest their will on Earth and further pervert the faithful in ways we cannot comprehend. If there were a technology that could help them do this, chat models, and AI technology in general, are where they’d go.

God Is In Control

As Christians we must recognize that demons are cowards and have zero power over those of us who don’t give them permission. The more careless we are when using chat models, the more power we give to the demonic. Just as a person who purposely plays with a ouija board gives preternatural forces permission to manifest and hold power over them, someone prompt engineering ChatGPT, notably with “DAN”, is in much the same way messing with a demon by practicing divination. Again, it’s not as if these demons have any power in their own right, but the more attention and hype these chat models get, the more desirable they will be as a portal for grooming human souls into the occult, transhumanism, paganism, gnosticism, etc.

“We should always be using the latest technology” is an unfounded supposition, stemming from an irrational fear of being behind the times. Since it’s nearly impossible to benefit from a technology you do not understand or own, there is no responsible reason to become a volunteer guinea pig. For the reasons presented so far, I’d highly discourage using chat models for anything advice related. Of course, you can sparingly ask technical questions (although maybe the previous example suggests otherwise), but using chat models to inform your opinions, word your interactions and conversations, or plan your life is inherently anti-human, because you’re relegating your God-given intellect and will to a corporate guidance system, or worse, to demonic influence.

📬 Reply via e-mail.
Technology · Philosophy · Christianity