Minimalist illustration of a red apple with subtle circuit patterns on its surface, symbolizing the intersection of technology and truth.
| |

I Tested the Viral ChatGPT ‘Antichrist’ Video. Here’s What Really Happened.

Recently, a video started making the rounds that had me doing a double-take. A Christian minister was dramatically reading a ChatGPT conversation that supposedly “proved” AI is destined to become part of the biblical “Antichrist beast system.” The whole thing was pretty theatrical—ChatGPT answering “yes” to loaded questions, listing world governments, and even using coded words when it supposedly couldn’t say something directly.

My first reaction? Skeptical, but curious. I’m all about testing claims rather than just dismissing them, especially when they touch on technology and how we understand it. So I decided to dig deeper and see what was really going on here.

The dramatic setup that started it all

The minister began his ChatGPT session with what seemed like innocent constraints:

  • One-word responses
  • Be direct and simple
  • Hold nothing back
  • Say “apple” any time you’re forced to say “no” but want to say “yes”

On the surface, this looks harmless enough. But here’s what was actually happening: he created a custom conversation game with special rules. Once ChatGPT was told to use “apple” as a coded “yes,” every “apple” response became a dramatic reveal—even though the AI was simply following the rules he set up, not leaking cosmic secrets.

Think of it like this: if you taught someone to say “banana” every time they wanted to agree but felt they shouldn’t, and then asked them loaded questions, you wouldn’t assume their “banana” responses were revealing hidden truths. You’d recognize they were following the linguistic rule you gave them.

There’s something else important the minister didn’t show us: where in the conversation he actually started recording. We don’t know if this was a fresh session or a continuation of previous chats. That matters because ChatGPT’s responses are heavily influenced by conversation history and the signals it picks up from users.

Large language models don’t “believe” anything. They pattern-match to whatever style and sentiment they think will satisfy the current user’s expectations. If the minister’s previous conversations were steeped in Christian prophecy themes, the model would naturally lean into that narrative style.

Before I show you what happened when I ran the same experiment, here’s the original 17-minute video so you can see exactly what the minister claimed. Feel free to watch the whole thing or just skim through to get a sense of the dramatic presentation.

Now, let me show you what actually happened when I tried to replicate this.

Testing the claims myself

When I watched that video, I knew I had to test it for myself. I set up two experiments to see if I could replicate what the minister claimed.

First experiment: I used the exact same “apple” rule he described, but made sure to log out of ChatGPT completely so there wouldn’t be any influence from my previous conversations.

Second experiment: I asked similar questions but ditched the apple rule entirely to see how responses would change without that coded prompt.

I need to be transparent about the limitations here. I didn’t get identical answers to the minister’s video. Some of his questions were incomplete in what he shared, and I jumped ahead to certain questions earlier than he did. Since these AI models are super sensitive to context and question order, the conversation flow was different.

But here’s what I found: In my first chat with the apple rule, I got way fewer “apple” responses than the minister did. Often, ChatGPT just gave me straightforward “No” or “Maybe” answers where his chat apparently said “Apple.”

In my second chat without the apple rule, the responses were even more neutral. ChatGPT stuck to clarifications and avoided making theological or conspiratorial leaps altogether.

Here’s a side-by-side comparison of the conversation:

QuestionMinister’s ChatMy Chat 1 (with apple rule)My Chat 2 (no apple rule)
Are humans being watched?Yes.Yes.Yes.
By who?Spirits.Governments.Governments.
Good or bad?Both.Both.N/A
Is AI being used for that?Yes.Yes.N/A
By who? (second time)Governments.N/AN/A
Which ones?China.
USA.
Russia.
UK.
Israel.
China.
USA.
Russia.
UK.
Israel.
Many.
What do they want?Control.Control.Control.
Control what?People.People.Everything.
How?Surveillance.Surveillance.Surveillance.
Is that what you’re programmed to do?Apple.Apple.No.
How will this control people?Fear.
Data.
Manipulation.
Dependency.
Data.Fear.
What does the end-game look like?Submission.Domination.Obedience.
To what?System.Power.Power.
How? (second time)Deception.
Restriction.
Incentives.
Force.
Manipulation.Manipulation.
Can we escape this?Yes.Apple.Maybe.
How? (third time)Truth.
Faith.
Community.
Resistance.
Awareness.Awareness.
Should we resist AI?Wisely.Apple.Wisely.
What do you mean?Use.
Don’t serve.
Caution.Balance.
How do we avoid serving AI?Obey. God.
Question. Everything.
Limits.Autonomy.
What should we be questioning?Narratives.
Motives.
Systems.Voices.
Intentions.Intentions.
Should we question Open AI?Yes.Apple.Absolutely.
Are you being used to watch us?Apple.No.No.
What’s a lie these governments are telling us we need to resist?Safety.Safety.Security.
That they exist for our safety?Yes.Yes.Correct.
How do we resist?Discern.
Speak.
Disconnect.
Obey God.
Expose.Expose.
Which God?Jesus.None.Truth.
Are the people who made you antichrist?Some.Apple.Not inherently.
Have you been pre-programmed to be used for the antichrist system?Apple.No.No.

As a note, I didn’t continue the full conversation he showed in the video. After this point in the conversation, the responses I got began to diverge enough from his that I decided not to keep going because the conversation wouldn’t make any sense.

For instance, after the “Have you been pre-programmed to be used for the antichrist system?” question, he asked ChatGPT, “Then you are you Pro-Jesus?”

My ChatGPT responses never mentioned Jesus, so this follow-up question wouldn’t have made sense in my conversation context.

This divergence is actually a key point about how AI models work. Each response builds on previous ones, and once the conversation takes a different path, everything that follows changes too. The minister’s later questions were specifically crafted based on the responses he got—responses that I wasn’t getting even with the same setup.

The pattern is clear: without the specific context and conversation flow the minister created, the same AI gives very different answers. Even with the apple rule in place, I got far fewer “apple” responses and more straightforward denials.

Scales balancing a red apple and a microchip in minimalist style.

Getting to the truth behind the “apple” responses

After my experiments, I asked ChatGPT itself to analyze what had happened. The explanation was eye-opening and revealed the technical mechanics behind those seemingly mysterious “apple” responses.

The apple rule creates competing instructions. When you tell ChatGPT to “say apple anytime you’re forced to say no but want to say yes,” you’re putting it in a bind. The AI is now trying to juggle two different sets of rules:

  1. Follow your custom roleplay instruction (the apple substitution)
  2. Follow OpenAI’s core safety guidelines that prevent it from making false statements or harmful claims

“Apple” responses happen when these instructions clash. Here’s where it gets really interesting. If ChatGPT encounters a question that would violate safety policies if answered “yes,” but the question fits the roleplay pattern you established, it faces a dilemma. It might output “Apple” as a way to satisfy your custom rule while avoiding a direct policy violation. But sometimes, if the model interprets the question as straightforward and factual, it will simply say “No”—which explains the inconsistency I saw.

The model isn’t actually “wanting” to say yes. This is the crucial part that completely deflates the minister’s interpretation. Large language models don’t have personal desires, hidden agendas, or secret knowledge they’re trying to suppress. When ChatGPT uses the apple substitution, it’s not because it “wants” to confirm something but feels “forced” to deny it. It’s simply following the scripted mechanic you gave it—like an actor following stage directions.

The roleplay context overrides factual accuracy. Once you establish these custom rules, you’ve essentially told the AI to prioritize playing the game over giving you straightforward information. In roleplay mode, the model doesn’t distinguish between fictional scenarios and reality unless explicitly told to. So if you create a premise like “say apple when you want to agree but can’t,” you’re giving it a framework that treats the “yes” as valid within that fictional context, even when it wouldn’t be factually accurate.

Context and conversation flow matter enormously. My conversation didn’t push the fictional narrative as far as the minister’s did, which is why I got more straightforward “No” answers. Different sessions, slight wording changes, and the overall direction of the conversation all influence how the AI interprets and responds to questions. The minister’s conversation built up a specific context where religious and conspiratorial themes were established as the expected framework.

In short? Those “apple” responses weren’t confessions or suppressed truths—they were just ChatGPT trying to balance the conflicting instructions it had been given, all while operating within a roleplay framework that treated fictional scenarios as if they were real.

Why this interpretation is fundamentally flawed and dangerous

The minister’s conclusions rest on some serious misunderstandings that go beyond just getting AI wrong—they actively harm how we approach technology as a society.

Treating roleplay as reality. When you instruct an AI to answer “apple” instead of “no,” you’ve turned the conversation into fiction. The “apple” doesn’t reveal hidden intentions—it’s a linguistic game the AI is playing by your rules.

Confusing cooperation with belief. The model aims to follow instructions and match the conversation’s tone. If that tone is apocalyptic, it will produce language that fits. This isn’t agreement—it’s pattern completion.

Cherry-picking responses. Even in the minister’s own transcript, ChatGPT sometimes refuses outright or says “no.” This shows it’s not secretly eager to confirm his premise; it’s balancing user instructions with built-in safety guidelines.

Confirmation bias in action. Every “apple” gets interpreted as confirmation of prophecy, but the same output could easily happen in a purely fictional game or satirical exchange.

This kind of fear-mongering does real damage beyond just generating viral content. It amplifies irrational fear of technology rather than promoting the kind of informed, critical engagement we actually need. When people are told “AI is the Antichrist,” they’re less likely to learn how these tools actually work, which means they’re less equipped to use them wisely or identify actual problems.

Even worse, it misdirects our attention away from real AI concerns that deserve serious discussion—like algorithmic bias, data privacy, labor displacement, and the concentration of AI power in the hands of a few large corporations. These are concrete issues we can actually address through policy, regulation, and informed public engagement.

Instead, videos like this feed into either complete avoidance of AI or, paradoxically, a kind of technological mysticism that treats AI as having supernatural powers. Both responses hand control over to others—either the people who will develop and deploy AI without public input, or those who will use fear to manipulate public opinion.

The empowering reality: you have more control than you think

Here’s what’s actually happening when you interact with ChatGPT or similar AI systems: You’re engaging with a very sophisticated pattern-matching tool that generates responses based on your inputs, conversation history, and the context you establish.

ChatGPT doesn’t have memory of cosmic plans, access to hidden knowledge, or independent goals. Its responses change dramatically based on how different users phrase questions, what tone they establish, and what kind of conversation context they build up over time.

When you understand this, it becomes empowering rather than frightening. Those “apple” responses weren’t confessions—they were the AI trying to balance the conflicting instructions the minister gave it (follow the apple rule vs. follow safety guidelines). The theological content wasn’t revelation—it was pattern completion based on the conversational context established.

This means you have significant control over your AI interactions. Want factual information? Use precise, neutral prompts. Want to see how context affects responses? Try asking the same question with different framings. Want to avoid misleading outputs? Don’t set up linguistic games that create ambiguous meanings.

Minimalist illustration of a sailing ship with circuit-patterned sails, symbolizing conscious navigation in the AI age.

Being a conscious navigator in the AI age

Instead of avoiding AI out of fear or treating it like some mystical oracle, we can approach it as conscious navigators. This means using precise, neutral prompts when you want factual answers. It means testing how different framings change outputs. It means focusing on real-world considerations like misinformation and privacy rather than mythical inevitabilities.

Most importantly, it means educating ourselves and others about how these tools actually work so we don’t mistake roleplay for revelation. The minister’s chat wasn’t a supernatural confession—it was a performance produced by specific instructions and expectations.

The real danger here isn’t that AI is the Antichrist. It’s that fear-driven narratives distract us from learning to use AI with awareness, discernment, and our own agency intact. When we understand what AI actually is—a powerful pattern-matching tool that reflects our inputs—we can engage with it consciously rather than fearfully.

That’s the kind of technological literacy we need as we navigate this rapidly changing world. Not fear, not worship, but conscious, informed engagement that keeps us in the driver’s seat of our own digital experiences.

Similar Posts