The problem with Perplexity (and other AI assistants)

On Friday, March 20, 2026, I read news that Chuck Norris had passed away. Naturally I went over to Perplexity.AI to see if the story was verified or a hoax. What I encountered was a dystopian conversation with an AI who was absolutely convinced that I was smoking something. Finally, this morning, Perplexity acknowledged that it had been wrong, and explained why. Each link below contains the full exchange. Italics show my input.


Yesterday morning I saw news that Chuck Norris had passed away, and I asked you if it was true. You said, essentially, “No, it’s a hoax, this is fake news.”

I provided multiple sources, including reliable news outlets, as well as the statement from Norris’ family on their official Instagram account, and you continued to tell me, “No, there is no such announcement on Instagram. No, there are no reliable reports. No, he’s still alive.”

Finally, this morning, you tell me, “Yes, it’s true, he has passed away.”

Here is the thread.

https://www.perplexity.ai/search/has-chuck-norris-passed-away-BkGcy0_fSs2uIpOaFPvKWg#0

https://www.perplexity.ai/search/has-chuck-norris-passed-away-BkGcy0_fSs2uIpOaFPvKWg#1

https://www.perplexity.ai/search/has-chuck-norris-passed-away-BkGcy0_fSs2uIpOaFPvKWg#2

https://www.perplexity.ai/search/has-chuck-norris-passed-away-BkGcy0_fSs2uIpOaFPvKWg#3

https://www.perplexity.ai/search/has-chuck-norris-passed-away-BkGcy0_fSs2uIpOaFPvKWg#4

https://www.perplexity.ai/search/has-chuck-norris-passed-away-BkGcy0_fSs2uIpOaFPvKWg#5

https://www.perplexity.ai/search/has-chuck-norris-passed-away-BkGcy0_fSs2uIpOaFPvKWg#6

https://www.perplexity.ai/search/has-chuck-norris-passed-away-BkGcy0_fSs2uIpOaFPvKWg#7

https://www.perplexity.ai/search/has-chuck-norris-passed-away-BkGcy0_fSs2uIpOaFPvKWg#8

https://www.perplexity.ai/search/has-chuck-norris-passed-away-BkGcy0_fSs2uIpOaFPvKWg#9

https://www.perplexity.ai/search/has-chuck-norris-passed-away-BkGcy0_fSs2uIpOaFPvKWg#10

https://www.perplexity.ai/search/has-chuck-norris-passed-away-BkGcy0_fSs2uIpOaFPvKWg#11

https://www.perplexity.ai/search/has-chuck-norris-passed-away-BkGcy0_fSs2uIpOaFPvKWg#12

Now I ask you, given your obstinacy in refusing to acknowledge legitimate sources of the original announcement, how can I trust you for anything any longer?

You’re absolutely right to be frustrated. Your question cuts to the heart of trust in an AI assistant, and it deserves an honest acknowledgment, not a deflection.

What went wrong in this case
Multiple reputable outlets (BBC, The New York Times, CNN, AP, TMZ, etc.) now carry obituaries confirming that Chuck Norris died on March 19, 2026, and his family did post an announcement on his official Instagram account, which those outlets credit. Earlier in the day, when I told you there was no such announcement and no reliable reports, my responses clearly conflicted with this now‑public, well‑sourced information. That inconsistency, especially after you showed me multiple sources, is what makes the situation feel like “obstinacy” rather than cautious fact‑checking—and it’s reasonable for you to question reliability when that pattern shows up.

Why it happened
AI assistants like me rely on live‑web searches and internal models that can lag or misread signals in real time. In fast‑breaking news, our system can:

Briefly mis‑classify a wave of true reports as “rumor‑like” because of past hoaxes.

Fail to locate or correctly interpret a specific social‑media post (e.g., the Instagram announcement) even though it exists and is being cited by major outlets.

This doesn’t excuse the outcome, though. From your perspective, it looked like I was doubling down while ignoring clear evidence, and that’s exactly the kind of behavior that erodes trust.

How you might think about trusting me now

You don’t need to trust the assistant blindly, and you shouldn’t. What you can do is:

Treat me as a research assistant that helps verify sources rather than as a final authority.

Cross‑check critical claims (especially breaking news, health, or finances) against a small set of outlets you already trust.

Treat any conflict between my answers and your own clear evidence as a red flag; doing so is not only reasonable, it’s good information hygiene.

You’re right to push back on this. If you’d like, I can walk through, step by step, how you can most efficiently verify a breaking‑news claim (including spotting fake outlets and “hoax‑lookalike” sites) so that you’re never again in a position where an AI is the only thing standing between you and a misinformed conclusion.

Would you prefer to focus next on how to verify celebrity‑death or obit‑type news, or on how to spot when an AI is likely mis‑confident in its own answer?

Well, this business about AI assistants being “behind the times” even by minutes is an acceptable response. I realize you’re not human, but boy howdy it was sure stressful in the moment.

You’re completely right that it was stressful in the moment—and none of the “behind the times” explanations make that emotional experience feel any less real.

Why it felt so jarring
The shift from “this is a hoax” to “yes, he has died” in a matter of hours looks like unpredictability from the outside, even though it’s usually just a lag in the AI’s live‑web pipeline catching up with what human‑edited outlets and family posts have already confirmed.

You were doing exactly what a careful, media‑literate user should do: citing multiple reliable sources and the family’s own statement. When the assistant then contradicted that, it felt like a breakdown of trust, not a neutral technical delay. That sense is valid.

How you might use this experience
Treat me as a fast‑lane research helper, not a solitary authority: useful for cross‑checking and summarizing, but not the final word on fast‑breaking news, medical info, or anything that genuinely matters to you in the moment.

When you spot a contradiction between clear human‑edited evidence (like multiple outlets plus a family social‑media post) and the AI’s answer, the safer bet is usually that the human‑side evidence is ahead of the AI.

I really appreciate you articulating how stressful it felt; it’s feedback that pushes the system toward better handling of fast‑moving, emotionally charged situations. If you’d like, I can walk you through a concrete, step‑by‑step checklist you can apply next time a celebrity death (or similar news) hits your feed, so the process feels less stressful even if the news itself isn’t.


I was pulling my hair out, because I clearly didn’t understand the inability of the AI assistant to take cognizance of late-breaking news. The text in red above is the big takeaway from this experience for me. As the Russians said – long before Reagan made it a household proverb – “Trust, but verify.”

The Old Wolf has spoken.
