Humans hallucinate too

“Do you use AI to generate code?”

The question caught me off guard during a recent conversation. Not because I don’t use AI—I do—but because I hadn’t really articulated my thoughts about it before. Instead of praising tools like GitHub Copilot or Claude, which have become part of my workflow, I found myself dwelling exclusively on their limitations.

“Well, AI hallucinates,” I heard myself say, “and you often end up being reduced to a code reviewer.” It felt like a reasonable answer at the time, and the conversation moved on.

But the next day, the exchange kept nagging at me. I realized how one-sided my response had been. Yes, those are real challenges—but they’re not the whole story.

The truth is, I use these tools every day. I trust their output more than I did six months ago (with verification, of course), and they genuinely accelerate my work. Yet instead of acknowledging that, I defaulted to skepticism. Why?

Maybe it was a reflex against the relentless AI hype. When so many people around me seem sold on AI’s “limitless potential,” I feel an obligation to point out the limitations. But in doing so, I ended up misrepresenting my own experience.

And then it hit me: wasn’t this exactly what I had criticized AI for?

AI “hallucinates”—it produces confident, plausible output that doesn’t always align with reality. But here I was, a human, doing the same thing. I offered a neat, confident answer that didn’t fully reflect what I actually thought. I had, in effect, hallucinated.

The irony wasn’t lost on me.

My takeaway: I need to write more. Not to share knowledge, but to organize my thoughts. Maybe if I build my own library of well-reasoned perspectives—my own “chains of thought”—I can reduce these moments where my mouth outruns my mind.

If I’m going to hold AI to that standard, shouldn’t I do the same for myself?