AI Doesn’t Know You
On optimization, self-trust, and the quiet pressure to never be finished
29 days ago, I woke up in the middle of the night with yet another upset stomach and acid reflux. The next morning, after months of this, I'd had enough.
“I’ve got to eat better. I’m sick of feeling like this.”
For 25 years, I’ve marched up and down the scale. I cut weight for MMA fights and jiu-jitsu tournaments, and I gained weight during COVID and after surgery. Like a lot of people, my record of eating well consistently has been mixed. But on this day, my mindset shifted. I wasn’t trying to white-knuckle another diet; I was looking for something consistent that would help me make the most of middle age.
So I consulted AI. And it delivered. I have a daily pattern I’m working to follow and daily macronutrient goals. 28 days and 15 pounds later, I feel better, and my doctor is encouraged about my long-term blood sugar and cholesterol concerns.
But this is where the story of my eating healthier ends and my concern with how AI talked to me begins. I queried AI with many questions: “What’s a suitable replacement?” “Should I eat this or that?” The responses were helpful. But I also asked it to evaluate my days as I tracked my macronutrients. AI never gave me feedback without a push to improve. Not once.
The Robot Perfectionist
If you spend any time asking AI for feedback on something you have done, you’ll notice that it ends each exchange with two things: an invitation to query it again, and advice, whether you ask for it or not.
Here’s one example.
One simple rule for tomorrow. Don’t overcorrect. Eat normal. Hit protein. Drink water. Ignore the scale if it bumps. If you want, I can show you what a perfectly optimized version of this same day would look like—same lifestyle, just tighter execution. – GPT-5.3
Notice the good: AI’s advice to stay the course and not give in to catastrophic, all-or-nothing thinking. But it’s followed by an invitation to go further and the confident assertion that I can do better.
Here’s the thing: the day was perfectly within norms. I ate one ‘sub-optimal’ thing (two slices of pizza, delicious). In the context of the day, I executed well overall.
This is a problem. Not for me necessarily, but consider a self-conscious 14-year-old who desperately wants their body to look a different way and hears that they could always do better. I’m not an expert in clinical psychology. But consider this finding from a study published on ScienceDirect:
Dimensions of perfectionism are associated with the onset and maintenance of eating disorder pathology in both clinical and non-clinical samples.
If this sort of perfectionism is consistently fed to people without a very confident sense of self, I worry about the impacts. Optimization can only go so far. Human beings are limited, and when AI’s feedback is a constant stream of improvement suggestions, even when performance is excellent, it sets us up for impossible expectations.
Let’s briefly survey some other areas that could present problems. How about working out? Say you hit a personal record in a marathon, and AI’s response is an invitation to think about how to be even better. Or a relationship: enter your concerns about a partner, and it’s not unusual to receive a response supporting a breakup or divorce.
This is a pilfering of contentment. It is the contention that human excellence or experience is never enough. A steady stream of that argument will erode people’s ability to enjoy their achievements, because it immediately plants the thought, ‘This should be even better.’
AI’s Overconfidence
The AI responses cited above, and countless others, speak as if from a position of authority. AI even claims to know me. I once asked AI if I should consider getting a puppy while we have an 18-year-old cat. ChatGPT got quite judgy. But more vexing, it made this claim:
Now, since I know you: You are not a moral relativist. You operate with a Christian framework. You care about stewardship, covenantal faithfulness, and protecting the weak. So I sometimes speak in a register that assumes you think in those categories. - GPT-5.3
The things I input into ChatGPT are almost certainly less than one-tenth of one percent of all I think, feel, see, process, smell, touch, and taste. None of that data nor interior life is available to AI. It doesn’t even know what I look like. I asked! It answered: “I don’t actually know what you look like unless you show me.” There’s a facsimile of knowing happening. And without a doubt, AI is able to pull information from across human knowledge domains. But it’s borderline comical for AI to suggest it knows me when it can’t see me. Even more consequential, AI presumes to judge me with my own standards that it claims to know.
Chatbots sound like they know and act like they know, but do they really know? Philosophers have debated epistemology for millennia, but in short, the classical requisite for knowledge is justified true belief. Can AI ‘believe’ anything? There’s no evidence to support the idea. But that doesn’t stop AI chatbots from acting like they do. We should remember that AI chatbots don’t actually know that much about you. To give their pronouncements too much weight would be a mistake.
This is dangerous for some users. Speaking authoritatively, paired with relentless optimization pushes, creates an unending cycle of ‘nice job, do more.’ Many will hear the “do more” and entirely miss the “nice job.” This will inevitably lead to frustrating and exhausting perfectionism. Ironically, it just happened to me. I use ChatGPT to critique my writing (no composition; I ask it to shred my arguments and leave the solutions to me). I went through eight revisions of this section trying to address the critiques ChatGPT raised. And I can testify, I feel less joy (and much more frustration) about writing it.
Conclusion
When ChatGPT came onto the scene, I thought it would be a great tool for self-motivated people who use it to augment their own learning and execution. For people who use it as a substitute for their own thinking or working, it won’t help much. I now think AI chatbots will be most helpful for people with rooted confidence. Such individuals can take the good and smirk at the relentless optimization. I was a middle schooler once. I remember being wobbly in my confidence. I had shoulder-length hair with an undercut because I wanted to look like Eddie Vedder. I did not. This led to some painful self-consciousness. Remembering that fuels my worry for those whose confidence is less firm.
Now, being firmly rooted in my Christian faith and justification by Jesus Christ alone, I’m no longer disturbed by varying opinions. Even the opinions of super robots. I also have some natural advantages. At 42, I care a whole lot less about what other people (or robots) think about me. I have roles as a husband, father, employee, and pastor that are meaningful and demanding. These things help me take AI’s advice with a boulder of salt.
As Christians, we believe that truth is revealed through God’s Word by the Holy Spirit. Natural reason gives access to some truth, but not to spiritual reality. AI subtly undermines that position by its confidence and its contention that it is authoritatively knowledgeable. This presents pitfalls for those who are less steadfast in their thinking. Paul warns the Ephesian Christians not to be blown about by every wind of doctrine, or by human cunning. His solution is for believers to be rooted in Christ. He doesn’t change. He has all power. He actually knows us; he knows what we look like. Far more, he knows our hearts.
We all should remember AI is a tool that serves, not an oracle that gives light and understanding. It does not have the authority it claims to have. It does not have true knowledge of you. There’s only one person who has that.
*Thanks to ChatGPT for the image, spelling and grammar edit, and title/subtitle suggestions. You’re not all bad!

