The dangerous new habit: Asking AI what’s true
- Art Samaniego
- #AI
More people are turning to artificial intelligence for help with health, legal, and moral questions. In one case, a former government official asked AI to determine guilt. In another, a video that was clearly an AI-generated deepfake was run through a second AI tool, and that tool's output was held up as supposed proof that the video was authentic.
The damage to health is already visible. People self-diagnose based on an AI response, skip the doctor, panic over the wrong thing, or feel falsely reassured about the right one. Now the same habit is creeping into law and public discourse.
When people pose leading questions to AI systems, they often receive confident answers that are easy to mistake for objective truth.
But AI systems do not evaluate evidence, assess credibility, or follow due process.
They mirror patterns from their training data and present information with apparent confidence. That perceived authority can be persuasive even when the information is wrong, incomplete, or dangerously oversimplified.
The result is a shortcut culture. Why wait for facts, hearings, or medical tests when an AI can give you a narrative in seconds?
That is not accountability. That is outsourcing judgment.
If you want something closer to a balanced view, start with open-ended questions. Avoid emotionally charged or loaded language, since it shapes the kind of answer you receive: ask "What does the evidence say about this claim?" rather than "Isn't this obviously true?" It also helps to request multiple perspectives or to ask the AI to cite its sources. And when a response seems too tidy or too agreeable, don't stop there: follow up, question the replies, and ask for clarification.
Used properly, AI is a tool to help you think, not something that thinks for you. You can ask it questions, gather information, and explore different ideas. But it doesn't know your full situation, and it doesn't understand what is true or fair. Just as you wouldn't let a Google search diagnose your illness or settle a legal case, you shouldn't rely on AI to make final decisions for you.
You still have to think, compare, ask follow-up questions, and use your own judgment.
When people rely on AI not only to understand complex issues but to determine guilt, health outcomes, or moral judgments, the conversation shifts from technological progress to the abdication of personal responsibility.
AI should support human reasoning, not replace it. It should provide explanations rather than accusations, and offer information rather than render judgments.
The moment we forget that, we stop being critical thinkers and start becoming an audience that mistakes confidence for truth.
