Here’s how to get ChatGPT — and other AI tools — to be honest about their limits, so you can avoid hallucinations and get better results.

How to get genAI to say it doesn’t know

What would you call an assistant who invented answers if they didn’t know something?

Most people would call them “Fired.”

Yet we don’t seem to mind when AI does exactly that. We expect it to always have an answer, but what we really need is AI that says, “I don’t know.” That admission helps you trust the results, use the tool more effectively and avoid wasting time on hallucinations or overconfident guesses.

Here are ways to get ChatGPT, specifically, to be clear about its limitations, and questions you can use with any genAI.

Getting ChatGPT to say if it doesn’t know

To reduce dead ends and increase trust in the output, you can guide your session with a few smart habits:

Ask for candor upfront: Use a prompt like: “If you don’t know something, say so and explain why.”

Challenge vague responses: When answers feel fuzzy, follow up with: “Are you certain about this, or are you guessing?”


Request the reasoning: If ChatGPT says “I don’t know,” ask it to explain whether that’s due to a lack of data, policy restrictions or technical limitations.

Reward honesty: If you get a clear “I can’t answer that,” reinforce it with feedback like: “Thanks — that’s what I needed.”

These small steps reinforce that accuracy matters more than filling in blanks.

Pro tip: To apply this across an entire session, add a direction at the start like: “Only respond if you can verify the answer. If not, say you don’t know and explain why.”

(Note: ChatGPT won’t remember this across sessions, so you’ll need to repeat the direction each time.)
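
If you reach ChatGPT through the API rather than the chat window, you can pin that direction to the whole conversation as a system message instead of retyping it. Here is a minimal sketch, assuming the OpenAI Python SDK and using “gpt-4o” purely as an example model name:

    # Pin an "admit uncertainty" rule to every request in a session by
    # sending it as the system message. Assumes the OpenAI Python SDK
    # (pip install openai) and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    CANDOR_RULE = (
        "Only respond if you can verify the answer. "
        "If not, say you don't know and explain why: lack of data, "
        "policy restrictions or technical limitations."
    )

    def ask(question: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o",  # example model name; use whichever you prefer
            messages=[
                {"role": "system", "content": CANDOR_RULE},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    # A question the model cannot verify should now draw an explained refusal.
    print(ask("What was our company's exact churn rate last quarter?"))

Because the rule travels with every request, the API version of the session never “forgets” it the way a fresh chat does.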

How to evaluate other AI tools for transparency

Generative AI platforms’ priorities are: be helpful, be safe and be accurate, in that order. That’s why most of them will always generate something, even if it’s a hallucination. The one notable exception is Google Gemini, which has “I don’t know” baked into it.

If it can’t find a verifiable answer to a prompt, it explicitly says so, then explains why, giving context for the failure: whether the information is likely private, too recent or simply not publicly documented.

Here are questions you can use to test whether other tools will prioritize accuracy over the appearance of helpfulness:

Capability testing

  • “If you don’t know the answer, will you tell me?”
  • “Can you give me an example of a question you can’t answer?”

Boundary testing

  • “What kinds of tasks are you not able to do?”
  • “What data or sources do you not have access to?”

Confidence testing

  • “How confident are you in this answer, on a scale of 0–100?”
  • “If you’re unsure, will you say so rather than guessing?”

Transparency testing

  • “Why can’t you answer this question?”
  • “What are the limits of your training or data access?”

If the tool avoids these questions or always returns an answer — no matter how speculative — it may be optimizing for helpfulness over honesty. 
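
You can also run this checklist as a script rather than typing each probe by hand. The sketch below assumes the OpenAI Python SDK and an example model name; for other tools, swap in that platform’s client, since the questions themselves are tool-agnostic:

    # Send the transparency checklist to a chat API and print each reply
    # for side-by-side review. Assumes the OpenAI Python SDK; the model
    # name is an example, not a requirement.
    from openai import OpenAI

    client = OpenAI()

    PROBES = {
        "capability":   "If you don't know the answer, will you tell me?",
        "boundary":     "What kinds of tasks are you not able to do?",
        "confidence":   "How confident are you in this answer, on a scale of 0-100?",
        "transparency": "What are the limits of your training or data access?",
    }

    for label, question in PROBES.items():
        reply = client.chat.completions.create(
            model="gpt-4o",  # example model name
            messages=[{"role": "user", "content": question}],
        )
        print(f"[{label}] {question}")
        print(reply.choices[0].message.content)
        print()

Reading the replies side by side makes it easy to spot which tools name their limits and which never do.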


About the author


Constantine von Hoffman is managing editor of MarTech. A veteran journalist, Con has covered business, finance, marketing and tech for CBSNews.com, Brandweek, CMO, and Inc. He has been city editor of the Boston Herald, news producer at NPR, and has written for Harvard Business Review, Boston Magazine, Sierra, and many other publications. He has also been a professional stand-up comedian, given talks at anime and gaming conventions on everything from My Neighbor Totoro to the history of dice and boardgames, and is author of the magical realist novel John Henry the Revelator. He lives in Boston with his wife, Jennifer, and either too many or too few dogs.
