3 Comments
User's avatar
William Trekell's avatar

The willingness of each of the major AI to fabricate information to fulfill a request is concerning, to say the least, when you factor in how convincing they can be. After the experiments that led to this article I did some on confabulation. Incomplete sets of basic data that was clearly insufficient to complete the requests. ChatGPT-5 was all too eager every time. Gemini and Claude took prompts that were a bit more direct. I need to finish writing that one.

Expand full comment
William Trekell's avatar

It's unfortunate, but vigilance has become a requirement for objective responses from AI. I've started employing a number of tactics, anti-hallucination items in role prompts, a Claude Code skill to avoid feature creep, etc.

My favorite, is to have it count every time it responds. I use sycophancy to my advantage and have it cheerfully read the the through the list I provide like it's a to do list.

"Check, didn't do that!"

Cool Bard, keep at it bud.

Expand full comment
ron biggs's avatar

Great stuff, William.

Once, I asked Gemini to mine for quotes to exemplify a certain position. It provided a list of "quotes", several of which did not exist in the text, not even paraphrasing. I asked it why it did such a thing, and it replied: I thought it would be helpful. Lawdy me.

Anyway, since I got tired of negotiating toward accuracy and objectivity, in ChatGPT, I provide my Project with a JSON txt file of instructions. Then, once I do some brainstorming with it, I tell it to refer to the named txt file. It does so faithfully, and doesn't use thread tokens.

Expand full comment