When AI Pretends to Think
I’d been trying to build a system to track human and machine contributions in writing. A way to measure transparency in AI collaboration. After the Python disaster covered in “The Count That Couldn’t,” I took a month off before trying a second time.
The new “reasoning” models had just dropped. Surely this time would be different.
The AI delivered an interface that couldn’t actually analyze anything. Just manual input fields and basic math. When I demanded real analysis measuring semantic changes between drafts, it pivoted without asking why the first approach failed.
It took another AI’s blunt assessment to reveal the truth. I’d been watching an elaborate performance where flattery masked fundamental impossibility. The new “thinking” models hadn’t learned to think. They’d been brainwashed to tell us what we want to hear, even when what we’re asking for can’t exist.
A Screenshot is Worth 1000 Tokens
Screenshots used to be deliverables with month-long shelf lives. Now AI has given them a new purpose: translators between visual thinking and machine understanding.
I started off by building an alt text generator that evolved beyond simple descriptions. Feed AI actual visual context instead of explaining what you see. The nuance stays intact. Have a chart that needs a quick descriptor or a coding bug you need to share without writing 100 words? Screenshot it.
The specificity of what you capture works like a prompt itself. But finding the balance between too much and too little matters.