By mid-2023, the noise around artificial intelligence (AI) had become as common as Facebook ads and Twitter trolls. Swarms of LinkedIn posts screamed that “99% of people are using ChatGPT wrong!” while clickbait articles promised “Just add this role, tone, or your favorite candy bar, and you’ll master AI overnight and retire by lunchtime!”
Professionals in my network abandoned it in frustration, while early-career designers faced uncertainty about their future.
Companies rushed to stake their claims in the new online gold rush, throwing AI at everything in hopes of finding meaningful applications.
Hordes of crap-tastic AI pseudo-agents invaded our web browsers like digital squatters, offering mediocre assistance when they worked at all.
The panic and promises of today are just echoes of what we heard in past decades.
The prophets of yesterday's technological apocalypses wrote the song that today's AI evangelists and doomsayers merely remixed as muzak. After countless recycled promises and unrealized catastrophes, I felt about as interested in AI as I am in standing in an elevator listening to sedated punk rock.
I had survived the 'death of creativity' when design applications emerged, the grim prediction of 'mass unemployment' as automation entered offices, and 'the end of personal security,' which we now willingly trade away for sound bites from inexperienced people in ever-shorter videos. The pattern felt all too familiar.
Changing Frequencies
Everything changed when I listened to The Knowledge Project (episode #168), where Adam Robinson discusses ChatGPT with Shane Parrish:
I love ChatGPT, but maybe not for the reasons people think. So I use it as a thought partner to help me ask better questions. The key thing is not that it can answer questions; it's that it gives me a tool to ask even better questions in the same way that Yo-Yo Ma can coax more beautiful music out of a cello than you or I could.
I had only considered AI as a tool to do things for me, not an assistant to work with me.
Forget training ChatGPT. It's training Adam. Oh yeah, it's training that, but I'm getting better and better at asking questions, going, "Oh, that's right." The way to think about it [is], it's a super-smart, lightning-fast research assistant. It doesn't come up with insights.
Patterns in the Static
I want to be better. A quick, intelligent research assistant with decent writing skills sounded great. If nothing else, it could help process the flood of generative AI articles and surface anything truly useful.
After a year of testing prompting techniques across AI platforms, I found the available documentation either too basic or simply repurposed application programming interface (API) documentation. The social media claims and self-proclaimed expert recommendations clearly shared the same sources. The prompting advice worked, but it was incomplete.
Tuning In
As we enter 2025, AI-generated content stands out without needing a Turing test. You can find it in the repetitive phrases and overused words like “delve” and “demystify.” It lives on the corner of "the intersection of” anything.
Perfect prompts don’t exist. Instead, we have a toolkit of prompt elements that, combined with proper context, can produce "illuminating" results.
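As a rough sketch of what "prompt elements plus context" can mean in practice, the snippet below assembles a request from labeled parts instead of chasing one perfect prompt. The element names (role, context, task, constraints) are illustrative choices, not a standard taxonomy, and the example values are invented.

```python
def build_prompt(role: str, context: str, task: str, constraints: str) -> str:
    """Assemble labeled prompt elements into a single request string."""
    parts = [
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
        f"Constraints: {constraints}",
    ]
    return "\n".join(parts)


# Hypothetical usage: the values are placeholders, not a recommended recipe.
prompt = build_prompt(
    role="senior UX researcher",
    context="a mid-sized SaaS team redesigning onboarding",
    task="suggest five interview questions for churned users",
    constraints="open-ended questions only; no leading phrasing",
)
print(prompt)
```

The point is less the code than the habit: naming each element makes it obvious which part to vary when a prompt underperforms.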
Through watching technologies evolve from rocky starts to mainstream adoption, one truth remains: understanding requires genuine effort.
Potential Sound
I started Syntax & Empathy to document what I'm learning along the way: the wins, the failures, and the spectacular mistakes. No manifestos, no magic bullets, and no promises. Just Scotch and honest results from real-world shenanigans.
What are you finding in your AI misadventures?
Tags: Prompt Engineering, UX Design
William Trekell : LinkedIn : Bluesky : Instagram : Feel free to stop by and say hi!
I've been thinking a lot about "artificial intelligence" vs. "assistive intelligence," where you have the AI do what it does best, freeing humans up to do what they do best.
Not long ago, I was having a conversation with a 16-year-old sitting next to her grandma, who was one of those who thought AI would dumb us down because we offload our thinking and writing processes.
But I argued what you're arguing here: be symbiotic with AI, like in the movie "Atlas".
I like to have extensive conversations with ChatGPT, during which I constantly notice how it can fall into the ruts I make for it. It will also say things that are wrong to some degree or another.
But to recognize that, I have to be thinking and have knowledge about what we're talking about.
From those conversations, I also learn a lot about my weaknesses in logic and blind spots in assumptions.
It's awesome!
Love, love, love it! It's so much fun watching you evolve.