The How and Why of Problem Solving with AI
“Your career sounds great. But the truth is, with AI, no one does any of that manually anymore. AI does everything from start to finish. It’s more important to be trained in the tool than the craft.”
I won’t lie; there have been moments when I’ve almost subscribed to this view. And you’ll never catch me arguing with the second half of that statement.
For the last couple of years, AI has been my partner. Not a day goes by that I don’t use it to accelerate my workflow. It is the “NOS button” on my creative engine; I hit it, and the ideas in my mind become streaks of light. It is fast, and it is powerful.
But we have to ask:
Does a faster car make you a better driver?
There is a narrative that AI is “replacing” roles. In many sectors, that is objectively true. Tasks suited for pure automation—the repetitive, the rote, the predictable—are being claimed by the machine. But communication is not a closed loop; it is a human bridge.
Years ago, when I worked on a high-volume email response team, we had a comprehensive database of pre-drafted responses. It was a “plug-and-play” solution built just as the term was finding its place as a corporate cliché. We had the automation, yet we still went through a rigorous, month-long training program before we were allowed to touch a single customer inquiry. We had to learn how to smooth the transitions, how to personalize the tone, and—most importantly—how to know which “plug” went into which “socket.” The training and the database gave us the How, but the human provided the Context.
Training can teach you the How. It can teach you to prompt, to iterate, and to output. But it struggles to teach you the Why.
A client might insist they need to “snazz up” their internal communications because employees aren’t engaging. A student might say they need more vocabulary because they “cannot speak.”
An AI will happily generate a snazzy newsletter or a list of 500 new words in a heartbeat. It fulfills the request perfectly. But discerning the Why of a problem is where a human with years of “boots-on-the-ground” insight has an unfair advantage. It takes that experience to realize that the lack of engagement isn’t a design problem—it’s a culture problem. Or that the student’s silence isn’t a vocabulary gap—it’s a confidence gap.
The machine solves the request. The mechanic solves the problem.
AI can build the solution in a jiffy. But you still need people like me to ensure you aren’t building a magnificent bridge to the wrong destination.


Make sense? What do you think?