Intuition vs Thinking Part 2
A few days ago, I wrote a post on intuition versus thinking. I’ve been dwelling on it quite a bit since then, and while that first post was succinct, I’ve realized it needs concrete examples to really land. The most concrete examples I have come from my day-to-day life as a software engineer.
The Limits of “What” vs. the Power of “Why”
Current models are incredibly good at writing lines of code and describing exactly what those lines do. However, they struggle—and often fail—to answer the fundamental questions of the user experience:
• How is this going to make the user feel?
• What is actually useful to the person on the other side of the screen?
• What parts of this interface will be frustrating, broken, or otherwise undesirable?
There is something I can tell a model that it can’t quite grasp itself. I can provide the instructions for what to do, but behind those instructions, there is the why of what to do. In many ways, this bridges my heart and my head in terms of what actually matters for the customer of whatever software we’re building.
Why This is a Risk for Vibe Coding
One of the things that makes me a little bearish, frankly, on the vibe coding concept is that the skill of understanding human impact takes a long time to build. It’s a skill I’ve only ever seen humans truly possess. While I’ve seen the models get a little better at guessing what a specific use case might look like, they are nowhere near handling the full picture.
Furthermore, these advancements don’t necessarily speed me up as a developer anymore. I think the biggest speed increase I got was when GPT-4 came out. It could knock out nearly perfect code: I could provide a few files, give specific instructions, and it would write the code essentially bug-free, as long as I was descriptive enough about exactly what I was asking it to write.
Today’s models are basically that, except wrapped in agentic harnesses; they are better at exploring codebases and understanding where to make changes, so I can be a little less specific in my descriptions. But what I’ve realized is that if I don’t understand how the product works myself, I have a really hard time creating the experience that end users actually need.
The Plateau of Usefulness
I don’t know exactly where we go from here, but my intuition is that the models themselves are starting to plateau in usefulness, especially for engineers. They may still become quite useful to many other disciplines, especially as people in those fields ramp up on them.