The Real Danger of AI-Assisted Coding
2025-10-11
TL;DR: You stop learning.
Lately, I’ve been conducting engineering interviews—ranging from junior to senior levels—and I’ve noticed a worrying trend: the ability to write good code without AI assistance has dropped dramatically in the past two years. Many candidates mentioned that “doing all the coding” was a thing of the past, that modern engineers were embracing what I can only describe as vibe coding.
One candidate even asked if he could use AI during the interview because that’s how “real engineers work now.” Fair point. In principle, I agree that we should embrace tools that boost productivity. I use AI myself—though I limit it to lightweight code completion in Neovim. So why not allow it in interviews?
The real question becomes: can a candidate demonstrate that they can leverage AI effectively while still adding real value—expertise, judgment, and that human spark that turns a functional result into an excellent one?
Some argue the danger lies in AI hallucinations slipping through unnoticed. And yes, that can happen. But let’s be honest: for small, focused coding tasks, the latest Claude and GPT models are remarkably good. I rarely see bizarre or broken output anymore. The technology has become genuinely impressive.
But that’s not the real danger.
The true risk of AI-assisted coding is that it slowly numbs your ability to think deeply, to debug, to learn from trial and error. It turns off the part of your brain that wrestles with complexity, and that’s where real learning happens.
I’ve felt this firsthand. Recently, I started practicing LeetCode problems, especially those involving dynamic programming (DP), a topic I’ve barely touched in my 20 years as a professional developer. DP is notoriously difficult to grasp unless you work in a specialized domain. It forces you to think differently: instead of breaking problems down linearly, you must think recursively and in overlapping subproblems. It’s a full mental rewiring.
And it’s hard. I struggled a lot, and still do. My first instinct when I got stuck? To ask AI for hints. That’s when I realized something important: my brain was giving up. Years of fast, intuitive coding (lately supercharged by AI) had rewired me to avoid slow, painful learning.
That realization hit hard. I had become dependent—not on AI to write code, but on it to think for me whenever things got uncomfortable.
So now, I’m forcing myself to struggle through DP problems without help. Not because I expect to need DP at work (it’s mostly an interview skill), but because I want to retrain my brain to handle deep, frustrating, non-linear learning again.
That, in my opinion, is the real danger of AI-assisted coding:
it can quietly erode your ability to think deeply and grow.