AI: From Imposter Syndrome to the Dunning–Kruger Effect

When I started writing code as a professional programmer, I believed that everyone else was writing better code than me, that they were better programmers, and that they were smarter. Maybe that was true to some degree; there will always be someone better than you. But the reality was that I was experiencing Imposter Syndrome.

Imposter Syndrome is persistent self-doubt, the fear of being exposed as a fraud, and the inability to internalize success. It often drives high achievers toward burnout, and it is a condition that many people in the tech industry suffer from.

Before Google, programmers could get stuck on a problem for a long time. They had to search through books, collaborate with others, and think outside the box to find solutions. This process could break a programmer, but at the same time, it built a strong foundation of knowledge and experience.

When Google and the Internet arrived, the core process remained mostly the same. The difference was speed and accessibility. You could search from your computer, ask questions in forums, and chat with other programmers. Still, at the end of the day, you had to write the code yourself. Even if you copy-pasted a solution, you had to read it, understand it, and adapt it to fit your needs. That effort mattered.

These processes made programmers doubt their knowledge, stay humble, and work harder to compete with others. This environment fostered Imposter Syndrome, but it also pushed programmers to improve. Today, however, Imposter Syndrome seems to be fading, replaced by the Dunning–Kruger effect.

The Dunning–Kruger effect is a cognitive bias in which people with limited knowledge or competence greatly overestimate their abilities. AI tools can amplify this illusion. They can make you believe you are an expert programmer, even when writing in languages you have never used, solving problems you have never truly understood, and building applications you have never experienced developing.

This lack of real effort, combined with overconfidence, risks producing junior programmers who struggle to fix bugs, read and understand code, or develop strong problem-solving skills. Experience and critical thinking are slowly replaced by blind trust in the “magic” of AI.

Experienced developers may be more resilient, if they resist the temptation of overly fast and easy workflows that companies will inevitably push to keep up with competition. Otherwise, software quality will decline, bugs will multiply, and years of hard-earned development practices may be lost as everyone chases AI down the rabbit hole.

Yet within this potential decline lies a glimmer of hope. When programmers and companies finally realize that using AI tools exclusively to build products, to “vibe-code” entire systems, is a factory for generating problems, they may begin to use AI the way it should be used: as a tool. A powerful one, but still just a tool, meant to support programmers, products, and companies, not replace understanding, effort, and responsibility.