A few words about using AI at work - and by work I mean more than just coding - though we will start with coding, specifically with what is often its final stage: code review, and why delegating that stage to AI may already be going a step too far.
Having AI check code while you are actively working on it, before it even reaches GitHub as a PR, in a situation where AI also wrote that code, is by now normal. One agent plans, another writes, yet another reviews. Many people work like this and it's fine - it works in most cases. The problem starts when, after the agents finish, we take what they produced, run git push (or, Kyrie eleison, use an AI skill to push it, because why push without burning tokens?), open a PR, and click "request review" without actually reviewing it with our own heads.
Reviewing such code is, unfortunately, very problematic. Doing code review - or rather doing it well - has become more important than ever.
Once upon a time (the ancient era of a year ago and earlier), one of the bigger problems a programmer could face was technical debt (usually not the main one, though there were projects where it was). Alongside it came things like communicating with the client, understanding their needs, inventing solutions, designing software, and responding to the needs of its users.
Generally, problems related to software engineering were most often difficult not because of the engineering itself in the sense of technological issues, but rather because of cognitive problems. Understanding how things work, how they should work, and how they could be implemented in the best possible way.
The debt that accumulates when we skip that understanding is called cognitive debt: we focus so hard on finding answers that we forget how to identify the real problem and its root cause. Understanding - the absence of such debt - usually led to the best and simplest answers, and software created without cognitive debt was easy to maintain, extend, and modify. If we understand what the problems actually are, we also know how to find the answers.
Today, more and more coding - but also planning and testing - can be delegated to AI, and in most complex cases it will do it faster than we can. When we do this without much thought, we put aside the process of acquiring new project and technological knowledge. We delegate understanding the problem and finding answers to AI, thereby creating cognitive debt, and possibly technological debt as well, because we understand less and less, and it becomes harder to recognize what will be a problem in the future and what will not.
We are particularly vulnerable to this when everything from planning to review and deploy is handed over to LLMs, while in the meantime we jump to the next task, or switch between several tasks mid-way. Even before AI, anyone who frequently jumped between unfinished tasks and tickets should know that excessive context switching creates many problems. For AI this is difficult - but for us it may be even harder, because we can’t easily do /clear.
The consequences are obvious: the programmer understands the project less and less, notices potential problems less often, fails to understand real expectations, misses the best solutions, and loses cognitive engineering skills. The ability to design solutions and architecture independently declines, which ultimately drives a vicious cycle: by continuing to use AI as the main work tool, one becomes worse at using it effectively as a tool. As a result AI produces worse results while costing more - both financially and in the debt it creates.
In my opinion AI will not replace good engineers, but it may result in there being fewer of them.
Even when writing good code and solving problems well with LLMs, we can still create new problems - not only technological ones. Those have actually become less severe now that understanding new technologies is easier with AI.

So that this doesn’t sound like empty words - there are studies and examples behind these phenomena. This also isn’t something new that we’re only discovering today because of AI. The most popular example is GPS.
When GPS devices became widely available in our pockets - even before every phone had one - a very similar phenomenon appeared, and it can still be easily observed today.
Someone who regularly uses Google Maps and similar tools has noticeably worse spatial memory and cognitive abilities related to hippocampus-dependent mapping compared to people who navigate using maps, experience, or simply observation. Someone who frequently uses GPS typically has much worse orientation skills than someone who doesn’t.

Inuit communities where younger generations replaced traditional - difficult but reliable - navigation skills with GPS experienced many problems because of it. They were unable to use the technology in a way that didn’t lead to atrophy of their own skills, and once dependent on it, they sometimes died when they found themselves in difficult situations where the technology suddenly became unavailable or insufficient.
This isn’t new - the fact that technology gives something while taking something away has been discussed for over 40 years. I highly recommend Borgmann’s work on the topic from 1984.

The main lesson from that reading (for me) is this:
Never stop thinking about the details of how we achieve goals through means (technology - AI in this context), and never simply use devices blindly (use Linux). Many people perceive modern technology purely as a means to an end. That’s a very bad approach.
Technology and the Character of Contemporary Life

The same risk exists with AI if you include it in absolutely every element of work simply because you can - not because it’s actually the right tool at that moment for you to perform the work. Work that you understand and can justify yourself.
This is not a new problem, but it is becoming increasingly visible. Here’s the first article that appears when you google “AI cognitive debt”. It’s actually quite good.
How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt
An example from the project I work on, where I'm the tech lead of my team: I get a GitHub notification - "Józio (name anonymized) has requested your review". Nothing unusual, because everything that gets merged must have approval.
I open the PR and read the ticket. It sounds very simple - maybe 1–2 hours of work without AI, and that's how it was estimated (1 story point): add one event handler, one command, one command handler, plus two tests. Very simple things, because all of this is absolute basics in the project (CQRS, Rails Event Store). We have hundreds of these, and the developer who submitted the PR has done it dozens, maybe hundreds of times - he's been on the project longer than I have.
Then I look and see:
- All 6500 tests are red - finding the cause takes less than a second, because it's the same for every test: a wrong import of the command bus, the component used to trigger commands. The correct import appears hundreds of times across hundreds of files in the project.
- There is no command or command handler. The event handler performs actions that in CQRS should be done by commands - the first such instance in the project (as it turned out, the second: the same developer had pushed the same thing while I was away sick).
- The test mocks EVERYTHING.
- Every commit is co-authored or authored by Claude.
- The PR description is 30 lines long but contains zero information explaining what was broken or why conventions were violated - only pointless explanations of things visible at a glance in the “files changed” tab.
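For readers outside the CQRS world, the convention being violated can be sketched in a few lines of Ruby. All names here are illustrative, not from the real codebase: the event handler's only job is to translate an event into a command and hand it to the command bus; the actual state change belongs in the command handler.

```ruby
# Minimal in-memory command bus standing in for the real one.
class CommandBus
  def initialize
    @handlers = {}
  end

  def register(command_class, handler)
    @handlers[command_class] = handler
  end

  def call(command)
    @handlers.fetch(command.class).call(command)
  end
end

InvoiceIssued       = Struct.new(:order_id) # event
MarkOrderAsInvoiced = Struct.new(:order_id) # command

class MarkOrderAsInvoicedHandler
  def initialize(orders)
    @orders = orders
  end

  # The command handler is where the state change happens.
  def call(command)
    @orders[command.order_id] = :invoiced
  end
end

# The event handler only translates event -> command.
class OnInvoiceIssued
  def initialize(command_bus)
    @command_bus = command_bus
  end

  def call(event)
    @command_bus.call(MarkOrderAsInvoiced.new(event.order_id))
  end
end

orders = {}
bus = CommandBus.new
bus.register(MarkOrderAsInvoiced, MarkOrderAsInvoicedHandler.new(orders))
OnInvoiceIssued.new(bus).call(InvoiceIssued.new(42))
puts orders[42] # => invoiced
```

If `OnInvoiceIssued` wrote to `orders` directly, the code would still "work" - which is exactly why this kind of violation survives review when nobody actually reads the diff.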
This isn’t the first case like this and it’s not the only developer doing this. As a lead I have to deal with it and explain the same things over and over again. It’s tiring and sad, because I try to teach them something but instead I only see their level as programmers and engineers decline.


So how do we reduce cognitive debt? How do we keep technology from taking over our skills - the way the Monstars stole the talents of NBA players in Space Jam?
The book mentioned above (Technology and the Character of Contemporary Life) offers good philosophical-level advice, so I recommend it. I’ll share what I apply to myself - maybe it will help someone else too.
Of course, this also includes simply not doing the bad practices described above.
With the acceleration of development and code production, I try to slow down in the areas that previously had to be rushed to compensate for time-consuming coding: I spend more time on planning, reviewing, and testing. That way I stay familiar with what we - both I and others - are creating.
Code is cheap to write, but not cheap to understand.
Previously, programming was roughly 80% reading code and 20% writing it. Now it's closer to 90–10 (the split varies for those who still write a lot by hand, but that 10% is exactly what we should fight for). When both reading and writing start to fade away in favor of AI, that, in my opinion, is bad - and it increases cognitive debt.
For now I believe most of the work remains the same, but we can focus on different aspects to achieve the same results - usually faster. However, development speed has never been the main metric of good software. The number of lines of code produced daily does not equal business value, just like 100% test coverage does not mean everything is truly tested (not even tested well - simply tested; mutation testing says hello).
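The coverage point is easy to demonstrate with a hypothetical Ruby snippet (names are invented for illustration): a test that merely executes code reaches 100% line coverage, yet a mutation of the logic would survive it. That gap is exactly what mutation testing measures.

```ruby
def discount(price, vip)
  if vip
    price * 0.9
  else
    price
  end
end

# These calls execute every line, so a coverage tool reports 100% -
# but nothing is asserted, so mutating 0.9 to 1.9 would still "pass".
discount(100, true)
discount(100, false)

# Real tests pin the behavior; the mutant above would now be killed.
raise "vip discount wrong"  unless discount(100, true) == 90.0
raise "regular price wrong" unless discount(100, false) == 100
```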
I spend more time reading code than before, and more time doing manual testing.
Questioning initial solutions is also useful. If code becomes cheap and fast to produce, it’s worth disagreeing with the first solution proposed by AI - even after planning - just to explore ideas and make sure we truly understand what we’re building (to criticize something you need to understand it).
So in short:
- Slow down architectural reviews - do them twice.
- Never submit something to Code Review without fully understanding the code yourself.
- More tests, including manual testing (I think QA is becoming even more valuable now because it requires deep understanding of processes and critical thinking).
- Spend more time reviewing code (both your own PRs and those of others).
- Disagree with and challenge AI more often - sometimes just on principle. With good guards that keep your AI from being a "yes-man" (so it doesn't simply agree with every criticism either), this can produce very good results. Ask yourself: how would you do it? Why did Claude choose this approach? What are the drawbacks? Present those to it and see what happens.
- I also believe it’s important to practice coding without AI. I have my own pet projects where I do this and try to make them as different as possible from what I do at work. There I use AI only to better understand things, such as a new language.
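As an illustration of such guards, here is a hypothetical fragment of a CLAUDE.md-style instructions file - the wording is invented, not taken from any real project - that nudges the agent away from reflexive agreement:

```markdown
# Review guardrails (instructions for the agent)
- When I criticize your solution, evaluate the criticism on its merits;
  push back with arguments if you still think your approach is better.
- Never change code just because I expressed doubt - first restate the
  trade-offs of both options.
- When proposing a design, list at least one drawback and one alternative.
```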
And that’s it. I hope this gives someone something to think about and helps you not only maintain your level but continue developing.
It’s a difficult topic but also a very interesting one - we’re essentially entering territory that once belonged only to science fiction. So if someone disagrees or has something to add, I invite you to discuss it.