I’ve written numerous automation tools over the past two years with AI assistance, evolving from simply copying and pasting generated code from ChatGPT to letting AI plan full features using the plan/exec mode of Open Code.
Something that hasn’t changed, or in fact has become more important than before - I am reading more code than ever. I still need to understand the behavior of the programs I own, and I constantly ask LLMs questions about the decisions they make. I really don’t think we can let AI take care of coding on its own; an engineer still needs to own every line of code, which makes learning to code more valuable, not less.
It’s true that we’ve come a long way from the early days of buggy, heavily hallucinated code from the initial models to the very good code generation we see with newer ones. These capabilities are bundled together into fantastic coding agents - Codex and Open Code are tools I use quite regularly to write most of my code these days.
But the fact remains: we still aren’t in the era of agentic coding, where you can simply let AI take the wheel and go to sleep. Far from it. If you let an AI run a long task, its output usually degrades as the work progresses and the context grows. Researchers have described this phenomenon as “lost in the middle,” but it’s better known as context rot.
Now, people do claim we’ve found solutions to that - simply pass the last context to the next agent (sub-agents) and run a recursive loop. While this may work for LLM enthusiasts and hobbyists, on serious real-world business problems it can not only produce a massive, unmaintainable bundle of code but also skyrocket the cost of development. In my opinion, it is not a viable solution.
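To make the pattern concrete, here is a minimal sketch of that sub-agent handoff loop, assuming a generic chat-completion call. The `call_llm` function and the prompt wording are hypothetical placeholders, not the API of any particular tool:

```python
# Hypothetical sketch of the "hand off context to a fresh sub-agent" pattern.
# call_llm stands in for any chat-completion API; it is not a real library call.

def call_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call (hosted or local model)."""
    raise NotImplementedError

def run_task_with_subagents(task: str, max_handoffs: int = 5) -> str:
    summary = ""  # compressed context carried between agents
    for _ in range(max_handoffs):
        prompt = (
            f"Task: {task}\n"
            f"Summary of work so far: {summary or 'none'}\n"
            "Continue the work. End with DONE or a short handoff summary."
        )
        output = call_llm(prompt)  # each sub-agent starts with a fresh context window
        if "DONE" in output:
            return output
        # Compress progress so the next agent starts small again
        summary = call_llm(f"Summarize this progress in a few sentences:\n{output}")
    return summary  # best effort once the handoff budget is exhausted
```

Every handoff compresses away detail and pays for another round of generation, which is exactly why quality tends to drift and cost climbs as the loop runs.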
Another thing we should be worried about is our own context window - our brain. The brain simply discards information we don’t use regularly. Hence, over-reliance on AI to “think” could be disastrous, slowly eroding our problem-solving ability.
Accountability still can’t be offloaded to AI. AI isn’t the one getting its ear chewed off when production goes down because you skipped your due diligence and let it take the wheel.
Maybe in a couple of years, or even a year from now, I will be proven completely wrong and AI will write, deploy, and maintain perfect code. I wouldn’t bet on that, but let’s say it happens. Are we planning to live with messy code? I’m not, and besides, I like my ability to calculate in my head even though everyone has a calculator in their pocket. Similarly, I’d like to keep being able to write small scripts, craft commands, and build things with my own hands, because I find it way too cool.
I would love to talk more about it.