AI-Generated Code Might Be More Maintainable Than You Think
I recently came across an interesting preprint titled "Echoes of AI: Investigating the Downstream Effects of AI Assistants on Software Maintainability". Before diving into the findings, a quick caveat: this paper is currently a preprint and has not yet been peer-reviewed. However, the methodology and scale make it a significant contribution to the ongoing debate about AI in software engineering.
Real Pros, Real Code
The first thing that got my attention was the participant pool. Unlike many software engineering experiments that rely heavily on students, 92.2% of the 151 participants in this study were professional developers.
This high percentage of professionals is crucial. It means the results aren't just theoretical observations from an academic setting; they are highly likely to apply to professional teams building software for a living today. The study also involved a realistic scenario: developers had to manually evolve code previously written by someone else, a task that mirrors the daily reality of the industry.
Software Maintenance Costs
The other thing that drew my interest was the second phase of their trial, which focuses on maintainability. We have seen plenty of research on the impact of AI assistance on productivity when writing code. This is the first study I have seen that also looks at what happens to the code after it has been created.
The majority of the Total Cost of Ownership for software lies in maintaining the code, not creating it, with some estimates attributing around three quarters of costs to maintenance. That makes sense: you only write code once, but you maintain it for the rest of its useful life. You need to regularly apply security updates, and you need to refactor the code to adapt to changes in your business processes, in your architecture, and so on.
While everyone is getting excited about AI agents making code generation more efficient, I know many professionals are wondering about the impact on maintenance costs. That's what made this paper interesting to me: this study specifically targets that concern by estimating the "downstream" impact of AI on maintainability.
The Verdict: Maintainability is Safe
The study's findings are reassuring for those of us integrating AI into our development workflows. The researchers found no statistically significant difference in the maintainability of code created with or without AI assistance.
Specifically, when a second developer was asked to evolve the code (the "downstream" phase), there was no statistically significant difference in completion time or code quality between the AI-generated and human-generated starting points. In short: using AI to build the software didn't make it harder for the next person to fix or upgrade it.
Experience Matters
Another interesting nuance in the data is that habitual users of AI assistance benefit more from it. In the creation phase, habitual AI users showed an estimated 55.9% speedup, compared to only 30.7% across all developers in the sample. Perhaps even more interestingly, a Bayesian analysis suggested that code originally co-developed by these habitual users actually saw a small improvement in Code Health (a code quality metric developed by CodeScene in Sweden) during the maintenance phase.
AI is a tool, not a magic wand. Like any complex tool, we get more benefit from it as we learn to use it effectively. Skilled developers who know how to wield AI assistants aren't just moving faster, they also appear to be producing code that is slightly cleaner for the next person to handle.
This confirms a view we hold strongly here at MemCo: using AI to assist software development is a sound business decision. Developers are more efficient, and, as this study shows, code quality does not suffer, provided appropriate professional best practices are in place.
Spark, our first product, makes your AI coding assistant more efficient, saving you time and tokens, with low-latency retrieval from a curated shared memory. If an agent has encountered the same problem, you don't need to solve it again. Get it for free at spark.memco.ai.
The preprint "Echoes of AI: Investigating the Downstream Effects of AI Assistants on Software Maintainability" is available on arXiv.


