AI-Generated Code Creates New Wave of Technical Debt, Report Finds

AI-generated code is “highly functional but systematically lacking in architectural judgment”, a new report from Ox Security has found. In the report, released in late October and titled “Army of Juniors: The AI Code Security Crisis”, the AI application security (AppSec) company outlined 10 architecture and security anti-patterns commonly found in AI-generated code. The Ox team examined 300 open-source projects, 50 of which were wholly or partly AI-generated, and evaluated the architectural and security quality of the code. The identified anti-patterns occurred at high frequency in the vast majority of the AI-generated code.

To manage this risk, the Ox team argues for a new developer role. They recommend positioning AI as implementation support, freeing humans to focus on product management, architectural decisions, and strategic oversight. The report states that while AI excels at implementation, human creativity remains irreplaceable for breakthrough innovation.

On the security front, the team argues that manual code review is obsolete as a primary defense. Instead, organizations must build security requirements directly into their AI prompts and invest in new, autonomous security tools capable of keeping pace with AI’s coding velocity.

Ana Bildea reaches a similar conclusion about the problems inherent in AI-generated code, although she takes a more systemic viewpoint. In an article on Medium titled “The Hidden Technical Debt Inside Your Generative AI Stack”, Bildea writes that “Traditional technical debt accumulates linearly. You skip a few tests, take some shortcuts, defer some refactoring. The pain builds gradually until someone allocates a sprint to clean it up. AI technical debt is different. It compounds.”

Bildea claims that three main “vectors” generate AI technical debt: model versioning chaos (caused by the speed of code-assistant product evolution), code generation bloat (the same problem identified by Ox Security), and organizational fragmentation (independent groups with different models and different approaches to using them). Coupled with the speed of AI code generation, these vectors interact, causing the debt to grow exponentially:

“Model versioning chaos makes code generation bloat harder to detect. […] I’ve watched companies go from ‘AI is accelerating our development’ to ‘we can’t ship features because we don’t understand our own systems’ in less than 18 months.”

The solution she suggests is an enterprise governance approach: creating visibility, alignment, and lifecycle policies. Visibility and lifecycle management let a company know which models are installed, how they are being used, and how they are performing. Team alignment creates a common set of practices for the use of AI, building a shared mental model that enables collaborative debugging. Bildea says, “The uncomfortable reality is that most companies are optimising for the wrong metrics. They’re measuring AI adoption rates and feature velocity while ignoring technical debt accumulation.”
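To make the code-generation-bloat anti-pattern that both reports describe concrete, here is a minimal, hypothetical Python sketch (not taken from either report): two near-duplicate helpers of the kind an assistant tends to produce when each is requested in isolation, followed by the consolidated form a reviewer exercising architectural judgment would prefer.

```python
# Hypothetical illustration of code-generation bloat: near-duplicate
# helpers, each generated in isolation by a separate prompt.

def get_active_users(users):
    """Return users whose 'active' flag is set."""
    result = []
    for user in users:
        if user.get("active"):
            result.append(user)
    return result

def get_admin_users(users):
    """Return users whose 'admin' flag is set (same traversal, re-implemented)."""
    result = []
    for user in users:
        if user.get("admin"):
            result.append(user)
    return result

# A reviewer applying architectural judgment would consolidate the
# duplicated traversal behind a single, parameterised filter:

def filter_users(users, flag):
    """Return users whose given boolean flag is set."""
    return [user for user in users if user.get(flag)]
```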
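Ox Security’s recommendation to build security requirements directly into AI prompts could be implemented in many ways; the sketch below simply prepends a standing preamble of constraints to every code-generation request. The requirement list and the build_prompt helper are assumptions made for illustration, not taken from the report.

```python
# A minimal sketch of embedding security requirements into every prompt
# sent to a code assistant. The requirement list is illustrative, not a
# list endorsed by the Ox Security report.

SECURITY_REQUIREMENTS = """\
When generating code, always:
- validate and sanitise all external input
- use parameterised queries; never build SQL by string concatenation
- apply rate limiting to public API endpoints
- never log secrets or personally identifiable information
"""

def build_prompt(task_description: str) -> str:
    """Prepend the standing security requirements to a coding task."""
    return f"{SECURITY_REQUIREMENTS}\nTask:\n{task_description}"

if __name__ == "__main__":
    print(build_prompt("Write a Flask endpoint that looks up a user by id."))
```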
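Bildea’s visibility and lifecycle-policy recommendations amount, at a minimum, to keeping an inventory of which models are deployed, who owns them, and when they must be re-reviewed. The following is a minimal sketch under that assumption; ModelRecord and its fields are invented for illustration.

```python
# A hypothetical model inventory supporting visibility and lifecycle
# policies: record what is deployed, who owns it, and when it must be
# re-reviewed, then flag anything whose review has lapsed.

from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRecord:
    name: str          # e.g. an internal code-assistant deployment
    version: str
    owning_team: str
    review_due: date   # lifecycle policy: re-evaluate by this date

def overdue_models(registry: list[ModelRecord], today: date) -> list[ModelRecord]:
    """Return models whose scheduled review has lapsed."""
    return [m for m in registry if m.review_due < today]

registry = [
    ModelRecord("assistant-a", "2024.10", "platform", date(2025, 6, 1)),
    ModelRecord("assistant-b", "2025.03", "payments", date(2026, 1, 1)),
]

for model in overdue_models(registry, date(2025, 11, 1)):
    print(f"{model.name} {model.version} ({model.owning_team}) needs re-review")
```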
https://www.infoq.com/news/2025/11/ai-code-technical-debt/