
Most comparisons of AI models rest on performance benchmarks, but a recent research report from Sonar takes a different approach: grouping models by their coding personalities and examining the code-quality downsides of each.
The researchers studied five different LLMs using the SonarQube Enterprise static analysis engine on over 4,000 Java assignments. The LLMs reviewed were Claude Sonnet 4, OpenCoder-8B, Llama 3.2 90B, GPT-4o, and Claude 3.7 Sonnet.
They found that the models had different traits, such as Claude Sonnet 4 being very verbose in its outputs, producing over 3x as many lines of code as OpenCoder-8B for the same problem.
Based on these traits, the researchers divided the five models into coding archetypes. Claude Sonnet 4 was the “senior architect,” writing sophisticated, complex code, but introducing high-severity bugs. “Because of the level of technical difficulty attempted, there were more of these issues,” said Donald Fischer, a VP at Sonar.
OpenCoder-8B was the “rapid prototyper”: the fastest and most concise of the models, which makes it well suited to proofs of concept, though at the cost of potential technical debt. It also had the highest issue density of the five, at 32.45 issues per thousand lines of code.
Llama 3.2 90B was the “unfulfilled promise”: its scale and backing imply it should be a top-tier model, but it achieved a pass rate of only 61.47%. Additionally, 70.73% of the vulnerabilities it created were “BLOCKER” severity, the most severe type of bug, which prevents testing from continuing.
GPT-4o was an “efficient generalist,” a jack-of-all-trades that is a common choice for general-purpose coding assistance. Its code wasn’t as verbose as the senior architect or as concise as the rapid prototyper, but somewhere in the middle. It also avoided producing severe bugs for the most part, but 48.15% of its bugs were control-flow mistakes.
“This paints a picture of a coder who correctly grasps the main objective but often fumbles the details required to make the code robust. The code is likely to function for the intended scenario but will be plagued by persistent problems that compromise quality and reliability over time,” the report states.
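That “fumbles the details” failure mode is easiest to see in miniature. A hypothetical Java sketch (mine, not an example from the report) of the kind of control-flow mistake the report counts: code that works for the intended scenario but gets a loop bound and an edge case wrong.

```java
import java.util.List;

public class ControlFlowExample {

    // Buggy sketch: works for the "intended scenario" (a non-empty list)
    // but the loop bound skips the last element, and an empty list is
    // never checked -- two classic control-flow mistakes.
    static int maxBuggy(List<Integer> xs) {
        int max = xs.get(0);                      // throws on empty input
        for (int i = 1; i < xs.size() - 1; i++) { // off-by-one: last element skipped
            if (xs.get(i) > max) max = xs.get(i);
        }
        return max;
    }

    // Robust version: explicit empty-input handling and correct bounds.
    static int maxFixed(List<Integer> xs) {
        if (xs.isEmpty()) {
            throw new IllegalArgumentException("empty input");
        }
        int max = xs.get(0);
        for (int i = 1; i < xs.size(); i++) {
            if (xs.get(i) > max) max = xs.get(i);
        }
        return max;
    }

    public static void main(String[] args) {
        List<Integer> xs = List.of(3, 1, 7);
        System.out.println(maxBuggy(xs)); // 3 -- the real max, 7, sits in the skipped slot
        System.out.println(maxFixed(xs)); // 7
    }
}
```

Both versions compile and run cleanly on typical inputs, which is exactly why static analysis rather than a pass/fail benchmark is needed to surface the difference.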
Finally, Claude 3.7 Sonnet was a “balanced predecessor.” The researchers found that it was a capable developer that produced well-documented code, but still introduced a large number of severe vulnerabilities.
Though the models did have these distinct personalities, they also shared similar strengths and weaknesses. The common strengths were that they quickly produced syntactically correct code, had solid algorithmic and data structure fundamentals, and efficiently translated code to different languages. The common weaknesses were that they all produced a high percentage of high-severity vulnerabilities, introduced severe bugs like resource leaks or API contract violations, and had an inherent bias towards messy code.
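The resource-leak weakness the report flags is a pattern static analyzers catch routinely. A minimal Java illustration (my sketch, not code from the study): a reader that leaks a file handle on the error path, versus the try-with-resources idiom that closes it on every path.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ResourceLeakExample {

    // Leaky sketch: if readLine() throws, close() is never reached and
    // the file handle leaks -- the kind of severe bug the report counts.
    static String firstLineLeaky(String path) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader(path));
        String line = reader.readLine(); // an exception here skips close()
        reader.close();
        return line;
    }

    // Safe version: try-with-resources closes the reader on every path,
    // including when an exception is thrown.
    static String firstLineSafe(String path) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            return reader.readLine();
        }
    }
}
```

The leaky version behaves identically on the happy path, which is why such bugs survive functional benchmarks and only show up under static analysis or load.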
“Like humans, they become susceptible to subtle issues in the code they generate, and so there’s this correlation between capability and risk introduction, which I think is amazingly human,” said Fischer.
Another interesting finding of the report is that newer models may be more technically capable, but are also more likely to generate risky code. For example, Claude Sonnet 4 has a 6.3% improvement over Claude 3.7 Sonnet on benchmark pass rates, but the issues it generated were 93% more likely to be “BLOCKER” severity.
“If you think the newer model is superior, think about it one more time because newer is not actually superior; it’s injecting more and more issues,” said Prasenjit Sarkar, solutions marketing manager at Sonar.
How reasoning modes impact GPT-5
The researchers followed up their report this week with new data on GPT-5 and how the four available reasoning modes—minimal, low, medium, and high—impact performance, security, and code quality.
They found that increasing reasoning yields diminishing returns on functional performance. Moving from minimal to low raised the model’s pass rate from 75% to 80%, but medium and high reached only 81.96% and 81.68%, respectively.
In terms of security, high and low reasoning modes eliminate common attacks like path-traversal and injection, but replace them with harder-to-detect flaws, like inadequate I/O error-handling. The low reasoning mode had the highest percentage of that issue at 51%, followed by high (44%), medium (36%), and minimal (30%).
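Path traversal, one of the attacks the higher reasoning modes reportedly eliminate, has a well-known shape. A minimal Java sketch (an illustration, not code from the study) of the vulnerable pattern and the usual normalize-and-check fix:

```java
import java.nio.file.Path;

public class PathTraversalExample {

    // Vulnerable sketch: resolving raw user input lets a value like
    // "../../etc/passwd" escape the intended base directory.
    static Path resolveUnsafe(Path baseDir, String userInput) {
        return baseDir.resolve(userInput);
    }

    // Safe version: normalize the result, then verify it is still
    // contained within the base directory before using it.
    static Path resolveSafe(Path baseDir, String userInput) {
        Path resolved = baseDir.resolve(userInput).normalize();
        if (!resolved.startsWith(baseDir.normalize())) {
            throw new IllegalArgumentException("path traversal attempt: " + userInput);
        }
        return resolved;
    }

    public static void main(String[] args) {
        Path base = Path.of("/srv/uploads");
        System.out.println(resolveSafe(base, "report.txt")); // stays under /srv/uploads
        System.out.println(resolveUnsafe(base, "../../etc/passwd")); // escapes the base dir
    }
}
```

The contrast mirrors the report’s trade-off: this class of flaw is mechanical to detect and fix, whereas the I/O error-handling gaps that replaced it are far harder for a scanner, or a reviewer, to spot.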
“We have seen the path-traversal and injection become zero percent,” said Sarkar. “We can see that they are trying to solve one sector, and what is happening is that while they are trying to solve code quality, they are somewhere doing this trade-off. Inadequate I/O error-handling is another problem that has skyrocketed. If you look at 4o, it has gone to 15-20% more in the newer model.”
Bugs showed a similar pattern: control-flow mistakes decreased beyond minimal reasoning, while advanced bugs such as concurrency and threading issues increased with the level of reasoning effort.
“The trade-offs are the key thing here,” said Fischer. “It’s not so simple as to say, which is the best model? The way this has been viewed in the horse race between different models is which ones complete the most number of solutions on the SWE-bench benchmark. As we’ve demonstrated, the models that can do more, that push the boundaries, they also introduce more security vulnerabilities, they introduce more maintainability issues.”