A study by METR found that experienced software developers took 19% longer to complete tasks when using AI coding tools. Developers had anticipated a 24% speedup, and even after experiencing the slowdown they still believed the tools had made them about 20% faster. The study followed 16 developers working on 246 real-world coding issues, primarily with Cursor Pro and Claude Sonnet. Developers accepted less than 44% of AI-generated code, often only after major revisions, and spent an estimated 9% of their time reviewing and cleaning AI output, pointing to real inefficiencies in how the tools fit into their workflow.
When developers are allowed to use AI tools, they take 19% longer to complete issues—a significant slowdown that goes against developer beliefs and expert forecasts.
This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.
The study showed developers accepted less than 44% of AI-generated code, with most participants reporting they had to make major changes to clean up the code before using it.
Roughly 9% of developers' time was spent reviewing or cleaning AI output, indicating significant inefficiencies in their workflow when using these tools.
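To give a rough sense of scale, the sketch below translates these percentages into completion times for a hypothetical 60-minute task (the baseline duration is an assumption, not from the study), treating the expected and perceived speedups as reductions in completion time.

```python
# Illustrative arithmetic only. The 60-minute baseline is hypothetical;
# the 19%, 24%, and 20% figures are the percentages reported above.

baseline_minutes = 60.0  # assumed time to finish a task without AI tools

measured = baseline_minutes * (1 + 0.19)   # measured: tasks took 19% longer with AI
expected = baseline_minutes * (1 - 0.24)   # forecast: developers expected to be 24% faster
perceived = baseline_minutes * (1 - 0.20)  # self-report: developers felt 20% faster

print(f"Without AI:         {baseline_minutes:5.1f} min")
print(f"Measured with AI:   {measured:5.1f} min")
print(f"Expected with AI:   {expected:5.1f} min")
print(f"Perceived with AI:  {perceived:5.1f} min")
```

On these assumptions, a task developers expected to finish in about 46 minutes and felt they finished in about 48 actually took around 71, which is the perception gap the study highlights.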
#ai-in-software-development #developer-productivity #coding-tools #research-findings #task-management