AI discovers bugs faster than teams can respond


Software flaws can lurk for months, sometimes years, before they surface. AI bug discovery has the potential to change that: new AI systems are finding bugs at a pace developers may struggle to match.

Recent reporting by The Wall Street Journal says AI models can scan large codebases and in some cases generate working exploits. One example is a vulnerability in OpenBSD that remained hidden for 27 years before AI tools helped uncover it.

According to the Journal, some vulnerabilities identified by AI are being turned into working exploits in less than a day, leaving little time for teams to assess the impact and test fixes before patches go out.

AI has the potential to compress the time-to-exploit timeline. What used to be a race measured in weeks is now sometimes measured in hours.

AI combines multiple steps: scanning code for weaknesses, suggesting ways those weaknesses could be exploited, and in some cases generating proof-of-concept attack code. The speed of the jump from detection to exploitation is a chief concern among security experts, because the same tools that help developers find flaws can also lower the barrier for attackers.

Open-source maintainers under strain

Projects that rely on small groups of maintainers are seeing more vulnerabilities and more reported issues, some of them generated or assisted by AI tools. The volume can be hard to manage because each report still needs to be reviewed and validated before it is fixed. False positives add to the burden.

Maintainers are also dealing with a change in expectations. When bugs are found faster, users expect fixes just as quickly. That is not always realistic, especially for volunteer-driven projects.

The result is a potentially growing gap between the rate of discovery and the rate of response. Over time, that gap can turn into a backlog of known but unresolved issues – what many teams already refer to as security debt. Teams must decide which issues to fix first, how to handle large volumes of reports, and how to avoid burnout among developers and security staff.

Some teams adopt AI tools on the defensive side to prioritise vulnerabilities, suggest patches, and automate parts of testing and validation. But AI tools can introduce new errors, and they require oversight. Human review remains an important part of the process.
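As a rough illustration of the prioritisation step, a triage pass might rank the backlog by risk so reviewers see the most dangerous reports first. The sketch below is illustrative only: the fields, weights, and report names are assumptions, not any specific vendor's scoring model.

```python
from dataclasses import dataclass

@dataclass
class Report:
    """A vulnerability report awaiting triage (fields are hypothetical)."""
    id: str
    severity: float          # e.g. a CVSS-style base score, 0.0-10.0
    exploit_available: bool  # a working exploit already exists
    internet_facing: bool    # affected component is reachable externally

def triage_score(r: Report) -> float:
    # Weight raw severity up when an exploit exists or the component
    # is exposed; these multipliers are purely illustrative.
    score = r.severity
    if r.exploit_available:
        score *= 2.0
    if r.internet_facing:
        score *= 1.5
    return score

def prioritise(reports: list[Report]) -> list[Report]:
    # Highest-risk reports first, so limited review time goes there.
    return sorted(reports, key=triage_score, reverse=True)

backlog = [
    Report("CVE-A", severity=9.1, exploit_available=False, internet_facing=False),
    Report("CVE-B", severity=6.5, exploit_available=True, internet_facing=True),
    Report("CVE-C", severity=4.0, exploit_available=False, internet_facing=True),
]
print([r.id for r in prioritise(backlog)])  # → ['CVE-B', 'CVE-A', 'CVE-C']
```

Note that the mid-severity report with a live exploit outranks the nominally more severe one, which is the point of triaging on exploitability rather than severity alone.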

AI-assisted security pipelines

Instead of treating security as a separate step, teams are starting to integrate it into the development pipeline. That includes continuous scanning during development and automated checks during builds and deployment. It also means faster feedback loops for developers.
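A minimal sketch of what one of those automated checks might look like is a gate that fails the build when a scan turns up validated high-severity findings. The scanner output format, field names, and threshold here are all assumptions for illustration:

```python
# Hypothetical scanner output: each finding carries a severity score
# (0-10) and a flag for whether a human or tool has validated it,
# so unconfirmed findings (possible false positives) don't block builds.
FINDINGS = [
    {"id": "F-101", "severity": 8.8, "validated": True},
    {"id": "F-102", "severity": 3.2, "validated": True},
    {"id": "F-103", "severity": 9.5, "validated": False},  # unconfirmed
]

def gate(findings, threshold=7.0):
    """Return IDs of validated findings severe enough to block a build."""
    return [f["id"] for f in findings
            if f["validated"] and f["severity"] >= threshold]

blockers = gate(FINDINGS)
if blockers:
    print(f"Build blocked by: {blockers}")
    # In a real pipeline this branch would exit non-zero to fail the CI job.
```

Running a check like this on every build is what turns faster discovery into faster feedback, rather than a larger backlog.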

AI-driven bug discovery does not mean developers are losing control, but it does change the environment they work in. Faster discovery means less time to react and more issues to manage. It also raises questions about responsibility. If AI tools are used to find or fix bugs, who is accountable when something goes wrong?

(Photo by Riku Lu)

See also: Meta uses AI agents to help developers understand codebases

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events, click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.