AI coding tools move into performance tracking at enterprise level


A quiet shift is taking place inside large engineering teams. Writing code is no longer the only expectation. Using AI coding tools to help write that code is starting to matter just as much.

At JPMorgan Chase, that shift now appears to be part of how developers are assessed. The bank has begun pushing its engineers to use AI coding tools such as GitHub Copilot, not as an option, but as part of their day-to-day work. Internal systems track how often these tools are used, and developers are grouped into categories based on their usage.

According to Business Insider, engineers are labelled as “light” or “heavy” users depending on how much they utilise AI tools. The report suggests that this data may feed into internal performance tracking, implying that AI usage is starting to shape how developer output is assessed.

AI coding tools shift from optional to expected

This marks a shift from earlier phases of AI adoption in software teams. Over the past two years, tools such as Copilot have been introduced to speed up coding and reduce routine work, as well as to help junior developers get started. Many teams treated them as optional aids. That line is starting to blur.

At JPMorgan, developers appear to be under growing pressure to build skills with these tools. The report notes that the bank has set internal goals tied to AI usage, pushing teams to increase adoption rates. In effect, knowing how to work with AI may be becoming part of the job rather than an optional extra.

The move reflects a wider change across large companies. AI tools are no longer being tested at the edges of development workflows. They are moving into the core process, including areas like version control, testing, and deployment.

Faster output, but more issues to fix

That shift brings gains, but it also creates new pressure. Research cited by ITPro suggests that AI coding tools may speed up deployment cycles by around 45%. The same report says 69% of developers surveyed reported more issues in production when using AI-generated code, while another 58% raised concerns about the risks tied to that code.

These figures point to a trade-off that many teams are still trying to manage. AI can help developers write code faster, but it may also increase the time spent reviewing and fixing that code later. In many cases, the bottleneck shifts rather than disappears.

Inside a large bank like JPMorgan, that trade-off carries more weight. Financial systems operate under strict controls, audit trails, and security checks. Code quality is not just a technical issue; it ties directly to risk.

By pushing AI usage and tracking adoption, the bank appears to be making a bet. The assumption is that developers who use these tools more often may produce more output or move faster. Whether that leads to better systems over time is still an open question.

New pressure on developers

There is also a human side to this shift that is starting to show. Tracking tool usage adds a new layer of oversight. Developers are no longer judged only by what they build, but also by how they build it. Internal dashboards that show AI usage can act as a form of soft pressure. This may push teams to rely on AI even when it is not the best fit.

This raises questions about autonomy. Developers have long had some control over how they write and structure code. Standard tools exist, but there is often still room for personal workflow. When AI usage becomes a metric, that flexibility may shrink.

There are also concerns about how skill is measured. Heavy use of AI tools does not always mean deeper understanding. In some cases, it may hide gaps in knowledge, especially if developers rely on generated code without fully reviewing it. Over time, that could affect how teams maintain complex systems.

At the same time, ignoring these tools is not a simple option for most teams. AI-assisted coding is improving at a steady pace. It can handle routine tasks, suggest fixes, and help developers move through large codebases. In large teams, even small gains in speed can add up.

This leaves developers in a new position. They need to learn how to use AI tools well, but also when to step back and review what those tools produce. The skill goes beyond writing code. It includes managing a mix of human and machine input.

The move by JPMorgan may be one of the clearest signs so far that this balance is becoming part of formal engineering practice. What started as an experiment is now being tracked, measured, and, at least in some cases, may be shaping how teams view performance.

Other large firms are likely watching closely. If this model leads to faster delivery without a rise in critical issues, it may spread. If it leads to more bugs or harder-to-maintain systems, teams may need to adjust how they use these tools.

For now, one thing is becoming clear. AI coding tools are moving out of the “nice-to-have” category. In some environments, they are becoming a baseline expectation. That change is not just about tools. It is about how software is built, how developers are assessed, and how teams define productivity in the first place.

(Photo by Shamin Haky)

See also: When AI writes the code: Productivity gains and production pitfalls


AI News is powered by TechForge Media.