Insights from Dr Elizabeth Maxwell, Global Director of Mainframe Modernisation at BMC and Dr Xavier Eraso, Mainframe DevOps Expert at BMC
Integrating mainframe systems into modern CI/CD pipelines helps to accelerate application delivery cycles while reducing operational and governance risk. Historically, these central architectures operated in relative isolation, prioritising stability above all else.
“The pervasive factor is probably that the mainframe is a centralised system with a reputation for high reliability, availability, and serviceability (RAS),” explains Dr Elizabeth Maxwell, Global Director of Mainframe Modernisation at BMC. Because of this hard-earned reputation, engineering departments viewed any change to established processes with deep suspicion.
This cautious mindset favoured the Waterfall approach to application development, with sequential, heavily gated stages designed to mitigate any risk of downtime.
“Waterfall meant that application innovations were only deployed after a highly considered process that took considerable time to achieve. Each stage needed to be completed to a high specification; this created drag on the release cycles,” notes Dr Maxwell.
Consequently, rolling out new capabilities was a slow endeavour: stakeholders perceived the final output as high quality, but new features reached the market at a crawl. Today, business units demand faster innovation cycles to remain competitive, rendering these legacy workflows a liability when speed is required.
“But today, much faster innovation is required, and so the legacy approach is not ‘fit for purpose’ to maintain the speeds the business needs,” says Dr Maxwell. Recognising this necessity, early innovators began addressing process modernisation over a decade ago.
Bridging the generational engineering divide
A major factor accelerating this modernisation is a demographic transition occurring across enterprise development teams. As legacy skills leave the workforce, companies are forced to adapt their operational models to suit the incoming talent pool.
“Then, as the demographics of the population supporting mainframe application development started to change – with baby boomers retiring and a new mindset being established – we found that the way we shared information became more collaborative. As a society influenced by the ‘Internet Revolution,’ we saw a change in mindset occurring,” observes Dr Maxwell.
Younger engineers now enter the workforce with vastly different expectations regarding tooling and workflows. They are educated in a decentralised approach, writing smaller modules and using open-source toolchains to automate development. Updating pipelines requires carefully balancing these two distinct groups to maintain operational harmony.
“If we look at the adoption of CI/CD on the mainframe, we need a blend of technology, process, and culture,” says Dr Maxwell. She points out that this tension is sometimes called ‘the power struggle’, a modern business dynamic that causes internal friction.
To resolve these internal dynamics, companies are urged to take an iterative path rather than attempting sweeping overhauls. “We see that organisations adopting an agile approach – taking small steps, regularly assessing progress, and ‘failing fast’ with timely adjustments – are able to achieve positive outcomes,” Dr Maxwell states.
Success depends heavily on mutual respect across the engineering department. “Having the early-in-career and the later-in-career employees working in collaboration, and each respecting each other’s superpowers, is important for success,” she notes.
“We term this the ‘Generational Bridge’,” explains Dr Maxwell. Later-in-career developers understand the legacy code and why it was designed a particular way in the context of the surrounding applications, while early-in-career developers excel at rapid iteration and modern efficiency practices. Combining the two offers a practical way forward for the entire engineering department.
Standardising the open ecosystem
The industry is currently experiencing a cross-pollination of distributed software practices into legacy environments, enabling faster testing without sacrificing quality. This integration is largely possible because the software ecosystem has opened up via webhooks and APIs to include applications not traditionally associated with central architecture, an operational paradigm now commonly called the ‘Open Ecosystem’.
“Yes, companies can standardise testing, security, and quality checks across mainframe and distributed platforms,” says Dr Xavier Eraso, Mainframe DevOps Expert at BMC.
However, technical architects are cautioned against oversimplifying the distributed landscape when planning these integrations. “Distributed systems are not a homogeneous technology stack. They span a wide range of platforms, programming languages, and architectural patterns, each driven by different business needs and constraints,” explains Dr Eraso.
“For this reason, it is essential to distinguish practices from tools,” he states. The methodology, rather than the specific software, is what drives consistent results across environments.
“Many of the practices commonly associated with distributed environments – such as extensive automated testing – are not primarily the result of superior tooling, but rather a consequence of high-frequency delivery models,” Dr Eraso points out. “Teams delivering changes daily or weekly simply cannot rely on manual testing without introducing unacceptable risk,” he continues.
This operational reasoning applies equally to core business applications running on legacy infrastructure. “The same principle applies to mainframe environments. By adopting agile and iterative practices, mainframe applications naturally create a demand for stronger testing capabilities and faster feedback loops,” notes Dr Eraso.
The technical barriers to this integration are largely eliminated in modern environments. Modern mainframe toolchains already expose the CLIs, APIs, and webhooks necessary to integrate seamlessly into automated CI/CD pipelines alongside distributed platforms.
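Webhook integration of this kind typically relies on signed payloads so that a pipeline only acts on genuine events. The sketch below shows the common HMAC-SHA256 pattern (used, for example, by GitHub-style webhooks); the secret, event payload, and field names are illustrative assumptions, not any vendor's actual API.

```python
import hashlib
import hmac


def sign_payload(secret: bytes, payload: bytes) -> str:
    """Compute an HMAC-SHA256 signature in the 'sha256=<hex>' webhook convention."""
    return "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()


def verify_webhook(secret: bytes, payload: bytes, signature: str) -> bool:
    """Validate an incoming event; compare_digest guards against timing attacks."""
    return hmac.compare_digest(sign_payload(secret, payload), signature)


# Example: a CI listener validating a push event before starting a build.
secret = b"pipeline-shared-secret"  # illustrative value
event = b'{"ref": "refs/heads/main", "app": "mainframe-billing"}'
sig = sign_payload(secret, event)
print(verify_webhook(secret, event, sig))  # True for an untampered payload
```

A tampered payload would fail the same check, so the pipeline simply ignores it rather than triggering a build.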
“The limiting factor is rarely technology; it is a matter of mindset, organisational alignment, and agile maturity,” Dr Eraso concludes. Once these foundations are securely in place, central systems can seamlessly adopt the proven practices that drive reliability in distributed environments.
Preserving operational resilience and ROI
When accelerating deployment through new toolchains, preserving system stability remains a top priority for any platform engineering lead. “Having the right process, with transparency built-in, supports maintaining reliability and operational stability,” says Dr Maxwell.
Automating tedious elements introduces stronger governance into the development lifecycle and frees up valuable engineering time. By building pipelines that automatically handle monotonous checks, programmers can spend their time developing and testing code rather than carrying out manual quality checks and peer reviews. This automation directly benefits unit testing, establishing a clear audit trail for code coverage and ensuring consistency across all releases.
“In freeing the developer to employ their experience and intellect on high-cognitive tasks, we not only cover monotonous tasks, but we also increase the testing coverage and so go faster, while ensuring the optimal testing is occurring with the least human intervention,” highlights Dr Maxwell regarding the compound benefits.
Adopting these technologies frees engineering departments from tasks that automation can easily handle, but the implementation mechanics must remain practical and intuitive, ideally encapsulated within an IDE to reduce friction. Automation proves most valuable where tooling actively removes friction from the day-to-day workflow.
“Another essential factor in successfully adopting new practices is relying on proven, well-established technologies,” advises Dr Eraso. Upgrading workflows should avoid the unnecessary destruction of current, functioning assets.
“Moving from manual to automated processes should not be synonymous with a ‘rip and replace’ approach,” he notes. Every company possesses specific internal expertise and foundational legacy systems that should be leveraged to meet automation goals. Enterprise modernisations must consistently deliver return on investment and meet concrete business expectations.
Breaking silos for shared accountability
Modern orchestration requires strict alignment between application development and IT operations to function effectively. “In addition, adopting CI/CD pipelines at the enterprise level requires closer collaboration between AppDev and Ops teams,” highlights Dr Eraso.
Historically, these teams operated entirely apart and interacted only when resources were required or during active system incidents. “CI/CD moves this dynamic toward building together and sharing accountability,” says Dr Eraso. Both departments must participate actively in the design, administration, and maintenance phases of the pipeline. This close collaboration ensures high reliability while vastly boosting production efficiency.
The technological boundaries isolating central systems from the rest of the IT estate have eroded. “What has changed is that the mainframe is no longer a closed ecosystem; it can now participate in an open CI/CD ecosystem alongside distributed platforms, relying on common patterns such as CLIs, APIs, and webhooks,” explains Dr Eraso.
The ability to utilise Git as a Source Code Manager (SCM) alongside applications like SonarQube for quality assurance bridges the remaining technical gaps between departments. “As a result, mainframe CI/CD can be orchestrated in the same way as distributed CI/CD on modern platforms such as GitHub Actions, GitLab CI, and Azure DevOps,” notes Dr Eraso.
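As an illustration of that orchestration, a workflow on a platform such as GitHub Actions might look like the sketch below; the build script and job names are placeholders for whatever wrapper a team puts around its mainframe toolchain, not a specific product's syntax.

```yaml
name: mainframe-ci
on:
  push:
    branches: [main]

jobs:
  build-and-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4        # Git as the SCM for the mainframe sources
      - name: Build and unit test
        run: ./ci/build-mainframe.sh     # placeholder wrapper around the vendor CLI
      - name: SonarQube quality scan
        run: sonar-scanner               # assumes the scanner CLI is on the runner
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
```

The same shape translates to GitLab CI or Azure DevOps, which is precisely the point: the mainframe stages sit in the pipeline like any distributed component.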
Bringing legacy environments into modern workflows allows companies to systematically review their entire software delivery lifecycle. “Adopting CI/CD on the mainframe can therefore be an opportunity for organisations to reassess and challenge their existing distributed processes,” he says.
As central platforms evolve, the core foundations of early distributed pipelines also require continuous updates and refinements. “Rather than being a challenge, mainframe CI/CD should be seen as an opportunity for organisations to refine and improve their overall CI/CD practices,” Dr Eraso concludes.
To explore how to implement these strategies and observe hands-on demonstrations showcasing how to use the AMI DevX toolset within an integrated pipeline, register for BMC’s upcoming webinar and hands-on workshop here.



