Mainframe data, historically accessed via built-from-scratch COBOL applications, is now more likely to be consumed by newer Web and mobile applications. Developers must therefore constantly modify mainframe code to accommodate these non-mainframe end-user applications. The result is faster, more frequent mainframe development cycles, but admittedly, the mainframe’s culture, tools and processes have not always been the most intuitive or agile, creating obstacles for modern DevOps teams.

One approach is to migrate off the mainframe. However, most organizations have discovered this rarely makes business sense: converting customized applications that hold decades of business logic is expensive and time-consuming, and the target platform will almost certainly be less reliable, less performant, less secure and less resource-efficient. Given this reality, smart money is reinvesting in the mainframe, especially when it comes to evolving it for modern DevOps. Doing so not only safeguards valuable mainframe resources, but also keeps pace with the growing digital demands on the platform. What are the keys to doing this?

Mainstreaming mainframe environments: At the most basic level, organizations need tools that let the antiquated mainframe “green screen” fade into the past. Distributed and mainframe development teams alike need to be able to use the tools of their choice across the entire IT stack, whether they’re developing a brand-new mobile app or tweaking the underlying mainframe code that supports it. This becomes possible when mainframe software is delivered in the same familiar Eclipse or Web-based environments and integrates smartly with preferred DevOps tools.


A key advancement enabling this new reality is visualization. Years ago, mainframe developers were solely responsible for one or two application systems residing on a single mainframe platform, and they maintained deep familiarity with the inner complexities and idiosyncrasies of those applications over time. Today, these developers are retiring en masse, and their application work is often neither easily understood nor well documented. Modern developers must pick up the baton, advancing applications that span numerous distributed systems before ultimately landing on the transactional mainframe. Lacking that hard-won mainframe fluency, they find it extremely difficult and time-consuming to understand the impact a mainframe code change will have on other dependent applications, as well as on the overall customer experience.

To address this challenge, visualization capabilities give developers intuitive insight into application relationships, runtime behaviors and potential problems in programming logic flows. Developers can head off problems by understanding, in advance, the implications of the coding changes they’re about to make; in the past those implications were rarely anticipated and often forced recoding and retesting that ate up person-hours and slowed promotion to production.
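To make the idea concrete, here is a minimal sketch of the kind of impact analysis such visualization tools perform under the hood: build a call graph of programs, then walk the reverse edges to find everything a change could ripple into. The program names and graph are purely illustrative assumptions, not any vendor’s data model.

```python
from collections import deque

# Hypothetical call graph: each key calls the programs in its list.
# Real tools derive this by scanning COBOL CALL statements, JCL and
# runtime traces; the names here are illustrative only.
CALL_GRAPH = {
    "WEB-API":    ["ACCT-INQ"],
    "MOBILE-API": ["ACCT-INQ", "PAY-POST"],
    "ACCT-INQ":   ["CUST-MAST"],
    "PAY-POST":   ["CUST-MAST", "GL-UPDT"],
}

def impact_set(changed_program: str) -> set:
    """Return every program that directly or transitively calls
    `changed_program`, i.e. everything a change could affect."""
    # Invert the graph so we can walk from callee back to callers.
    callers = {}
    for caller, callees in CALL_GRAPH.items():
        for callee in callees:
            callers.setdefault(callee, []).append(caller)

    affected, queue = set(), deque([changed_program])
    while queue:
        program = queue.popleft()
        for caller in callers.get(program, []):
            if caller not in affected:
                affected.add(caller)
                queue.append(caller)
    return affected

# Changing the customer master program touches every front end above it.
print(impact_set("CUST-MAST"))
# {'ACCT-INQ', 'PAY-POST', 'WEB-API', 'MOBILE-API'} (order may vary)
```

A visualization tool presents this same reachability information graphically, so a developer sees at a glance that touching one shared program affects both the Web and mobile front ends.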

Adopting source-code management for agile development: Outdated SCM systems were designed to support the waterfall methodologies popular in the 1970s, 1980s and 1990s. When agile gained popularity in the early 2000s, mainframe teams sat out the transition, hampered by their SCM and further isolated from agile and DevOps efforts.

Modern mainframe SCM systems, however, now offer capabilities like concurrent development on a single system, integration with Continuous Delivery, and support for both mainframe and non-mainframe codebases. Not only do these capabilities let DevOps teams achieve unprecedented speed and agility in mainframe development, they make it easier for mainframe and non-mainframe experts to toggle between distributed and mainframe platforms, regardless of which one is their “home field.”
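As a rough illustration of what “one SCM for both codebases” means in practice, the sketch below routes each changed file in a commit to an appropriate build pipeline. The file extensions and pipeline names are assumptions for the example, not any particular product’s interface.

```python
# Map file extensions to the pipeline that should build them.
# Extensions and pipeline names are illustrative assumptions.
PIPELINES = {
    ".cbl":  "mainframe-compile-and-test",   # COBOL source
    ".jcl":  "mainframe-compile-and-test",   # batch job definitions
    ".java": "distributed-build",
    ".ts":   "distributed-build",
}

def pipelines_for_commit(changed_files):
    """Decide which CI pipelines a mixed mainframe/distributed
    commit should trigger, based on the files it touches."""
    triggered = set()
    for path in changed_files:
        for ext, pipeline in PIPELINES.items():
            if path.endswith(ext):
                triggered.add(pipeline)
    return triggered

# One commit touching both sides of the stack triggers both pipelines.
print(pipelines_for_commit(["src/app/Login.java", "cobol/ACCTINQ.cbl"]))
# {'distributed-build', 'mainframe-compile-and-test'}
```

The point is that a single commit spanning both worlds flows through one SCM and one delivery process, rather than two disconnected toolchains.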

Easing mainframe testing and ongoing performance management: For large enterprises, mainframe applications are critical to digital business success, so QA and testing for them must be both rigorous and fast. Visualization lets QA and testing professionals see a mainframe application’s runtime activity in production at a high level, then “drill down” to understand how the mainframe application itself (as well as dependent applications) is affected. QA and testing teams can discover and resolve mainframe-specific issues within minutes instead of hours.
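The high-level-then-drill-down workflow amounts to aggregating runtime measurements at one level and expanding them at the next. Here is a minimal sketch of that idea; the sample applications, transactions and timings are invented for illustration.

```python
from collections import defaultdict

# Illustrative runtime samples: (application, transaction, elapsed ms).
SAMPLES = [
    ("ACCTINQ", "balance",   120), ("ACCTINQ", "history",   480),
    ("PAYPOST", "payment",    95), ("PAYPOST", "reversal", 2100),
]

def rollup(samples):
    """High-level view: total elapsed time per application."""
    totals = defaultdict(int)
    for app, _txn, ms in samples:
        totals[app] += ms
    return dict(totals)

def drill_down(samples, app):
    """Detail view: per-transaction timings for one application,
    slowest first."""
    return sorted(((txn, ms) for a, txn, ms in samples if a == app),
                  key=lambda t: -t[1])

print(rollup(SAMPLES))                  # spot the hot application...
print(drill_down(SAMPLES, "PAYPOST"))   # ...then drill into its transactions
```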

One European bank provides an excellent example of the benefits of visualization. The bank noticed that a particular batch process was completing later and later; had the trend continued, the process would not have finished before the open of business. With visualization capabilities, the bank was able to quickly zero in on a database call that was being repeated hundreds of thousands of times a night. A relatively simple code fix reduced that egregious performance bottleneck to a single efficient call. The bank avoided a potentially disastrous technology snafu while saving hours of frantic troubleshooting for both development and operations teams.
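The bank’s fix follows a familiar data-access tuning pattern: a lookup repeated once per record versus a single set-based call. The sketch below shows the shape of the problem and the repair; the helper functions and data are stand-ins (the bank’s actual code would have been COBOL against a mainframe database).

```python
# Toy stand-ins for database calls; real code would hit DB2 or similar.
RATES = {"C1": 0.05, "C2": 0.07}

def fetch_rate(customer_id):
    """One database round trip per call (the expensive pattern)."""
    return RATES[customer_id]

def fetch_rates(customer_ids):
    """One round trip for the whole set (the efficient pattern)."""
    return {cid: RATES[cid] for cid in customer_ids}

# Before: a lookup inside the loop; with hundreds of thousands of
# records a night, the batch window keeps stretching.
def price_batch_slow(records):
    return [(r["id"], fetch_rate(r["customer"])) for r in records]

# After: collect the keys, make a single call, join in memory.
def price_batch_fast(records):
    rates = fetch_rates({r["customer"] for r in records})
    return [(r["id"], rates[r["customer"]]) for r in records]

records = [{"id": 1, "customer": "C1"}, {"id": 2, "customer": "C2"}]
assert price_batch_slow(records) == price_batch_fast(records)
```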

Keeping an eye on capacity utilization: As more non-mainframe applications tap into mainframe environments, organizations need to pay closer attention to performance and capacity utilization. Every new query and transaction originating outside the mainframe adds to the mainframe workload. If even one of those non-mainframe/mainframe interfaces is badly designed, it can monopolize a mainframe resource and adversely impact many other critical mainframe (and, by extension, non-mainframe) systems. The likelihood of this happening grows as more and more distributed-trained developers write code that accesses mainframe programs and data.

Today, new tools enable developers to identify mainframe code that may be consuming more mainframe computing power than necessary. Smartly designed interfaces then let Ops teams generate an issue or task, complete with all the known information about the problem, and send it directly to the responsible developer. Developers then have everything they need to modify the application code and reduce MSUs (millions of service units, a measure of mainframe computing power), making more efficient use of resources. This is true DevOps in action: developers transcending their silo and working and thinking with a true full-stack ops mindset.
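As a hedged sketch of that Ops-to-Dev handoff, the following flags jobs whose overnight CPU consumption looks anomalous and packages the known facts into a ticket-ready record. The job names, MSU figures, threshold and ticket fields are all assumptions for the example, not any particular product’s interface.

```python
from statistics import median

# Illustrative nightly CPU figures per batch job, in MSUs.
MSU_BY_JOB = {
    "ACCTINQ": 12.1, "PAYPOST": 11.4, "GLUPDT": 10.9, "RPTGEN": 48.7,
}

def flag_outliers(msu_by_job, factor=2.0):
    """Flag jobs consuming more than `factor` times the median MSU figure."""
    med = median(msu_by_job.values())
    return {job: msu for job, msu in msu_by_job.items()
            if msu > factor * med}

def build_ticket(job, msu):
    """Package the known facts into a record an issue tracker could ingest."""
    return {
        "title": f"Investigate high MSU consumption in {job}",
        "body": f"{job} consumed {msu} MSUs last night, well above its peers.",
        "assignee": "owning-developer",   # placeholder routing
    }

for job, msu in flag_outliers(MSU_BY_JOB).items():
    print(build_ticket(job, msu))   # flags RPTGEN at 48.7 MSUs
```

The design point is that the developer receives the evidence (job, measurement, baseline) along with the task, rather than starting the investigation from scratch.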

Conclusion
As a concept, DevOps is about breaking down silos between developers and operations teams, uniting them around the shared goal of creating and supporting great applications. With so many of those applications depending on the mainframe, any silos around the mainframe need to be broken down as well. New tools are giving DevOps teams the keys to innovating and driving value on the mainframe.

In the digital business era, big no longer beats small; fast beats slow. This is a new world order, and organizations that use mainframes (often bigger companies) fear their mainframes might slow them down. The truth is, organizations that rely on mainframes can be every bit as nimble as their smaller competitors. In fact, the inherent strengths of the platform (superior scalability, reliability and security) give them a competitive advantage: they can win by being both big and fast.