Infrastructure as Code for the Mainframe

When a critical application update requires new infrastructure, such as additional storage, network configuration changes, or middleware adjustments, how long does your mainframe team wait? In many organizations, what takes minutes in cloud environments can stretch to days or weeks on the mainframe. This is not a technology limitation. It’s a process problem that mainframe infrastructure automation can solve.

The mainframe community has made significant strides in adopting DevOps practices for application development. Build automation, continuous integration, and automated testing are becoming standard practices. But application agility means little if infrastructure changes remain bottlenecked by manual processes and ticket queues.

The solution lies in applying the same DevOps principles to infrastructure that we’ve successfully applied to applications. This means standardization, automation, and treating infrastructure definitions as code. While the journey presents unique challenges, particularly around tooling maturity and organizational culture, the mainframe platform is ready for this evolution. This article explores how to bring infrastructure management into the DevOps era.

Standardization and automation of infrastructure

Like the build and deploy processes, all infrastructure provisioning must be automated to enable flexible and instant creation and modification of infrastructure for your applications.

Automation is only possible for standardized setups. Many mainframe environments have evolved over decades. A practice of rigorously standardized infrastructure was likely in place when these environments were first set up, but cleaning up is generally not a forte of IT departments, and a platform with a history as long as the mainframe's often suffers all the more from this lack of housekeeping.

Standardization, legacy removal, and automation must be part of moving an IT organization toward a more agile way of working.

Infrastructure as Code: Principles and Practice

Once infrastructure setups are standardized and automation is implemented, the code that automates infrastructure processes must be created and managed in the same way as application code. There is no room left for manual work: everything should be codified.

Just as CI/CD pipelines are the code bases for the application creation process, infrastructure pipelines are the basis for creating reliable infrastructure code. Definitions of the infrastructure making up the environment must be treated like code and managed in a source code management system, where they can be properly versioned. The code must be tested in a test environment before deployment to production.
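To make this tangible, here is a minimal sketch of what "infrastructure definitions as code" can look like in practice: a middleware subsystem definition stored in Git and a validation step that runs in the pipeline before anything is provisioned. The file format, field names, and limits are invented for this illustration and are not a product schema.

```python
# Hypothetical example: a middleware subsystem definition kept in Git as YAML,
# validated by a test stage before any pipeline is allowed to apply it.
# The fields and limits are illustrative, not a real product schema.
import sys
import yaml  # PyYAML, assumed available in the pipeline image

REQUIRED_FIELDS = {"subsystem_name", "region_size_mb", "storage_class", "owner"}

def load_definition(path: str) -> dict:
    """Load an infrastructure definition from a version-controlled YAML file."""
    with open(path, "r", encoding="utf-8") as f:
        return yaml.safe_load(f)

def validate_definition(definition: dict) -> list[str]:
    """Return a list of problems; an empty list means the definition may be deployed."""
    problems = []
    missing = REQUIRED_FIELDS - definition.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if definition.get("region_size_mb", 0) > 2048:
        problems.append("region_size_mb exceeds the standardized maximum of 2048")
    if not str(definition.get("subsystem_name", "")).isalnum():
        problems.append("subsystem_name must be alphanumeric (site naming standard)")
    return problems

if __name__ == "__main__":
    issues = validate_definition(load_definition(sys.argv[1]))
    if issues:
        print("Definition rejected:", *issues, sep="\n - ")
        sys.exit(1)
    print("Definition OK - ready for the provisioning pipeline")
```

The point is not the specific checks, but that the definition lives in version control and fails fast in a test stage rather than in production.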

In practice, we see that this is not always possible yet. We depend on infrastructure suppliers for hardware, operating systems, and middleware. Not all hardware and middleware software are equipped with sufficient automation capabilities. Instead, these tools depend on human processes. Suppliers should turn their focus from developing sleek user interfaces to providing modern, most likely declarative, interfaces that allow for handling the tools in an infra-as-code process.


A culture change

In many organizations, infrastructure management teams are among the last to adopt agile working methods. They have always been so focused on keeping the show on the road that the risk of continuity disruptions has kept management from letting teams focus on developing infrastructure as code.

Infrastructure staff are also often relatively conservative when it comes to automating their manual tasks. This is usually not because they do not want automation, but because they are so concerned about disruptions that they would rather get up at 3 a.m. to make a manual change than risk a programming mistake in an infrastructure-as-code solution.

When moving towards infrastructure as code, the concerns of the infrastructure department's management and staff should be taken seriously. Their continuity concerns can often be used to create an extra robust infrastructure-as-code solution. What I have seen, for example, is that staff do not just worry about creating infra-as-code changes but are most concerned about scenarios in which something could go wrong. Their ideas for rolling back changes have often been eye-opening on the way toward a reliable solution that not only automates happy scenarios but also caters to unhappy scenarios.

However, the concern for robustness is not an excuse for keeping processes as they are. Infrastructure management teams should also recognize that there is less risk in relying on good automation than on individuals executing complex configuration tasks manually.

The evolving tooling landscape

The tools available for the mainframe to support infrastructure provisioning are emerging and improving quickly, yet they are still incoherent. Tools such as z/OSMF, Ansible, Terraform, and Zowe can play a role, but a clear vision-based architecture is missing. Work is ongoing at IBM and other software organizations to extend this lower-level capability and integrate it into cross-platform infrastructure provisioning tools such as Kubernetes and OpenShift. Also, Ansible for z/OS is quickly emerging. There is still a long way to go, but the first steps have been made.
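As an illustration of where the tooling is heading, the sketch below drives a z/OSMF provisioning workflow from a Python pipeline step over the z/OSMF REST services. The host, credentials, workflow name, and definition path are placeholders, and the exact request payload should be checked against the z/OSMF workflow services documentation for your release.

```python
# Sketch: creating and starting a z/OSMF workflow from a pipeline step.
# Host, credentials, and the workflow definition path are placeholders;
# consult the z/OSMF workflow services documentation for the exact payload
# fields your release expects.
import requests

ZOSMF = "https://zosmf.example.com"          # placeholder host
AUTH = ("pipeline_user", "********")          # placeholder credentials
HEADERS = {"X-CSRF-ZOSMF-HEADER": ""}         # required by z/OSMF REST services

def provision(workflow_definition: str, system: str, owner: str) -> str:
    """Create a workflow instance from a definition stored on z/OS and start it."""
    body = {
        "workflowName": "provision-middleware",
        "workflowDefinitionFile": workflow_definition,  # e.g. a path in a z/OS Unix directory
        "system": system,
        "owner": owner,
    }
    resp = requests.post(f"{ZOSMF}/zosmf/workflow/rest/1.0/workflows",
                         json=body, auth=AUTH, headers=HEADERS)
    resp.raise_for_status()
    key = resp.json()["workflowKey"]
    requests.post(f"{ZOSMF}/zosmf/workflow/rest/1.0/workflows/{key}/operations/start",
                  auth=AUTH, headers=HEADERS).raise_for_status()
    return key
```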

Conclusion

The path to infrastructure-as-code on the mainframe is neither instant nor straightforward, but it’s essential. As application teams accelerate their delivery cycles, infrastructure processes must keep pace. The good news is that the fundamental principles are well established, the initial tooling exists, and early adopters are proving it works.

Success requires three parallel efforts: technical standardization to enable automation, cultural transformation to embrace change safely, and continuous pressure on vendors to provide automation-friendly interfaces. These don’t happen sequentially. They must advance together.

Start by assessing your current state. Inventory your infrastructure processes and identify which are already standardized and repeatable. Then choose one well-understood, frequently performed infrastructure task and automate it end to end, including rollback scenarios. Most importantly, engage your infrastructure team early. Their operational concerns will make your automation more robust, not less.

The mainframe has survived and thrived for decades by evolving. Infrastructure as code is simply the next evolution, ensuring the platform remains competitive in an increasingly agile world.


Getting started: your first IaC step

  1. Inventory your infrastructure processes
  2. Identify one standardized, frequently performed task
  3. Automate it end to end including rollback
  4. Engage your infrastructure team early
  5. Measure before and after: time, errors, incidents

Are you already on this journey? I’d love to hear where your organization stands – leave your thoughts below.

[Explore more mainframe modernization topics in my book Don’t Be Afraid of the Mainframe →](https://execpgm.org/dbaotm-the-book/)


This topic is covered in depth in my book Don’t Be Afraid of the Mainframe: The Missing Introduction for IT Leaders and Professionals—including practical frameworks for modernizing your mainframe infrastructure and development practices.
Learn more and order here →

IT Resilience in 2025: Mainframe Disaster Recovery, Cyber Vaults, and Cross-Region Failover Strategies

This article is an extension of a presentation on mainframe resilience a colleague and I gave at the GS NL 2025 conference, in Almere on June 5, 2025.

Introduction

In this article, I will examine today’s challenges in IT resilience and look at where we came from with mainframe technology. Resilience is no longer threatened only by natural disasters or equipment failures. Today’s IT resilience must include measures to mitigate the consequences of cyberattacks, rapid changes in the geopolitical landscape, and the growing international dependencies of IT services.

IT resilience is more important than ever. Regulatory bodies respond to changes in this context more quickly than ever, yet our organizations should be able to anticipate these changes even more effectively. Where a laid-back, ‘minimum viable solution’ approach may once have sufficed, the speed of change now drives us to actively anticipate changes and cater for disaster scenarios before regulatory bodies force us to. And even if your organization is not regulated, you may still need to pay close attention to what is happening to safeguard your market position and even the continued existence of your organization.

In this article, I will discuss some areas where we can improve the technical capabilities of the mainframe. As we will see, the mainframe’s centralized architecture is well-positioned to further strengthen its position as the most advanced platform for data resilience.

A production system and backups

Once, we had a costly computer system and stored our data on an expensive disk. Disk crashes happened regularly, so we made backups on tape. When data was lost or corrupted, we restored it from tape. When a computer crashed, we recovered the system and the data from our backups. Of course, downtimes were considerable – the mean time to repair (MTTR) was enormous. The risk of data loss was significant: the recovery point objective (RPO) was well over zero.

Single datacenter with tape backup - traditional resilience setup

A production system and a failover system

At some point, the risk of data loss and the time required to recover our business functions in the event of a computer failure became too high. We needed to respond to computer failures faster. A second data center was built, and a second computer was installed in it. We backed up our data on tape and shipped a copy of the tapes to the other data center, allowing us to recover from the failure of an entire data center.

We still had to make backups at regular intervals that we could fall back to, leaving the RPO still significantly high. But we had better tolerance against equipment failures and even entire data center failures.

Our recovery procedures became more complex. It was always a challenge to ensure our systems would be recoverable in the secondary data center. The loss of data could not be prevented. The time it took to get the systems back up in the new data center was significant.

Two data center resilience setup with tape backup replication

Clustering systems with Parallel Sysplex

In the 1990s, IBM developed a clever mechanism that creates a cluster of two or more MVS (z/OS) operating system images. This included advanced facilities for middleware solutions to leverage such a cluster and build middleware clusters with unparalleled availability. Such a cluster is called a Parallel Sysplex. The members—the operating system instances—of a Parallel Sysplex can be up to 20 kilometers apart. With these facilities, you can create a Parallel Sysplex that spans two (or more) physical data centers. Data is replicated synchronously between the data centers, ensuring that any change to the data on the disk in one data center is also reflected on the disk in the secondary data center.

The strength of the Parallel Sysplex is that when one member of the Parallel Sysplex fails, the cluster continues operating, and the user does not notice. An entire data center could be lost, and the cluster member(s) in the surviving data center(s) can continue to function.

With Parallel Sysplex facilities, a configuration can be created that ensures no disruption occurs when a component or data center fails, resulting in a Recovery Time Objective (RTO) of 0. This allows operation to continue without any loss of data, with a Recovery Point Objective (RPO) of 0.

Parallel Sysplex cluster spanning two data centers with synchronous data mirroring

Continuous availability with GDPS

In addition to the Parallel Sysplex, IBM developed GDPS (Geographically Dispersed Parallel Sysplex). If you lose a data center, you eventually want the original Parallel Sysplex cluster to be fully recovered. For that, you would need to create numerous procedures. GDPS automates the failover of members in a sysplex and the recovery of these members in another data center. GDPS can act independently in failure situations and initiate recovery actions.

Thus, GDPS enhances the fault tolerance of the Parallel Sysplex and eliminates many tasks that engineers would otherwise have to execute manually in emergency situations.

GDPS automated failover and recovery between data centers

Valhalla reached?

With GDPS, the mainframe configuration has reached a setup that could genuinely be called continuously available. So is this a resilience Valhalla?

Unfortunately not.

The GDPS configuration also has its challenges and limitations.

Performance

The first challenge is performance. In the cluster setup, we want every change to be guaranteed to be made in both data centers. At any point in time, one entire data center could collapse, and still, we want to ensure that we have not lost any data. Every update that must be persisted must be guaranteed to be persisted on both sides. To achieve this, an I/O operation must not only be performed locally in the data center’s storage, but also in the secondary data center. Committed data can therefore only be guaranteed to be committed if the storage in the secondary data center has acknowledged to the first data center that an update has been written to disk.

To achieve this protocol, which we call synchronous data mirroring, a signal with the update must be sent to the secondary data center, and a confirmation message must be sent back to the primary data center.

Every update requires a round trip, so the minimum theoretical latency due to distance alone is approximately 0.07 milliseconds – the time light needs to travel 10 kilometers and back. In practice, the actual update time will be higher due to latency in network equipment such as switches and routers, protocol overhead, and disk write times. For a distance of 10 kilometers, an update could take between 1 and 2 milliseconds. This means that, for one specific application resource, you can only make 500 to 1,000 serialized updates per second. (Many resource managers, such as database management systems, fortunately provide a lot of parallelization in their updates.)
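A quick back-of-the-envelope calculation makes these numbers concrete. Only the 10-kilometer distance comes from the example above; the 1 to 2 millisecond figures are the assumed practical overhead.

```python
# Back-of-the-envelope: serialized update rate limited by synchronous mirroring
# over 10 km. Only the distance is from the article; the per-update overhead
# figures are illustrative assumptions.
SPEED_OF_LIGHT_KM_PER_MS = 300_000 / 1000      # ~300 km per millisecond in vacuum
distance_km = 10

round_trip_ms = 2 * distance_km / SPEED_OF_LIGHT_KM_PER_MS
print(f"theoretical round trip: {round_trip_ms:.3f} ms")    # ~0.067 ms

# Add realistic overhead for switches, protocol handling, and disk writes:
for update_ms in (1.0, 2.0):
    print(f"{update_ms} ms per update -> {1000 / update_ms:.0f} serialized updates/s")
```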

In other words, a Parallel Sysplex cluster offers significant advantages, but it also presents challenges in terms of performance. These challenges can be overcome, but additional attention is necessary to ensure optimal application performance, which comes at the cost of increased computing capacity required to maintain this performance.

Cyber threats

Another challenge has grown in our IT context nowadays: threats from malicious attackers. We have connected our IT systems to the Internet to allow our software systems to interact with our customers and partners. Unfortunately, this has also created an attack surface for individuals with malicious intent. Several types of cyberattacks have become a reality today, and cyber threats have infiltrated the realms of politics and warfare. Any organization today must analyse its cyber threat posture and defend against threats.

One of the worst nightmares is a ransomware attack, in which a hostile party has stolen or encrypted your organization’s data and will only return control after you have paid a sum of money or met their other demands.

The rock-bottom protection against ransomware attacks is to save your data in a secure location where attackers cannot access or manipulate it.

Enter Cybervault

A Cybervault is a solution that sits on top of your existing data storage infrastructure and backups. In a Cybervault, you store your data in a way that prevents physical manipulation: you create an immutable backup of your data.

For the mainframe, IBM has created a solution for this: IBM Z Cyber Vault. With IBM Z Cyber Vault, we add a third leg to our data mirroring setup, an asynchronous mirror, and from this third copy of our data we create immutable copies. The solution combines IBM software and storage hardware. The copy frequency is configurable: typically an immutable copy is taken every half hour or every hour, although some IBM Z users take a copy just once a day. From every immutable copy, we can recover our entire system.

So now we have a highly available configuration, with a cyber vault that allows us to go back in time. Great. However, we still have more wishes on our list.

IBM Z Cyber Vault adding immutable backup as third leg to GDPS setup

Application forward recovery

In the great configuration we have just built, we can revert to a previous state if data corruption is detected. When the corruption is detected at time Tx, we can restore the copy from time T0, the last backup taken before the corruption occurred at time T1.


Data changed between the corruption at T1 and its detection at Tx is lost. But could we recover the data that was still intact after the backup was made (T0) and before the corruption occurred (T1)?

Technically, it is possible to recover data in the Db2 database management system using Db2’s image copies and transaction logs. With Db2 recovery tools, you can restore an image copy of a database and apply all changes from that backup point forward, using the transaction logs in which Db2 records every change it makes. To combine this technology with the cyber vault solution, we would need a few more facilities:

  1. A facility to store Db2 image copies and transaction logs. Immutable, of course.
  2. A facility to let Db2, when restored from an immutable copy, know that there are transaction logs and archive logs made after that copy, to which it can perform a forward recovery.

That is work, but it is very feasible.
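Conceptually, forward recovery means restoring the immutable copy taken at T0 and then replaying the logged changes committed before the corruption at T1. The toy sketch below illustrates that idea only; it is not Db2’s RECOVER utility or its actual log format.

```python
# Conceptual sketch of forward recovery: restore the immutable copy taken at T0,
# then reapply logged changes up to just before the corruption at T1.
# This is a toy model, not Db2's RECOVER utility or its log format.
from dataclasses import dataclass

@dataclass
class LogRecord:
    timestamp: float   # when the change was committed
    key: str
    value: str

def forward_recover(snapshot_t0: dict, log: list[LogRecord], t1: float) -> dict:
    """Apply all committed changes recorded after the snapshot and before T1."""
    state = dict(snapshot_t0)
    for record in sorted(log, key=lambda r: r.timestamp):
        if record.timestamp < t1:          # stop before the corrupting change
            state[record.key] = record.value
    return state
```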

Db2 forward recovery using transaction logs from Cyber Vault

Now we have reached a point where we have created a highly available configuration, with a cyber vault from which we can recover to a point as close to the moment of corruption as possible.

Adding Linux workloads

Most of today’s mainframe users run Linux workloads on the mainframe, besides the traditional z/OS workloads. These workloads are often as business-critical as the z/OS workload. Therefore, it is great that we can now also include Linux workloads, including OpenShift container-based workloads, in the superb resilience capabilities of IBM Z.

Linux and OpenShift workloads included in IBM Z resilience configuration

Challenges

With this, we have extended the best-in-class resilient platform. Unfortunately, today’s challenges push us further still.

What if your backup is too close to your primary data?

Data Centers 1 and 2, as discussed above, may be too close to each other. This is the case when a single event can affect operations in both data centers: a natural disaster, a large-scale power outage, or a plane crash, for example.

I have called them Data Centers so far, yet the more generic term in the industry is Availability Zone: a data center, or part of one, with its own independent power, cooling, and security. When you spread your IT landscape over availability zones or data centers separated by larger geographical distances, you place them in different Regions. Regions are geographic areas, often with different risk profiles for disasters.

In many organizations, especially European ones, the Data Centers, or Availability Zones, are relatively close together: they are in the same Region. With the recent changes in the political and natural climate, large organizations are increasingly looking to address this risk in their IT landscape and add data center facilities in another Region.

Cross-region failover and its challenges

To cater for a cross-region failover, we need to change the data center setup. With GDPS, we can do this by adding a GDPS Global ‘leg’ to our setup. The underlying data replication is Global Mirror, which is asynchronous.

Cross-region failover with GDPS Global Mirror asynchronous replication

The setup in this last picture summarizes the state of the art of the basic infrastructure capabilities of the mainframe. In comparison to other computing platforms, including cloud, the IBM Z infrastructure technologies highlighted here provide a comprehensive resilience solution for all workloads on the platform. This simplifies the realization and management of the resilience solution.

More challenges

Yet, there remains enough to be decided and designed, such as:

  • Are we going to set up a complete active-active configuration in Region B, or do we settle for a stripped-down configuration? Our business will need to decide whether to plan for a scenario in which our data center in Region A is destroyed and we cannot switch back.
  • Where do we put our Cybervault? In one region, or in both?
  • How do we cater for the service unavailability during a region switch-over? In our neat, active-active setup, we can switch between data centers without any disruption. This is not possible with a cross-region failover. Should we design a stand-in function for our most business-critical services?
  • Because the replication between regions is asynchronous, we could lose data in a failover. How should we cater for this potential data loss?

When tackling questions about risk, it all begins with understanding the organization’s risk appetite: the level of risk the business is willing to accept as it works toward its objectives. Leadership teams must decide which risks are best handled through technical solutions. For organizations operating in regulated spaces, regulators set minimum standards.

The bottom line on mainframe resilience

No other computing platform offers the combination of capabilities described in this article: zero RTO/RPO clustering, automated failover, immutable cyber vaults, forward recovery, and cross-region replication, all integrated into a single platform.

The remaining questions are not technical. They are strategic: How much resilience does your organization need? What is your risk appetite? And how does your mainframe resilience strategy connect to your broader IT and cloud strategy?

These are exactly the kinds of decisions explored in my book Don’t Be Afraid of the Mainframe. Chapter 12 covers system management and availability in depth, while the strategic decision frameworks in Part 3 help you translate technical capabilities into business choices.

Learn more and order here →

Modern mainframe application development

Modern mainframe development looks nothing like the traditional waterfall processes most people associate with the platform. Today’s z/OS teams use Git, Jenkins, automated CI/CD pipelines, and agile methodologies: the same tools and practices used on any modern platform. Here is how it works.

Modern development processes for the mainframe

Requirements for the development process have changed. Applications must be built faster, and it must be possible to change them more often and more quickly without impacting their quality. In other words, agile development is needed. The only way to translate today’s business needs into a modern agile development process is to automate all build and deploy processes.

A set of principles can then be derived for modern mainframe development processes.

  • All application artefacts are managed in the (or a) Source Code Management tool.
  • The build processes for all artefacts are automated, and can be coherently executed.
  • A build can be deployed in any environment. A build has no environment or organization-specific dependencies.
  • The deployment process for a build is fully automated, including the fallback procedure. The deployment process is a coherent process for all application artefacts.

These principles need to be supported by tools and processes that are (re)designed for these purposes. Of course, this is not specific to z/OS applications; it is true for any modern IT solution. But with the background I have sketched in the previous section, there is a legacy of development processes and tools to take into account, and in many organizations this implies significant technical and organizational changes.

The modern SCM for z/OS

The modern SCM tool for z/OS needs to support all kinds of application artefacts. For the mainframe, this means that not only traditional MVS-type artefacts must be supported, such as COBOL programs, copybooks, and JCL, but also Unix-type artefacts such as shell scripts and configuration files in z/OS Unix directories. The tools and processes should allow EBCDIC artefacts to be created for the z/OS runtime environment, as well as ASCII, Unicode, and binary artefacts.
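A small illustration of why encoding awareness matters: the same COBOL source line has entirely different byte values in EBCDIC and ASCII, so the SCM and build tooling must know, per artefact, which encoding applies. The example uses Python’s built-in cp1047 codec, one of the EBCDIC code pages used on z/OS.

```python
# The same source line has different byte values in EBCDIC (cp1047) and ASCII,
# which is why an SCM and build process for z/OS must track encoding per artefact.
line = "MOVE 'HELLO' TO WS-MSG."

ebcdic_bytes = line.encode("cp1047")   # EBCDIC code page used on z/OS
ascii_bytes = line.encode("ascii")

print(ebcdic_bytes[:4].hex())   # d4d6e5c5 - 'MOVE' in EBCDIC
print(ascii_bytes[:4].hex())    # 4d4f5645 - 'MOVE' in ASCII
assert ebcdic_bytes.decode("cp1047") == ascii_bytes.decode("ascii")
```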

Modern SCM tools that can manage z/OS artefacts include ISPW from Compuware and RTC from IBM; a newer option nowadays is Git, or GitHub.

Build automation

The modern DevOps process automates the creation of a build. The build process takes the required versions of the application artefacts from the source code management repository and creates a coherent package of these artefacts. This package, also called the build, is deployed in a (test) environment.

A build should be deployable in any runtime environment, even outside your organization. This principle not only enforces standardization of processes and infrastructure in your IT organization, it also allows future deployments in as yet unknown environments – for example, in the cloud.

The automated build process itself should be callable through some generic API, so it can be integrated into other automated processes when needed.

Build automation on z/OS can be accomplished with a number of tools, some of which are able to handle the z/OS-specific needs. IBM has two solutions: the Rational build engine and the Jazz build engine. Compuware offers build capabilities in ISPW. As it stands, all these tools still have gaps to fill in their coverage of the different artefacts that can make up a z/OS application.

Deployment automation

The modern DevOps process for z/OS also automates the deployment of the application build. The deployment process takes all the artefacts in the build, customizes them for the specific runtime environment – for example, by applying naming conventions and runtime settings – and deploys the artefacts on the different runtime components in that environment.
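As a hypothetical illustration of such environment-specific customization, the sketch below maps the same build artefact to different target data set names per environment by applying a naming convention. The convention and names are invented for this example.

```python
# Hypothetical illustration of environment-specific customization during deployment:
# the same build artefact is mapped to different target data sets per environment
# by applying a naming convention. The convention shown is invented for this example.
NAMING = {
    "test":       {"hlq": "TST", "cics_region": "CICSTST1"},
    "acceptance": {"hlq": "ACC", "cics_region": "CICSACC1"},
    "production": {"hlq": "PRD", "cics_region": "CICSPRD1"},
}

def target_load_library(environment: str, application: str) -> str:
    """Resolve the load library name for an application in a given environment."""
    env = NAMING[environment]
    return f"{env['hlq']}.{application}.LOADLIB"

print(target_load_library("test", "PAYMENTS"))        # TST.PAYMENTS.LOADLIB
print(target_load_library("production", "PAYMENTS"))  # PRD.PAYMENTS.LOADLIB
```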

The automated deployment process itself should be callable through some generic API, so it can be integrated into other automated processes when needed.

The most important deployment tools available for z/OS include IBM’s UrbanCode Deploy, IBM Wazi Deploy, and Digital.ai Deploy (formerly XebiaLabs XLDeploy).

Integration in other pipelines

I have indicated above that the DevOps processes described here must be callable, to use the most generic term I can think of. This requirement is important because we do not just want to automate the individual pieces of the development process, but the entire chain.

Only a fully automated Development process – a CI/CD pipeline – can provide optimal speed of development. To achieve this, the integration of build and deployment with other processes like infrastructure provisioning, test data provisioning, and testing is key.

Most of the tools mentioned above have APIs or command-line interfaces that allow integration with CI/CD orchestration tools like Jenkins, Ansible, and others.
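For example, a provisioning or test-data step written in Python could queue a Jenkins pipeline job through Jenkins’ remote API. The server, job name, and parameters below are placeholders; authentication with a Jenkins API token is assumed.

```python
# Sketch: triggering a Jenkins pipeline that orchestrates a mainframe build and deploy.
# The job name and server are placeholders; a Jenkins API token is assumed,
# which exempts the request from CSRF crumb handling on current Jenkins versions.
import requests

JENKINS = "https://jenkins.example.com"
AUTH = ("pipeline_user", "jenkins-api-token")   # placeholder credentials

def trigger_mainframe_pipeline(application: str, environment: str) -> None:
    """Queue the (hypothetical) 'mainframe-deploy' job with build parameters."""
    resp = requests.post(
        f"{JENKINS}/job/mainframe-deploy/buildWithParameters",
        params={"APPLICATION": application, "ENVIRONMENT": environment},
        auth=AUTH,
    )
    resp.raise_for_status()   # 201 Created means the build was queued
    print("Queued:", resp.headers.get("Location"))

trigger_mainframe_pipeline("PAYMENTS", "test")
```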

Modern mainframe CI/CD pipeline: SCM, build automation, deployment automation, and infrastructure provisioning for z/OS

Implications

The agile development process sketched here impacts the way we do other things on the mainframe as well. I will mention a few here.

Full deployments versus delta deployments

The traditional DTAP (Development, Test, Acceptance, Production) development process is based on the development of deltas: you only deploy the things that have changed.

To facilitate agile development in z/OS environments, we need to move to a process that supports full application deployments. The consequences of this change are not yet fully clear, but I am convinced the old way of working with deltas will not give us the speed and flexibility we need today.

Other impacts:

  • Phasing in an application that consists of many more load modules than today, while it remains active, needs to be supported by the middleware tools on z/OS.
  • Applications may need to become smaller. Traditionally, applications on z/OS are defined in a relatively coarse-grained way. We may need to split applications into smaller, distinguishable, more loosely coupled parts. We might need to reuse some of the goodies of the microservices architecture.

To facilitate agile development, drastic changes in our thinking about mainframe applications are necessary, and in principle no single goodie from the past should be exempt from reconsideration.

Infrastructure provisioning

We have talked about application processes so far, but the agile DevOps process must also be supported by the runtime infrastructure. In the DTAP model, runtime environments are static: defined once and changed gradually, only when functionally needed.

In order to support rapid changes in applications, we must also allow rapid changes in infrastructure. Similar to the build and deploy processes, all infrastructure provisioning must be automated to allow flexible and instant creation and modification of infrastructure for test environments. This also means that environments must be rigorously standardized. Definitions of the infrastructure making up the environment must be treated like code and managed in a source code management system, where they can be properly versioned.

Currently, tool support for infrastructure provisioning is still limited. z/OSMF, provided as part of z/OS, allows the creation of provisioning workflows for z/OS technology-specific infrastructure.

Furthermore, work is ongoing at IBM and other vendors to extend this lower-level capability and integrate it into cross-platform infrastructure provisioning tools such as Kubernetes and OpenShift. Ansible for z/OS is also quickly emerging. There is still a long way to go, but the first steps have been made.

Infrastructure provisioning for z/OS is covered in detail in: Infrastructure as Code for the Mainframe →

The bottom line on modern mainframe development

Modern mainframe development is not fundamentally different from development on any other platform. The same principles apply: automate everything, treat infrastructure as code, use CI/CD pipelines, and build for speed without sacrificing quality.

The tools exist. The processes are proven. What holds most organizations back is not technology—it is the organizational change required to adopt these practices.

This topic is covered in depth in Chapter 10 of my book Don’t Be Afraid of the Mainframe, including practical guidance on DevOps transformation, tooling choices, and how to modernize your development practices without disrupting production.

Learn more and order here →

Please let me know your thoughts – always happy to hear from you.