Infrastructure as Code for the Mainframe

When a critical application update requires new infrastructure, such as additional storage, network configuration changes, or middleware adjustments, how long does your mainframe team wait? In many organizations, what takes minutes in cloud environments can stretch to days or weeks on the mainframe. This is not a technology limitation. It’s a process problem that mainframe infrastructure automation can solve.

The mainframe community has made significant strides in adopting DevOps practices for application development. Build automation, continuous integration, and automated testing are becoming standard practices. But application agility means little if infrastructure changes remain bottlenecked by manual processes and ticket queues.

The solution lies in applying the same DevOps principles to infrastructure that we’ve successfully applied to applications. This means standardization, automation, and treating infrastructure definitions as code. While the journey presents unique challenges, particularly around tooling maturity and organizational culture, the mainframe platform is ready for this evolution. This article explores how to bring infrastructure management into the DevOps era.

Standardization and automation of infrastructure

Like the build and deploy processes, all infrastructure provisioning must be automated to enable flexible and instant creation and modification of infrastructure for your applications.

Automation is only feasible for standardized setups. Many mainframe environments have evolved over decades. A practice of rigorously standardized infrastructure was likely in place at the outset, but cleaning up is not generally a forte of IT departments, and a platform with a history as long as the mainframe's often suffers even more from this accumulated clutter.

Standardization, legacy removal, and automation must be part of moving an IT organization toward a more agile way of working.

Infrastructure as Code: Principles and Practice

When infrastructure setups are standardized and automation is implemented, the code that automates infrastructure processes must be created and managed in the same manner as application code. There is no longer room to do things manually; everything should be codified.

Just as CI/CD pipelines are the code bases for the application creation process, infrastructure pipelines are the basis for creating reliable infrastructure code. Definitions of the infrastructure making up the environment must be treated like code and managed in a source code management system, where they can be properly versioned. The code must be tested in a test environment before deployment to production.
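
To make this concrete, below is a minimal sketch in PHP with curl of such a pipeline step: a versioned infrastructure definition is read from the source tree and applied to a test environment before it touches production. The endpoint, definition file, and helper function are hypothetical illustrations, not the API of any specific product.

<?php
// Minimal sketch of an infrastructure pipeline step. The provisioning
// endpoint and the definition file are hypothetical placeholders.

// The definition lives in source control and is checked out by the
// pipeline, so every change is versioned and reviewable.
$definition = file_get_contents(__DIR__ . '/storage-definition.json');

function applyDefinition(string $environment, string $definition): bool {
    $curl = curl_init("https://provisioning.example.com/api/$environment/apply");
    curl_setopt($curl, CURLOPT_POST, true);
    curl_setopt($curl, CURLOPT_POSTFIELDS, $definition);
    curl_setopt($curl, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
    curl_exec($curl);
    $ok = curl_getinfo($curl, CURLINFO_RESPONSE_CODE) === 200;
    curl_close($curl);
    return $ok;
}

// Test the change in a test environment before touching production.
if (!applyDefinition('test', $definition)) {
    exit("Definition failed in test; production is untouched.\n");
}
applyDefinition('production', $definition);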

In practice, we see that this is not always possible yet. We depend on infrastructure suppliers for hardware, operating systems, and middleware, and not all of these products are equipped with sufficient automation capabilities; instead, they depend on human processes. Suppliers should turn their focus from developing sleek user interfaces to providing modern, most likely declarative, interfaces that allow their tools to be handled in an infra-as-code process.


A culture change

In many organizations, infrastructure management teams are among the last to adopt agile working methods. They have always been so focused on keeping the show on the road that the risk of continuity disruptions has kept management from letting these teams focus on developing infrastructure-as-code.

Also, infrastructure staff are often relatively conservative when it comes to automating their manual tasks. Usually this is not because they do not want automation, but because they are so concerned about disruptions that they would rather get up at 3 a.m. to make a manual change than risk a programming mistake in the infra-as-code automation.

When moving towards infrastructure-as-code, the concerns of the infrastructure department's management and staff should be taken seriously. Their continuity concerns can often be used to create an extra-robust infrastructure-as-code solution. What I have seen, for example, is that staff do not worry so much about creating infra-as-code changes as about scenarios in which something could go wrong. Their ideas for rolling back changes have often been eye-opening and have led toward a reliable solution that not only automates the happy scenarios but also caters to the unhappy ones.
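
As a minimal sketch of what catering to the unhappy scenario can look like in automation code (the four callables are hypothetical wrappers around whatever interface your provisioning tooling actually offers):

<?php
// Sketch of a change routine that plans for the unhappy scenario. The four
// callables wrap the capture, apply, verify, and restore operations of your
// actual provisioning interface; they are hypothetical placeholders here.
function safeChange(callable $captureState, callable $applyChange,
                    callable $verifyChange, callable $restoreState): void {
    $before = $captureState();      // record the configuration as-is
    try {
        $applyChange();
        if (!$verifyChange()) {     // do not trust the happy path blindly
            throw new RuntimeException('Post-change verification failed');
        }
    } catch (Throwable $e) {
        $restoreState($before);     // automated rollback instead of a 3 a.m. call
        throw $e;                   // surface the failure to the pipeline
    }
}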

However, the concern for robustness is not an excuse for keeping processes as they are. Infrastructure management teams should also recognize that there is less risk in relying on good automation than on individuals executing complex configuration tasks manually.

The evolving tooling landscape

The tools available for the mainframe to support infrastructure provisioning are emerging and improving quickly, yet they are still incoherent. Tools such as z/OSMF, Ansible, Terraform, and Zowe can play a role, but a clear vision-based architecture is missing. Work is ongoing at IBM and other software organizations to extend these lower-level capabilities and integrate them into cross-platform infrastructure provisioning tools such as Kubernetes and OpenShift. Ansible for z/OS is also quickly emerging. There is still a long way to go, but the first steps have been made.

Conclusion

The path to infrastructure-as-code on the mainframe is neither instant nor straightforward, but it’s essential. As application teams accelerate their delivery cycles, infrastructure processes must keep pace. The good news is that the fundamental principles are well established, the initial tooling exists, and early adopters are proving it works.

Success requires three parallel efforts: technical standardization to enable automation, cultural transformation to embrace change safely, and continuous pressure on vendors to provide automation-friendly interfaces. These don’t happen sequentially. They must advance together.

Start by assessing your current state. Inventory your infrastructure processes and identify which are already standardized and repeatable. Then choose one well-understood, frequently performed infrastructure task and automate it end to end, including rollback scenarios. Most importantly, engage your infrastructure team early. Their operational concerns will make your automation more robust, not less.

The mainframe has survived and thrived for decades by evolving. Infrastructure as code is simply the next evolution, ensuring the platform remains competitive in an increasingly agile world.

Let me know what you think about these exciting developments and leave your thoughts.

Modern mainframe application development

In the previous DBAOTM article on DevOps, I introduced the traditional development process, which is often still used in mainframe environments. In this post I present a modern approach to development on the mainframe.

Modern development processes for the mainframe

Requirements for the development process have changed. Applications must be built faster, and it must be possible to change them more often and more quickly without impacting their quality. In other words, agile development is needed. The only way to translate today's business needs into modern agile development processes is to automate all build and deploy processes.

A set of principles can then be derived for modern mainframe DevOps processes.

  • All application artefacts are managed in a Source Code Management (SCM) tool.
  • The build processes for all artefacts are automated and can be executed coherently.
  • A build can be deployed in any environment; a build has no environment- or organization-specific dependencies.
  • The deployment process for a build is fully automated, including the fallback procedure, and is a coherent process for all application artefacts.

These principles need to be supported by tools and processes that are (re)designed for these purposes. Of course, this is not something specific to z/OS applications; it is true for any modern IT solution. But given the background I sketched in the previous section, there is a legacy of development processes and tools to take into account, and in many organizations this implies significant technical and organizational changes.

The modern SCM for z/OS

The modern SCM tool for z/OS needs to support all kinds of application artefacts. For the mainframe this means, for one thing, that not only traditional MVS-type artefacts must be supported, such as COBOL programs, copybooks, and JCL, but also Unix-type artefacts such as shell scripts and configuration files in z/OS Unix directories. The tools and processes should allow EBCDIC artefacts to be created for the z/OS runtime environment, as well as ASCII, Unicode, and binary artefacts.

Modern SCM tools that can manage z/OS artefacts are ISPW from Compuware and RTC from IBM, and a newer option nowadays is Git, for example hosted on GitHub.

Build automation

The modern DevOps process automates the creation of a build. The build process takes the required versions of the application artefacts from the source code management repository and creates a coherent package of these artefacts. This package, also called the build, is deployed in a (test) environment.

The build can be deployed in any runtime environment, even outside your organization. This principle not only enforces standardization of processes and infrastructure in your IT organization, it also allows future deployments in as-yet-unknown environments, for example in the cloud.

The automated build process itself should be callable through some generic API, so it can be integrated into other automated processes when needed.
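
Jenkins is a familiar illustration of such a generic API: a build can be triggered remotely with an authenticated HTTP POST. A minimal sketch in PHP, where the server URL, job name, and credentials are placeholders:

<?php
// Sketch: trigger a build through Jenkins' remote access API.
// Server URL, job name, user, and API token are placeholders.
$curl = curl_init('https://jenkins.example.com/job/zos-app-build/build');
curl_setopt($curl, CURLOPT_POST, true);
curl_setopt($curl, CURLOPT_USERPWD, 'builduser:API_TOKEN'); // user API token authentication
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_exec($curl);
// Jenkins answers 201 Created and queues the build.
echo curl_getinfo($curl, CURLINFO_RESPONSE_CODE);
curl_close($curl);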

Build automation on z/OS can be accomplished with a number of tools, some of which are able to handle the z/OS-specific needs. IBM has two solutions: the Rational build engine and the Jazz build engine. Compuware has build capabilities in ISPW. As it stands, all these tools still have gaps to fill in their coverage of the different artefacts that can make up a z/OS application.

Deployment automation

The modern DevOps process for z/OS also automates the deployment of the application build. The deployment process takes all the artefacts in the build, customizes them for the specific runtime environment, for example through the application of naming conventions and runtime aspects, and deploys the artefacts on the different runtime components in an environment.
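
To illustrate the customization through naming conventions, here is a minimal sketch; the @HLQ@ placeholder and the environment table are hypothetical conventions, not a feature of any particular deployment tool:

<?php
// Sketch: customize a generic build artefact for a target environment by
// applying naming conventions. The @HLQ@ placeholder and the environment
// table below are hypothetical examples of such conventions.
$hlqPerEnvironment = array(
    'test'       => 'TST1',
    'acceptance' => 'ACC1',
    'production' => 'PRD1',
);

function customize(string $jclTemplate, string $environment, array $hlqs): string {
    // The build carries JCL with an @HLQ@ placeholder instead of a hard-coded
    // high-level qualifier, so the same build can be deployed anywhere.
    return str_replace('@HLQ@', $hlqs[$environment], $jclTemplate);
}

$jcl = customize(file_get_contents('deploy.jcl.template'), 'test', $hlqPerEnvironment);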

The automated deployment process itself should be callable through some generic API, so it can be integrated into other automated processes when needed.

The most important deployment tools available on the market for z/OS are IBM’s UrbanCode Deploy and XebiaLabs’ XLDeploy.

Integration into other pipelines

I have indicated above that these DevOps processes must be callable, to use the most generic term I can think of. This requirement is important because we do not just want to automate the individual pieces of a development process, but the entire chain.

Only a fully automated Development process – a CI/CD pipeline – can provide optimal speed of development. To achieve this, the integration of build and deployment with other processes like infrastructure provisioning, test data provisioning, and testing is key.

Most of the tools mentioned above have APIs or command-line interfaces that allow integration with CI/CD orchestration tools like Jenkins, Ansible, and others.
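
Conceptually, the orchestration then reduces to executing the callable steps in sequence and stopping at the first failure; a minimal sketch, with a hypothetical orchestrator endpoint and step names:

<?php
// Sketch: a pipeline as callable steps executed in order. The orchestrator
// URL and the step names are hypothetical placeholders.
function runStep(string $step): bool {
    $curl = curl_init("https://pipeline.example.com/api/steps/$step/run");
    curl_setopt($curl, CURLOPT_POST, true);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
    curl_exec($curl);
    $ok = curl_getinfo($curl, CURLINFO_RESPONSE_CODE) === 200;
    curl_close($curl);
    return $ok;
}

foreach (array('provision-test-environment', 'deploy-build', 'run-tests') as $step) {
    if (!runStep($step)) {
        exit("Pipeline stopped at step: $step\n");
    }
}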

Implications

The agile development process sketched here impacts the way we do other things on the mainframe as well. I will mention a few here.

Full deployments versus delta deployments

The traditional DTAP development process is based on the development of deltas: you only deploy the things that have changed.

To facilitate agile development in z/OS environments, we need to move to a process that supports full application deployments. What the consequences of this change are is not yet fully clear, but I am convinced the old way of working with deltas will not give us the speed and flexibility we need today.

Other impacts:

  • Phasing in an application that consists of many more load modules than today, while the application remains active, needs to be supported by the middleware tools on z/OS.
  • Applications may need to become smaller. Traditionally, applications on z/OS are defined in a relatively coarse-grained manner. We may need to split applications into smaller, distinguishable, more loosely coupled parts, and we might want to reuse some of the goodies from microservices architecture.

To facilitate agile development, drastic changes in our thinking about mainframe applications are necessary, and in principle no single goodie from the past should be exempt from reconsideration.

Infrastructure provisioning

We have talked about application processes so far, but the agile DevOps process must also be supported by the runtime infrastructure. In the DTAP model, runtime environments are static: they are defined once and changed gradually when functionally needed.

In order to support rapid changes in applications, we must also allow rapid changes in infrastructure. Similar to the build and deploy processes, all infrastructure provisioning must be automated to allow flexible and instant creation and modification of infrastructure for test environments. This also means that environments must be rigorously standardized. Definitions of the infrastructure making up an environment must be treated like code and managed in a source code management system, where they can be properly versioned.

Currently, tool support for infrastructure provisioning is very limited. z/OS ships with z/OSMF, a tool that allows the creation of provisioning workflows for the technology-specific creation of z/OS infrastructure.
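
To give an impression of this interface: z/OSMF exposes its workflow services as a REST API, so a workflow can be created and started with plain HTTP calls. A sketch in PHP follows; the host name, workflow definition file, system nickname, and owner are placeholders, and the exact payload may differ per z/OSMF release, so check the z/OSMF REST documentation:

<?php
// Sketch: create and start a z/OSMF workflow through its REST services.
// Host, definition file, system nickname, and owner are placeholders;
// authentication (basic auth or a client certificate) is omitted for brevity.
$base = 'https://zosmf.example.com/zosmf/workflow/rest/1.0/workflows';

$create = curl_init($base);
curl_setopt($create, CURLOPT_POST, true);
curl_setopt($create, CURLOPT_POSTFIELDS, json_encode(array(
    'workflowName'           => 'provision-storage',
    'workflowDefinitionFile' => '/u/admin/workflows/storage.xml',
    'system'                 => 'SYS1',
    'owner'                  => 'ibmuser',
)));
curl_setopt($create, CURLOPT_HTTPHEADER, array(
    'Content-Type: application/json',
    'X-CSRF-ZOSMF-HEADER: true', // required by z/OSMF REST services
));
curl_setopt($create, CURLOPT_RETURNTRANSFER, true);
$workflow = json_decode(curl_exec($create), true);
curl_close($create);

// Start the workflow instance that was just created.
$start = curl_init($base . '/' . $workflow['workflowKey'] . '/operations/start');
curl_setopt($start, CURLOPT_CUSTOMREQUEST, 'PUT');
curl_setopt($start, CURLOPT_HTTPHEADER, array('X-CSRF-ZOSMF-HEADER: true'));
curl_setopt($start, CURLOPT_RETURNTRANSFER, true);
curl_exec($start);
curl_close($start);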

Furthermore, work is ongoing at IBM and other vendors to extend this lower-level capability and integrate it into infrastructure provisioning tools like Kubernetes and OpenShift. Ansible for z/OS is also quickly emerging. There is still a long way to go, but the first steps have been made.

In a future article I will talk a little bit more about infrastructure provisioning.

Please let me know your thoughts. Always happy to hear from you.

How to authenticate to the z/OSMF API with a certificate

This is a brief description of how to use the z/OSMF API with certificate authentication, from a PHP application.

Create a private key and a certificate signing request (CSR), for example with OpenSSL:

openssl req -newkey rsa:2048 -nodes -keyout yourdomain.key -out yourdomain.csr

Send the CSR to a certificate authority to get the certificate signed.

Add the signed certificate to RACF:

RACDCERT CHECKCERT(CERT)
RACDCERT ADD(CERT) TRUST WITHLABEL('yourlabel-client-ssl') ID(your RACF userID)
SETROPTS RACLIST(DIGTCERT DIGTRING) REFRESH

When authenticating, the certificate will be mapped to the userID specified in the ID field, and the z/OSMF tasks will run under this userID.

Save the signed certificate, together with the private key, on the PHP server in a directory accessible to the PHP application.
The following PHP code will then issue a request with client certificate authentication: 

curl_setopt($curl, CURLOPT_SSLCERT, '/<server>/htdocs/<somesubdir>/yourdomain.pem'); // the signed certificate from the CA
curl_setopt($curl, CURLOPT_SSLKEY, '/<server>/htdocs/<somesubdir>/yourdomain.key'); // the private key created earlier
curl_setopt($curl, CURLOPT_SSLCERTTYPE, 'PEM');
curl_setopt($curl, CURLOPT_VERBOSE, 1);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_HTTPHEADER, array(
    // basic authentication is no longer needed; shown commented out for comparison:
    // 'authorization: Basic ' . base64_encode($this->userid . ":" . $this->password),
    'X-CSRF-ZOSMF-HEADER: true', // z/OSMF requires this header on its REST requests
));
$response = curl_exec($curl);