Modern mainframe application development


In the previous DBAOTM article on DevOps I introduced the traditional development process, which is often still used in mainframe environments. In this post I will present a modern approach to development on the mainframe.

Modern development processes for the mainframe

Requirements for the development process have changed. Applications must be built faster, and it must be possible to change them more often and more quickly without compromising quality. In other words, agile development is needed. The only way to meet today's business needs with a modern, agile development process is to automate all build and deploy processes.

A set of principles can then be derived for modern mainframe development processes.

  • All application artefacts are managed in a (preferably single) Source Code Management tool.
  • The build processes for all artefacts are automated and can be executed coherently.
  • A build can be deployed in any environment. A build has no environment or organization-specific dependencies.
  • The deployment process for a build is fully automated, including the fallback procedure, and forms a coherent process for all application artefacts (a minimal sketch of such a fallback-aware deployment step follows this list).
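
To make the last two principles a bit more tangible, here is a minimal sketch of a deployment wrapper with an automated fallback. It is an illustration only: the deploy_build and smoke_test functions, the build identifiers and the environment name are hypothetical placeholders for whatever deployment tooling and test automation an organization actually uses.

```python
# Minimal sketch of an automated deployment with fallback.
# deploy_build() and smoke_test() are hypothetical placeholders for the real
# deployment tool and test automation; build IDs and environments are examples.

def deploy_build(build_id: str, environment: str) -> None:
    # In reality: call the deployment tool's API or CLI here.
    print(f"deploying {build_id} to {environment}")

def smoke_test(environment: str) -> bool:
    # In reality: run automated verification against the environment.
    print(f"verifying {environment}")
    return True

def deploy_with_fallback(new_build: str, last_good_build: str, environment: str) -> bool:
    """Deploy new_build; if verification fails, automatically redeploy last_good_build."""
    deploy_build(new_build, environment)
    if smoke_test(environment):
        return True
    deploy_build(last_good_build, environment)   # automated fallback
    return False

if __name__ == "__main__":
    deploy_with_fallback("payments-1.4.2", "payments-1.4.1", "test")
```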

These principles need to be supported by tools and processes that are (re)designed for these purposes. Of course, this is not specific to z/OS applications; it is true for any modern IT solution. But given the background I sketched in the previous article, there is a legacy of development processes and tools to take into account, and in many organizations this implies significant technical and organizational changes.

The modern SCM for z/OS

The modern SCM tool for z/OS needs to support all kinds of application artefacts. For the mainframe this means that not only traditional MVS-type artefacts, such as COBOL programs, copybooks and JCL, must be supported, but also Unix-type artefacts such as shell scripts and configuration files in z/OS Unix directories. The tools and processes should allow EBCDIC-type artefacts to be created for the z/OS runtime environment, as well as ASCII, Unicode and binary artefacts.
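
One practical consequence of the encoding requirement is that tooling which moves text artefacts between an ASCII/UTF-8 based repository and the EBCDIC based z/OS runtime has to convert code pages explicitly, while binary artefacts must be copied untouched. The sketch below uses Python's built-in cp1047 (EBCDIC) codec; the file names are hypothetical.

```python
# Minimal sketch: converting a text artefact between UTF-8 (as typically stored
# in the SCM) and EBCDIC code page IBM-1047 (as expected by the z/OS runtime).
# File names are hypothetical; binary artefacts must be copied without conversion.

def utf8_to_ebcdic(source_path: str, target_path: str) -> None:
    """Read a UTF-8 text file and write it as EBCDIC (IBM-1047)."""
    with open(source_path, "r", encoding="utf-8") as src:
        text = src.read()
    with open(target_path, "w", encoding="cp1047") as dst:
        dst.write(text)

def ebcdic_to_utf8(source_path: str, target_path: str) -> None:
    """Read an EBCDIC (IBM-1047) text file and write it as UTF-8."""
    with open(source_path, "r", encoding="cp1047") as src:
        text = src.read()
    with open(target_path, "w", encoding="utf-8") as dst:
        dst.write(text)

if __name__ == "__main__":
    utf8_to_ebcdic("payrun.jcl", "payrun.ebcdic.jcl")   # hypothetical file names
```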

Modern SCM tools that can manage z/OS artefacts are ISPW from Compuware and RTC from IBM, and a newer option nowadays is Git, for example hosted on GitHub.

Build automation

The modern DevOps process automates the creation of a build. The build process takes the required versions of the application artefacts from the source code management repository and creates a coherent package of these artefacts. This package, also called the build, is deployed in a (test) environment.
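
As a minimal illustration of the packaging step, the sketch below bundles a set of artefacts, together with a small manifest recording their versions, into a single archive. The artefact paths and versions are hypothetical, and a real build process would of course also compile programs and resolve dependencies first.

```python
# Minimal sketch of the packaging step of a build: bundle artefacts plus a
# version manifest into one archive. Paths and versions are hypothetical.
import json
import tarfile
from pathlib import Path

def create_build(build_id: str, artefacts: dict, output_dir: str = ".") -> Path:
    """Package the given artefacts (path -> version) into <build_id>.tar.gz."""
    manifest = {"build_id": build_id, "artefacts": artefacts}
    manifest_path = Path(output_dir) / "manifest.json"
    manifest_path.write_text(json.dumps(manifest, indent=2))

    archive_path = Path(output_dir) / f"{build_id}.tar.gz"
    with tarfile.open(archive_path, "w:gz") as archive:
        archive.add(manifest_path, arcname="manifest.json")
        for artefact_path in artefacts:
            archive.add(artefact_path)   # each artefact goes into the package
    return archive_path

if __name__ == "__main__":
    create_build("payments-1.4.2", {
        "load/PAYBATCH": "1.4.2",               # compiled load module
        "jcl/PAYRUN.jcl": "1.4.0",              # run JCL
        "config/payments.properties": "1.4.2",  # runtime configuration
    })
```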

The build could be deployed in any runtime environment, even outside your organization. This principle not only enforces standardization of processes and infrastructure in your IT organization, it also allows future deployments in as yet unknown environments – for example in the cloud.

The automated build process itself should be callable through some generic API, so it can be integrated into other automated processes when needed.
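
What "callable through a generic API" could look like in practice is sketched below: another process triggers a build over HTTP and polls until it completes. The service URL, paths and JSON fields are hypothetical placeholders rather than a specific product's API, and the requests library is assumed to be available.

```python
# Minimal sketch: trigger a build through a generic HTTP API and poll until done.
# The URL, paths and JSON fields are hypothetical placeholders for a build service.
import time
import requests

BUILD_SERVICE = "https://build.example.com/api"   # hypothetical endpoint

def trigger_build(application: str, branch: str) -> str:
    """Start a build and return its identifier."""
    response = requests.post(f"{BUILD_SERVICE}/builds",
                             json={"application": application, "branch": branch},
                             timeout=30)
    response.raise_for_status()
    return response.json()["build_id"]            # hypothetical response field

def wait_for_build(build_id: str, poll_seconds: int = 15) -> str:
    """Poll the build status until it reaches a final state."""
    while True:
        status = requests.get(f"{BUILD_SERVICE}/builds/{build_id}",
                              timeout=30).json()["status"]
        if status in ("succeeded", "failed"):
            return status
        time.sleep(poll_seconds)
```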

Build automation on z/OS can be accomplished with a number of tools. Some of these tools are able to handle the z/OS-specific needs. IBM has two solutions: the Rational build engine and the Jazz build engine. Compuware offers build capabilities in ISPW. As it stands, all these tools still have gaps to fill in their coverage of the different artefacts that can make up a z/OS application.

Deployment automation

The modern DevOps process for z/OS also automates the deployment of the application build. The deployment process takes all the artefacts in the build, customizes them for the specific runtime environment, for example by applying naming conventions and environment-specific runtime settings, and deploys the artefacts on the different runtime components in that environment.
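
The sketch below illustrates the customization step: environment-specific values are kept outside the build and substituted into artefact templates at deployment time. The parameter names, placeholder syntax and environment values are hypothetical.

```python
# Minimal sketch of deploy-time customization: the build contains templates with
# placeholders, and per-environment parameters are substituted at deployment time.
# Parameter names, placeholder syntax and environment values are hypothetical.
from string import Template

ENVIRONMENT_PARAMETERS = {
    "test":       {"HLQ": "TEST.PAY", "DB2_SUBSYSTEM": "DBT1", "CICS_REGION": "CICSTST1"},
    "acceptance": {"HLQ": "ACC.PAY",  "DB2_SUBSYSTEM": "DBA1", "CICS_REGION": "CICSACC1"},
    "production": {"HLQ": "PROD.PAY", "DB2_SUBSYSTEM": "DBP1", "CICS_REGION": "CICSPRD1"},
}

def customize_artefact(template_text: str, environment: str) -> str:
    """Fill the environment-specific placeholders in an artefact template."""
    return Template(template_text).substitute(ENVIRONMENT_PARAMETERS[environment])

if __name__ == "__main__":
    jcl_template = (
        "//PAYRUN  JOB ...\n"
        "//STEP1   EXEC PGM=PAYBATCH\n"
        "//INPUT   DD DSN=${HLQ}.INPUT,DISP=SHR\n"
    )
    print(customize_artefact(jcl_template, "test"))
```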

The automated deployment process itself should be callable through some generic API, so it can be integrated into other automated processes when needed.

The most important deployment tools available on the market for z/OS are IBM’s UrbanCode Deploy and XebiaLabs’ XLDeploy.

Integration in other pipelines

I indicated above that the DevOps processes described there must be callable, to use the most generic term I can think of. This requirement is important because we do not just want to automate the individual pieces of the development process, but the entire chain.

Only a fully automated development process – a CI/CD pipeline – can provide optimal speed of development. To achieve this, the integration of build and deployment with other processes like infrastructure provisioning, test data provisioning, and testing is key.

Most of the tools mentioned above have APIs or command line interfaces that allow integration with CI/CD orchestration tools like Jenkins, Ansible, and others.
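
As an example of such an integration, the sketch below triggers a parameterized Jenkins job over Jenkins' remote access API and hands it a build identifier. The Jenkins URL, job name, credentials and job parameter are hypothetical, and depending on the Jenkins configuration an API token and/or CSRF crumb may also be required.

```python
# Minimal sketch: trigger a parameterized Jenkins job from another automated process.
# URL, job name, credentials and parameter names are hypothetical; some Jenkins
# configurations additionally require a CSRF crumb, which is omitted here.
import requests

JENKINS_URL = "https://jenkins.example.com"   # hypothetical
JOB_NAME = "zos-payments-deploy"              # hypothetical

def trigger_deployment(build_id: str, user: str, api_token: str) -> None:
    """Start the Jenkins deployment job for the given build."""
    response = requests.post(
        f"{JENKINS_URL}/job/{JOB_NAME}/buildWithParameters",
        params={"BUILD_ID": build_id},        # hypothetical job parameter
        auth=(user, api_token),
        timeout=30,
    )
    response.raise_for_status()
```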

Implications

The agile development process sketched here impacts the way we do other things on the mainframe as well. I will mention a few here.

Full deployments versus delta deployments

The traditional DTAP development process is based on the development of deltas: you only deploy the things that have changed.

To facilitate agile development in z/OS environments, we need to move to a process that supports full application deployments. It is not yet fully clear what the consequences of this change are, but I am convinced the old way of working with deltas will not give us the speed and flexibility we need today.
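
To make the distinction concrete, the sketch below compares the two approaches on the basis of a version manifest per build: a delta deployment ships only the artefacts whose versions changed, while a full deployment ships the complete build. The manifests and artefact names are hypothetical.

```python
# Minimal sketch: delta deployment versus full deployment, based on per-build
# version manifests (artefact name -> version). The manifests are hypothetical.

def delta_deployment(previous: dict, current: dict) -> dict:
    """Only the artefacts that are new or whose version changed."""
    return {name: version for name, version in current.items()
            if previous.get(name) != version}

def full_deployment(current: dict) -> dict:
    """The complete build, regardless of what changed."""
    return dict(current)

if __name__ == "__main__":
    previous = {"PAYBATCH": "1.4.1", "PAYRUN.jcl": "1.4.0"}
    current  = {"PAYBATCH": "1.4.2", "PAYRUN.jcl": "1.4.0", "payments.properties": "1.4.2"}
    print(delta_deployment(previous, current))   # only the changed artefacts
    print(full_deployment(current))              # everything in the build
```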

Other impacts:

  • The middleware tools on z/OS need to support phasing in an application that consists of many more load modules than we have today, while the application remains active.
  • Applications may need to become smaller. Traditionally, applications on z/OS are defined in a relatively coarse-grained manner. We may need to split applications into smaller, distinguishable, more loosely coupled parts, and reuse some of the goodies of microservices architectures.

To facilitate agile development, drastic changes in our thinking about mainframe applications are necessary, and in principle no single goodie from the past should be exempt from reconsideration.

Infrastructure provisioning

We have talked about application processes so far, but the agile DevOps process must also be supported by the runtime infrastructure. In the DTAP model, runtime environments are static: defined once and changed gradually, when this is functionally needed.

In order to support rapid changes in applications, we must also allow rapid changes in infrastructure. Similar to the build and deploy processes, all infrastructure provisioning must be automated to allow flexible and instant creation and modification of infrastructure for test environments. This also means that environments must be rigorously standardized. Definitions of the infrastructure making up an environment must be treated like code and be managed in a source code management system, where they can be properly versioned.

Currently, tool support for infrastructure provisioning is very limited. z/OSMF, which is provided as part of z/OS, allows the creation of provisioning workflows for the technology-specific creation of z/OS infrastructure.
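
As an impression of what driving such a provisioning workflow from automation could look like, the sketch below creates and starts a z/OSMF workflow instance through the z/OSMF workflow services REST interface. The host, credentials, workflow definition file and system name are hypothetical, and the exact REST paths and payload fields should be verified against the z/OSMF documentation for your release.

```python
# Minimal sketch: create and start a z/OSMF workflow instance via the z/OSMF
# workflow services REST interface. Host, credentials, workflow definition and
# system name are hypothetical; verify the exact paths and fields against the
# z/OSMF documentation for your release.
import requests

ZOSMF = "https://zosmf.example.com"   # hypothetical host

def provision_environment(user: str, password: str) -> str:
    """Create a workflow instance, start it, and return its workflow key."""
    payload = {
        "workflowName": "provision-test-db2",                    # hypothetical
        "workflowDefinitionFile": "/u/admin/workflows/db2.xml",  # hypothetical
        "system": "SYS1",                                        # hypothetical
        "owner": user,
    }
    created = requests.post(f"{ZOSMF}/zosmf/workflow/rest/1.0/workflows",
                            json=payload, auth=(user, password), timeout=60)
    created.raise_for_status()
    workflow_key = created.json()["workflowKey"]

    started = requests.put(
        f"{ZOSMF}/zosmf/workflow/rest/1.0/workflows/{workflow_key}/operations/start",
        auth=(user, password), timeout=60)
    started.raise_for_status()
    return workflow_key
```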

Furthermore, there is work ongoing at IBM and other vendors to extend this lower-level capability and integrate it with infrastructure provisioning tools like Kubernetes and OpenShift, and Ansible for z/OS is also quickly emerging. There is still a long way to go, but the first steps have been made.

In a future article I will talk a little bit more about infrastructure provisioning.

Please let me know your thoughts. Always happy to hear from you.

DevOps processes and tools for z/OS


In this post I will discuss a traditional view of the DevOps processes and tools for z/OS, and in the follow-on post I will discuss a somewhat futuristic view. The ideal development situation for z/OS is still work in progress for all of us. However, significant progress has been made over the past few years to change the traditional waterfall-oriented processes and tools for developing applications on z/OS into a modern-day agile way of working.

Traditional DevOps process for development

Before we look at modern development tools for z/OS, let’s first have a look at how application development was traditionally done.

The traditional waterfall is a staged approach that is reflected in the processes and tools

The development process of applications on z/OS traditionally goes through a number of stages, typically called Development, Test, Acceptance and Production (DTAP).

An application is developed in the Development stage. It is unit-tested in the Development environment. When that is done, the application moves to the Test stage, from which it is integration-tested in the Test environment. When all is well, the application moves to the Acceptance stage, from which it is acceptance-tested in the Acceptance environment. Finally, for go-live in Production the application is moved to the Production stage, reflecting the situation in the Production environment.

What you can read from the above simplified process description is that every stage in the process also has an environment associated with it. The infrastructure setup for the development process is very much aligned with this waterfall-oriented development process. An application version that has its source code in the Test stage uses the Test environment to validate correct functioning.

Not only does this create obvious source code management problems with parallel development, it also creates a rigid relation between the development process and the physical infrastructure.

Deployments are incremental – the concept of a build does not exist

What is also different in the traditional development process compared to modern ideas is that the concept of a build did not exist. A build today is a collection of all the application artefacts that are needed to run an application in a runtime environment. To run an application you need an executable, and typically also configuration files, scripts and definitions.

On the mainframe we get an executable program through a compilation process. For a z/OS application to work, there are typically also some runtime definitions required. These are things like JCL scripts, properties files, database definitions, interface definitions, etcetera. All these artefacts together we nowadays call a build.
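
As a small illustration, a build for such an application could be described by a manifest that lists all of these artefact types together. The application, artefact names and versions below are hypothetical examples.

```python
# Minimal sketch: a manifest describing one build of a hypothetical z/OS
# application, grouping all artefact types that together make it runnable.
import json

build_manifest = {
    "application": "payments",             # hypothetical application
    "build_id": "payments-1.4.2",
    "artefacts": {
        "load_modules": {"PAYBATCH": "1.4.2", "PAYONLIN": "1.4.2"},
        "jcl":          {"PAYRUN.jcl": "1.4.0"},
        "properties":   {"payments.properties": "1.4.2"},
        "database":     {"payments_tables.ddl": "1.3.0"},
        "interfaces":   {"payments_api.yaml": "1.4.1"},
    },
}

print(json.dumps(build_manifest, indent=2))
```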

Most of the processes to create all the z/OS application artefacts that are needed for an application were disparate, unique processes. Some technologies allowed for standardization of the build process for certain components, mostly the compilation processes. But most processes were either manual or automated with in-house tools, using whatever technology the organization thought best at the time the need was identified.

In summary, creating an application build as we know it today was impossible, and automation of the development process was very much limited.

Problems with the waterfall model

For as long as the long development cycles of the waterfall model were the norm, this DTAP approach satisfied most of the needs of the application development process. Quality problems with this way of working were certainly identified, such as dependencies on manual processes and a lack of standardization. These were tackled in a haphazard manner, through custom-built processes where possible and especially through extremely rigid change processes. And while speed was already a concern, it was more or less acceptable to the clients of the IT departments.

There are a number of tools available on z/OS that support this traditional development model. Almost all of them support the source code management process for DTAP-based development. Endevor from CA/Broadcom, ISPW from Compuware and ChangeMan from Micro Focus are amongst the most widely used tools for mainframe SCM. IBM had a free tool, SCLM, but stopped supporting it some years ago. Whilst giving good support for source code management, most of these tools had limited functionality for build and deploy processes.