System management

The z/OS operating system is designed to host many applications on a single platform. From the beginning, efficient management of the applications and their underlying infrastructure has been an essential part of the z/OS ecosystem.

This chapter will discuss the regular system operations, monitoring processes, and tools you find on z/OS. I will also look at the monitoring tools that help verify that all automated business, application, and technical processes are running as expected.

System operations

The z/OS operating system has an extensive operator interface that gives the system operator the tools to control the z/OS platform and its applications, and to intervene when issues occur. These operations facilities compare well with the operation of physical processes in factories or power plants: the operator is equipped with many knobs, buttons, switches, and meters to keep the z/OS factory running.

Operator interfaces and some history

By design, operations on the mainframe are performed from so-called consoles. Consoles were originally physical terminal devices directly connected to the mainframe server with special equipment. Everything happening on the z/OS system was displayed on the console screens: a continuous newsfeed of messages generated by the numerous components running on the mainframe streamed over the console display. Warnings and failure messages were highlighted so an operator could quickly identify issues and take the necessary actions.

Nowadays, physical consoles have been replaced by software equivalents. In the chapter on z/OS, I already mentioned the tool SDSF from IBM; similar tools from other vendors are available on z/OS for this purpose. SDSF is the primary tool system operators and administrators use to view and manage the processes running on z/OS.

Additionally, z/OS has a central facility where information, warnings, and error messages from the hardware, operating system, middleware, and applications are gathered. This facility is called the system log. The system log can be viewed from the SDSF tool.

SDSF options
Executing an operator command through SDSF
The system log viewed through SDSF

An operator can intervene in the running z/OS system and its applications with operator commands. z/OS itself provides many of these operator commands for a wide variety of functions. The middleware tools installed on top of z/OS often bring their own set of operator messages and commands as well.

Operator commands play a role similar to that of Unix commands on Unix operating systems, or of the functions provided by the Windows Task Manager and other Windows system administration facilities. Operator commands can also be issued through application programming interfaces, which opens possibilities for building software for automated operations of the z/OS platform.
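As a sketch of this API route: z/OSMF exposes console services as a REST interface, and a program on any platform can use it to issue an operator command. The host name, credentials, and console name below are assumptions for illustration.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class IssueOperatorCommand {
    public static void main(String[] args) throws Exception {
        // Assumed z/OSMF host and credentials: replace with your own.
        String auth = Base64.getEncoder().encodeToString("myuser:mypassword".getBytes());

        // Issue the operator command "D A,L" (display active jobs) through
        // the z/OSMF console services REST interface, using the default console.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://zosmf.example.com/zosmf/restconsoles/consoles/defcn"))
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "application/json")
                .header("X-CSRF-ZOSMF-HEADER", "")
                .PUT(HttpRequest.BodyPublishers.ofString("{\"cmd\":\"D A,L\"}"))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON document with the command response
    }
}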

Automated operations

In the past, a crew of operators managed the daily operations of the business processes running on a central computer like the mainframe. The operators were gathered in the control room, also called a bridge, from where they monitored and operated the processes running on the mainframe.

Nowadays, daily operations have been automated. All everyday issues are handled through automated processes; special software oversees these operations. When the automation tools find issues they cannot resolve, an incident process is kicked off, and a system or application administrator is called out of bed to check out the problem.

Manual versus automated operations

Several software suppliers provide automation tools for z/OS operations. All these tools monitor the messages flowing through the system log, which reports the health of everything running on z/OS. The occurrence of messages can be turned into events for which automated actions can be programmed.

For example, if z/OS signals that a pool of disk storage is filling up, the storage management software will write a message to the system log. An automation process can be defined that enlarges the storage pool and sends a notification email to the storage administrator. This automation process is kicked off automatically when the message appears in the system log.
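Conceptually, every automation rule of this kind is a predicate on a message plus one or more actions. The Java sketch below only illustrates that idea; the message ID, pool name, and actions are hypothetical, and real products such as the ones mentioned below have their own rule languages for this.

// Conceptual sketch of a message-driven automation rule. The message ID and
// the actions are hypothetical placeholders, not the syntax of any product.
public class StoragePoolRule {

    // Called by the (hypothetical) automation engine for every syslog message.
    public void onMessage(String messageText) {
        // e.g. "XYZ123I STORAGE POOL SGPROD01 IS 95% FULL" (invented message)
        if (messageText.startsWith("XYZ123I")) {
            String pool = messageText.split(" ")[3];
            issueCommand("ADD VOLUMES TO POOL " + pool);   // enlarge the pool
            notifyByMail("storage-admins@example.com",
                         "Pool " + pool + " was extended automatically");
        }
    }

    private void issueCommand(String cmd) { /* hand off to the console interface */ }
    private void notifyByMail(String to, String text) { /* hand off to a mail gateway */ }
}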

All automated operations tools for z/OS are based on this mechanism. Some tools provide more help with automation tasks and advanced functions than others. Solutions in the market include System Automation from IBM, CA-OPS/MVS from Broadcom, and AUTOMON for z/OS from Macro4.

Monitoring

System management aims to ensure that all automated business processes run smoothly. For this, detailed information must be made available to assess the health of the running processes. All the z/OS and middleware components provide a wide variety of data that can be used to analyze the health of these individual components. The amount of data is so large that it is necessary to use tools that help make sense of all this data. This is where monitoring tools can help.

Monitoring can be applied at different levels of the operational system. In this section, I differentiate between infrastructure, application, and business monitoring.

Figure 42 shows the different layers of monitoring that can be distinguished. It illustrates how application monitoring needs to be integrated to roll the information up into meaningful monitoring of the business process.

The following sections will go into the different layers of monitoring in a bit more detail.

Monitoring of different layers

Infrastructure monitoring

Infrastructure monitoring is needed to keep an eye on the mainframe hardware, the z/OS operating system, and the middleware components running on top of z/OS. All these parts produce extensive data that can be used to monitor their health. z/OS provides standard facilities for infrastructure components to write monitoring data. The first one we have seen is the messages written to the system console, which are all saved in the system log. Additionally, z/OS has a System Management Facility (SMF) component that provides a low-level interface through which infrastructure components can write event records to special SMF datasets.

There are many options for producing data, but what often does not come with these tools is the ability to make meaningful use of all that data.

To get a better grip on the health of these infrastructure components, various software vendors provide solutions to use that data to monitor and manage specific infrastructure components. Most of these tools offer particular functions for monitoring a piece of infrastructure, and integrating these tools is not always straightforward.

BMC’s MainView suite provides tools to monitor z/OS, CICS, IMS, Db2, and other infrastructure components common in a z/OS setup.

IBM has a similar suite under the umbrella of OMEGAMON. The IBM suite also has tools for monitoring z/OS itself, storage, networks, and middleware such as CICS, IMS, Db2, and more.

ASG also has an extensive suite of tools, under the name ASG-TMON, for the components mentioned above and more. This software has since been acquired by Rocket Software.

Broadcom provides tools for z/OS and network monitoring under its Intelligent Operations and Automation suite.

Application monitoring

The next level of monitoring provides a view of the functioning of the applications and their components.

This monitoring level is not extensively supported by off-the-shelf tools or frameworks for applications written in COBOL or PL/I. Application monitoring and logging frameworks like Java Management Extensions (JMX) and Log4j are available for Java, but similar tools are not available for languages like COBOL and PL/I. Many z/OS users have therefore developed their own frameworks for application monitoring, relying on various technologies.

Some tools can provide a certain level of application monitoring. For example, Dynatrace, AppDynamics, and IBM Application Performance Management provide capabilities to examine the functioning of applications. However, the functionality is often not easily extensible by application developers, as it is with Log4j and JMX mentioned above. There remains a need for a framework (preferably open source) that allows application developers to create specific monitoring and logging information and events at particular points in an application on z/OS.
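To show what such extensibility looks like on the Java side, the sketch below registers a small JMX MBean exposing application health counters that any JMX-capable monitor can read; the class and object names are illustrative.

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// The MBean interface defines what a JMX monitor can read.
interface PaymentStatsMBean {
    long getProcessedCount();
    long getFailedCount();
}

class PaymentStats implements PaymentStatsMBean {
    volatile long processed, failed;
    public long getProcessedCount() { return processed; }
    public long getFailedCount()    { return failed; }
}

public class MonitoringSetup {
    public static void main(String[] args) throws Exception {
        PaymentStats stats = new PaymentStats();
        // Register the bean with the platform MBean server; the object name
        // ("com.example:type=PaymentStats") is an illustrative choice.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(stats, new ObjectName("com.example:type=PaymentStats"));

        stats.processed++;            // the application updates its counters as it runs
        Thread.sleep(Long.MAX_VALUE); // keep the JVM alive so a monitor can connect
    }
}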

Business Monitoring

Ideally, the application and infrastructure monitoring tooling should feed tools that can aggregate this information and enrich it with information from other sources, to create a comprehensive view of the IT components supporting the business processes.

Recently, tools have become available on z/OS to gather logging and monitoring information and forward it to a central application. Syncsort’s Ironstream and IBM’s Common Data Provider can collect data from different sources, such as system logs, application logs, and SMF data, and stream this to one or more destinations like Splunk or Elastic. With these tools, it is now possible to integrate the available data into a cross-platform aggregated view, as shown in Figure 42. Today, aggregated views are typically implemented in tools like Splunk, the open-source ELK stack with Elastic, or other tools focused on data aggregation, analysis, and visualization.

Security on z/OS


Security has always been one of the strong propositions and differentiators of the mainframe and the z/OS operating system. In this post I will highlight a few of the differentiating factors of the mainframe hardware and the z/OS operating system.

The mainframe provides a number of distinguishing security features in its hardware. In z/OS, a centralized security facility is a mandatory, built-in part of the operating system, and z/OS exploits the security features that the mainframe hardware provides. This chapter will highlight what the central security facility in z/OS is, and how z/OS exploits the unique hardware features of the mainframe.

Centralized security management

The central security management built into z/OS provides a standardized interface for security operations. A few software vendors have implemented this interface in commercial products, thus providing a security management solution for z/OS.

The SAF interface

The main security component of z/OS is the centralized security function called the System Authorization Facility, or SAF. This component provides authentication and authorization functions.

The z/OS operating system itself and the middleware installed on z/OS make use of this central facility. With the SAF functions, z/OS and middleware tools can validate access to the resources that the middleware products need to protect.

A protected resource can be a dataset, a message queue, or a database table, but also a special function or command that is part of the middleware software. By building in API calls to the SAF interface, a middleware product controls access to its sensitive functions and resources.
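The pattern of such a check is always the same: the caller names a resource class, a resource (entity) name, and a requested access level, and SAF answers whether access is allowed. The Java sketch below only illustrates that pattern; the SafCheck wrapper is hypothetical, not an actual z/OS API, and on z/OS the real call would be made through interfaces such as the RACROUTE macro.

// Illustrative only: the shape of a SAF authorization check. The SafCheck
// wrapper below is a hypothetical stand-in, not a real z/OS interface.
public class QueueAccessExample {

    public void openQueueForReading(String user, String queueName) {
        // class = MQQUEUE, entity = the queue name, access level = READ
        boolean allowed = SafCheck.isAuthorized(user, "MQQUEUE", queueName, "READ");
        if (!allowed) {
            throw new SecurityException(user + " may not read " + queueName);
        }
        // ... proceed to open the queue ...
    }
}

// Hypothetical stand-in for the SAF interface.
class SafCheck {
    static boolean isAuthorized(String user, String resourceClass,
                                String entity, String access) {
        return false; // a real implementation would drive a SAF call here
    }
}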

Security products

The SAF interface of the z/OS operating system is just that: a standardized interface. The implementation of the interface is left to software vendors. The SAF interface does not prescribe how security definitions should be stored or administered.

There are three commercial solutions in the market that have implemented the SAF interface: IBM with its security product RACF, and CA/Broadcom with two different tools, ACF2 and Top Secret. All three products provide additional services related to security management, such as administration, auditing, and reporting. All three also define a special role in the organization with the restricted ability to define and change the security rules: the security administrator. The security administrator defines which users and/or groups of users are allowed to access certain resources.

The SAF interface and security products

IBM Enterprise Key Management Foundation

The z/OS operating system is equipped with a tool that IBM calls the IBM Enterprise Key Management Foundation (EKMF). This is a tool that manages cryptographic keys. EKMF is a full-fledged solution for centralized management of cryptographic keys that can be used on the mainframe, but also on other platforms.

Many organizations have dedicated key management infrastructures for different platforms. The EKMF solution allows organizations to instead build one key management solution that can be used for all platforms.

Cryptographic facilities on the mainframe

EKMF and other cryptographic features in z/OS make use of the extensive cryptographic functions built into the mainframe hardware. Traditional encryption facilities have long been a core part of the mainframe hardware. Recently IBM has added innovative features such as pervasive encryption and Data Privacy Passports, now called Hyper Protect Data Controller.

Traditional encryption

The mainframe hardware and software are equipped with the latest encryption facilities that allow for encryption of data and communications in the traditional manner.

What differentiates the mainframe from other platforms is that it is equipped with special processors that accelerate encryption and decryption operations and can enable encryption of high volumes of data.

Pervasive encryption

Pervasive encryption is a new general feature facilitated in the mainframe hardware. With pervasive encryption, data is always encrypted: when stored on disk, but also during communication over networks and internal connections between systems. This encrypted data can only be used by users who are authorized to use the right decryption keys.

Pervasive encryption gives an additional level of security. Even when a hacker has gained access to the system and to the files or datasets, she still cannot use the data because it is encrypted. Similarly, even if you could “sniff” the communications between systems and over the network, this is not sufficient, because the data flowing over communication networks is also always encrypted.

IBM Hyper Protect Data Controller

Another problem occurs when data is replicated from its source on the mainframe to other environments, typically for analysis or aggregation with other data sources. The data that was so well protected on the mainframe has now become available in potentially less controlled environments. For this issue IBM has developed the IBM Hyper Protect Data Controller solution.

With IBM Hyper Protect Data Controller, when a copy of the data is needed, the copy is encrypted, and a piece of information is included in the copy that administers who is authorized to access it. This access scheme can be as detailed as describing who can use which fields in the data, who can see the content of certain fields, and who can see only masked values. A new component on z/OS, the Trust Authority, maintains a registry of all data access definitions.

When the data copy is accessed, the so-called passport controller checks the identity of the person requesting the data access, and authorizations of that person for this copy of the data.

In this way, a copy of the data can be centrally protected, while it can still be copied to different environments.

Multifactor Authentication

Traditional authentication on z/OS relies on a userID/password combination that is validated against the central security registry, as we have seen, in RACF, ACF2, or Top Secret.

However, userID/password authentication is nowadays not considered sufficiently safe anymore. To address this safety issue, multifactor authentication is broadly adopted. For the z/OS platform, IBM has developed a product called Multifactor Authentication for z/OS. Instead of using only the normal password to log on to z/OS, a user must supply a token that is generated by a special authorized device. This device can be a SecurID token device, a smartphone with a special app, or something similar. The key thing is that next to a userID and password, PIN code, or fingerprint, a second thing is needed for the user to prove his identity: the second factor, a special device or authorized app on your phone.
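A widely used form of such a generated token is the time-based one-time password (TOTP, RFC 6238): the device and the authentication server share a secret and each derive a short-lived code from it. The sketch below shows the core calculation as a generic illustration; it does not depict the internals of IBM’s MFA product.

import java.nio.ByteBuffer;
import java.time.Instant;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class TotpSketch {
    // Compute a 6-digit time-based one-time password (RFC 6238 style).
    static String totp(byte[] sharedSecret, long timeStepSeconds) throws Exception {
        long counter = Instant.now().getEpochSecond() / timeStepSeconds;
        byte[] msg = ByteBuffer.allocate(8).putLong(counter).array();

        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(sharedSecret, "HmacSHA1"));
        byte[] hash = mac.doFinal(msg);

        int offset = hash[hash.length - 1] & 0x0F;        // dynamic truncation
        int binary = ((hash[offset] & 0x7F) << 24)
                   | ((hash[offset + 1] & 0xFF) << 16)
                   | ((hash[offset + 2] & 0xFF) << 8)
                   |  (hash[offset + 3] & 0xFF);
        return String.format("%06d", binary % 1_000_000);
    }
}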

Multifactor authentication on z/OS

Modern mainframe application development


In the previous DBAOTM article on DevOps I introduced the traditional development process, which is often still used in mainframe environments. In this post I will present a modern approach to development on the mainframe.

Modern development processes for the mainframe

Requirements for the development process have changed. Applications must be built faster, and it must be possible to change applications more often and more quickly without impacting their quality. In other words, agile development is needed. The only way to bring today’s business needs into modern agile development processes is to automate all build and deploy processes.

A set of principles can then be derived for modern mainframe development processes.

  • All application artefacts are managed in the (or a) Source Code Management tool.
  • The build processes for all artefacts are automated and can be coherently executed.
  • A build can be deployed in any environment. A build has no environment or organization-specific dependencies.
  • The deployment process for a build is fully automated, including the fallback procedure. The deployment process is a coherent process for all application artefacts.

These principles need to be supported by tools and processes that are (re)designed for these purposes. Of course this is not specific to z/OS applications; it is true for any modern IT solution. But given the background I sketched in the previous section, there is a legacy of development processes and tools to take into account, and in many organizations this implies significant technical and organizational changes.

The modern SCM for z/OS

The modern SCM tool for z/OS needs to support all kinds of application artefacts. For the mainframe this means, for one thing, that not only traditional MVS-type artefacts must be supported, like COBOL programs, copybooks, and JCL, but also Unix-type artefacts like Unix scripts and configuration files in z/OS Unix directories. The tools and processes should allow EBCDIC-type artefacts to be created for the z/OS runtime environment, as well as ASCII, Unicode, and binary artefacts.

Modern SCM tools that can manage z/OS artefacts are ISPW from Compuware and RTC from IBM, and a newer option nowadays is Git, for example hosted on GitHub.

Build automation

The modern DevOps process automates the creation of a build. The build process takes the required versions of the application artefacts from the source code management repository and creates a coherent package of these artefacts. This package, also called the build, is deployed in a (test) environment.

A build should be deployable in any runtime environment, even outside your organization. This principle not only enforces standardization of processes and infrastructure in your IT organization, it also allows future deployments in yet unknown environments, for example in the cloud.
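A common way to honor this principle is to externalize all environment-specific values from the build and resolve them at deployment or run time. A minimal Java sketch, with illustrative names, assuming the deployment process sets environment variables:

// Sketch: keep the build environment-neutral by resolving environment-specific
// values (here a Db2 location name) at runtime, for example from environment
// variables set by the deployment process. All names are illustrative.
public class AppConfig {
    public static String db2Location() {
        String loc = System.getenv("APP_DB2_LOCATION");
        if (loc == null) {
            throw new IllegalStateException(
                "APP_DB2_LOCATION not set - was the deployment process run?");
        }
        return loc;
    }
}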

The automated build process itself should be callable through some generic API, so it can be integrated into other automated processes when needed.

Build automation on z/OS can be accomplished with a number of tools. Some of these tools are able to handle the z/OS-specific needs. IBM has two solutions: the Rational build engine and the Jazz build engine. Compuware has capabilities in ISPW. As it stands, all these tools still have some gaps to fill in their coverage of the different artefacts that can make up a z/OS application.

Deployment automation

The modern DevOps process for z/OS also automates the deployment of the application build. The deployment process takes all the artefacts in the build, customizes them for the specific runtime environment, for example by applying naming conventions and environment-specific runtime settings, and deploys the artefacts on the different runtime components in an environment.

The automated deployment process itself should be callable through some generic API, so it can be integrated into other automated processes when needed.

The most important deployment tools available on the market for z/OS are IBM’s UrbanCode Deploy and XebiaLabs’ XL Deploy.

Integration in other pipelines

I have indicated above that the DevOps processes described there must be callable, to use the most generic term I can think of. Since we do not just want to automate the individual pieces of a development process, but the entire chain, this requirement is important.

Only a fully automated Development process – a CI/CD pipeline – can provide optimal speed of development. To achieve this, the integration of build and deployment with other processes like infrastructure provisioning, test data provisioning, and testing is key.

Most of the tools mentioned above have APIs or command-line interfaces that allow integration with CI/CD orchestration tools like Jenkins, Ansible, and others.
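As a tiny illustration of such an integration, the sketch below triggers a Jenkins job over its remote access API from Java. The server URL, job name, and credentials are assumptions, and a real setup needs API tokens or crumbs configured on the Jenkins side.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class TriggerBuild {
    public static void main(String[] args) throws Exception {
        // Assumed Jenkins server, job name, and user:apitoken credentials.
        String auth = Base64.getEncoder()
                .encodeToString("builduser:apitoken".getBytes());

        // Jenkins exposes a remote access API: POST .../job/<name>/build
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://jenkins.example.com/job/zos-app-build/build"))
                .header("Authorization", "Basic " + auth)
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Jenkins responded: " + response.statusCode());
    }
}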

Implications

The agile development process sketched here impacts the way we do other things on the mainframe as well. I will mention a few here.

Full deployments versus delta deployments

The traditional DTAP development process is based on the development of deltas: you deploy only the things that have changed.

To facilitate agile development in z/OS environments, we need to move to a process that supports full application deployments. What the consequences of this change are is not yet fully clear, but I am convinced the old way of working with deltas will not give us the speed and flexibility we need today.

Other impact:

  • Phasing in an application that consists of many more load modules than we have today, while the application remains active, needs to be supported by the middleware tools on z/OS.
  • Applications may need to become smaller. Traditionally, applications on z/OS are defined relatively coarse-grained. We may need to split up applications into smaller, distinguishable, more loosely coupled parts. We might need to reuse some of the goodies of the microservices architecture.

To facilitate agile development, drastic changes in our thinking about mainframe applications are necessary, and in principle no single goodie from the past should be exempt from reconsideration.

Infrastructure provisioning

We have talked about application processes so far, but the agile DevOps process must also be supported by the runtime infrastructure. In the DTAP model, runtime environments are static: defined once and changed only gradually, when functionally needed.

In order to support rapid changes in applications, we must also allow rapid changes in infrastructure. Similar to the build and deploy processes, all infrastructure provisioning must be automated to allow flexible and instant creation and modification of infrastructure for test environments. This also means that environments must be rigorously standardized. Definitions of the infrastructure making up an environment must be treated like code and managed in a source code management system, where they can be properly versioned.

Currently, tool support for infrastructure provisioning is very limited. z/OS includes the tool z/OSMF, which allows the creation of provisioning workflows for the technology-specific creation of z/OS infrastructure.

Furthermore, work is ongoing at IBM and other vendors to extend this lower-level capability and integrate it with infrastructure provisioning tools like Kubernetes and OpenShift. Ansible for z/OS is also quickly emerging. There is still a long way to go, but the first steps have been made.

In a future article I will talk a little bit more on infrastructure provisioning.

Please let me know your thoughts. Always happy to hear from you.

DevOps processes and tools for z/OS


In this post I will discuss a traditional view of the DevOps processes and tools for z/OS, and in the follow-on post I will discuss a somewhat futuristic view. The ideal development situation for z/OS is still work in progress for all of us. However, significant progress has been made over the past few years to change the traditional waterfall-oriented processes and tools for developing applications on z/OS into a modern-day agile way of working.

Traditional DevOps process for development

Before we look at modern development tools for z/OS, let’s first have a look at how application development was traditionally done.

The traditional waterfall is a staged approach that is reflected in the processes and tools

The development process of applications on z/OS traditionally goes through a number of stages, typically called Development, Test, Acceptance, and Production (DTAP).

An application is developed in the Development stage and unit-tested in the Development environment. When that is done, the application moves to the Test stage, from which it is integration-tested in the Test environment. When all is well, the application moves to the Acceptance stage, from which it is acceptance-tested in the Acceptance environment. Finally, for go-live, the application is moved to the Production stage, reflecting the situation in the Production environment.

What you can read from this simplified process description is that every stage in the process also has an environment associated with it. The infrastructure setup for the development process is very much aligned with this waterfall-oriented development process. An application version that has its source code in the Test stage uses the Test environment to validate correct functioning.

Not only does this create obvious source code management problems with parallel development, it also creates a rigid relation between the development process and the physical infrastructure.

Deployments are incremental – the concept of a build does not exist

Another difference between the traditional development process and modern ideas is that the concept of a build did not exist. A build today is a collection of all the application artefacts that are needed to run an application in a runtime environment. To run an application you need an executable, and typically also configuration files, scripts, and definitions.

On the mainframe we get an executable program through a compilation process. For a z/OS application to work, there are typically also some runtime definitions required. These are things like JCL scripts, properties files, database definitions, interface definitions, etcetera. All these artefacts together we nowadays call a build.

Most of the processes to create all the z/OS application artefacts needed for an application were disparate, unique processes. Some technologies allowed for standardization of build processes for certain components, mostly for the compilation processes. But most processes were either manual, or automated with in-house created tools, using whatever technology the organization thought best at the time the need was identified.

In summary, creating an application build as we know it today was impossible, and automation of the development process was very much limited.

Problems with the waterfall model

For a long time, this DTAP approach satisfied most of the needs of the application development process. Quality problems with this way of working were certainly identified, like dependencies on manual processes and lack of standardization. These were tackled in a haphazard manner, through custom-built processes where possible and especially through extremely rigid change processes. And while speed was already a concern, this was more or less acceptable for the clients of the IT departments.

There are a number of tools available on z/OS that support this traditional development model. Almost all of them support the source code management process for DTAP-based development. Endevor from CA/Broadcom, ISPW from Compuware, and ChangeMan from Micro Focus are amongst the most used tools for mainframe SCM. IBM had a free tool, SCLM, but stopped supporting it some years ago. Whilst giving good support for source code management, most of these tools had limited functionality for build and deploy processes.

Integrating z/OS applications with the rest of the world

Many mainframe applications were built in an era where little integration with other applications was needed. Where integrations were needed, this was mostly done through the exchange of files, for example for the exchange of information between organizations.

In the 1990s the dominance of the mainframe applications ended and client-server applications emerged. These new applications required more extensive and real-time integrations with existing mainframe applications. In this period many special integration tools and facilities were built to make it possible to integrate z/OS applications and new client-server applications.

In this chapter I will highlight categories of these integration tools that are available on z/OS, from screen-scraping tools to modern integrations supporting the latest REST API interfaces.

File interfaces

The mainframe was designed for batch processing. Therefore integration via files is traditionally well catered for and straightforward.

You can use multiple options to exchange files between applications on z/OS and other platforms.

Network File System

Network File System (NFS) is a common protocol that you can use to create a shared space where files can be shared between applications. Although it was originally mostly used with Unix operating systems, it is now built into most other operating systems, including z/OS. NFS solutions, however, are usually not a preferred option, due to security and availability challenges.

FTP

The File Transfer Protocol (FTP) is a common protocol to send files over a TCP/IP network to a receiving party, and it is also supported on z/OS. With FTP a script or program can be written to automatically transfer a file as part of an automated process. FTP can be made very secure with cryptographic facilities.

FTP is built into most operating systems, including z/OS.

Managed File Transfer

Managed file transfer is also a facility to send files over a network, but the “Managed” in the name means that a number of additional features are added.

Managed file transfer solutions make file transfers more reliable and manageable. A number of additional operational tasks and security functions related to file exchange are automated. Managed file transfer tools provide enhanced encryption facilities, some form of strong authentication, integration with existing security repositories, handling of failed transfers with resend functionality, reporting of file transfer operations, and more extensive APIs.

On z/OS a number of managed file transfer tools are available as separate products: IBM has Connect:Direct and MQ-FTE, CA/Broadcom has Netmaster file transfer and XCOM, BMC provides Control-M, and there are other, less commonly known tools.

Message queueing

Message queuing is a generic way for applications to communicate with each other in a point-to-point manner. With message queuing, applications remain decoupled, so they are less dependent on each other’s availability and response times. Applications can be running at different times and communicate over networks and systems that may be temporarily down. As we will see in the next section, when using alternative point-to-point protocols like web services, both the applications and the intermediate infrastructure must be available for successful application communication.

The basic notion of message queuing is that an application sends a message to a queue and another application asynchronously reads messages from that queue and (optionally) responds with another message over a queue. Besides the specific asynchronous nature of message queuing, a big advantage is that it can assure message delivery: messages will not get lost, and when the infrastructure is not available, messages remain stored until they can be delivered.

IBM’s MQSeries, or WebSphere MQ as it is called now, is a separate product and one of the most well-known and robust solutions for message queuing available on z/OS.

The open API for messaging called Java Message Service (JMS) is implemented by WebSphere MQ and WebSphere Application Server on z/OS.
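A minimal JMS sketch for sending one message could look as follows. The JNDI names are assumptions that would be defined by your MQ administrator, and error handling is kept to a minimum.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class SendOrder {
    public static void main(String[] args) throws Exception {
        // Look up the connection factory and queue in JNDI. The JNDI names
        // ("jms/OrderCF", "jms/OrderQueue") are illustrative assumptions.
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/OrderCF");
        Queue queue = (Queue) ctx.lookup("jms/OrderQueue");

        Connection conn = factory.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);

            TextMessage msg = session.createTextMessage("{\"orderId\": 4711}");
            producer.send(msg); // MQ stores the message until a consumer reads it
        } finally {
            conn.close();
        }
    }
}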

Applications using Message Queuing

Web services (SOAP, REST)

Web services is the modern technology that enables applications to communicate over the web protocol HTTP, the protocol we also use for browsing the web.

SOAP and REST are two different types of web services. SOAP is a bit older and exchanges XML messages; XML is more resource-intensive because handling it is a complex operation. REST is more modern and lightweight, and today’s API economy is mostly based on REST APIs.

The benefit of integration with web services is that no special infrastructure is needed for applications to integrate, apart from a capable web application server. Integrations are lightweight and can be very loosely coupled.

The downside of web services is that the HTTP protocol does not guarantee message delivery (as opposed to message queuing, as we have seen above). Applications using web services have to implement their own recovery and retry mechanisms to cope with situations where connections are lost.
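As a minimal illustration of such a retry mechanism, the Java sketch below calls a hypothetical REST service and simply retries a few times when the connection fails; a real application would add backoff, timeouts, and idempotency checks.

import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestWithRetry {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/accounts/12345")) // hypothetical API
                .GET()
                .build();

        // HTTP gives no delivery guarantee, so the caller retries on failure.
        for (int attempt = 1; attempt <= 3; attempt++) {
            try {
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println(response.body());
                return; // success
            } catch (IOException e) {
                System.err.println("Attempt " + attempt + " failed: " + e.getMessage());
            }
        }
        throw new IllegalStateException("Service unreachable after 3 attempts");
    }
}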

Today, most modern versions of application middleware on z/OS, like CICS, IMS, WebSphere Application Server, IDMS, and others, support REST and SOAP interfaces.

Applications using Web Services

Enterprise Service Bus

Another form of integration can be achieved through Enterprise Service Bus (ESB) tools. These tools probably give the widest variety of integration options. They can receive and send service requests over a number of different protocols, they can convert messages from and to many formats, and they can orchestrate complex message interactions between multiple applications. Enterprise Service Bus products in the market are Tibco Substation ES and IBM Integration Bus.

ESB solutions can be implemented on z/OS itself, which often has the advantage of easier integration on the z/OS application side, but they can also run on a non-z/OS platform and integrate with z/OS through agent software.

Enterprise Service Bus

Adapters

In many situations it may not be possible to refactor your old mainframe applications. The applications may not be designed properly in a layered manner, the middleware technology may have limited options, skills may not be available, or the risk of changing existing applications is too high. Or there may be other reasons why you do not want to touch the code.

For these situations, application adapters can help in opening up applications. In general, an adapter converts a proprietary middleware protocol or API, like that of CICS, IDMS, or IMS, into a more common API or generic protocol, such as a Java interface, a web service, or a message queuing interface. Some adapters provide the option of converting a proprietary 3270 screen interface into a neat API through screen scraping.
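The essence of an adapter fits in a few lines: a clean, modern interface in front, a proprietary call behind it. In the Java sketch below, both the interface and the legacy invocation are hypothetical placeholders.

// Hypothetical sketch of the adapter idea: modern callers see a plain Java
// interface, while the implementation drives the proprietary legacy protocol.
interface CustomerService {
    String getCustomer(String customerId);
}

class CicsCustomerAdapter implements CustomerService {
    @Override
    public String getCustomer(String customerId) {
        // Build the fixed-format input area the legacy transaction expects...
        byte[] commarea = ("CUSTINQ " + customerId).getBytes();
        // ...invoke the proprietary interface (placeholder for e.g. a CTG call)...
        byte[] reply = invokeLegacyTransaction("CUST", commarea);
        // ...and map the raw reply to something modern callers can use.
        return new String(reply).trim();
    }

    private byte[] invokeLegacyTransaction(String tranId, byte[] input) {
        return new byte[0]; // placeholder for the real middleware call
    }
}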

I will highlight a number of these types of tools here.

Generic functioning of an adapter

CICS Transaction Gateway

CICS Transaction Gateway provides an API for Java and C programs to call a CICS transaction on z/OS.

CICS Transaction Gateway only provides a way to call functionality in CICS; there is no possibility in this tool to invoke an external program from CICS in the reverse direction. CTG is only meant for external programs to call CICS.

CICS Transaction Gateway adapter

IMS Connect

IMS Connect provides a Java API through which you can invoke IMS functions from Java programs. Through IMS Connect you can access IMS transactions as well as data in IMS DB (see section Middleware for z/OS). As such it functions quite similarly to CTG, although the native interfaces are of course different.

z/OS Connect

A recent product from IBM is z/OS Connect. This tool converts a REST API into one or more proprietary backend protocols, like a CICS or IMS transaction or a call to Db2. z/OS Connect also makes it possible to call REST APIs from mainframe applications.

Thus, z/OS Connect provides a bi-directional adapter for REST APIs, through which you can expose and call RESTful APIs from existing z/OS programs in CICS, IMS, Db2, WebSphere Application Server, and MQ.

z/OS Connect adapter

Screen scraping tools

You may have old legacy applications that are built as a silo, have only 3270 user interfaces, and offer no decent program interfaces.

For this problem, screen scraping tools can be a last resort.

The integration problem of an application silo – refactoring is the ideal solution

A screen scraping tool provides the ability to simulate the interaction of a business user behind a screen with the old application’s user interface. The screen scraper tool automates the workflow of the end user by filling in the old application’s screens programmatically. With these automations, such a tool can aggregate these interactions and expose them as higher-level services. These higher-level services can then be invoked through a modern API, such as a web service, by other applications in your organization.

Integration with a screen scraping solution

The big problem with screen-scraping integrations is that you end up with more development artefacts to maintain. Not only do you have the old application to maintain, but you now also need to manage the screen scraping middleware and logic.

Screen-scraping should be considered a (very) temporary solution for a serious issue in your application landscape. Such a solution should be replaced by a strategic integration or new application as soon as possible.

Products like HostBridge, Rocket LegaSuite and IBM Host on Demand provide screen scraping facilities.

Legacy integration suites

There are many integration tools on the market that provide one or more of the forms of adapters that I have discussed in the above. For example, GT Software and Oracle Legacy Adapter provide functionality to bridge native z/OS interfaces including screen interfaces to and from modern applications.

Database access via JDBC, ODBC

So far, we have discussed application integration through application interactions – applications calling one another.

Applications on non-z/OS platforms can alternatively access data in databases on z/OS directly, through the standard data access protocols ODBC and JDBC. All suppliers of database software for z/OS that I know of provide drivers for ODBC and/or JDBC.
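A minimal JDBC sketch for Db2 for z/OS could look as follows; the host, port, location name, table, and credentials are assumptions, and the IBM Db2 JDBC driver must be on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ReadBalance {
    public static void main(String[] args) throws Exception {
        // Db2 for z/OS JDBC URL pattern: jdbc:db2://<host>:<port>/<location-name>
        String url = "jdbc:db2://zos.example.com:446/DB2PROD"; // assumed values
        try (Connection conn = DriverManager.getConnection(url, "myuser", "mypassword");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT BALANCE FROM ACCOUNTS WHERE ACCOUNT_ID = ?")) {
            stmt.setString(1, "12345");
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.println("Balance: " + rs.getBigDecimal("BALANCE"));
                }
            }
        }
    }
}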

Integrating with JDBC and ODBC

From an architectural perspective, however, this is not a preferred solution for integrating applications. Applications should manage their own data and access other applications’ data only through service interfaces, following the principle of loose coupling in application architectures.

Programming languages for z/OS


In this post I will discuss the programming languages you find on z/OS, and what they are generally used for.

COBOL

The COBOL programming language was invented 60 years ago to make programs portable across different computers. The language is best suited to business programs (as opposed to scientific programs).

COBOL is a language that must be compiled into executables, called load modules.

       IDENTIFICATION DIVISION.
       PROGRAM-ID.
           COBPROG.
       ENVIRONMENT DIVISION.
       DATA DIVISION.
       PROCEDURE DIVISION.
           DISPLAY "HELLO WORLD".
           STOP RUN.                   

PL/I

PL/I was developed in the mid-1960s with the aim to create a programming language that could be used for business as well as scientific applications.

Like COBOL, PL/I programs must be compiled into load modules.

   World: Procedure options(main);
          Put List( 'Hello world' );
          End World;

Assembler

Assembler is still around. In the past, business applications were developed using Assembler. Nowadays you should not do that anymore, but there are still a lot of legacy Assembler programs around on the mainframe.

In the old days, assembler was often used to implement tricks to achieve things that were not possible with the standard operating system, or other programming languages. This practice has created a problematic legacy of very technical programs in many mainframe application portfolios.

The modern stance is that Assembler programs should be regarded as severe legacy, because they are hardly maintainable anymore and are a risk for operating system and middleware updates.

Furthermore, we find Assembler programs in modifications to the z/OS operating system and middleware.

z/OS offers a number of points where you can customize the behavior of the operating system. These so-called exit-points oftentimes only have interfaces in Assembler. Like application programs in Assembler, z/OS exits in Assembler are a continuity risk. Not only because nobody knows how to program Assembler anymore, but even more so because these exit points make use of interfaces that IBM may (and wishes to) change at any point in the future.

IBM is actively removing Assembler-based exit points and replacing these where needed with configuration parameters.

The bottom line is that you should remove all home-grown Assembler programs from your z/OS installation.

TEST0001 CSECT               
         STM   14,12,12(13) 
         BALR  12,0         
         USING *,12         
         ST    13,SAVE+4     
         LA    13,SAVE       
         WTO   'HELLO WORLD!'
         L     13,SAVE+4     
         LM    14,12,12(13) 
         BR    14           
SAVE     DS    18F           
         END   

Java

Java is the language invented by a team at Sun in the 1990s with the goal of developing a language that could run on any device. Support for Java on the mainframe was introduced around the beginning of the 21st century.

Java programs are not compiled into load modules. They are compiled into bytecode, which is executed by a special layer that must be installed in the runtime environment, called the Java Virtual Machine.

The execution is therefore far less efficient than that of compiled COBOL and PL/I programs. So inefficient that running Java on the mainframe would be very expensive (see section Understanding the cost of software on z/OS, MLC and OTC). To address this, IBM invented the concept of zIIP specialty engines (see section Specialty engines), which makes running Java on the mainframe actually extremely cheap.

public class HelloWorld {
   public static void main(String[] args) {
      // Prints "Hello, World" in the terminal window.
      System.out.println("Hello, World");
   }
}

C/C++

The C/C++ programming language was added to z/OS in the 1990s as a more mainstream programming language for mainframe applications and tools.

The process of compiling a C source program into a load module is basically the same as it is for COBOL.

#include <iostream>
using namespace std;

int main() 
{
    cout << "Hello, World!";
    return 0;
}

JCL

JCL is the original “scripting” tool for the mainframe. It is hardly a programming language, although it has been enhanced with several features over time.

JCL looks very quirky because it was designed for interpretation by a punch card reader, which you can still see very clearly. The main purpose of JCL is to start a program or a sequence of programs.

Many of the quirky features of JCL have very little use in today’s z/OS programming but are maintained for compatibility reasons.

I mentioned before that there can be tens of thousands of batch jobs running on the mainframe. You should realize that this means you will easily have thousands of JCL “programs” as well to run these jobs.

Nevertheless, we could do with a more accessible, more modern alternative.

//JOBNME5  JOB AB123,PRGRMR,NOTIFY=MYUSER1,MSGLEVEL=(1,1),
//       CLASS=1                   
//RUN       EXEC PGM=COBPROG <- PROGRAM TO RUN  
//* PROGRAM WAS PUT IN HERE --v           
//STEPLIB  DD DISP=SHR,DSN=MYUSER1.LOADLIB 
//SYSPRINT DD SYSOUT=*                    

Rexx

The story goes that the Rexx programming language was created by an IBM developer, Mike Cowlishaw, who was totally fed up with the only available scripting language at that time, the CLIST language. In one night he is said to have developed Rexx. When he showed it to his colleagues the next day, they were immediately very enthusiastic.

On z/OS, Rexx fulfils the same role as shell scripts in Unix environments. It is mostly used by system administrators to automate all kinds of administration tasks.

You can run Rexx interactively under TSO/ISPF, but you can also use it in batch jobs.

Rexx is somewhat similar to PHP, I find. It has the same sort of flexibility (and drawbacks).

/* Main program */
say "Hello World"

Unix shell script

z/OS has a Unix part, which complies with POSIX standards, and hence it also supports a command shell like any Unix flavor. With the shell scripting language you can automate all kinds of Unix processes.

Shell scripts can also be run in batch jobs.

#!/bin/bash
echo "Hello World"

SAS

Many z/OS users exploit the SAS language from the company with the same name. Besides its analytical capabilities, SAS is used for ad hoc programs and reporting.

On the mainframe SAS is often used to process the measurement data that z/OS generates, and create all kinds of usage and performance reports.

proc ds2 libs=work;
data _null_;

  /* init() - system method */
  method init();
    declare varchar(16) message; /* method (local) scope */
    message = 'Hello World!';
    put message;
  end;
enddata;
run;
quit;

Easytrieve

You also regularly find the programming language Easytrieve from CA/Broadcom in z/OS environments. This language is used by application support staff to create ad hoc programs, and by advanced end users to create business reports from application data.

Other languages

There are many other languages available on z/OS, but the ones discussed here are the mainstream languages. Languages like Python and R are emerging for analytical applications, JavaScript for use in Node.js, and PHP for web applications. Rocket Software, the company that supports a ported version of Python for z/OS, also has supported versions of PHP and Perl.

Middleware for z/OS – Database management systems


In the previous post I started the first part of describing the middleware tools available on z/OS, kicking off with the available application servers or transaction managers.

In this part I will discuss the database management systems that can run on z/OS.

Db2

Db2 for z/OS is the z/OS version of IBM’s well-known relational database management system. It is a regular high-end RDBMS, except that it exploits the sysplex capabilities of z/OS.

IDMS/DB

IDMS/DB is the network database management system from CA/Broadcom. A network database uses a special concept to organize data, namely in the form of a network of relationships. Besides some modelling advantages, this way of data access can be extremely fast, but as with hierarchical data models like in IMS, it is more difficult to program for.

IMS/DB

IBM’s IMS/DB is a hierarchical database management system. Data in such a database management system is not structured in tables like in Db2, but in tree-like hierarchies. Where Db2 and other relational databases have the well-known SQL language to access data, in IMS you have a language called DL/I to manipulate data.

The hierarchical data model has some modelling advantages, and data access is extremely fast and efficient. The drawback is that it is more complex to program for.

Datacom/DB

Datacom/DB is a relational database management system from CA/Broadcom.

ADABAS

ADABAS is Software AG’s database management system. ADABAS organizes and accesses data according to relationships among data fields. The relationships among data fields are expressed by ADABAS files, which consist of data fields and logical records. This data model is called an inverted-list model.

Middleware for z/OS – Application Servers


There is a large variety of middleware tools available on z/OS. Some are very similar to the software also available on other platforms, like WebSphere Application Server and Db2, and some are only available on the mainframe, like IMS and IDMS. I will highlight a number of the main middleware tools for z/OS in this chapter.

Application Servers

Application Servers are tools that make it easier to build and run interactive applications. On the mainframe these tools were traditionally called Transaction Managers. A small intermezzo to explain the similarities and get acquainted with the terminology.

Application Servers and Transaction Managers intermezzo

Despite their different name, Application Servers and Transaction Managers achieve the same goal: make it easy to build and run interactive applications. Application Servers gather a set of common functions for these types of applications. These functions include network communications, transaction functionality, features to allow scaling of applications, recovery functions, database connectivity features, logging functionality and much more.

For Java, a standard for these functions is defined in the Java Enterprise Edition (JEE) specification. The z/OS transaction managers all provide a similar set of functions, for multiple programming languages like COBOL, PL/I, C/C++, and Java.

With a modern web application server, the user enters a URL consisting of the name of a server and an identification of the piece of code on that server. For example, a user types http://acme.com/fireworks/index.html in his browser. In this, acme.com is the server name and fireworks/index.html is the piece of code to execute on that server, called the URI. The application server takes the URI, executes the code, and returns a response HTML page.

The traditional transaction managers work in a similar way. First you must make a connection from your terminal to the transaction manager.

Traditionally you did this by typing something like “LOGON APPLID(CICSABC)”. You were then connected to the application server and presented with a screen, on which you typed a transaction code. The transaction code is similar to the URI: it identifies which piece of code to run. The transaction manager executes the code and returns a response screen to the user.

The transaction managers on z/OS nowadays can work in both ways. They still have the traditional interface, which is hardly used for business applications anymore, and they also have a modern web application interface like web application servers.

CICS traditional versus a web application server

Now let’s have a look at what sort of application servers we have on the mainframe.

WebSphere Application Server

IBM’s WebSphere Application Server (WAS) is an application server for Java programs, complying with the JEE Java application standard. WebSphere was one of the first implementations of a Java application server. It was also made available on z/OS.

Initial implementations of WAS on z/OS were very inefficient and had stability issues. After a redesign and the introduction of specialty engines for Java processing (see section Specialty engines), z/OS has become a very cheap platform for JEE applications.

CICS

CICS is the most popular Transaction Manager on z/OS. It was designed for COBOL and PL/I applications, but nowadays it can also run Java applications.

IMS

IMS is a transaction manager like CICS, but it also has a database management component. Although less prominent than CICS, quite a number of very large organizations rely on IMS for their daily core processing.

An interesting fact is that IMS was built for NASA as part of Kennedy’s moon challenge.

IMS has two parts: IMS/TM and IMS/DB. IMS/TM is the IMS Transaction Manager, an application server. IMS/TM is as such functionally similar to CICS. It is also built for COBOL and PL/I, and can now also run Java programs.

IMS/DB is described briefly below.

IDMS

IDMS is a transaction manager and a network database manager, owned by CA/Broadcom.

IDMS, like IMS, also has an application server and a database manager part. IDMS/DC is the transaction manager/application server part. It looks very much like CICS.

IDMS/DB is a network database management system. See below.

ADABAS and NATURAL

NATURAL is Software AG’s fourth-generation application development system that allows you to create, modify, read, and protect data that the DBMS manages. You can have online Natural programs, comparable to CICS transactions, as well as batch Natural programs.

Natural is the application environment that usually uses ADABAS as its backend database management system.

IDEAL and Datacom

Another combination of application server tools that is quite common on mainframes is Datacom/DB and IDEAL. The products are now owned by CA/Broadcom.

IDEAL is a 4GL programming environment, designed for the relational database management system Datacom/DB. IDEAL generates COBOL, which runs in CICS and uses Datacom/DB as a backend store. Although originally built for Datacom/DB, it was later also enabled for IBM Db2.

Modern tools for development and operations


In the previous section I explained that green screen interfaces still exist for administrative tasks. But even for these kinds of work there are modern tools with contemporary interfaces. z/OS itself and almost all middleware running on z/OS can be managed with web-based tools, Eclipse-based tools, or, nowadays more and more, Visual Studio Code-based tools. Furthermore, almost all administration tasks on z/OS can be invoked from external tools through REST APIs. More and more development and operations functionality will be made available only in these modern kinds of tool sets.

The standard Eclipse-based tool for z/OS that you can download for free is called z/OS Explorer. This tool is a desktop client interfacing with z/OS. Many mainframe tools and middleware solutions provide plug-ins for this tool.

Figure – z/OS Explorer

Considering development tools, there are a number of modern options for the mainframe. IBM has developed an Eclipse-based development tool called IBM Developer for z/OS (IDz). The software company Compuware sells a set of tools for mainframe development called Topaz. There are also open source tools and plugins, like the IBM Z Open Editor for Visual Studio Code.

These modern tools provide a development experience for z/OS applications that is very similar to the experience of developing applications for other platforms in Java, PHP, or Python. Developing and debugging mainframe languages such as COBOL and PL/I is supported in these tools, and so is Java. As importantly, they provide plugins for interacting with middleware such as Db2, MQ, CICS, and IMS.

IBM, CA, BMC and other vendors provide many modern tools for the administration of specific middleware in your organization.

Finally, a recent development is the open source project called Zowe. This project is a collaboration of a number of mainframe software vendors and aims to provide an open source software framework for development and operations teams to securely manage, control, script, and develop for z/OS like any other cloud platform.

In separate chapters I will discuss a little bit more about the modern application development and operations architecture and tools for z/OS, and about modern monitoring architecture and tools.

The interface to z/OS and the green screen myth


In the previous posts I have shown you many modern technologies available on z/OS. But still, when you think of the mainframe, you think of a black screen with green characters, which looks cool in the Matrix, but not so much in real life. Where does this green screen image come from?

In this section I will talk a little bit about the origins of the green screens in mainframe technology. I will also show you that these green screens have become as uncommon to use as a terminal in Unix or command prompt in Windows.

Green screens are for administrators and programmers, not end-users

The green screens on the mainframe are user interfaces. In the early days, programmers created their programs on paper, behind their desks. They then entered the programs on punch cards or paper tape. These were the media that were fed into the computer, using a special reader device.

Later, in the 1960s, computers with terminal interfaces were built. With the terminals, users could enter programs and data online. This is the period the green screens originate from. Each computer type had its own terminal technology; mainframe terminal technology was indicated with a number: the 3270 terminal. These terminals originally worked with green letters on a black background and could hold as much as 32 lines of 80 characters (or so). We still refer to these 3270 terminals as green screens.

Modern mainframe applications do not use these terminal interfaces anymore. Applications on the mainframe most often do not even have a user interface anymore. They only expose services, or APIs, to mid-tier or front-end applications. (See the modern mainframe application architecture section.)

Today, therefore, the need for these green screens is limited. Only special system administration tools and application programming tools still have such a low-level interface. And even these are being replaced by tools with more modern interfaces.

System administrators on Windows use the “DOS” command prompt and Unix techies use the Terminal sessions. Similarly, for mainframe techies there is the “green-screen” 3270 terminal.  Actually, the Unix Terminal and Windows Command Prompt are quite rudimentary, compared to the 3270 interface to z/OS.

Green screen application? Technical debt

The days where you had green screen applications are long gone. If you still have them, you should get rid of them.

Most well-architected green-screen applications can be turned into service-oriented applications. The front-end can then be replaced by a modern front-end application.

You may find yourself in the situation where you need to integrate with green-screen applications that have not been so well-designed. I will talk a little bit about that in a separate section Integration with the rest of the world.

In section Application architecture for modern mainframe applications, I describe a reference architecture for modern mainframe applications.

Now, I will describe which tools typically still need 3270 screens.

TSO

What the command prompt is for Windows and the shell is for Unix, TSO is for z/OS: a command-line interface with which you can fire off commands to the operating system to get things done on the computer.

Like the DOS command prompt and the Unix shell, this is a very powerful, but clumsy interface. To provide a more user-friendly interface IBM built the tool ISPF on top of TSO.

ISPF

I will not go into the abbreviation here. What you need to know is that ISPF is a standard part of z/OS. This tool gives the user, nowadays mostly system administrators, a very powerful interface to z/OS.

The editor and the dataset list utility are probably the most used functions in ISPF. With the screen-oriented editor you can edit z/OS datasets. The dataset list utility lets you find the datasets on your z/OS system.

ISPF – Data Set list utility
ISPF Editor

ISPF also provides facilities to extend its features through a programming interface. Many tool vendors provide ISPF tools built on these interfaces as part of their tool installation.

SDSF – or equivalent

SDSF is one of the extensions to ISPF that IBM itself provides as a separate product. This product, or one of its equivalents from other vendors, is an essential tool for system administrators of z/OS installations. SDSF allows support staff to operate z/OS and manage the processes running on z/OS, look at output from processes (jobs) and inspect application and system logs. It is somewhat similar to the Task Manager in Windows.

I talk about SDSF here, which is an IBM tool, but there are tools with equivalent functions from other software vendors, such as IOF, SYSVIEW or (E)JES.