The job below is submitted and then waits; it is activated when there is a change in the input directory /yourdirectory/testfileloc.
The job itself moves the files in the input directory to a processing directory, starts the next "watch" job, and then starts the processing job for the files in the processing directory.
This setup is clearly meant to test the mechanism, and not necessarily to serve as a model for a production situation. In production you might want to use TWS applications and so on, but that is up to your application design, of course.
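The mechanics of the watch job can be sketched in ordinary code. The snippet below is a minimal Python illustration of the move step only, not the actual TWS job: the directory names are the hypothetical ones from the example, and submitting the next watch job and the processing job is left out.

```python
import shutil
from pathlib import Path

# Hypothetical directories mirroring the job's setup.
INPUT_DIR = Path("/yourdirectory/testfileloc")
PROCESSING_DIR = Path("/yourdirectory/processing")

def move_files_for_processing(input_dir: Path, processing_dir: Path) -> list:
    """Move every file from the input directory to the processing
    directory and return the new locations, mimicking the first step
    of the watch job."""
    processing_dir.mkdir(parents=True, exist_ok=True)
    moved = []
    for entry in sorted(input_dir.iterdir()):
        if entry.is_file():
            target = processing_dir / entry.name
            shutil.move(str(entry), target)
            moved.append(target)
    return moved
```

After this step, the real job would resubmit itself as the next watch job and kick off the processing job against the processing directory.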
In the previous posts I have shown you many modern technologies available on z/OS. But still, when you think of the mainframe, you think of a black screen with green characters, which looks cool in the Matrix, but not so much in real life. Where does this green-screen image come from?
In this section I will talk a little bit about the origins of the green screens in mainframe technology. I will also show you that these green screens have become as uncommon to use as a terminal in Unix or command prompt in Windows.
Green screens are for administrators and programmers, not end-users
The green screens on the mainframe are user interfaces. In the early days, programmers created their programs on paper, behind their desks. They then entered the programs on punch cards or paper tape. Those were the media that were fed into the computers, using a special reader device.
Later, in the 1960s, computers with terminal interfaces were built. With the terminals, users could enter programs and data online. This is the period that the green screens originate from. Each computer type had its own terminal technology. Mainframes had a technology indicated with a number: the 3270 terminal. These terminals originally showed green letters on a black background, and could hold 24 lines of 80 characters (more in later models). We still refer to these 3270 terminals as green screens.
Modern mainframe applications do not use these terminal interfaces anymore. Applications on the mainframe most often do not even have a user interface anymore; they only expose services, or APIs, to mid-tier or front-end applications. (See the section on modern mainframe application architecture.)
Today, therefore, the need for these green screens is limited. Only special system administration tools and application programming tools still have such a low-level interface. And even these are being replaced by tools with more modern interfaces.
System administrators on Windows use the "DOS" command prompt, and Unix techies use terminal sessions. Similarly, for mainframe techies there is the "green-screen" 3270 terminal. Actually, the Unix terminal and the Windows command prompt are quite rudimentary compared to the 3270 interface to z/OS.
Green screen application? Technical debt
The days where you had green screen applications are long gone. If you still have them, you should get rid of them.
Most well-architected green-screen applications can be turned into service-oriented applications. The front-end can then be replaced by a modern front-end application.
You may find yourself in the situation where you need to integrate with green-screen applications that have not been so well-designed. I will talk a little bit about that in a separate section Integration with the rest of the world.
In section Application architecture for modern mainframe applications, I describe a reference architecture for modern mainframe applications.
Now, I will describe what tools typically still need 3270 screens.
What the command prompt is for Windows and the shell is for Unix, TSO is for z/OS: a command-line interface with which you can fire off commands to the operating system to get things done on the computer.
Like the DOS command prompt and the Unix shell, this is a very powerful, but clumsy interface. To provide a more user-friendly interface IBM built the tool ISPF on top of TSO.
I will not go into the abbreviations here. What you need to know is that ISPF is a standard part of z/OS. This tool gives the user, nowadays mostly system administrators, a very powerful interface to z/OS.
The editor and the dataset list utility are probably the most-used functions in ISPF. With the screen-oriented editor you can edit z/OS datasets. The dataset list utility lets you find the datasets on your z/OS system.
ISPF also provides facilities to extend its features through a programming interface. Many tool vendors provide ISPF tools built on these interfaces as part of their tool installation.
SDSF – or equivalent
SDSF is one of the extensions to ISPF that IBM itself provides as a separate product. This product, or one of its equivalents from other vendors, is an essential tool for system administrators of z/OS installations. SDSF allows support staff to operate z/OS and manage the processes running on z/OS, look at output from processes (jobs) and inspect application and system logs. It is somewhat similar to the Task Manager in Windows.
I talk about SDSF here, which is an IBM tool, but there are tools with equivalent functions from other software vendors, such as IOF, SYSVIEW or (E)JES.
One of the most distinguishing features of the z/OS operating system is the way you can cluster z/OS systems in a Parallel Sysplex. Parallel Sysplex, or Sysplex in short, is a feature of z/OS that was built in the 90s that enables extreme scalability and availability.
In the previous post we highlighted the z/OS Unix part. Here we will dive into the z/OS Parallel Sysplex.
With Parallel Sysplex you can configure a cluster of z/OS operating system instances. In such a sysplex you can combine the computing power of multiple z/OS instances on multiple mainframe boxes into a single logical z/OS server.
When you run your application on a sysplex, it actually runs on all the instances of the sysplex. If you need more processing power for your applications in a sysplex, you can add CPUs to the instances, but you can also add a new z/OS system to the sysplex.
This makes a z/OS infrastructure extremely scalable. Also, a sysplex isolates your applications from failures of software and hardware components. If a system or component in a Parallel Sysplex fails, the software will signal this. The failed part will be isolated while your application continues processing on the surviving instances in the sysplex.
For a parallel sysplex configuration, a special piece of software is used: a Coupling Facility. This Coupling Facility functions as shared memory and a communication vehicle for all the z/OS members forming a sysplex.
The z/OS operating system and the middleware can share data in the Coupling Facility. The data that is shared is what the members of a cluster need to know about each other, since they are acting on the same data: status information, lock information about resources that are accessed concurrently by the members, and caches of shared data from databases.
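Purely as an illustration of the kind of shared state involved, here is a toy Python model of a lock table such as a Coupling Facility holds for its members. The class and its methods are invented for this sketch; the real Coupling Facility is accessed through low-level system interfaces, not an API like this.

```python
class ToyCouplingFacility:
    """A toy, single-process stand-in for the shared state a Coupling
    Facility holds for sysplex members: lock ownership and a shared
    data cache. Conceptual illustration only."""

    def __init__(self):
        self.locks = {}   # resource name -> owning member
        self.cache = {}   # resource name -> cached shared data

    def acquire_lock(self, member: str, resource: str) -> bool:
        """Grant the lock if it is free or already held by this member."""
        owner = self.locks.get(resource)
        if owner is None or owner == member:
            self.locks[resource] = member
            return True
        return False  # another member holds the lock

    def release_lock(self, member: str, resource: str) -> None:
        """Release the lock, but only if this member actually holds it."""
        if self.locks.get(resource) == member:
            del self.locks[resource]
```

The point of the sketch is that because this state lives in one shared place, every member sees the same locks and the same cached data, which is what lets them safely act on the same databases.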
A Coupling Facility runs in a dedicated, special operating system, in an LPAR of its own, to which even system administrators do not need access. In that sense it is a sort of appliance.
A sysplex with Coupling Facilities is depicted below. There are multiple Coupling Facilities to avoid a single point of failure. The members in the sysplex connect to the Coupling Facilities. I have not included all the required connections in this picture, as that would result in a cluttered view.
Middleware components can make use of the sysplex features provided by z/OS, to create clusters of middleware software.
Db2 can be clustered into a so-called Data Sharing Group. In a Data Sharing Group you can create a database that can process queries on multiple Db2 for z/OS instances on multiple z/OS systems.
Similarly, WebSphere MQ can be configured in a Queue Sharing Group, CICS in a CICSPlex, and IMS in an IMSPlex. Other software, like WebSphere Application Server, IDMS, and Adabas, also uses parallel sysplex functions to build highly available and scalable clusters.
This concept is illustrated in Figure 15. Here you see a cluster setup of CICS and Db2 in a sysplex. Both CICS and Db2 form one logical middleware instance.
You can see that the big benefit of parallel sysplex lies in its generic facilities to build scalable and highly available clusters of middleware solutions. You can achieve similar results on other operating systems, but there every middleware component needs to supply its own clustering features to achieve such a scalable and highly available configuration. This often requires additional components and leads to more complex solutions.
What is unique about a parallel sysplex is that it is a clustering facility that is part of the operating system.
On other platforms you can build clusters of middleware tools as well, but these are always specific solutions and technologies for that piece of middleware. The clustering facilities are part of the middleware. With parallel sysplex, clustering is solved in a central facility, in the z/OS operating system.
An extension to Parallel Sysplex is Geographically Dispersed Parallel Sysplex, GDPS for short. GDPS provides an additional solution to assure your data remains available in case of failures. With GDPS you can make sure that even in the case of a severe hardware failure, or even a whole data centre outage, your data remains available in a secondary datacentre, with minimal to no disruption of the applications running on z/OS.
In a GDPS configuration, your data is mirrored between storage systems in the two data centres. One site has the primary storage system, the storage system in the other data centre receives a copy of all updates. If the primary storage system, or even data centre fails, GDPS automatically makes the secondary storage device the primary, usually without disrupting any running applications.
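To make the mirroring idea concrete, here is a toy Python sketch of synchronous mirroring with failover. All names here are invented for the illustration; GDPS itself works at the storage-subsystem level, not in application code.

```python
class ToyStorage:
    """A toy storage system at one site."""
    def __init__(self, site: str):
        self.site = site
        self.data = {}
        self.available = True

class ToyMirroredDisk:
    """Toy illustration, loosely in the spirit of a GDPS setup: every
    write goes to the primary and is copied to the secondary; on
    primary failure the secondary is promoted. Conceptual sketch only,
    not how GDPS is implemented."""

    def __init__(self, primary: ToyStorage, secondary: ToyStorage):
        self.primary = primary
        self.secondary = secondary

    def write(self, key: str, value: str) -> None:
        if not self.primary.available:
            self._failover()
        self.primary.data[key] = value
        if self.secondary.available:
            self.secondary.data[key] = value  # synchronous copy

    def read(self, key: str) -> str:
        if not self.primary.available:
            self._failover()
        return self.primary.data[key]

    def _failover(self) -> None:
        # Promote the secondary site to primary.
        self.primary, self.secondary = self.secondary, self.primary
```

Because every write has already been copied to the secondary site, promoting it after a failure loses no committed data, which is the essence of what the real setup provides.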
Recently I had to get some people started with a few utilities, and I thought I would share the material here. It will be spread over the next few posts for searchability reasons.
There are more roads leading to Rome, as we say here, so please feel free to share your variants in a comment. (Unfortunately I need to manually curate comments to filter out the spam, which, even with this small site, is overwhelming.)