Filewatch utility – file triggering

  • Post category:JCLUtilities
  • Reading time:2 mins read

The filewatch utility is not a well-known utility, but it can be very handy. I have used it especially when I needed to process a Unix file after it had been copied to a certain directory.

This is the documentation about the filewatch utility:

https://www.ibm.com/support/knowledgecenter/SSRULV_9.3.0/com.ibm.tivoli.itws.doc_9.3/zos/src_man/eqqr1hfszfstrig.htm

The job below is submitted and then waits; it is activated when there is a change in the input directory /yourdirectory/testfileloc.

The job itself moves the files in the input directory to a processing directory, starts the next “watch” job, and then starts the processing job for the files in the processing directory.

The setup is clearly meant to test the mechanism, and not necessarily a model for a production-like situation. In a production situation you might want to use TWS applications etc., but that is up to your application design of course.

I hope this helps.

//* Wait for a file event in the watched directory
//* (see the IBM documentation linked above for the parameter meanings)
//STEP01  EXEC PGM=EQQFLWAT,PARM='/-c wcr -dea 900 -i 20
//             -r 123 -t 3 -fi /yourdirectory/testfileloc'
//*
//* Register and move the files to the processing directory
//RUNSCRPT EXEC PGM=IKJEFT01,REGION=0M
//STDOUT   DD SYSOUT=*
//STDERR   DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
 BPXBATSL SH  -
   mv /yourdirectory/testfileloc/* /yourdirectory/testfileloc/processing
/*
//*
//* Then kick off separate jobs through the internal reader
//* - Next filewatch job
//*
//SUBJOB   EXEC PGM=IEBGENER
//SYSIN    DD DUMMY
//SYSPRINT DD DUMMY
//SYSUT1   DD DISP=SHR,DSN=YOURPDS.JCL(EQQFLWAT)
//SYSUT2   DD SYSOUT=(A,INTRDR)
//*
//* - Processing job
//*
//SUBJOB2  EXEC PGM=IEBGENER
//SYSIN    DD DUMMY
//SYSPRINT DD DUMMY
//SYSUT1   DD DISP=SHR,DSN=YOURPDS.JCL(PROCJOB)
//SYSUT2   DD SYSOUT=(A,INTRDR)
//*

The interface to z/OS and the green screen myth

  • Post category:DBAOTM
  • Reading time:6 mins read

In the previous posts I have shown you many modern technologies available on z/OS. But still, when you think of the mainframe, you think of a black screen with green characters, which looks cool in the Matrix, but not so much in real life. Where does this green screen image come from?

In this section I will talk a little bit about the origins of the green screens in mainframe technology. I will also show you that these green screens have become as uncommon to use as a terminal in Unix or a command prompt in Windows.

Green screens are for administrators and programmers, not end-users

The green screens on the mainframe are user interfaces. In the early days programmers created their programs on paper, behind their desks. They then entered the programs on punch cards or paper tape. These were the media that were fed into the computers, using a special reader device.

Later, in the 1960s, computers with terminal interfaces were built. With the terminals, users could enter programs and data online. This is the period the green screens originate from. Each computer type had its own terminal technology. The mainframe's terminal technology was indicated with a number: the 3270 terminal. These terminals originally worked with green letters on a black background, and could hold typically 24 or 32 lines of 80 characters, depending on the model. We still refer to these 3270 terminals as green screens.

Modern mainframe applications do not use these terminal interfaces anymore. Applications on the mainframe most often do not even have a user interface anymore. They only expose services, or APIs, to mid-tier or front-end applications. (See the modern mainframe application architecture section.)

Today, therefore, the need for these green screens is limited. Only special system administration tools and application programming tools still have such a low-level interface. And even these are being replaced by tools with more modern interfaces.

System administrators on Windows use the “DOS” command prompt and Unix techies use the Terminal sessions. Similarly, for mainframe techies there is the “green-screen” 3270 terminal.  Actually, the Unix Terminal and Windows Command Prompt are quite rudimentary, compared to the 3270 interface to z/OS.

Green screen application? Technical debt

The days when you had green screen applications are long gone. If you still have them, you should get rid of them.

Most well-architected green-screen applications can be turned into service-oriented applications. The front-end can then be replaced by a modern front-end application.

You may find yourself in the situation where you need to integrate with green-screen applications that have not been so well-designed. I will talk a little bit about that in a separate section Integration with the rest of the world.

In section Application architecture for modern mainframe applications, I describe a reference architecture for modern mainframe applications.

Now, I will describe the tools that typically still need 3270 screens.

TSO

What the command prompt is for Windows and the shell is for Unix, the TSO tool is for z/OS: a command-line interface with which you can fire off commands to the operating system to get things done on the computer.
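
To give you an idea, below are a few commonly used TSO commands; a minimal sketch, in which MYDD and the dataset names are placeholders for your own.

 LISTDS 'YOUR.MVS.DATASET'
 LISTCAT LEVEL(YOUR)
 ALLOC FI(MYDD) DA('YOUR.MVS.DATASET') SHR

The first command shows the attributes of a dataset, the second lists the catalog entries under a prefix, and the third allocates a dataset to a ddname for subsequent commands.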

Like the DOS command prompt and the Unix shell, this is a very powerful, but clumsy interface. To provide a more user-friendly interface IBM built the tool ISPF on top of TSO.

ISPF

I will not go into the abbreviations here. What you need to know is that ISPF is a standard part of z/OS. This tool gives the user, nowadays mostly system administrators, a very powerful interface to z/OS.

The editor in ISPF and the dataset list utility (the famous option 3.4) are probably the most used functions in ISPF. With the screen-oriented editor you can edit z/OS datasets. The dataset list utility lets you find the datasets on your z/OS system.

ISPF – Data Set list utility
ISPF Editor

ISPF also provides facilities to extend its features through a programming interface. Many tool vendors provide ISPF tools built on these interfaces as part of their tool installation.

SDSF – or equivalent

SDSF is one of the extensions to ISPF that IBM itself provides as a separate product. This product, or one of its equivalents from other vendors, is an essential tool for system administrators of z/OS installations. SDSF allows support staff to operate z/OS and manage the processes running on z/OS, look at output from processes (jobs) and inspect application and system logs. It is somewhat similar to the Task Manager in Windows.
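
To give an impression, a few of the commands you would type on the SDSF command line; a sketch, in which the job name and user ID are placeholders:

 DA              display the active jobs and address spaces
 ST              display the status of jobs
 O               display the output queue
 LOG             browse the system log
 PREFIX PAYROLL* filter the panels on job name
 OWNER MYUSER    filter the panels on owner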

I talk about SDSF here, which is an IBM tool, but there are tools with equivalent functions from other software vendors, such as IOF, SYSVIEW or (E)JES.

Parallel sysplex

One of the most distinguishing features of the z/OS operating system is the way you can cluster z/OS systems in a Parallel Sysplex. Parallel Sysplex, or Sysplex for short, is a feature of z/OS, built in the 90s, that enables extreme scalability and availability.

In the previous post we highlighted the z/OS Unix part. Here we will dive into the z/OS Parallel Sysplex.

A cluster of z/OS instances

With Parallel Sysplex you can configure a cluster of z/OS operating system instances. In such a sysplex you can combine the computing power of multiple z/OS instances on multiple mainframe boxes into a single logical z/OS server.

When you run your application on a sysplex, it actually runs on all the instances of the sysplex. If you need more processing power for your applications in a sysplex, you can add CPUs to the instances, but you can also add a new z/OS system to the sysplex.

This makes a z/OS infrastructure extremely scalable. Also, a sysplex isolates your applications from failures of software and hardware components. If a system or component in a Parallel Sysplex fails, the software will signal this. The failed part will be isolated while your application continues processing on the surviving instances in the sysplex.
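
As a small illustration, an operator can display the members of a sysplex with the DISPLAY XCF command, from the console or from SDSF with a / prefix:

 D XCF,SYSPLEX,ALL

This lists the sysplex name and the status of each member system.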

Special sysplex components: the Coupling Facility

For a parallel sysplex configuration, a special piece of software is used: the Coupling Facility. This Coupling Facility functions as shared memory and a communication vehicle for all the z/OS members forming a sysplex.

The z/OS operating system and the middleware can share data in the Coupling Facility. The type of data that is shared are the things that members of a cluster should know about each other since they are acting on the same data: status information, lock information about resources that are accessed concurrently by the members, and cached shared data from databases.

A Coupling Facility runs a dedicated special operating system, in an LPAR of its own, to which even system administrators do not need access. In that sense it is a sort of appliance.

A sysplex with Coupling Facilities is depicted below. There are multiple Coupling Facilities to avoid a single point of failure. The members in the sysplex connect to the Coupling Facilities. I have not included all the required connections in this picture, as that would make for a cluttered view.

A parallel sysplex

Middleware exploits the sysplex functions

Middleware components can make use of the sysplex features provided by z/OS, to create clusters of middleware software.

Db2 can be clustered in a so-called Data Sharing Group. In a Data Sharing Group you can create a database that can process queries on multiple Db2 for z/OS instances on multiple z/OS systems.

Similarly, WebSphere MQ can be configured in a Queue Sharing Group, CICS in a CICSPlex, and IMS in an IMSPlex, and other software like WebSphere Application Server, IDMS and Adabas uses parallel sysplex functions to build highly available and scalable clusters.

This concept is illustrated in the figure below. Here you see a cluster setup of CICS and Db2 in a sysplex. Both CICS and Db2 form one logical middleware instance.

A parallel sysplex cluster with Db2 and CICS
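
As a concrete example of such a cluster: on a Db2 Data Sharing Group you can display the members of the group and their status with the Db2 command below; a sketch, to be issued against your own Db2 subsystem:

 -DISPLAY GROUP DETAIL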

You can see that the big benefit of parallel sysplex lies in its generic facilities to build scalable and highly available clusters of middleware solutions. You can achieve similar solutions on other operating systems, but there every middleware component needs to supply its own clustering features to achieve such a scalable and highly available configuration. This often needs additional components and leads to more complex solutions.

How is this different from other clustering technologies?

What is unique about a parallel sysplex is that it is a clustering facility that is part of the operating system.

On other platforms you can build clusters of middleware tools as well, but these are always specific solutions and technologies for that piece of middleware. The clustering facilities are part of the middleware. With parallel sysplex, clustering is solved in a central facility, in the z/OS operating system.

GDPS

An extension to Parallel Sysplex is the Geographically Dispersed Parallel Sysplex, GDPS for short. GDPS provides an additional solution to ensure your data remains available in case of failures. With GDPS you can make sure that even in the case of a severe hardware failure, or even a whole data centre outage, your data remains available in a secondary data centre, with minimal to no disruption of the applications running on z/OS.

In a GDPS configuration, your data is mirrored between storage systems in the two data centres. One site has the primary storage system; the storage system in the other data centre receives a copy of all updates. If the primary storage system, or even the whole data centre, fails, GDPS automatically makes the secondary storage system the primary, usually without disrupting any running applications.

Code page conversion of a file in a batch job

  • Post category:JCLUtilities
  • Reading time:2 mins read

In this post, a sample of how to perform a code page conversion of z/OS Unix files in a batch job. The different iconv utility lines show the invocation for different code page combinations.

//*
//STEP1    EXEC PGM=BPXBATCH
//STDERR   DD SYSOUT=*
//STDPARM  DD *
 SH iconv -f UTF-8 -t IBM-1140 < inutf8 > outebc
/*
//*
//* Alternative invocations for other code page combinations:
//*
//*  SH iconv -f IBM-1140 -t IBM-1252 < inebc > out1252
//*  SH iconv -f IBM-1252 -t IBM-1140 < in1252 > out1140
//*  SH iconv -f IBM-1140 -t UTF-8 < inebc > oututf8
//*
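
On a related note: z/OS Unix files can carry a code page tag that many tools honor. A minimal sketch, using the output file from the job above as an example:

 ls -T outebc
 chtag -tc IBM-1140 outebc

The first command shows the current file tag; the second tags the file as text in code page IBM-1140.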

Copying Unix files to MVS Datasets

  • Post category:JCLUtilities
  • Reading time:1 mins read

As mentioned in the previous post, here is another sample. This one copies a z/OS Unix file to an MVS dataset, again using OCOPY.

//STEP5    EXEC PGM=IKJEFT01
//INUNIX   DD PATH='/mydir/infile.txt',PATHOPTS=(ORDONLY)
//OUTMVS   DD DSN=TEST.MVS.OUTDS,DISP=SHR
//*
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
 OCOPY INDD(INUNIX) OUTDD(OUTMVS) TEXT CONVERT(YES) PATHOPTS(USE)
/*
 

Copying MVS Datasets to Unix files

  • Post category:JCLUtilities
  • Reading time:1 mins read

Recently I had to get some people started with a few utilities. I thought I would share these here. They will be in the next few posts, for searchability reasons.

There are more roads that lead to Rome, as we say here, so please feel free to share your variants in a comment. (I unfortunately need to manually curate comments to filter out the spam, which even for this small site is overwhelming.)

//STEP1    EXEC PGM=IKJEFT01
//INMVS    DD DSN=TEST.TRADMVS.DATA,
//            DISP=SHR
//OUTFILE  DD FILEDATA=TEXT,
//            PATHOPTS=(OWRONLY,OCREAT,OTRUNC),
//            PATHMODE=SIRWXU,
//            PATH='/mydir/myunixfile.txt'
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSTSIN  DD *
 OCOPY INDD(INMVS) OUTDD(OUTFILE)
/*
//*

Here’s a link to the IBM documentation on OCOPY.

Running an MVS or TSO Rexx program from the z/OS Unix environment

  • Post category:Rexx
  • Reading time:2 mins read

You probably know that you can use Rexx programs in z/OS Unix.

What you may not know is that you can also run a TSO or MVS Rexx program from the z/OS Unix environment.

There is a Unix command called tso for this. It works as simply as this:

tso -t "exec 'YOUR.MVS.REXXPDS(TESTREXX)' EXEC"

will execute your TESTREXX program.

You may want or need to allocate TSO datasets or other datasets in order to execute the Rexx.

You can simply allocate these through the export command.

export ddname=YOUR.MVS.DATASET

You can add these exports to your shell script, or add them to your .profile.
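
Putting it together, a minimal sketch; the SYSEXEC ddname and the dataset names are placeholders for your own:

 # allocate a dataset to the ddname SYSEXEC (placeholder names)
 export SYSEXEC=YOUR.MVS.REXXPDS
 # then run the Rexx program through the tso command
 tso -t "exec 'YOUR.MVS.REXXPDS(TESTREXX)' EXEC"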