In my previous post I shared one of the many ways to execute an operator command from a batch job, using Rexx and SDSF. With that solution you can embed the commands in more complex business logic, and use Rexx variables to dynamically establish values for the commands to be issued. That is of course a bit cumbersome if you just want to fire off a fixed operator command. Therefore, here is the simplest way to execute an operator command:
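A minimal sketch of that simplest form: a JCL COMMAND statement. The job name, accounting information and the display command below are placeholders.

```jcl
//OPERCMD  JOB (ACCT),'ISSUE CMD',CLASS=A,MSGCLASS=X
//* The COMMAND statement issues the operator command when the job
//* is interpreted, before any step runs.
// COMMAND 'D A,L'
//* A do-nothing step so the job has something to execute.
//STEP1    EXEC PGM=IEFBR14
```

Because the command is issued at interpretation time, this form cannot react to step results; for that, the Rexx and SDSF approach remains the better fit.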
By the way I have started a repository on GitHub on which I will share assets in the future.
Reg Harbeck wrote an excellent article in Destination z, Overcoming IBM Z Inertia, in which he encourages IBM Z (mainframe) users to take action on modernizing their mainframe.
The path of action Harbeck describes is to assign new mainframers (RePro’s) the task of finding and documenting what the current mainframe solutions in place are expected to do, and of working with the organisation to see what needs to be retired, replaced or contained.
In other words, task a new generation with the mainframe portfolio renewal, and do not leave this to the old generation, who are “surviving until they retire while rocking the boat as little as possible” (hard words from Harbeck, but it is time to get people in action).
In addition to the general approach Harbeck describes, I think it is important to secure senior management support at as high a level as possible. Doing so prevents the priority of this program from being too easily watered down by day-to-day priorities, and ensures that perceived or real “impossibilities” and roadblocks are moved out of the way resolutely. So:
Make modernization a senior management priority. Separate it organizationally from the KSOR (Keep the Show On the Road) activities, to make sure modernization priorities compete as little as possible with KSOR activities.
Appoint a senior management exec and a senior technical exec with a strong Z affiliation to mentor, support and guide the young team from an organisational and strategic perspective.
Have a forward-thinking, strong and respected senior mainframe specialist support the team with education and coaching (not do it for them).
It will be an investment and, according to the “survivors”, it will never be as efficient as before, but one very important thing it will be: fit for the future.
This year marks 60 years since COBOL was delivered as one of the first general-purpose, cross-platform programming languages.
On 8 January 1960 the ‘COBOL Executive Committee’ formally approved the design of the programming language “COBOL 60”.
One of the very early adopters of COBOL in the Netherlands, and a long-time member of the COBOL standards committee, Wim Ebbinkhuijsen, gave a very nice talk at the event organized by Ordina, walking through the history of COBOL over the past 60 years. Coming from such a close observer of and influencer on the programming language, the talk gives great insight into this recent history of computing. Slides can be found here.
Some time ago I did a short summary presentation on my experience with replication solutions for Db2 on z/OS. The pictures and text are quite generic, so I thought it might be worthwhile sharing the main topics here. The picture below summarizes the options visually:
Queue replication
Synchronizes tables. The synchronization process on the capture side reads the Db2 transaction log, and puts the updates for which a “subscription” is defined on a queue. On the apply side, the tool retrieves the updates from the queue and applies them to the target database.
SQL replication
Also synchronizes tables. In this case the capture process stores the updates in an intermediate or staging table, from which the apply process takes the updates and applies them to the target tables.
Data Event Publishing
Takes the updates to the tables for which a subscription is defined and produces a comma-delimited or XML message from them, which is put on a queue. The consumer of the message can be any user-defined program.
Change Data Capture
CDC provides a flexible solution that can push data updates to multiple forms of target media, whether tables, messages or an ETL tool.
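Conceptually, all four options share the same capture-and-apply pattern. The toy sketch below illustrates it with an in-memory queue standing in for the MQ queue and a dict standing in for the target tables; the table and column values are made up for the example.

```python
from queue import Queue

# Conceptual sketch of the capture/apply pattern behind Queue Replication.
# In the real product the "log" is the Db2 transaction log and the queue
# is an MQ queue; here both are simple in-memory stand-ins.

def capture(log_records, subscriptions, q):
    """Read log records and put subscribed updates on the queue."""
    for rec in log_records:
        if rec["table"] in subscriptions:
            q.put(rec)
    q.put(None)  # end-of-stream marker

def apply_updates(q, target):
    """Take updates from the queue and apply them to the target tables."""
    while (rec := q.get()) is not None:
        target.setdefault(rec["table"], {})[rec["key"]] = rec["row"]

log = [
    {"table": "ORDERS", "key": 1, "row": ("widget", 3)},
    {"table": "AUDIT",  "key": 9, "row": ("ignored",)},   # no subscription
    {"table": "ORDERS", "key": 1, "row": ("widget", 5)},  # later update wins
]
q, target = Queue(), {}
capture(log, {"ORDERS"}, q)    # only ORDERS has a subscription
apply_updates(q, target)
print(target)  # {'ORDERS': {1: ('widget', 5)}}
```

The essential properties are visible even in this toy: only subscribed tables flow through, and updates are applied in log order, so the target converges on the source.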
Requirements
After my short summary, we dug a little into the requirements for the specific problem this team was aiming to address. They needed:
A lean operational database for the primary processes.
Ad-hoc and reporting queries on archival tables, where data in the archive tables can be kept over several years.
The amount of data is relatively large: the solution should support tens to hundreds of millions of database updates per day, with peaks of tens of millions in an hour.
The target database management system was not decided yet; it could be Db2 or Oracle.
So a solution should replicate the operational database to an archive database, and the data must be very current, demanding near-realtime synchronization.
We focused a bit on the Queue Replication solution. The target DBMS for Queue Replication can be Db2, Oracle or SQL Server (and a few more). Furthermore, in my experience this solution can support:
High volumes in peak periods: millions of rows inserted/updated in a short period of time.
Latency can remain within seconds, even in peak periods – this does require tuning of the solution, such as spreading messages over queues.
For selected tables you can specify suppress deletes, which allows for building up historical data.
There are a few concerns in the Queue Replication solution:
Data model changes will require coordination of changes in source, Queue Replication configuration and target data model.
Very large transactions (not committing often enough) may be a problem for Queue Replication (and also a very bad programming practice).
The most useful ISPF EDIT macro ever. I think it was written by Paul van Goethem from Belgium, somewhere before 1993. It quickly spread through many sites.
Point your cursor at a dataset name in a file you are editing, and executing the macro will launch an EDIT session of that file.
I always put it under PF key PF4, which by default has the not very useful value RETURN.
Note that the value of the variable INVALID below may be corrupted. It should contain any character that is not valid as part of a dataset name. The binary values it can contain are not very portable.
/****************************REXX***********************************/
/* FUNCTION: RECURSIVE EDIT OR BROWSE VIA DATASET NAME AND/OR */
/* MEMBER SPECIFIED VIA CURSOR-SENSITIVE EDIT MACRO */
/* FORMATS ACCEPTED: */
/* DATA.SET.NAME : EDIT/BROWSE OF THIS */
/* DATA.SET.NAME(MEMBER) : EDIT/BROWSE THIS */
/* MEMBER : EDIT/BROWSE IN SAME LIBRARY AS INDEX */
/****************************REXX***********************************/
ADDRESS ISPEXEC
/* CHARACTERS NOT VALID IN A DATASET NAME; THE ORIGINAL STRING */
/* ALSO CONTAINED NON-PRINTABLE CHARACTERS LOST IN TRANSFER    */
INVALID = ",'\<>:;+!%-="
ADDRESS ISREDIT 'MACRO (FUNCTIE)'
FUNCTIE = TRANSLATE(FUNCTIE)
IF SUBSTR(FUNCTIE,1,1)='B'
THEN FUNCTIE='BROWSE'
ELSE FUNCTIE='EDIT'
LIN=0
ADDRESS ISREDIT '(LIN,COL) = CURSOR'
ADDRESS ISREDIT '(CLINE) = LINE 'LIN
/* FIND CURRENT WORD */
T=SUBSTR(CLINE,1,COL)
T =TRANSLATE(T,' ',INVALID)
Y=LASTPOS(' ',T)
IF Y=0
THEN T=CLINE
ELSE T=SUBSTR(CLINE,Y+1)
PARSE VAR T WOORD .
WOORD =TRANSLATE(WOORD,' ',INVALID)
"CONTROL ERRORS RETURN"
IF INDEX(WOORD,'(') /= 0 THEN DO /* TAKE DSN IF SPECIFIED */
PARSE VAR WOORD DSNAME '(' MEMBER ')'
FUNCTIE" DATASET('"SPACE(DSNAME)"("MEMBER")')"
FRC=RC
END
ELSE DO
IF INDEX(WOORD,'.')/=0 THEN DO
PARSE VAR WOORD DSNAME .
FUNCTIE" DATASET('"SPACE(DSNAME)"')"
END
ELSE DO
ADDRESS ISREDIT "(DSNAME) = DATASET"
WOORD = SPACE(TRANSLATE(WOORD,' ','.()'))
FUNCTIE" DATASET('"DSNAME"("WOORD")')"
END
FRC=RC
END
ADDRESS ISPEXEC "CONTROL ERRORS CANCEL"
IF FRC > 4 & SYMBOL('ZERRMSG') /= 'LIT' THEN DO
MSG= ZERRMSG':'ZERRLM
"SETMSG MSG(ZOM000A)"
END
RETURN
Again a simple solution for a common problem: how to run a Db2 query from a batch job. Here we use the utility DSNTEP2, which is provided for this purpose with the Db2 installation.
In the STEPLIB, specify the names of your Db2 runtime libraries.
In the SYSTEM (xxxx) clause specify your Db2 subsystem.
The SQL under the SYSIN DD statement can be supplied in-stream, or from a dataset, as shown in the commented-out alternative below.
//GO       EXEC PGM=IKJEFT01,DYNAMNBR=20
//STEPLIB  DD DSN=SYS2.SDSNEXIT,DISP=SHR
//         DD DSN=SYS2.SDSNLOAD,DISP=SHR
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
 DSN SYSTEM(xxxx)
 RUN PROG(DSNTEP2)
 END
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//*SYSIN   DD DISP=SHR,DSN=YOURDSN.DB2.SQL(YOURSQL)
//SYSIN    DD *
 SELECT * FROM YOURTABLE;
/*
IBM has announced its new mainframe box, not unexpectedly called z15. The z15 is a huge machine, and I would summarize this evolution as “bigger and better”, without being disrespectful. Nevertheless, the accompanying announcement of z/OS 2.4, the new release of the flagship operating system for the z15, is massively interesting.
The most eye-catching new feature in z/OS 2.4 is z/OS Container Extensions. With this feature you can run Linux Docker containers under z/OS, next to your existing z/OS applications. Making your containers part of your z/OS infrastructure in this way has some powerful advantages:
Very easy integration of containers in the Continuous Availability and Disaster Recovery solution for your z/OS platform provided by GDPS. I think this is by far the most powerful benefit: you can provide Linux on Z solutions even more easily than through a native Linux on Z deployment.
Exploit the capabilities of network virtualization of z/OS.
Optimized integration with z/OS workloads.
The second important item to look at is the evolution of z/OSMF. Contrary to what IBM Marketing tells us, in my opinion it is not Container Pricing, and not Open Data Analytics on z/OS. The development of z/OSMF is more important because it is fundamentally changing the consumability of the platform, turning it into the most open computing platform in the market through the availability of REST APIs on basically any resource on the platform, and fundamentally transforming the way the platform is managed.
It is exciting to see how the mainframe is changing at the moment. The movements of the last five years are really turning the mainframe from a self-oriented platform into one of the most open and accessible technologies on the market.
The only thing still missing, in my opinion, is an affordable, public and easily accessible facility for the platform, to boost the collaborative community around it.
Ok, another short one: just a small JCL script with a working example of the definition of a user catalog. It accompanies the example of the alias definition for a user catalog that can be found here.
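A minimal sketch of such a definition, using IDCAMS. The catalog name, volume and space allocation below are placeholder values to be replaced by your own naming and storage standards.

```jcl
//DEFUCAT  JOB (ACCT),'DEFINE UCAT',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE USERCATALOG -
         (NAME(UCAT.EXAMPLE) -
          ICFCATALOG -
          VOLUME(VOL001) -
          CYLINDERS(10 5))
/*
```

The matching alias definition (DEFINE ALIAS with RELATE pointing at this catalog name) then routes dataset names under that high-level qualifier to this catalog.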