A very short summary of replication solutions (for Db2)

  • Post category:Db2
  • Reading time:4 mins read

Some time ago I did a short summary presentation on my experience with replication solutions for Db2 on z/OS. The pictures and text are quite generic, so I thought it might be worthwhile to share the main topics here. The main options are summarized below.

Queue replication

Synchronizes tables. The capture process reads the Db2 transaction log and puts the updates for which a “subscription” is defined on a queue. On the apply side, the tool retrieves the updates from the queue and applies them to the target database.

SQL replication

Also synchronizes tables. In this case the capture process stores the updates in an intermediate or staging table, from which the apply process takes the updates and applies them to the target tables.
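
To make the staging pattern concrete, here is a minimal sketch of what the apply side conceptually does, written as a plain batch SQL job (the table names STAGING.ORDERS_CD and ARCHIVE.ORDERS are hypothetical, and the real product uses its own control schema and apply program rather than this simplified SQL):

//APPLY    EXEC PGM=IKJEFT01,DYNAMNBR=20
//STEPLIB  DD DSN=SYS2.SDSNEXIT,DISP=SHR
//         DD DSN=SYS2.SDSNLOAD,DISP=SHR
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
 DSN SYSTEM (xxxx)
 RUN PROG (DSNTEP2)
 END
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
 -- Copy captured changes from the staging table to the target,
 -- then flag them as applied (hypothetical tables and columns)
 INSERT INTO ARCHIVE.ORDERS (ORDER_ID, STATUS, CHANGED_AT)
   SELECT ORDER_ID, STATUS, CHANGED_AT
   FROM STAGING.ORDERS_CD
   WHERE APPLIED = 'N';
 UPDATE STAGING.ORDERS_CD
   SET APPLIED = 'Y'
   WHERE APPLIED = 'N';
/*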

Data Event Publishing

Takes the updates to the tables for which a subscription is defined and produces a comma-delimited or XML message from them, which is put on a queue. The consumer of the message can be any user-defined program.

Change Data Capture

CDC provides a flexible solution that can push data updates to multiple forms of target media, whether tables, messages or an ETL tool.

Requirements

After my short summary, we dug a little into the requirements for the specific problem this team was aiming to address. They needed:

  • A lean operational database for the primary processes.
  • Ad-hoc and reporting queries on archive tables, where the archived data can be kept for several years.
  • A relatively large data volume: the solution should support tens to hundreds of millions of database updates per day, with a peak of tens of millions in an hour (roughly 3,000 to 25,000 updates per second sustained during such a peak).
  • The target database management system was not decided yet; it could be Db2 or Oracle.

So the solution should replicate the operational database to an archive database, and the archived data must be very current, demanding near-real-time synchronization.

We focused a bit on the Queue Replication solution. The target DBMS for Queue Replication can be Db2, Oracle, or SQL Server (and a few more). Furthermore, in my experience this solution can support:

  • High volumes in peak periods: millions of rows inserted/updated in a short period of time.
  • Latency that remains within seconds, even in peak periods; this does require tuning of the solution, such as spreading messages over multiple queues.
  • Suppress deletes: for selected tables you can specify that deletes are not replicated, which allows historical data to build up in the target.

There are a few concerns in the Queue Replication solution:

  • Data model changes will require coordinated changes to the source data model, the Queue Replication configuration, and the target data model.
  • Very large transactions (not committing often enough) may be a problem for Queue Replication (and also a very bad programming practice).

Hope this helps with your replication needs.

$ZOOM – Mother of all EDIT macro’s

  • Post category:ISPF
  • Reading time:2 mins read

The most useful ISPF EDIT macro ever. I think it was written by Paul van Goethem from Belgium, somewhere before 1993. It quickly spread through many sites.

Point your cursor at a dataset name in a file you are editing, and executing the macro will launch an EDIT session of that file.
I always put it under PF key PF4, which by default has the not very useful value RETURN.

Note that the value of the variable INVALID in the code below may be corrupted. It should contain every character that is not valid as part of a dataset name; the binary values it can contain are not very portable.

/****************************REXX***********************************/
/* FUNCTION: RECURSIVE EDIT OR BROWSE VIA DSNAME AND/OR MEMBER    */
/*  SPECIFIED VIA CURSOR-SENSITIVE EDIT MACRO                      */
/* FORMATS ACCEPTED:                                               */
/*  DATA.SET.NAME   : EDIT/BROWSE OF THIS                          */
/*  DATA.SET.NAME(MEMBER) : EDIT/BROWSE THIS                       */
/*  MEMBER    : EDIT/BROWSE IN THE CURRENTLY EDITED LIBRARY       */
/****************************REXX***********************************/  
ADDRESS ISPEXEC
/* INVALID should list every character that cannot be part of a   */
/* dataset name; the non-printable characters in the original     */
/* list were corrupted and are omitted here                       */
INVALID= ",'\<>,:;+!%-="
ADDRESS ISREDIT 'MACRO (FUNCTIE)'
FUNCTIE = TRANSLATE(FUNCTIE)  
IF SUBSTR(FUNCTIE,1,1)='B'
THEN FUNCTIE='BROWSE'
ELSE FUNCTIE='EDIT' 
LIN=0 
ADDRESS ISREDIT '(LIN,COL) = CURSOR'  
ADDRESS ISREDIT '(CLINE) = LINE 'LIN   
/* FIND CURRENT WORD */ 
T=SUBSTR(CLINE,1,COL)  
T =TRANSLATE(T,' ',INVALID)    
Y=LASTPOS(' ',T) 
IF Y=0
THEN T=CLINE    
ELSE T=SUBSTR(CLINE,Y+1)
PARSE VAR T WOORD  .   
WOORD =TRANSLATE(WOORD,' ',INVALID)
"CONTROL ERRORS RETURN"
IF INDEX(WOORD,'(') \= 0 THEN DO /* TAKE DSN IF SPECIFIED */
  PARSE VAR WOORD  DSNAME '('  MEMBER ')' 
  FUNCTIE" DATASET('"SPACE(DSNAME)"("MEMBER")')" 
  FRC=RC
  END
ELSE DO
  IF INDEX(WOORD,'.') \= 0 THEN DO
    PARSE VAR WOORD  DSNAME . 
    FUNCTIE" DATASET('"SPACE(DSNAME)"')" 
    END  
  ELSE DO         
    ADDRESS ISREDIT "(DSNAME) = DATASET"  
    WOORD = SPACE(TRANSLATE(WOORD,' ','.()'))
    FUNCTIE" DATASET('"DSNAME"("WOORD")')"  
    END    
  FRC=RC   
  END  
ADDRESS ISPEXEC "CONTROL ERRORS CANCEL"  
IF FRC > 4 & SYMBOL('ZERRMSG') \= 'LIT' THEN DO
  MSG= ZERRMSG':'ZERRLM            
  "SETMSG MSG(ZOM000A)"            
  END                           
RETURN

Db2 SQL in batch

  • Post category:JCL
  • Reading time:1 mins read

Again a simple solution for a common problem: how to run a Db2 query from a batch job. Here we use the utility DSNTEP2, which is provided for this purpose with the Db2 installation.

In the STEPLIB, specify the names of your Db2 runtime libraries.

In the SYSTEM (xxxx) clause, specify your Db2 subsystem name.

The SQL under the SYSIN DD can be supplied in-stream, as below, or from a dataset, as shown in the commented-out alternative.

//GO EXEC PGM=IKJEFT01,DYNAMNBR=20 
//STEPLIB DD DSN=SYS2.SDSNEXIT,DISP=SHR 
//        DD DSN=SYS2.SDSNLOAD,DISP=SHR     
//SYSTSPRT DD SYSOUT=*  
//SYSTSIN DD * 
 DSN SYSTEM (xxxx)
 RUN PROG (DSNTEP2)
 END
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//* Alternative: take the SQL from a dataset instead of in-stream
//*SYSIN   DD DISP=SHR,DSN=YOURDSN.DB2.SQL(YOURSQL)
//SYSIN    DD *
 SELECT * FROM YOURTABLE;
/*

//Dean

The new mainframe is full of APIs (and has an extension for Docker Containers)

IBM has announced the new mainframe box, not unexpectedly called z15. The z15 is a huge machine, and I would summarize this evolution as “bigger and better”, without meaning to be disrespectful.
Nevertheless, the accompanying announcement of the new release of z/OS 2.4, the flagship operating system for the z15, is massively interesting.

The most eye-catching new feature in z/OS 2.4 is z/OS Container Extensions. With this feature you can run Linux Docker containers under z/OS, next to your existing z/OS applications. Making your containers part of your z/OS infrastructure in this way has some powerful advantages:

  • Very easy integration of containers into the GDPS continuous availability and disaster recovery solution for your z/OS platform. I think this is by far the most powerful benefit: you can provide Linux on Z solutions even more easily than through a native Linux on Z deployment.
  • Exploitation of the network virtualization capabilities of z/OS.
  • Optimized integration with z/OS workloads.

The second important item to look at is the evolution of z/OSMF. Contrary to what IBM marketing tells us, I believe it is not Container Pricing, nor Open Data Analytics on z/OS. The development of z/OSMF matters more because it is fundamentally changing the consumability of the platform: REST APIs are becoming available for basically any resource on the platform, turning it into one of the most open computing platforms in the market and fundamentally transforming the way the platform is managed.

It is exciting to see how the mainframe is changing at the moment. The movements of the last five years are really turning the mainframe from a self-oriented platform into one of the most open and accessible technologies on the market.

The only thing left, in my opinion, is an affordable, public, and easily accessible entry point to the platform, to boost the collaborative community around it.

The definition of a user catalog

  • Post category:Catalog
  • Reading time:1 mins read

OK, another short one: just a small JCL script with a working example of the definition of a user catalog. It accompanies the example of the alias definition for a user catalog, which can be found further down.

//DEFCAT   EXEC PGM=IDCAMS                               
//SYSPRINT DD SYSOUT=*                                  
//SYSIN    DD *                                       
DEFINE UCAT (NAME(SYS1.USERCAT.PROD) VOLUME(DASD1B) - 
        CYL(6,1) ICFCAT  )          
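
To check the result, you can list the new catalog's entry with a LISTCAT step (a minimal sketch, using the catalog name from the example above):

//LSTCAT   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
 LISTCAT ENTRIES(SYS1.USERCAT.PROD) ALL
/*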

Enjoy.

Pierre G.

How to copy the contents of a catalog: IDCAMS

  • Post category:Catalog
  • Reading time:1 mins read

A very short one, but regularly asked by my mentees: how do you copy the contents of a catalog? Just use IDCAMS’ REPRO facility, as you would for a regular VSAM dataset.

//COPYCAT   EXEC PGM=IDCAMS                           
//SYSPRINT DD SYSOUT=*                         
//SYSIN    DD *                                      
REPRO  INDATASET(CATALOG.MVSICFM.VD9ECAT) -         
        OUTDATASET(SYS1.MSTRCTLG)    

//Pierre G

Change the volume of the entry for a dataset in a catalog

  • Post category:JCLUtilities
  • Reading time:1 mins read

When you have defined a dataset in the wrong catalog – in this case a master catalog – and you want to correct this, you can use this technique. The DELETE NOSCRATCH option ensures that only the catalog entry is deleted. If you omit it, the entire dataset is deleted, so be cautious.

//DEFCAT   EXEC PGM=IDCAMS          
//STEPCAT DD DISP=SHR,DSN=SYS1.MSTRCTLG           
//SYSPRINT DD SYSOUT=*                          
//SYSIN    DD *                  
DELETE (SYS1.DATASET) NVSAM NOSCRATCH CAT(SYS1.MSTRCTLG)     
DEFINE NVSAM (NAME(SYS1.DATASET) DEVT(3390) VOL(VOL123)) - 
            CAT(SYS1.MSTRCTLG)   
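
To confirm that the entry now points to the intended volume, you can list it afterwards (a minimal sketch, using the same names as above):

//CHKCAT   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
 LISTCAT ENTRIES(SYS1.DATASET) ALL CATALOG(SYS1.MSTRCTLG)
/*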

//Pierre G

Allocate a page dataset / page space for new system, in a second master catalog

  • Post category:JCL
  • Reading time:1 mins read

This job shows how you can define a page dataset in a master catalog other than the currently active one. This technique is typically used when you are building a new system from a driver system. The master catalog referred to in the CATALOG parameter below is the to-be master catalog of the new system you are building.

//DEFPAGE EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//PAGE2    DD UNIT=3390,VOL=SER=PSYS1B,DISP=OLD
//SYSIN    DD *
  DEFINE PAGESPACE( -
      FILE(PAGE2) -
      NAME(SYS1.MVST.PLPA2) -
      CYLINDERS(300) -
      VOLUME(PSYS1B) ) -
    CATALOG(SYS1.MSTRCAT0)
/* 

//Pierre G.

Define an ALIAS for your User Catalog

  • Post category:Catalog
  • Reading time:1 mins read

This JCL defines an ALIAS in the current master catalog. The alias points to the user catalog named in the RELATE clause.

Use a STEPCAT pointing to another master catalog when needed.

//DEFCAT   EXEC PGM=IDCAMS 
//* STEPCAT DD DISP=SHR etc when needed     
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *        
  DEF ALIAS (NAME(PROD) RELATE(SYS1.USERCAT.PROD)) 
/*
//

//Pierre G

ABEND Assembler program

  • Post category:Assembler
  • Reading time:1 mins read

I cannot remember why I needed this assembler program – or why it is in assembler – but here it is: the program whose sole function is to ABEND.

ABENDIT  CSECT
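*        EQUATES is assumed to be a site macro that generates the
*        R0-R15 register equates used below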
         EQUATES
         SAVE (14,12),,ABENDIT/SPDEPT/&SYSDATE/&SYSTIME/
         USING ABENDIT,R11             SET UP BASE ADDRESSABILITY
         LR    R11,R15                 LOAD BASE REG WITH ENTRY POINT
         LA    R14,SAVE                GET ADDRESS OF REGISTER SAVE
         ST    R13,4(0,R14)            SAVE CALLER'S SAVE AREA ADDR
         ST    R14,8(0,R13)            SAVE MY SAVE AREA ADDRESS
         LR    R13,R14                 LOAD SAVE AREA ADDRESS 
*        Business logic: terminate with a user ABEND
*        (user completion codes must be in the range 0-4095)
         ABEND 4095
*        Epilogue 
RETURN   EQU    *
         L      R13,4(R13) 
         RETURN (14,12)                RETURN TO CALLER 
         LTORG 
SAVE     DS     18F 
         END ABENDIT 
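
For completeness, a minimal sketch of the JCL to run it, assuming the program has been assembled and link-edited into a load library (the library name here is hypothetical):

//RUNABND EXEC PGM=ABENDIT
//STEPLIB  DD DSN=YOUR.LOAD.LIBRARY,DISP=SHR
//SYSUDUMP DD SYSOUT=*

The step should then end with user completion code U4095 and a dump in SYSUDUMP.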

// Bert Grobs