Middleware for z/OS – Database management systems

  • Post category:Db2DBAOTM
  • Reading time:3 mins read

In the previous post I started the first part of my description of the middleware tools available on z/OS, kicking off with the available application servers, or transaction managers.

In this part I will discuss the database management systems that can run on z/OS.


Db2 for z/OS is the z/OS version of IBM’s well-known relational database management system. It is a regular high-end RDBMS, except that it exploits the sysplex capabilities of z/OS.


IDMS/DB is the network database management system from CA/Broadcom. A network database uses a special concept to organize data, namely in the form of a network of relationships. Besides some modelling advantages, this way of data access can be extremely fast, but as with hierarchical data models like the one in IMS, it is more difficult to program for.


IBM’s IMS/DB is a hierarchical database management system. Data in such a system is not structured in tables as in Db2, but in tree-like hierarchies. Where Db2 and other relational databases offer the well-known SQL language to access data, in IMS you use a language called DL/I to manipulate data.

The hierarchical data model has some modelling advantages, and data access is extremely fast and efficient. The drawback is that it is more complex to program for.
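To make the hierarchical idea concrete, here is a minimal Python sketch (not real DL/I, and the segment names are invented for illustration): a record is a tree of segments, and you retrieve data by walking a path from the root segment down to a child, rather than by joining tables.

```python
# Illustrative only: a "customer" root segment with "order" child
# segments, each holding "item" segments - a tree, not a table.
customer = {
    "id": "C1", "name": "Acme",
    "orders": [
        {"order_no": 101, "items": [{"sku": "A", "qty": 2}]},
        {"order_no": 102, "items": [{"sku": "B", "qty": 1}]},
    ],
}

def get_items(cust, order_no):
    """Walk the hierarchy: root segment -> order segment -> item segments."""
    for order in cust["orders"]:
        if order["order_no"] == order_no:
            return order["items"]
    return []

print(get_items(customer, 101))  # direct path access, no join needed
```

The speed of the hierarchical model comes from exactly this: the access path is baked into the data structure, so a lookup follows pointers instead of matching keys across tables.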


Datacom/DB is a relational database management system from CA/Broadcom.


ADABAS is Software AG’s database management system. ADABAS organizes and accesses data according to relationships among data fields. The relationships among data fields are expressed by ADABAS files, which consist of data fields and logical records. This data model is called an inverted-list model.
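The inverted-list idea can be sketched in a few lines of Python (a conceptual sketch only, not the ADABAS implementation): for each searchable field, the database keeps a list of record numbers per field value, so a query resolves value → record numbers → records without scanning the data.

```python
# Hypothetical example records, keyed by record number.
records = {
    1: {"name": "Smith", "city": "Bonn"},
    2: {"name": "Jones", "city": "Bonn"},
    3: {"name": "Smith", "city": "Kiel"},
}

def build_inverted_list(recs, field):
    """Map each value of `field` to the record numbers holding it."""
    index = {}
    for rec_no, rec in recs.items():
        index.setdefault(rec[field], []).append(rec_no)
    return index

city_index = build_inverted_list(records, "city")
print(city_index["Bonn"])  # record numbers of all Bonn records
```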

A very short summary of replication solutions (for Db2)

  • Post category:Db2
  • Reading time:4 mins read

Some time ago I did a short summary presentation on my experience with replication solutions for Db2 on z/OS. The pictures and text are quite generic, so I thought it might be worthwhile sharing the main topics here. The picture below summarizes the options visually:

Queue replication

Synchronizes tables. The synchronization process on the capture side reads the Db2 transaction log, and puts the updates for which a “subscription” is defined on a queue. On the apply side, the tool retrieves the updates from the queue and applies them to the target database.

SQL replication

Also synchronizes tables. In this case the capture process stores the updates in an intermediate or staging table, from which the apply process takes the updates and applies them to the target tables.
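The difference from queue replication is only the transport: a staging table instead of a queue. A minimal sketch under that assumption (names invented for illustration):

```python
staging = []   # stands in for the intermediate/staging table
target = {}    # stands in for the target table, keyed by primary key

def capture(key, value):
    """Capture side: insert the change into the staging table."""
    staging.append((key, value))

def apply_changes():
    """Apply side: drain the staging table into the target table."""
    while staging:
        key, value = staging.pop(0)
        target[key] = value

capture(1, "new")
capture(2, "row")
apply_changes()
print(target)  # both changes applied, staging table empty again
```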

Data Event Publishing

Takes the updates to the tables for which a subscription is defined and produces a comma-delimited or XML message from them, which is put on a queue. The consumer of the message can be any user-defined program.
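A sketch of the publishing step, assuming a simple comma-delimited layout (the field order and message format here are an assumption for illustration, not the product's actual wire format):

```python
def to_csv_message(table, operation, row):
    """Build a comma-delimited message from one captured update."""
    fields = [table, operation] + [str(v) for v in row.values()]
    return ",".join(fields)

# A captured INSERT becomes one message a user-defined consumer can parse.
msg = to_csv_message("ORDERS", "INSERT", {"id": 101, "sku": "A"})
print(msg)  # ORDERS,INSERT,101,A
```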

Change Data Capture

CDC provides a flexible solution that can push data updates to multiple forms of target media, whether tables, messages or an ETL tool.


After my short summary, we dug a little into the requirements for the specific problem this team was aiming to address. They needed:

  • A lean operational database for the primary processes.
  • Ad-hoc and reporting queries on archival tables, where data in the archive tables can be kept for several years.
  • The amount of data is relatively large: it should support tens to hundreds of millions of database updates per day, with a peak of tens of millions in an hour.
  • Target database management system was not decided yet; could be Db2 or Oracle.

So the solution should replicate the operational database to an archive database, and the data must be very current, demanding near-real-time synchronization.

We focused a bit on the Queue Replication solution. The target DBMS for Queue Replication can be Db2, Oracle, or SQL Server (and a few more). Furthermore, in my experience this solution can support:

  • High volumes in peak periods: millions of rows inserted/updated in a short period of time.
  • Latency can remain within seconds, even in peak periods – this does require tuning of the solution, such as spreading messages over queues.
  • For selected tables you can specify suppress deletes, which allows for building up historical data.

There are a few concerns in the Queue Replication solution:

  • Data model changes will require coordination of changes in source, Queue Replication configuration and target data model.
  • Very large transactions (not committing often enough) may be a problem for Queue Replication (and also a very bad programming practice).

Hope this helps with your replication needs.