DevOps processes and tools for z/OS

  • Post category:DBAOTM
  • Reading time:4 mins read

In this post I will discuss a traditional view of the DevOps processes and tools for z/OS, and in the follow-on post I will discuss a somewhat futuristic view. The ideal development situation for z/OS is still work in progress for all of us. However, significant progress has been made over the past few years to change the traditional waterfall-oriented processes and tools for development of applications on z/OS into a modern-day agile way of working.

Traditional DevOps process for development

Before we look at modern development tools for z/OS, let's first have a look at how application development was traditionally done.

The traditional waterfall is a staged approach that is reflected in the processes and tools

The development process of applications on z/OS traditionally goes through a number of stages, typically called Development, Test, Acceptance and Production. An application is developed in the Development stage and unit-tested in the Development environment. When that is done, the application moves to the Test stage, from which it is integration-tested in the Test environment. When all is well, the application moves to the Acceptance stage, from which it is acceptance-tested in the Acceptance environment. Finally, for go-live the application is moved to the Production stage, reflecting the situation in the Production environment.

What you can read from this simplified process description is that every stage in the process also has an environment associated with it. The infrastructure setup for the development process is very much aligned with this waterfall-oriented development process. An application version that has its source code in the Test stage uses the Test environment to validate correct functioning. Not only does this create obvious source code management problems with parallel development, it also creates a rigid relation between the development process and the physical infrastructure.
Deployments are incremental - the concept of a build does not exist

Also different in the traditional development process, compared to modern ideas, is that the concept of a build did not exist. A build today is a collection of all the application artefacts that are needed to run an application in a runtime environment. To run an application you need an executable, and typically also configuration files, scripts and definitions. On the mainframe we get an executable program through a compilation process. For a z/OS application to work, there are typically also some runtime definitions required. These are things like JCL scripts, properties files, database definitions, interface definitions, etcetera. All these artefacts together we nowadays call a build.

Most of the processes to create all the z/OS application artefacts that are needed for an application were disparate, unique processes. Some technologies allowed for standardization of build processes for certain components, mostly the compilation processes. But most processes were either manual, or automated with in-house created tools, using whatever technology the organization thought best at the time the need was identified. In summary, creating an application build as we know it today was impossible, and automation…

On the REST API provided by IBM MQ

  • Post category:MQ
  • Reading time:1 min read

Just a few notes on the possibilities of the MQ REST API. With the MQ REST API facility you can PUT and GET messages on an MQ queue through a REST API. This capability only supports interacting with text messages. You will get the payload as a string, not as a "neat" JSON structure. This is explained in Using the messaging REST API - IBM Documentation. If you want a "neat" JSON API, mapping the "text" structure to a JSON structure to get a real API, you should use z/OS Connect. Matt Leming from IBM explains things very clearly in this presentation: REST APIs and MQ (slideshare.net). By the way, the z/OS Connect option also requires the MQ REST API infrastructure to talk to MQ.
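As a rough illustration, putting a text message on a queue boils down to a single authenticated POST. The sketch below builds such a request with Python's standard library; the host, queue manager and queue names are made-up placeholders, and the URL path and CSRF header are as I recall them from the IBM MQ documentation - verify the REST API version (v1/v2/v3) your MQ level supports.

```python
# Sketch: putting a text message on a queue via the MQ messaging REST API.
# Host, credentials, queue manager and queue are placeholders; check the
# path version against your MQ documentation before relying on it.
import base64
import urllib.request

def build_put_request(base_url, qmgr, queue, text, user, password):
    url = f"{base_url}/ibmmq/rest/v2/messaging/qmgr/{qmgr}/queue/{queue}/message"
    req = urllib.request.Request(url, data=text.encode("utf-8"), method="POST")
    req.add_header("Content-Type", "text/plain;charset=utf-8")
    # MQ requires this CSRF header to be present on POST requests.
    req.add_header("ibm-mq-rest-csrf-token", "x")
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {creds}")
    return req

req = build_put_request("https://mqhost:9443", "QM1", "DEV.QUEUE.1",
                        "hello from REST", "mqadmin", "secret")
# urllib.request.urlopen(req)  # would perform the actual PUT
```

Note that the payload goes over the wire as plain text, matching the text-message limitation described above.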

Integrating z/OS applications with the rest of the world

Many mainframe applications were built in an era where little integration with other applications was needed. Where integrations were needed, this was mostly done through the exchange of files, for example for the exchange of information between organizations. In the 1990s the dominance of mainframe applications ended and client-server applications emerged. These new applications required more extensive and real-time integrations with existing mainframe applications. In this period many special integration tools and facilities were built to make it possible to integrate z/OS applications with new client-server applications. In this chapter I will highlight categories of these integration tools that are available on z/OS, from screen-scraping tools to modern integrations supporting the latest REST API interfaces.

File interfaces

The mainframe was designed for batch processing. Therefore integration via files is traditionally well catered for and straightforward. You can use multiple options to exchange files between applications on z/OS and other platforms.

Network File System

Network File System (NFS) is a common protocol that you can use to create a shared space where you can share files between applications. Although it was originally mostly used with Unix operating systems, it is now built into most other operating systems, including z/OS. NFS solutions, however, are usually not a preferred option due to security and availability challenges.

FTP

The File Transfer Protocol (FTP) is a common protocol to send files over a TCP/IP network to a receiving party, and it is also supported on z/OS. With FTP a script or program can be written to automatically transfer a file as part of an automated process. FTP can be made very secure with cryptographic facilities. FTP is built into most operating systems, including z/OS.
Managed File Transfer

Managed file transfer is also a facility to send files over a network, but the "Managed" in the name means a number of additional features are added. Managed file transfer solutions make file transfers more reliable and manageable. A number of additional operational tasks and security functions related to file exchange are automated. Managed file transfer tools provide enhanced encryption facilities, some form of strong authentication, integration with existing security repositories, handling of failed transfers with resend functionality, reporting of file transfer operations, and more extensive APIs. On z/OS a number of managed file transfer tools are available as separate products: IBM has Connect:Direct and MQ-FTE, CA/Broadcom has Netmaster file transfer and XCOM, BMC provides Control-M, and there are other less commonly known tools.

Message queuing

Message queuing is a generic way for applications to communicate with each other in a point-to-point manner. With message queuing applications remain de-coupled, so they are less dependent on each other's availability and response times. Applications can be running at different times and communicate over networks and systems that may be temporarily down. As we will see in the next section, when using alternative point-to-point protocols like web services, both applications and intermediate infrastructures must be available for successful application communication. The basic notion of message queuing is that an application…
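The decoupling described above can be sketched in miniature with Python's standard queue module. This is only an in-process stand-in for a real queue manager such as IBM MQ (which persists messages across systems and outages); the message names are illustrative.

```python
# Miniature sketch of message-queuing decoupling using Python's stdlib queue.
# The producer finishes before the consumer even starts, yet no message is
# lost - the queue buffers them, which is the essence of de-coupling.
import queue
import threading

q = queue.Queue()

def producer():
    # The sending application puts messages and carries on; it does not
    # wait for the receiver to be available.
    for i in range(3):
        q.put(f"payment-{i}")
    q.put(None)  # sentinel: no more messages

received = []

def consumer():
    # The receiving application drains the queue whenever it runs.
    while True:
        msg = q.get()
        if msg is None:
            break
        received.append(msg)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t1.join()   # producer runs to completion first
t2.start(); t2.join()   # consumer starts later and still gets everything
print(received)  # → ['payment-0', 'payment-1', 'payment-2']
```

With a synchronous protocol such as a web service call, the producer would have failed the moment the consumer was unavailable.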

Programming languages for z/OS

  • Post category:DBAOTM, Programming
  • Reading time:6 mins read

In this post I will discuss the programming languages you find on z/OS, and what they are generally used for.

COBOL

The COBOL programming language was invented 60 years ago to make programs portable across different computers. The language is best suited for business programs (as opposed to scientific programs). COBOL is a language that must be compiled into executables, called load modules.

IDENTIFICATION DIVISION.
PROGRAM-ID. COBPROG.
ENVIRONMENT DIVISION.
DATA DIVISION.
PROCEDURE DIVISION.
    DISPLAY "HELLO WORLD".
    STOP RUN.

PL/I

PL/I was developed in the mid-1960s with the aim to create a programming language that could be used for business as well as scientific applications. Like COBOL, PL/I programs must be compiled into load modules.

World: Procedure options(main);
   Put List( 'Hello world' );
End World;

Assembler

Assembler is still around. In the past business applications were developed using Assembler. Nowadays you should not do that anymore, but there are still a lot of legacy Assembler programs around on the mainframe. In the old days, Assembler was often used to implement tricks to achieve things that were not possible with the standard operating system, or other programming languages. This practice has created a problematic legacy of very technical programs in many mainframe application portfolios. The modern stance is that Assembler programs should be regarded as severe legacy, because they are no longer maintainable and Assembler programs are a risk for operating system and middleware updates.

Furthermore, we find Assembler programs in modifications to the z/OS operating system and middleware. z/OS offers a number of points where you can customize the behavior of the operating system. These so-called exit points oftentimes only have interfaces in Assembler. Like application programs in Assembler, z/OS exits in Assembler are a continuity risk.
Not only because nobody knows how to program Assembler anymore, but even more so because these exit points make use of interfaces that IBM may (and wishes to) change at any point in the future. IBM is actively removing Assembler-based exit points and replacing these where needed with configuration parameters. The bottom line is that you should remove all home-grown Assembler programs from your z/OS installation.

TEST0001 CSECT
         STM   14,12,12(13)
         BALR  12,0
         USING *,12
         ST    13,SAVE+4
         LA    13,SAVE
         WTO   'HELLO WORLD!'
         L     13,SAVE+4
         LM    14,12,12(13)
         BR    14
SAVE     DS    18F
         END

Java

Java is the language invented by a team at Sun in the 1990s with the goal to develop a language that could run on any device. Support for Java on the mainframe was introduced somewhere in the beginning of the 21st century. Java programs are not compiled into load modules. They are interpreted by a special layer that must be installed in the runtime environment, called the Java Virtual Machine. The execution is (therefore) far less efficient than COBOL and PL/I. So inefficient that running Java on the mainframe would be very expensive (see section Understanding the cost of software on z/OS, MLC and OTC). To address this IBM invented the concept of zIIP specialty engines (see section Specialty engines), which makes running Java on the…

Middleware for z/OS – Database management systems

  • Post category:Db2, DBAOTM
  • Reading time:2 mins read

In the previous post I started the first part of describing the middleware tools available on z/OS, kicking off with the available application servers or transaction managers. In this part I will discuss the database management systems that can run on z/OS.

Db2

Db2 for z/OS is the z/OS version of IBM's well-known relational database management system. It is a regular high-end RDBMS, except that it exploits the sysplex capabilities of z/OS.

IDMS/DB

IDMS/DB is the network database management system from CA/Broadcom. A network database uses a special concept to organize data, namely in the form of a network of relationships. Besides some modelling advantages, this way of data access can be extremely fast, but as with hierarchical data models like in IMS, it is more difficult to program for.

IMS/DB

IBM's IMS/DB is a hierarchical database management system. Data in such a database management system is not structured in tables like in Db2, but in tree-like hierarchies. Where Db2 and other relational databases have the well-known SQL language to access data, in IMS you have a language called DL/I to manipulate data. The hierarchical data model has some modelling advantages, and data access is extremely fast and efficient. The drawback is that it is more complex to program.

Datacom/DB

Datacom/DB is a relational database management system from CA/Broadcom.

ADABAS

ADABAS is Software AG's database management system. ADABAS organizes and accesses data according to relationships among data fields. The relationships among data fields are expressed by ADABAS files, which consist of data fields and logical records. This data model is called an inverted-list model.

Middleware for z/OS – Application Servers

  • Post category:DBAOTM
  • Reading time:4 mins read

There is a large variety of middleware tools available on z/OS. Some are very similar to the software also available on other platforms, like WebSphere Application Server and Db2, and some are only available on the mainframe, like IMS and IDMS. I will highlight a number of the main middleware tools for z/OS in this chapter.

Application Servers

Application Servers are tools that make it easier to run interactive applications. Today we call these tools Application Servers; on the mainframe they were traditionally called Transaction Managers. A small intermezzo to explain the similarities and get acquainted with the terminology.

Application Servers and Transaction Managers intermezzo

Despite their different names, Application Servers and Transaction Managers achieve the same goal: make it easy to build and run interactive applications. Application Servers gather a set of common functions for these types of applications. These functions include network communications, transaction functionality, features to allow scaling of applications, recovery functions, database connectivity features, logging functionality and much more. For Java, a standard for these functions was created in the Java Enterprise Edition (JEE) standard. The z/OS Transaction Managers all provide a similar set of functions, for multiple programming languages like COBOL, PL/I, C/C++ and Java.

With a modern web application server, the user enters a URL consisting of the name of a server and an identification of the piece of code on that server. For example, a user types in his browser http://acme.com/fireworks/index.html. In this, acme.com is the server name and fireworks/index.html is the piece of code to execute on that server - called the URI. The application server takes the URI, executes the code and returns a response HTML page. The traditional transaction managers work in a similar way. First you must make a connection from your terminal to the transaction manager.
Traditionally you did this by typing something like "LOGON APPLID(CICSABC)". Then you were connected to the application server and presented with a screen. Then you type in a transaction code. The transaction code is similar to the URI: it identifies which piece of code to run. The transaction manager executes the code and returns a response screen to the user. The transaction managers on z/OS nowadays can work in both ways. They still have the traditional interface, which is hardly used for business applications anymore, and they also have a modern web application interface like web application servers.

Figure - CICS traditional versus a web application server

Now let's have a look at what sort of application servers we have on the mainframe.

WebSphere Application Server

IBM's WebSphere Application Server (WAS) is an application server for Java programs, complying with the JEE Java application standard. WebSphere was one of the first implementations of a Java application server. It was also made available on z/OS. Initial implementations of WAS on z/OS were very inefficient and had stability issues. After a redesign and the introduction of specialty engines for Java processing (see section Specialty engines), z/OS has become a very cheap platform for…
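The parallel between a URI and a transaction code can be sketched as a simple dispatch table: a key selects the piece of code to run, and a response page or screen comes back. The transaction code, URI and handler functions below are made up for illustration.

```python
# Sketch of the dispatch idea shared by web application servers and
# transaction managers: a key (URI or transaction code) selects the code
# to execute. All names here are illustrative, not real CICS or WAS APIs.
def fireworks_index():
    return "<html>fireworks catalogue</html>"

def account_inquiry():
    return "ACCOUNT INQUIRY SCREEN"

# A web application server routes on the URI ...
uri_routes = {"/fireworks/index.html": fireworks_index}

# ... while a traditional transaction manager routes on a short
# transaction code typed on the terminal screen.
tran_routes = {"ACCT": account_inquiry}

def dispatch(routes, key):
    # Look up the handler for the key and run it, like the server
    # executing the identified piece of code and returning a response.
    handler = routes.get(key)
    if handler is None:
        return "NOT FOUND"
    return handler()

print(dispatch(uri_routes, "/fireworks/index.html"))  # response page
print(dispatch(tran_routes, "ACCT"))                  # response screen
```

The two tables differ only in the shape of the key, which is exactly the similarity the intermezzo above describes.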

Turning a PDS into a PS with standard tools (for email)

  • Post category:Utilities
  • Reading time:1 min read

I recently got a question from a colleague. He wanted to transfer an entire PDS in an email to someone else. You can download all the members of the PDS with FTP, zip up all the files and transfer that. But it might be easier to use this trick: create a PS from a PDS by using the XMIT command.

XMIT is a TSO command that you can use to transfer a dataset to another user, or system, or both. The trick to "zip" a PDS with XMIT is to XMIT the PDS to yourself on the same system:

xmit node.userid dsn(dataset) outdsn(outdsn)

This creates the "zipped" PS dataset that you can send through email. If you download the file to your Mac or PC, make sure you download it in binary mode. The receiver can "unzip" the file into a PDS with the accompanying RECEIVE command:

RECEIVE INDSN(dataset) DSN(outdataset)
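Downloading the XMIT'ed PS in binary mode can be scripted, for example with Python's standard ftplib. This is a sketch only: host, credentials and dataset name are placeholders, and the quoting of the fully-qualified dataset name in the RETR command follows common z/OS FTP server convention.

```python
# Sketch: pulling the XMIT'ed PS dataset down to a workstation in binary
# mode with stdlib ftplib. Placeholders throughout; no ASCII translation
# may occur, or RECEIVE on the other side will reject the file.
from ftplib import FTP

def retr_command(dataset):
    # Quoting makes the z/OS FTP server treat the name as fully qualified
    # instead of prefixing it with the logged-in user's HLQ.
    return f"RETR '{dataset}'"

def download_xmit(host, user, password, dataset, local_path):
    ftp = FTP(host)
    ftp.login(user, password)
    # retrbinary transfers byte-for-byte - the binary mode the post asks for.
    with open(local_path, "wb") as f:
        ftp.retrbinary(retr_command(dataset), f.write)
    ftp.quit()

# download_xmit("mvs.example.com", "userid", "secret",
#               "USERID.XMIT.PS", "xmit.bin")  # placeholder invocation
```

An ASCII-mode download would translate EBCDIC to ASCII and corrupt the XMIT format, which is why the binary flag matters here.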

Modern tools for development and operations

  • Post category:DBAOTM
  • Reading time:2 mins read

In the previous section I explained that green screen interfaces still exist for administrative tasks. But even for these kinds of work there are modern tools with contemporary interfaces. z/OS itself and almost all middleware running on z/OS can be managed with web-based tools, Eclipse-based tools, or nowadays more and more Visual Studio Code based tools. Furthermore, almost all administration tasks on z/OS can be invoked from external tools through REST APIs. More and more development and operations functionality will be made available only in these modern kinds of tool sets.

The standard Eclipse-based tool for z/OS that you can download for free is called z/OS Explorer. This tool is a desktop client interfacing with z/OS. Many mainframe tools and middleware solutions provide plug-ins for this tool.

Figure - z/OS Explorer

Considering development tools, there are a number of modern options for the mainframe. IBM has developed an Eclipse-based development tool called IBM Developer for z/OS (IDz). The software company Compuware sells a set of tools for mainframe development called Topaz. There are also open source tools and plugins, like the IBM Z Open Editor for Visual Studio Code. These modern tools provide a development experience for z/OS applications that is very similar to the experience you have when you develop applications for other platforms, in languages like Java, PHP or Python. The tools support developing and debugging mainframe languages such as COBOL and PL/I, as well as Java. Just as importantly, they provide plugins for interacting with middleware such as Db2, MQ, CICS and IMS. IBM, CA, BMC and other vendors provide many modern tools for the administration of specific middleware in your organization.

Finally, a recent development is the open source project called Zowe.
This project is a collaboration of a number of mainframe software vendors and aims to provide an open source software framework for development and operations teams to securely manage, control, script and develop for z/OS like any other cloud platform. In separate chapters I will discuss a bit more on the modern application development and operations architecture and tools for z/OS, and on modern monitoring architecture and tools.
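To make the REST API point concrete, the sketch below builds a request against the z/OSMF jobs REST interface, one example of the administration APIs mentioned above. Host and credentials are placeholders; the /zosmf/restjobs/jobs path and the X-CSRF-ZOSMF-HEADER requirement are as I recall them from the z/OSMF documentation, so verify them against your z/OSMF level.

```python
# Sketch: listing a user's jobs through the z/OSMF jobs REST interface.
# Placeholders for host and credentials; path and CSRF header should be
# checked against the z/OSMF documentation for your release.
import base64
import urllib.request

def build_list_jobs_request(host, user, password, owner):
    url = f"https://{host}/zosmf/restjobs/jobs?owner={owner}"
    req = urllib.request.Request(url, method="GET")
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {creds}")
    # z/OSMF rejects REST requests lacking this cross-site request
    # forgery protection header.
    req.add_header("X-CSRF-ZOSMF-HEADER", "true")
    return req

req = build_list_jobs_request("zosmf.example.com", "ibmuser", "secret", "IBMUSER")
# urllib.request.urlopen(req) would return a JSON array of job descriptors.
```

The same pattern (HTTPS, basic or certificate authentication, JSON responses) applies to the other z/OS administration REST APIs, which is what makes scripting z/OS from external tools feasible.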

Theropod blog on Medium

  • Post category:General
  • Reading time:1 min read

Thought you might find this blog by IBMers interesting: Theropod. Let's hope they can keep up the good work. Will add this site to the Link Pack Area collection.

Filewatch utility – file triggering

  • Post category:JCL, Utilities
  • Reading time:1 min read

The filewatch utility is a not very well-known utility that can be very handy. I have used it especially when I needed to process a Unix file after it had been copied to a certain directory. This is the documentation about the filewatch utility: https://www.ibm.com/support/knowledgecenter/SSRULV_9.3.0/com.ibm.tivoli.itws.doc_9.3/zos/src_man/eqqr1hfszfstrig.htm

The job below is submitted, then waits and is activated when there is a change in the input directory /yourdirectory/testfileloc. The job itself moves the files in the input directory to a processing directory, starts the next "watch" job and then the processing job for the files in the processing directory. The setup is clearly meant to test the mechanism - and not necessarily a model for a production-like situation. In a production situation you might want to use TWS applications etc., but that is up to your application design of course. I hope this helps.

//STEP01   EXEC PGM=EQQFLWAT,PARM='/-c wcr -dea 900 -i 20
//             -r 123 -t 3 -fi /yourdirectory/testfileloc'
//*
//* Register and move files
//RUNSCRPT EXEC PGM=IKJEFT01,REGION=0M
//STDOUT   DD SYSOUT=*
//STDERR   DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
BPXBATSL SH  -
mv /yourdirectory/testfileloc* /yourdirectory/testfileloc/processing
//*
//* Then kick off separate jobs
//* - Next filewatch job
//*
//SUBJOB   EXEC PGM=IEBGENER
//SYSIN    DD DUMMY
//SYSPRINT DD DUMMY
//SYSUT1   DD DISP=SHR,DSN=YOURPDS.JCL(EQQFLWAT)
//SYSUT2   DD SYSOUT=(A,INTRDR)
//*
//* - Processing job
//*
//SUBJOB2  EXEC PGM=IEBGENER
//SYSIN    DD DUMMY
//SYSPRINT DD DUMMY
//SYSUT1   DD DISP=SHR,DSN=YOURPDS.JCL(PROCJOB)
//SYSUT2   DD SYSOUT=(A,INTRDR)
//*
//*