Testing the IBM Workload Scheduler API

  • Post category:Programming
  • Reading time:3 mins read

We see REST APIs appearing on many middleware tools. In a previous post I talked about the REST API on MQ. I have also been playing around with the IBM Workload Scheduler (IWS) REST API.

The API is very promising. You can use it to automate IWS administration, but also in your daily business operations.

A major thing the API lacks is support for certificate-based authentication. This is incomprehensible, since the application that provides the API is a normal Liberty application, just like the MQ Web application providing the MQ APIs that I mentioned before. Apparently the people in Hursley do a more thorough programming job than their IWS brothers (not sure where they are located after IBM made the silly move to outsource IWS development to HCL).

Here is my Python program for the most rudimentary test through the API: getting the engine info.

(I have left in some commented-out code that I used to test certificate authentication.)

import requests

# Your server
host = "https://yourserver.com:1603"
baseapiurl = "/twsz"
# Your request URL - replace YRTW with your own engine name
getrequest = "/v1/YRTW/engine/info"
api_url = host + baseapiurl + getrequest
print(api_url)

request_headers = {
    'Content-Type': 'application/json'
}

#cert_file_path = "/your/pythonprograms/dwc-client-ssl-xat.crt"
#key_file_path = "/your/pythonprograms/dwc-client-ssl-xat.privkey"
#cert = (cert_file_path, key_file_path)

data = """{
  "hasDatabasePlan": true,
  "locale": "string",
  "timezone": "string",
  "timezoneEnable": true,
  "roleBasedSecurityEnabled": true,
  "type": "string",
  "version": "string",
  "apiLevel": 0,
  "featureLevel": 0,
  "hasModel": true,
  "hasPlan": true,
  "enableRerunOpt": true,
  "engineType": "string",
  "ltpStartDate": "2022-02-16T13:48:01.978Z",
  "ltpEndDate": "2022-02-16T13:48:01.978Z",
  "dbTimezone": "string",
  "planTimezone": "string",
  "workstationName": "string",
  "domainName": "string",
  "synphonyRunNumber": 0,
  "synphonyScheduledDate": "2022-02-16T13:48:01.978Z",
  "synphonyBatchManStatus": "string",
  "synphonyStartOfDay": 0,
  "masterDomain": "string",
  "masterWorkstation": "string",
  "synphonyFileName": "string",
  "synphonyPlanStart": "2022-02-16T13:48:01.978Z",
  "synphonyPlanEnd": "2022-02-16T13:48:01.978Z",
  "synphonySize": 0,
  "synphonyStartTime": "2022-02-16T13:48:01.978Z",
  "synphonyFound": true,
  "enableLegacyStartOdDayEvaluation": true,
  "dbStartOfDay": "string",
  "rdbmsSchema": "string",
  "rdbmsUser": "string",
  "rdbmsType": "string",
  "rdbmsUrl": "string",
  "fipsEnabled": true,
  "regardlessOfStatusFilterEnabled": true,
  "executorList": [
    {
      "application": "string",
      "namespace": "string",
      "version": "string",
      "factory": "string",
      "supportedOS": "string",
      "stoppable": true,
      "restartable": true,
      "labels": {
        "additionalProp1": "string",
        "additionalProp2": "string",
        "additionalProp3": "string"
      },
      "id": "string",
      "xsdResourceName": "string",
      "cancelSupported": true,
      "supportedWorkstation": "string"
    }
  ],
  "auditStore": "string",
  "auditModel": "string",
  "auditPlan": "string",
  "licenseType": "string",
  "licenseJobNumber": 0,
  "licenseSendDate": 0,
  "wasFirstStartDate": 0,
  "licenseError": "string"
}"""



try:
    response = requests.get(api_url, auth=('ZOSUSER', 'PASSWORD'), verify=False, headers=request_headers)
    # Use this instead whenever they get certificate authentication working:
    # response = requests.get(api_url, cert=cert, verify=False, headers=request_headers)
except requests.exceptions.RequestException as e:
    # Without a response there is nothing more to print, so stop here
    raise SystemExit(e)

print('---------')
print(response)
print('---------')
print(response.json())
print('---------')

Testing the MQ REST API

  • Post category:MQProgramming
  • Reading time:3 mins read

I have been playing around with the MQ REST API. It works very well. Also, certificate-based authentication works out of the box.

Of course, you are doing something that MQ fanatics might find horrific: reliable messaging over an unreliable protocol. They are somewhat right. By no means can MQ provide assured message delivery over an unreliable HTTP protocol. When using this in an application, make sure you handle all error situations. For example, when you do not get an HTTP response, you do not know whether the message was successfully delivered or not. Your application has to cater for such situations, for example by making requests idempotent.
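To illustrate, below is a minimal sketch of such a defensive wrapper around a POST (the retry policy, header values, and helper name are my own assumptions, not part of the MQ REST API):

import requests

def post_with_retry(url, payload, cert, attempts=3):
    # Hypothetical helper: try the POST a few times; a missing response
    # may still mean the message WAS delivered, so the caller must be
    # prepared to handle duplicates.
    last_error = None
    for attempt in range(attempts):
        try:
            response = requests.post(url, data=payload, cert=cert,
                                     verify=False, timeout=10,
                                     headers={'Content-Type': 'text/plain;charset=utf-8',
                                              'ibm-mq-rest-csrf-token': 'somevalue'})
            response.raise_for_status()
            return response
        except requests.exceptions.RequestException as e:
            last_error = e  # no (valid) response: delivery status unknown
    raise last_error

Note that a retry after a lost response can deliver the same message twice; the consuming application must be able to deduplicate, for example on a correlation id.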

Here is my small Python program that illustrates how you can use the MQ REST API.

import requests
import json
import sys

class MQWebManager:
    baseapiurl = "/ibmmq/rest/v1/messaging"

    def __init__(self, ep, ak, cert_file_path, key_file_path):
        self.endpoint = ep
        self.apikey = ak
        self.cert = (cert_file_path, key_file_path)

    def apideleterequest(self, qmgr, queue, msgid):
        # DELETE destructively gets the message with the given message id
        resourceurl = self.endpoint + MQWebManager.baseapiurl + "/qmgr/" + qmgr + "/queue/" + queue + "/message"
        request_headers = {
            'messageId': "'" + msgid + "'",
            'Content-Type': 'text/plain;charset=utf-8',
            'ibm-mq-rest-csrf-token': 'somevalue',
            'correlationId': ''
        }
        response = requests.delete(resourceurl, data={}, cert=self.cert, verify=False, headers=request_headers)
        return response

    def apipostrequest(self, qmgr, queue):
        # POST puts a new message on the queue
        resourceurl = self.endpoint + MQWebManager.baseapiurl + "/qmgr/" + qmgr + "/queue/" + queue + "/message"
        request_headers = {
            'Content-Type': 'text/plain;charset=utf-8',
            'ibm-mq-rest-csrf-token': 'somevalue'
        }
        data = 'hello from apipostrequest'
        print('resource url: ', resourceurl)
        response = requests.post(resourceurl, data=data, cert=self.cert, verify=False, headers=request_headers)
        return response



print('---------')

#cert_file_path = "/yourpath/yourcert.crt"   
#key_file_path = "/yourpath/yourcert.privkey"

cert_file_path = sys.argv[1]
key_file_path = sys.argv[2]

m1 = MQWebManager("https://mqweb.yourzos.com:12345","", cert_file_path, key_file_path)

# Put a message on the queue
response = m1.apipostrequest("QMGR", "YOUR.Q.NAME")
print(">>>", response.status_code, response.text)

print(response.headers)
print(response)

# Retrieve the msgid of the message we just put there
msgid = response.headers['ibm-mq-md-messageId']
print(msgid)

# Delete the message we just put there
response = m1.apideleterequest("QMGR", "YOUR.Q.NAME", msgid)
print(">>>", response.status_code, response.text)



print('---------')

On the REST API provided by IBM MQ

  • Post category:MQ
  • Reading time:2 mins read

Just a few things on the possibilities of the MQ REST API.

With the MQ API facility you can PUT and GET messages on an MQ queue through a REST API. This capability only supports interacting with text messages: you get the payload as a string, not as a “neat” JSON structure.

This is explained in Using the messaging REST API – IBM Documentation.

If you want a “neat” JSON API, one that maps the text structure to a proper JSON structure and gives you a real API, you should use z/OS Connect.

Matt Leming from IBM explains things very clearly in this presentation: REST APIs and MQ (slideshare.net).

By the way, the z/OS Connect option also requires the MQ REST API infrastructure to talk to MQ.

Integrating z/OS applications with the rest of the world

Many mainframe applications were built in an era when little integration with other applications was needed. Where integration was needed, it was mostly done through the exchange of files, for example to exchange information between organizations.

In the 1990s the dominance of mainframe applications ended and client-server applications emerged. These new applications required more extensive and real-time integrations with existing mainframe applications. In this period many special integration tools and facilities were built to make it possible to integrate z/OS applications with new client-server applications.

In this chapter I will highlight categories of these integration tools that are available on z/OS, from screen-scraping tools to modern integrations supporting the latest REST API interfaces.

File interfaces

The mainframe was designed for batch processing. Therefore integration via files is traditionally well catered for and straightforward.

You can use multiple options to exchange files between applications on z/OS and other platforms.

Network File System

Network File System (NFS) is a common protocol that you can use to create a shared space where you can share files between applications. Although it was originally mostly used on Unix operating systems, it is now built into most other operating systems, including z/OS. NFS solutions, however, are usually not the preferred option, due to security and availability challenges.

FTP

The File Transfer Protocol (FTP) is a common protocol to send files over a TCP/IP network to a receiving party, and it is also supported on z/OS. With FTP a script or program can be written to automatically transfer a file as part of an automated process. FTP can be made very secure with cryptographic facilities.

FTP is built into most operating systems, including z/OS.
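As a sketch of how such a scripted transfer could look from Python, using the standard ftplib module over FTPS (the host, credentials, dataset name and SITE parameters below are made-up placeholders):

from ftplib import FTP_TLS  # FTP secured with TLS

ftps = FTP_TLS('zos.yourcompany.com')      # placeholder host
ftps.login('YOURUSER', 'YOURPASSWORD')
ftps.prot_p()                              # also encrypt the data connection

# Tell the z/OS FTP server how to allocate the target dataset
ftps.sendcmd('SITE RECFM=FB LRECL=80 BLKSIZE=27920')

with open('payments.txt', 'rb') as f:
    # Quoting the name makes it a fully qualified dataset name
    ftps.storlines("STOR 'YOURUSER.PAYMENTS.INPUT'", f)

ftps.quit()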

Managed File Transfer

Managed file transfer is also a facility to send files over a network, but the “managed” in the name means that a number of additional features are added.

Managed file transfer solutions make file transfers more reliable and manageable, automating a number of operational and security tasks related to file exchange. Managed file transfer tools provide enhanced encryption facilities, some form of strong authentication, integration with existing security repositories, handling of failed transfers with resend functionality, reporting of file transfer operations, and more extensive APIs.

On z/OS a number of managed file transfer tools are available as separate products: IBM has Connect:Direct and MQ-FTE, CA/Broadcom has Netmaster file transfer and XCOM, BMC provides Control-M, and there are other, less commonly known tools.

Message queueing

Message queuing is a generic way for applications to communicate with each other in a point-to-point manner. With message queuing, applications remain decoupled, so they are less dependent on each other’s availability and response times. Applications can run at different times and communicate over networks and systems that may be temporarily down. As we will see in the next section, when using alternative point-to-point protocols like web services, both applications and the intermediate infrastructure must be available for successful application communication.

The basic notion of message queuing is that an application sends a message to a queue and another application asynchronously reads messages from that queue and (optionally) responds with another message over a queue. Besides the specific asynchronous nature of message queuing, a big advantage is that it can assure message delivery: messages will not get lost, and when the infrastructure is not available, messages remain stored until they can be delivered.

IBM’s MQSeries, or WebSphere MQ as it is called now, is a separate product and one of the most well-known and robust message queuing solutions available on z/OS.

The open API for messaging called Java Message Service (JMS) is implemented by WebSphere MQ and WebSphere Application Server on z/OS.
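To give an idea of the programming model, here is a minimal sketch of the sending side in Python with the pymqi package (the queue manager, channel, and queue names are made-up placeholders):

import pymqi  # Python bindings for the MQ client libraries

queue_manager = 'QMGR1'                    # placeholder names
channel = 'DEV.APP.SVRCONN'
conn_info = 'mq.yourcompany.com(1414)'

qmgr = pymqi.connect(queue_manager, channel, conn_info)

# Put a message; MQ stores it until a consumer gets it,
# even if the consumer is currently down
queue = pymqi.Queue(qmgr, 'APP.REQUEST.QUEUE')
queue.put(b'hello from the sending application')

queue.close()
qmgr.disconnect()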

Applications using Message Queuing

Web services (SOAP, REST)

Web services is the modern technology that enables applications to communicate over the web protocol HTTP, the protocol we also use for browsing the web.

SOAP and REST are two different types of web services. SOAP is a bit older and exchanges XML messages. XML is more resource intensive because handling XML is a complex operation. REST is more modern and lightweight, and today’s API economy is mostly based on REST APIs.

The benefit of integration with web services is that no special infrastructure is needed for applications to integrate, apart from a capable web application server. Integrations are lightweight and can be very loosely coupled.

The downside of web services is that the HTTP protocol does not guarantee message delivery (as opposed to message queuing, as we have seen above). Applications using web services have to implement their own recovery and retry mechanisms to cope with situations where connections are lost.

Today, most modern versions of application middleware on z/OS, like CICS, IMS, WebSphere Application Server, IDMS, and others, support REST and SOAP interfaces.

Applications using Web Services

Enterprise Service Bus

Another form of integration can be achieved through Enterprise Service Bus tools. These tools probably give the widest variety of integration options. They can receive and send service requests over a number of different protocols. They can convert messages from and to many formats. And they can orchestrate complex message interactions between multiple applications. Enterprise Service Bus products in the market are Tibco Substation ES and IBM Integration Bus.

ESB solutions can be implemented on z/OS itself, which often has the advantage of easier integration on the z/OS application side, but they can also run on a non-z/OS platform and integrate with z/OS through agent software.

Enterprise Service Bus

Adapters

In many situations it may not be possible to refactor your old mainframe applications. The applications may not be designed properly in a layered manner, the middleware technology may have limited options, skills may not be available, or the risk of changing existing applications may be too high. Or there may be other reasons why you do not want to touch the code.

For these situations, application adapters can help in opening up applications. In general, an adapter converts a proprietary middleware protocol like a CICS, IDMS or IMS API into a more common API or generic protocol, like a Java program, a web service or message queueing interface. Some adapters provide the option of converting a proprietary 3270 screen interface into a neat API through screen scraping.

I will highlight a number of these types of tools here.

Generic functioning of an adapter

CICS Transaction Gateway

CICS Transaction Gateway provides an API for Java and C programs to call a CICS transaction on z/OS.

CICS Transaction Gateway only provides a way to call functionality in CICS; there is no facility in this tool to invoke an external program from CICS in the reverse direction. CTG is meant only for external programs to call CICS.

CICS Transaction Gateway adapter

IMS Connect

IMS Connect provides a Java API through which you can invoke IMS functions from Java programs. Through IMS Connect you can access IMS transactions as well as data in IMS DB (see section Middleware for z/OS). As such it functions quite similarly to CTG, although the native interfaces are of course different.

z/OS Connect

A recent product from IBM is z/OS Connect. This tool converts a REST API into one or more proprietary backend protocols, like a CICS or IMS transaction or call to Db2. Also, z/OS Connect makes it possible to call REST APIs from mainframe applications.

Thus, z/OS Connect provides a bi-directional adapter for REST APIs, through which you can expose and call RESTful APIs from existing z/OS programs in CICS, IMS, Db2, WebSphere Application Server and MQ.
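Calling such an exposed API is then plain HTTP. A minimal sketch in Python (the host and API path are made-up placeholders for a backend transaction exposed through z/OS Connect):

import requests

# Made-up endpoint: a CICS transaction exposed as a REST API by z/OS Connect
api_url = 'https://zosconnect.yourcompany.com:9443/accounts/v1/balance/12345'

response = requests.get(api_url, auth=('YOURUSER', 'YOURPASSWORD'), verify=False)
print(response.status_code)
print(response.json())  # z/OS Connect maps the backend data structure to JSON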

z/OS Connect adapter

Screen scraping tools

You may have old legacy applications that are built as a silo, have only 3270 user interfaces and no decent program interfaces.

For this problem, screen scraping tools can be a last resort.

The integration problem of an application silo – refactoring is the ideal solution

A screen scraping tool simulates the interaction of a business user with the old application’s user interface. It automates the workflow of the end user by filling in the old application’s screens programmatically. With these automations such a tool can aggregate and expose these interactions as higher-level services, which can then be invoked through a modern API, such as a web service, by other applications in your organization.

Integration with a screen scraping solution

The big problem with screen-scraping integrations is that you end up with more development artefacts to maintain. Not only do you have the old application to maintain, but you now also need to manage the screen scraping middleware and logic.

Screen-scraping should be considered a (very) temporary solution for a serious issue in your application landscape. Such a solution should be replaced by a strategic integration or new application as soon as possible.

Products like HostBridge, Rocket LegaSuite and IBM Host on Demand provide screen scraping facilities.

Legacy integration suites

There are many integration tools on the market that provide one or more of the forms of adapters that I have discussed in the above. For example, GT Software and Oracle Legacy Adapter provide functionality to bridge native z/OS interfaces including screen interfaces to and from modern applications.

Database access via JDBC, ODBC

So far, we have discussed application integration through application interactions – applications calling one another.

Applications on non-z/OS platforms alternatively can directly access data in databases on z/OS through the standard data access protocols ODBC and JDBC. All suppliers of database software for z/OS that I know provide drivers for ODBC and/or JDBC.
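As an illustration, here is a minimal sketch of such direct access from Python through ODBC with the pyodbc package (the data source name, credentials, and query are made-up placeholders; a configured ODBC driver for your z/OS database is assumed):

import pyodbc  # requires a configured ODBC driver and data source

# 'DB2ZOS' is a placeholder DSN pointing at a Db2 for z/OS subsystem
conn = pyodbc.connect('DSN=DB2ZOS;UID=YOURUSER;PWD=YOURPASSWORD')

cursor = conn.cursor()
cursor.execute("SELECT NAME, BALANCE FROM ACCOUNTS WHERE BALANCE > ?", 1000)
for row in cursor.fetchall():
    print(row.NAME, row.BALANCE)

conn.close()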

Integrating with JDBC and ODBC

From an architectural perspective this is not a preferred solution for integrating applications. Applications should manage their own data, access other applications’ data only through service interfaces, and follow the principle of loose coupling in application architectures.

Parallel sysplex

One of the most distinguishing features of the z/OS operating system is the way you can cluster z/OS systems in a Parallel Sysplex. Parallel Sysplex, or sysplex for short, is a feature of z/OS, built in the 1990s, that enables extreme scalability and availability.

In the previous post we highlighted the z/OS Unix part. Here we will dive into the z/OS Parallel Sysplex.

A cluster of z/OS instances

With Parallel Sysplex you can configure a cluster of z/OS operating system instances. In such a sysplex you can combine the computing power of multiple z/OS instances on multiple mainframe boxes into a single logical z/OS server.

When you run your application on a sysplex, it actually runs on all the instances of the sysplex. If you need more processing power for your applications in a sysplex, you can add CPUs to the instances, but you can also add a new z/OS system to the sysplex.

This makes a z/OS infrastructure extremely scalable. Also, a sysplex isolates your applications from failures of software and hardware components. If a system or component in a Parallel Sysplex fails, the software will signal this. The failed part will be isolated while your application continues processing on the surviving instances in the sysplex.

Special sysplex components: the Coupling Facility

For a Parallel Sysplex configuration, a special piece of software is used: the Coupling Facility. The Coupling Facility functions as shared memory and a communication vehicle for all the z/OS members forming a sysplex.

The z/OS operating system and the middleware can share data in the Coupling Facility. The data that is shared is the information that members of a cluster need to know about each other, since they are acting on the same data: status information, lock information for resources that are accessed concurrently by the members, and cached shared data from databases.

A Coupling Facility runs a dedicated special operating system, in an LPAR of its own, to which even system administrators do not need access. In that sense it is a sort of appliance.

A sysplex with Coupling Facilities is depicted below. There are multiple Coupling Facilities to avoid a single point of failure. The members in the sysplex connect to the Coupling Facilities. I have not included all the required connections in this picture, as that would clutter the view.

A parallel sysplex

Middleware exploits the sysplex functions

Middleware components can make use of the sysplex features provided by z/OS, to create clusters of middleware software.

Db2 can be clustered into a so-called data sharing group. In a data sharing group you can create a database that can process queries on multiple Db2 for z/OS instances on multiple z/OS systems.

Similarly, WebSphere MQ can be configured in a queue sharing group, CICS in a CICSplex, and IMS in an IMSplex, while other middleware like WebSphere Application Server, IDMS, and Adabas also uses Parallel Sysplex functions to build highly available and scalable clusters.

This concept is illustrated in Figure 15. Here you see a cluster setup of CICS and Db2 in a sysplex. Both CICS and Db2 form one logical middleware instance.

A parallel sysplex cluster with Db2 and CICS

The big benefit of Parallel Sysplex lies in its generic facilities to build scalable and highly available clusters of middleware solutions. You can achieve similar solutions on other operating systems, but there every middleware component needs to supply its own clustering features to achieve such a scalable and highly available configuration. This often requires additional components and leads to more complex solutions.

How is this different from other clustering technologies?

What is unique about a parallel sysplex is that it is a clustering facility that is part of the operating system.

On other platforms you can build clusters of middleware tools as well, but these are always specific solutions and technologies for that piece of middleware: the clustering facilities are part of the middleware. With Parallel Sysplex, clustering is solved in a central facility in the z/OS operating system.

GDPS

An extension to Parallel Sysplex is Geographically Dispersed Parallel Sysplex, or GDPS for short. GDPS provides an additional solution to assure that your data remains available in case of failures. With GDPS you can make sure that even in the case of a severe hardware failure, or even a whole data centre outage, your data remains available in a secondary data centre, with minimal to no disruption of the applications running on z/OS.

In a GDPS configuration, your data is mirrored between storage systems in the two data centres. One site has the primary storage system; the storage system in the other data centre receives a copy of all updates. If the primary storage system, or even the whole data centre, fails, GDPS automatically makes the secondary storage system the primary, usually without disrupting any running applications.