Technical debt

  • Post category:Modernization
  • Reading time:2 mins read

Technical debt is a well-understood and well-ignored reality.

We love to build new stuff, with new technologies. Which we are sure will soon replace the old stuff.

We borrow time by writing quick (and dirty) code, building up debt.

Eventually we have to pay back — with interest. There’s a high interest on quick and dirty code.

Making changes to the code becomes more and more cumbersome. Business change then becomes painstakingly hard.

That is why continuous renovation is a business concern.

Organisations run into trouble when they continue ignoring technical debt, and keep building new stuff while neglecting the old stuff.

Techies like new stuff, but if they are professional they also warn you about the old stuff still out there. You often see them getting frustrated with overly pragmatic business or project management pushing away the renovation needs.

Continuous renovation must be part of an IT organisation’s Continuous Delivery and application lifecycle management practice.

Making renovation a priority requires courage. Renovation is unsexy. It requires a perspective that extends the short-term horizon.

But the alternative is a monstrous project every so many years to free an organisation from the concrete shoes of unmaintainable applications. At best. If you can afford such a project. Many organizations do not survive the neglect of technical debt.

The 80/20 principle

  • Post category:Principles
  • Reading time:3 mins read

The 80/20 principle, also known as the Pareto principle, applies to many areas.

The most common application of the principle is in the assessment of project effort: 20% of the effort produces 80% of the job to be done. Or the other way around: the last 20% of the work to be done will take 80% of the effort.

In IT, the principle applies similarly to requirements versus functionality: 20% of the requirements determine 80 percent of the architecture. 20% of the requirements are the important ones for the core construction of a solution.

The principle thus tells you to focus on the 20% of requirements that determine the architecture. It helps you shrink the set of options you need to consider, and prioritize and focus on the important parts of the solution.

The question is, of course: which of the requirements are the important ones? The experience of the architect helps here. But in general you will realize, while analysing the requirements, whether a requirement needs a significant change or addition to the solution.

A good book about the 80/20 principle is the one with the same name: The 80/20 Principle, by Richard Koch.

An example from a practitioner architecting an airline reservation system:

“The first time I (unconsciously) applied the 80/20 rule was in my early days as an architect. I was working with a team of architects on application infrastructure for completely new web applications. A wide variety of applications were planned to run on this infrastructure. However, it was not clear yet what the characteristics, the volumes, response time needs, concurrent users et cetera were for these applications, and that made it uncertain what we needed to cater for in this generic hosting solution.

So we decided to walk through the known use cases for the different applications.

We worked our way through four of the tens of applications. During the fourth we somehow could not come up with additional requirements for the application infrastructure. We realized that the rest of the set of applications would ‘only’ be a variation of one of the apps we had already looked at. So we had our 80% from looking at just 20%.”

Transition to obverse

  • Post category:General
  • Reading time:1 mins read

This blog is now transitioning. When I started the blog I wanted to write about IBM mainframe technology, giving space to other readers, presenting a fresh view.

My intentions have changed, challenges have changed, and readers have changed.

After some posts expressing somewhat obverse standpoints of mine, readers reacted that they wanted more of that. Also, in an earlier blog I shared snippets called ‘Principles of doing IT’, which got positive feedback. In this blog I will now bring these together. I will categorize my posts so the reader can easily filter what he wants to see. Yet, I give myself the freedom to keep posting in the order I like, and on the topic that I feel most urgently needs an obverse view.

I hope you enjoy. Please let me know what you think.

Niek

Test if a directory exists in z/OS Unix System Services

  • Post category:Utilities
  • Reading time:1 mins read

Snippet of shell script code to test if a directory exists in z/OS Unix System Services.

#!/bin/sh
DEST="/app/somedirectory"
if test -d "$DEST"
then
    # it is a directory
    echo "Directory found ..."
else
    # it does not exist
    echo "Directory does not exist"
fi

Security on z/OS

  • Post category:DBAOTM
  • Reading time:8 mins read

Security has always been one of the strong propositions and differentiators of the mainframe and the z/OS operating system. In this post I will highlight a few of the differentiating factors of the mainframe hardware and the z/OS operating system.

The mainframe provides a number of distinguishing security features in its hardware. In z/OS, a centralized security facility is a mandatory and built-in part of the operating system. Also, z/OS exploits the security features that the mainframe hardware provides. This post highlights what the central security facility in z/OS is, and how z/OS exploits the unique hardware features of the mainframe.

Centralized security management

The central security management built into z/OS provides a standardized interface for security operations. A few software vendors have implemented this interface in commercial products, thus providing a security management solution for z/OS.

The SAF interface

The main security component of z/OS is the centralized security function called System Authorization Facility or SAF. This component provides authentication and authorisation functions.

The z/OS operating system itself and the middleware installed on z/OS make use of this central facility. With the SAF functions, z/OS and middleware tools can validate access to the resources that the middleware products need to protect.

A protected resource can be a dataset, a message queue, a database table, but also a special function or command that is part of the middleware software. By building in API calls to the SAF interface, the middleware product controls access to sensitive functions and resources.
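To make this concrete, below is a purely conceptual sketch in Python. It is not the real SAF interface: SAF is invoked through assembler services such as RACROUTE, and the actual allow/deny decision is made by the installed security product. The resource class, resource name and group names are made up; the sketch only illustrates the pattern of a middleware product delegating an access decision to a central rule base.

# Conceptual sketch only. The real SAF call is an assembler service (RACROUTE);
# the security product behind it (RACF, ACF2 or Top Secret) makes the decision.
# The rules and names below are purely illustrative.
ACCESS_RULES = {
    # (resource class, resource name): {group: highest permitted access}
    ("MQQUEUE", "PROD.PAYMENTS.QUEUE"): {"PAYMGRP": "UPDATE", "AUDITGRP": "READ"},
}
ACCESS_LEVELS = ["NONE", "READ", "UPDATE", "CONTROL", "ALTER"]

def saf_check_access(group, resource_class, resource, requested):
    """Return True if the group's permitted access covers the requested access."""
    permitted = ACCESS_RULES.get((resource_class, resource), {}).get(group, "NONE")
    return ACCESS_LEVELS.index(permitted) >= ACCESS_LEVELS.index(requested)

# A queue manager, for example, would guard a sensitive queue like this:
print(saf_check_access("PAYMGRP", "MQQUEUE", "PROD.PAYMENTS.QUEUE", "UPDATE"))   # True
print(saf_check_access("AUDITGRP", "MQQUEUE", "PROD.PAYMENTS.QUEUE", "UPDATE"))  # False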

Security products

The SAF interface of the z/OS operating system is just that: a standardized interface. The implementation of the interface is left to software vendors. The SAF interface does not prescribe how security definitions should be stored or administered.

There are three commercial solutions on the market that have implemented the SAF interface: IBM with its security product RACF, and CA/Broadcom with two different tools: ACF2 and Top Secret. All three products provide additional services related to security management, such as administration, auditing and reporting. All three also define a special role in the organisation that is appointed to have the restricted ability to define and change the security rules: the security administrator. The security administrator defines which users and/or groups of users are allowed to access certain resources.

The SAF interface and security products

IBM Enterprise Key Management Foundation

The z/OS operating system is equipped with a tool that IBM calls the IBM Enterprise Key Management Foundation (EKMF). This is a tool that manages cryptographic keys. EKMF is a full-fledged solution for centralized management of cryptographic keys that can be used on the mainframe, but also on other platforms.

Many organizations have dedicated key management infrastructure for different platforms. The EKMF solution allows organizations to instead build a key management solution that can be used for all platforms.

Cryptographic facilities on the mainframe

EKMF and other cryptographic features in z/OS make use of the extensive cryptographic functions built into the mainframe hardware. Traditional encryption facilities have long been a core part of the mainframe hardware. Recently IBM has added innovative features such as pervasive encryption and Data Privacy Passports, now called Hyper Protect Data Controller.

Traditional encryption

The mainframe hardware and software are equipped with the latest encryption facilities that allow for encryption of data and communications in the traditional manner.

What differentiates the mainframe from other platforms is that it is equipped with special processors that accelerate encryption and decryption operations and can enable encryption of high volumes of data.

Pervasive encryption

Pervasive encryption is a newer general feature provided by the mainframe hardware. With pervasive encryption, data is always encrypted: when it is stored on disk, but also during communication over the network and the internal connections between systems. This encrypted data can only be used by users that are authorized to the right decryption keys.

Pervasive encryption gives an additional level of security. Even when a hacker has gained access to the system and to the files or datasets, she still cannot use the data because it is encrypted. Similarly, even if you could “sniff” the communication between systems and over the network, this would not help, because the data flowing over communication networks is always encrypted as well.

IBM Hyper Protect Data Controller

Another problem occurs when data is replicated from its source on the mainframe to other environments, typically for analysis or aggregation with other data sources. The data that was so well protected on the mainframe has now become available in potentially less controlled environments. For this issue IBM has developed the IBM Hyper Protect Data Controller solution.

With the IBM Hyper Protect Data Controller solution, when a copy of the data is needed, the copy is encrypted and a piece of information is included in the copy that administers who is authorized to access it. This access scheme can be as detailed as describing who can use which fields in the data, who can see the content of certain fields, and who can see only masked values. A new component on z/OS, the Trust Authority, maintains a registry of all data access definitions.
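Purely as an illustration of such an access scheme (this is not the actual Hyper Protect Data Controller format, just a sketch of the kind of field-level rules the Trust Authority could administer), consider:

# Illustrative only: not the real Hyper Protect Data Controller format.
access_scheme = {
    "copy": "customer-extract",
    "rules": {
        "DATA_SCIENCE": {"customer_name": "masked", "account_balance": "clear"},
        "MARKETING":    {"customer_name": "clear",  "account_balance": "none"},
    },
}

def visible_value(group, field, value):
    """Apply the (illustrative) rule for a group/field combination."""
    rule = access_scheme["rules"].get(group, {}).get(field, "none")
    if rule == "clear":
        return value
    if rule == "masked":
        return "*" * len(value)
    return "<no access>"

print(visible_value("DATA_SCIENCE", "customer_name", "J. Jansen"))  # prints *********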

When the data copy is accessed, the so-called passport controller checks the identity of the person requesting the data access, and the authorizations of that person for this copy of the data.

In this way, a copy of the data remains centrally protected, while it can still be copied to different environments.

Multifactor Authentication

Traditional authentication on z/OS relies on a userID/password combination that is validated against the central security registry we have seen before: RACF, ACF2 or Top Secret.

However, userID/password authentication is nowadays not considered sufficiently safe anymore. To address this, multifactor authentication is broadly adopted. For the z/OS platform, IBM has developed the product called Multifactor Authentication for z/OS. Instead of using the normal password to log on to z/OS, a user must supply a token that is generated by a special authorized device. This special device can be a SecurID token device, a smartphone with a special app, or something similar. The key thing is that next to a userID and password, PIN code or fingerprint, a second element (the second factor) is needed for the user to prove his identity: the special device or authorized app on your phone.

Multifactor authentication on z/OS

Testing the IBM Workload Scheduler API

  • Post category:Programming
  • Reading time:3 mins read

We see REST APIs appearing on many middleware tools. In a previous post I talked about the REST API on MQ. I have also been playing around with the IBM Workload Scheduler (IWS) REST API.

The API is very promising. You can use it for automation of IWS administration, but also in your daily business operation.

A major thing that the API lacks is support for certificate-based authentication. This is incomprehensible, since the application that provides the API is a normal Liberty application, just like the MQ Web application providing the MQ APIs that I mentioned before. Apparently the people in Hursley do a more thorough programming job than their IWS brothers (not sure where they are located after IBM made the silly move to outsource IWS development to HCL).

Here is my Python program to do the most rudimentary test through the API: get engine info.

(I have left in some code commented out that I used to test certificate authentication.)

import requests
import sys

host = "https://yourserver.com:1603"      # your server
baseapiurl = "/twsz"                      # base path of the IWS REST API on your server
getrequest = "/v1/YRTW/engine/info"       # replace YRTW with your engine name
api_url = host + baseapiurl + getrequest
print(api_url)

request_headers = {
    'Content-Type': 'application/json'
}

#cert_file_path = "/your/pythonprograms/dwc-client-ssl-xat.crt"
#key_file_path = "/your/pythonprograms/dwc-client-ssl-xat.privkey"
#cert = (cert_file_path, key_file_path)
# data = {'api_dev_key':API_KEY,
#        'api_option':'paste',
#        'api_paste_code':source_code,
#        'api_paste_format':'python'}

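# Sample payload kept below for reference; it is not used in the GET request further down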
data = """{
  "hasDatabasePlan": true,
  "locale": "string",
  "timezone": "string",
  "timezoneEnable": true,
  "roleBasedSecurityEnabled": true,
  "type": "string",
  "version": "string",
  "apiLevel": 0,
  "featureLevel": 0,
  "hasModel": true,
  "hasPlan": true,
  "enableRerunOpt": true,
  "engineType": "string",
  "ltpStartDate": "2022-02-16T13:48:01.978Z",
  "ltpEndDate": "2022-02-16T13:48:01.978Z",
  "dbTimezone": "string",
  "planTimezone": "string",
  "workstationName": "string",
  "domainName": "string",
  "synphonyRunNumber": 0,
  "synphonyScheduledDate": "2022-02-16T13:48:01.978Z",
  "synphonyBatchManStatus": "string",
  "synphonyStartOfDay": 0,
  "masterDomain": "string",
  "masterWorkstation": "string",
  "synphonyFileName": "string",
  "synphonyPlanStart": "2022-02-16T13:48:01.978Z",
  "synphonyPlanEnd": "2022-02-16T13:48:01.978Z",
  "synphonySize": 0,
  "synphonyStartTime": "2022-02-16T13:48:01.978Z",
  "synphonyFound": true,
  "enableLegacyStartOdDayEvaluation": true,
  "dbStartOfDay": "string",
  "rdbmsSchema": "string",
  "rdbmsUser": "string",
  "rdbmsType": "string",
  "rdbmsUrl": "string",
  "fipsEnabled": true,
  "regardlessOfStatusFilterEnabled": true,
  "executorList": [
    {
      "application": "string",
      "namespace": "string",
      "version": "string",
      "factory": "string",
      "supportedOS": "string",
      "stoppable": true,
      "restartable": true,
      "labels": {
        "additionalProp1": "string",
        "additionalProp2": "string",
        "additionalProp3": "string"
      },
      "id": "string",
      "xsdResourceName": "string",
      "cancelSupported": true,
      "supportedWorkstation": "string"
    }
  ],
  "auditStore": "string",
  "auditModel": "string",
  "auditPlan": "string",
  "licenseType": "string",
  "licenseJobNumber": 0,
  "licenseSendDate": 0,
  "wasFirstStartDate": 0,
  "licenseError": "string"
}"""



try:
    response = requests.get(api_url, auth=('ZOSUSER', 'PASSWORD'), verify=False, headers=request_headers)
    # use this whenever they get certificates working
    #response = requests.get(api_url, cert=cert, verify=False, headers=request_headers)
except requests.exceptions.RequestException as e:
    # no response object to work with, so report the error and stop
    print(e)
    sys.exit(1)

print('---------')
print(response)
print('---------')
print(response.json())
print('---------')

Testing the MQ REST API

  • Post category:MQProgramming
  • Reading time:3 mins read

I have been playing around with the MQ REST API. It works very well. Also, certificate-based authentication works out of the box.

Of course, you are doing something that MQ fanatics might find horrific: reliable messaging over an unreliable protocol. They are somewhat right. By no means can MQ provide assured message delivery over an unreliable HTTP protocol. When using this in an application, make sure you handle all error situations. For example, when you do not get an HTTP response, you do not know whether the message was successfully delivered or not. Your application has to cater for such situations. Some call this idempotence.
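As a minimal sketch of what that could look like (only the requests library is used; the timeout value and the check on HTTP 201 are assumptions you should adjust to your own MQ level), a POST can be wrapped so that a missing response is reported as an unknown outcome that the application must reconcile later:

import requests

def post_message_with_unknown_outcome(url, payload, cert, headers):
    """Post a message; return 'delivered', 'failed' or 'unknown'.

    'unknown' means no usable HTTP response arrived, so the message may or
    may not be on the queue; the caller has to reconcile this, for example
    by browsing the queue or by tolerating duplicates on the consuming side.
    """
    try:
        resp = requests.post(url, data=payload, cert=cert, verify=False,
                             headers=headers, timeout=10)
    except (requests.exceptions.Timeout, requests.exceptions.ConnectionError):
        return "unknown"
    return "delivered" if resp.status_code == 201 else "failed"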

Here is my small Python program that illustrates how you can use the MQ REST API.

import requests
import json
import sys

class MQWebManager:
    baseapiurl = "/ibmmq/rest/v1/messaging"
    
    def __init__(self, ep, ak, cert_file_path, key_file_path):
        self.endpoint = ep
        self.apikey = ak
        self.cert = (cert_file_path, key_file_path) 

    def apideleterequest(self, qmgr, queue, msgid):
        # DELETE destructively gets (removes) the message with the given messageId
        resourceurl = self.endpoint + self.baseapiurl + "/qmgr/" + qmgr + "/queue/" + queue + "/message"
        request_headers = {
            'messageId': "'" + msgid + "'",
            'Content-Type' : 'text/plain;charset=utf-8' ,
            'ibm-mq-rest-csrf-token' : 'somevalue',
            'correlationId' : ''
            }
        data = {}
        response = requests.delete(resourceurl, data=data, cert=self.cert, verify=False, headers=request_headers)
        return response

    def apipostrequest(self, qmgr, queue):
        # POST puts a new message on the queue
        resourceurl = self.endpoint + self.baseapiurl + "/qmgr/" + qmgr + "/queue/" + queue + "/message"
        request_headers = {
            'Content-Type' : 'text/plain;charset=utf-8' ,
            'ibm-mq-rest-csrf-token' : 'somevalue'
            }
        data = 'hello from apipostrequest'
        print('resource url: ', resourceurl)
        response = requests.post(resourceurl, data=data, cert=self.cert, verify=False, headers=request_headers)
        return response



print('---------')

#cert_file_path = "/yourpath/yourcert.crt"   
#key_file_path = "/yourpath/yourcert.privkey"

cert_file_path = sys.argv[1]
key_file_path = sys.argv[2]

m1 = MQWebManager("https://mqweb.yourzos.com:12345","", cert_file_path, key_file_path)

#put a message on the queue
response = m1.apipostrequest("QMGR","YOUR.Q.NAME") 
print(">>>", response.status_code, response.json())

print(response.headers) 

print(response)
#retrieve msgid from the message we just put there
msgid = response.headers['ibm-mq-md-messageId']
print(response.headers['ibm-mq-md-messageId'])

#delete that message we just put there
response = m1.apideleterequest("QMGR","YOUR.Q.NAME", msgid) 
print(">>>", response.status_code, response.json())



print('---------')

Try out your IBM Z Open Automation Utilities

  • Post category:Programming
  • Reading time:1 mins read

Please find below a mini Python program to check out your IBM Z Open Automation Utilities installation and get started with its possibilities.

Find more information in the IBM documentation.

#import logging_config
import sys
from zoautil_py import datasets 


# total arguments
n = len(sys.argv)
print("Total arguments passed: ", n)

def printname(dsn):
    print("Testing python and IZOAU")
    print("Dataset " + dsn + " exists: " + str(datasets.exists(dsn)))

if (n > 2):
    print("Pass 1 parameter (dataset name) or less (use default dataset name)") 
elif (n==1):
    printname("MYDATA.DATASET")
else:
    printname(sys.argv[1])

Invoke with:

python testibmzoau.py DATASET.NAME
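Once this works, the datasets module offers more than an existence check. Here is a small follow-up sketch; the dataset name is a placeholder, and since function names can differ between ZOAU versions, check the documentation for your level:

from zoautil_py import datasets

DSN = "MYDATA.DATASET"   # placeholder dataset name

if datasets.exists(DSN):
    contents = datasets.read(DSN)   # read the dataset contents as a string
    print(DSN, "has", len(contents.splitlines()), "lines")
else:
    print(DSN, "not found")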

Modern mainframe application development

  • Post category:DBAOTM
  • Reading time:9 mins read

In the previous DBAOTM article on DevOps I introduced the traditional development process, which is often still used in a mainframe environment. In this post I will present a modern approach to development on the mainframe.

Modern development processes for the mainframe

Requirements for the development process have changed. Applications must be built faster, and it must be possible to change applications more often and more quickly without impacting their quality. In other words, agile development is needed. The only way to address today's business needs with modern agile development processes is to automate all build and deploy processes.

A set of principles can then be derived for modern mainframe development processes.

  • All application artefacts are managed in the (or a) Source Code Management tool.
  • The build processes for all artefacts are automated, and can be coherently executed.
  • A build can be deployed in any environment. A build has no environment or organization-specific dependencies.
  • The deployment process for a build is fully automated, including the fallback procedure. The deployment process is a coherent process for all application artefacts.

These principles need to be supported by tools and processes that are (re)designed for these purposes. Of course this is not something specific to z/OS applications, but true for any modern IT solution. But with the background I have sketched in the previous section, there is a legacy of development processes and tools to take into account, and in many organizations this implies significant technical and organizational changes.

The modern SCM for z/OS

The modern SCM tool for z/OS needs to support all kinds of application artefacts. For the mainframe this means, for one thing, that not only traditional MVS-type artefacts must be supported, like COBOL programs, copybooks and JCL, but also Unix-type artefacts like Unix scripts and configuration files in z/OS Unix directories. The tools and processes should allow for EBCDIC-type artefacts to be created for the z/OS runtime environment, as well as ASCII, Unicode and binary artefacts.

Modern SCM tools that can manage z/OS artefacts are ISPW from Compuware and RTC from IBM, and a newer option nowadays is Git, or GitHub.

Build automation

The modern DevOps process automates the creation of a build. The build process takes the required versions of the application artefacts from the source code management repository and creates a coherent package of these artefacts. This package, also called the build, is deployed in a (test) environment.

The build could be deployed in any runtime environment, even outside your organization. This principle not only enforces standardization of processes and infrastructure in your IT organization, it also allows future deployments in yet unknown environments, for example in the Cloud.

The automated build process itself should be callable through some generic API, so it can be integrated into other automated processes when needed.
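As an illustration of what "callable through some generic API" could mean in practice, here is a minimal Python sketch of a pipeline step that triggers a build over HTTP. The endpoint, payload fields, token and response field are entirely hypothetical; the point is only that any orchestrator able to issue an HTTP request can start the build.

import requests

BUILD_API = "https://buildserver.example.com/api/builds"   # hypothetical endpoint
TOKEN = "replace-with-a-real-token"                        # hypothetical credential

def trigger_build(application, branch):
    """Ask the build engine to build one application branch; return the build id."""
    resp = requests.post(
        BUILD_API,
        json={"application": application, "branch": branch},   # hypothetical payload
        headers={"Authorization": "Bearer " + TOKEN},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["buildId"]   # hypothetical response field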

Build automation on z/OS can be accomplished with a number of tools. Some of these tools are able to handle the z/OS-specific needs. IBM has two solutions: the Rational build engine and the Jazz build engine. Compuware has capabilities in ISPW. As it stands, all these tools still have some gaps to fill in their coverage of the different artefacts that can make up a z/OS application.

Deployment automation

The modern DevOps process for z/OS also automates the deployment of the application build. The deployment process takes all the artefacts in the build, customizes them for the specific runtime environment, for example by applying naming conventions and runtime aspects, and deploys the artefacts on the different runtime components in an environment.

The automated deployment process itself should be callable through some generic API, so it can be integrated into other automated processes when needed.

The most important deployment tools available on the market for z/OS are IBM’s UrbanCode Deploy and XebiaLabs’ XLDeploy.

Integration in other pipelines

I have indicated above that the DevOps processes described there must be callable, to use the most generic term I can think of. Since we do not just want to automate the individual pieces of a development process, but the entire chain, this requirement is important.

Only a fully automated development process – a CI/CD pipeline – can provide optimal speed of development. To achieve this, the integration of build and deployment with other processes like infrastructure provisioning, test data provisioning, and testing is key.

Most of the tools mentioned above have APIs or command-line interfaces that allow integration with CI/CD orchestration tools like Jenkins, Ansible, and others.

Implications

The agile development process sketched here impacts the way we do other things on the mainframe as well. I will mention a few here.

Full deployments versus delta deployments

The traditional DTAP development process is based on the development of deltas: you only deploy those things that have changed.

To facilitate agile development in z/OS environments, we need to move to a process that supports full application deployments. It is not yet fully clear what all the consequences of this change are, but I am convinced the old way of working with deltas will not give us the speed and flexibility we need today.

Other impact:

  • Phasing in an application that consists of many more load modules than we have today, while it remains active, needs to be supported by the middleware tools on z/OS.
  • Applications may need to become smaller. Traditionally, applications on z/OS are defined relatively coarse-grained. We may need to split up applications into smaller, distinguishable, more loosely coupled parts. We might need to reuse some of the microservices architecture goodies.

To facilitate agile development, drastic changes in our thinking about mainframe applications are necessary, and in principle no single goodie from the past should be exempt from reconsideration.

Infrastructure provisioning

We have talked about application processes so far, but the agile DevOps process must also be supported by the runtime infrastructure. In the DTAP model, runtime environments are static: defined once and gradually changed when functionally needed.

In order to support rapid changes in applications, we must also allow rapid changes in infrastructure. Similar to the build and deploy processes, all infrastructure provisioning must be automated to allow flexible and instant creation and modification of infrastructure for test environments. This also means that environments must be rigorously standardized. Definitions of the infrastructure making up an environment must be treated like code, and be managed in a source code management system, where they can be properly versioned.

Currently the tool support for infrastructure provisioning is very limited. As part of z/OS, the z/OSMF tool is provided, which allows the creation of provisioning workflows for the technology-specific creation of z/OS infrastructure.

Furthermore, there is work ongoing at IBM and other vendors to extend this lower-level capability and integrate it with infrastructure provisioning tools like Kubernetes and OpenShift. Also, Ansible for z/OS is quickly emerging. There is still a long way to go, but the first steps have been made.
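To give an idea of the building blocks already available: z/OSMF exposes its workflows through REST services, so a provisioning step can be started from any automation script. The sketch below is only an outline; the host, credentials, workflow definition file and system name are placeholders, and the exact payload fields may differ per z/OSMF release, so check the z/OSMF REST documentation for your level.

import requests

ZOSMF = "https://zosmf.example.com"        # placeholder z/OSMF host
AUTH = ("ZOSUSER", "PASSWORD")             # placeholder credentials

def create_workflow(name, definition_file, system, owner):
    """Create a z/OSMF workflow instance from a workflow definition file."""
    resp = requests.post(
        ZOSMF + "/zosmf/workflow/rest/1.0/workflows",
        json={
            "workflowName": name,                       # payload fields as per the
            "workflowDefinitionFile": definition_file,  # z/OSMF workflow REST services;
            "system": system,                           # verify against your release
            "owner": owner,
        },
        headers={"X-CSRF-ZOSMF-HEADER": ""},            # z/OSMF requires this header
        auth=AUTH,
        verify=False,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()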

In a future article I will talk a little bit more on infrastructure provisioning.

Please let me know your thoughts. Always happy to hear from you.

Converting any file from one code page to another

  • Post category:Utilities
  • Reading time:1 mins read

The easiest way on z/OS to convert a file from one code page to another is to use the standard Unix iconv utility.

The sample below uses a traditional batch job, but as you can see it can just as easily be run from a shell environment or a shell script.

//STEP1    EXEC PGM=BPXBATCH
//STDERR   DD SYSOUT=*
//STDPARM  DD   *
SH  iconv -f UTF-8 -t IBM-1140 inputfile.utf8 outputfile.ebcdic
/*
//

Some other common conversions, to be used in the same way:

ebcdic to Windows:
SH  iconv -f IBM-1140 -t IBM-1252 inputfile.ebcdic outputfile.ibm1252

Windows to ebcdic:
SH  iconv -f IBM-1252 -t IBM-1140 inputfile.ibm1252 outputfile.ibm1140

ebcdic to utf8:
SH  iconv -f IBM-1140 -t UTF-8 inputfile.ebcdic outputfile.utf8