25 wishes for the mainframe anno 2023

I listed my wishes for the mainframe. Can we have this, please?

For our software and hardware suppliers:

  1. Continuous testing, at reasonable cost.

Continuous testing is not optional anymore. For the speed that modern agile software factories need, continuous functional, regression and performance testing are mandatory. But with mainframes, the cost of continuous testing quickly becomes prohibitive. Currently all MSUs are priced the same, and the additional testing MSUs drive the hardware and, more importantly, the MLC software costs through the roof.

And please, not through yet another impractical license model (see later).

  2. Real Z hardware for flexible Dev & Test.

For reliable and manageable regression and performance testing, multiple test environment configurations are needed, on real Z hardware. The emulation software zPDT or ZD&T is not fit for purpose, and not a manageable and maintainable solution for these needs.

  3. Same problem, same solution for our software suppliers.

Customers do not want to notice that the development teams of IBM / CA/Broadcom / Compuware / BMC / HCL / Rocket / … are geographically dispersed (pun). Please let all the software developers in your company work the same way, on a shared problem.

  4. Sysplex is not an option but a given.

Everything sysplex-enabled by default, please. Meaning: ready for data sharing, queue sharing, file sharing, VIPA, etcetera.

  5. Cloud is not an option but a given.

I do not mean SaaS. I mean everything is scripted and ready to be automated. Everything can be engineered and parameterised once and rerun many times.

  6. Open source z/OS sandbox for everyone (community managed – do we want to help with this?).

Want to boost innovation on the mainframe? Let’s have a publicly accessible mainframe for individual practitioners. And I mean for z/OS!   

  7. Open source code (parts) for extensions (radicalize Zowe- and Zorow-like initiatives).

Give us more open source for z/OS. And the opportunity for the broad public to contribute. We need a community mainframe for z/OS open source initiatives.

  8. Open APIs on everything.

Extend what z/OSMF has given us: APIs on everything you create. Automatically. (See the sketch after this list.)

  9. Everything Unicode.

Yes, there are more characters than those in code page 037, and they are really used on mainframes outside the US.

  10. Automate everything.

(everything!)

  11. Fast and easy 5-minute OS+MW installation (push-button), like Linux and Windows.

OK, make it half an hour. Still too challenging? OK, half a day. (Hide SMP/E from customers?)

  12. Clean up old stuff.

There are a lot of things that are not useful anymore. ISPF, for example, is full of them: options 1, 4, 5 and 7 of the primary ISPF panel ISR@PRIM can go. And many other things (print and punch utilities, really).

  13. Standardized z/OS images.

Remove customization options. Work toward a standardized z/OS image. We do not need the option to rename SYS1.LINKLIB to OURCO.CUSTOM.LINKLIB. Define a standard. If customers want to deviate, it is their problem, not everyone else's.

  14. Standardized software distribution.

My goodness, everyone has invented their own way to get code from the installation system to their LPARs, because there is nothing there. Develop/define a standard. (Oh, and we do not need SMP/E on production systems.)

  15. Radically remove innovation hurdles.

For example, stop the (near) eternal downward compatibility (announce that everything must be 64-bit from 2025 onward). Abandon assembler. Force customers to clean up their stuff too.

  16. Radical new pricing.

OK if it applies to innovative/renovated applications. (But keep pricing SIMPLE, please. No 25 new license models, just one.)

  17. Quality first, speed next.

Slower is not always worse, even in today's agile world… Fast is good enough.

  18. Support a real sharing community.

We need the next-gen CBT Tape.

  19. A radical innovation mindset.

Versus “this is how we have done things the past 30 years so it must be good”. Yawn.

  20. Everything radically dynamic by design.

Remove the need for (unnecessary) IPLs, rebinds, restarts (unless in rippling clusters), … Kill all exits (see later).

  21. Delete the assembler.

Remove assembler interfaces (anything is better than assembler). Replace them with open APIs. Remove all (assembler) exits (see later).

  22. Dynamic SQL.

By default.

  23. More memory.

As much memory as we can get (at a reasonable price). (Getting there, kudos.)

  24. Cheap zIIPs.

Smaller z/OS sites crave zIIPs to run innovative tools like z/OSMF, Zowe, Ansible, Python, etcetera seamlessly.

  25. Remove exits.

Give us parameters, INI files, YAML, JSON, properties, anything other than exits. Better yet: give us no options for customization. Standardize.
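To illustrate wish 8: a minimal sketch of what open APIs on z/OS already look like today, using z/OSMF's REST services from Python. The host name and credentials are placeholders, and verify=False is a lab-only shortcut, not a recommendation.

import requests

# Placeholders - your z/OSMF host and credentials.
zosmf = "https://yourzosmf.com:443"
auth = ("ZOSUSER", "PASSWORD")
# z/OSMF REST services expect this header as CSRF protection.
headers = {"X-CSRF-ZOSMF-HEADER": "true"}

# Information retrieval service: z/OSMF version and plugins.
info = requests.get(zosmf + "/zosmf/info", headers=headers, verify=False)
print("z/OSMF version:", info.json().get("zosmf_version"))

# Data set services: list data sets under a high-level qualifier.
ds = requests.get(zosmf + "/zosmf/restfiles/ds",
                  params={"dslevel": "SYS1.*"},
                  auth=auth, headers=headers, verify=False)
for item in ds.json().get("items", []):
    print(item["dsname"])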

For our mainframe users:

Kill old technology: assembler, custom-built ISPF interfaces, CLISTs, unsupported compilers, applications using VSAM or, worse, BDAM, …

Modernise your DevOps pipeline. Use modern tools with modern interfaces and integrations. They are available.

Re-architect applications. Break silos into (micro)services while rebuilding applications. (Use microservices where useful, not as a standard architecture.)

Retire old applications. Either revamp or retire. If you have not touched an application for more than two years, it has become an operational risk.

Hire young people, teach them new tools, and forbid them to use the old ones. Yes, I know we need 3270, but not to edit code.

Task your young people with technology modernization.

What am I missing?

Self-centered IT architecture and technology as a necessary evil


Everyone thinks their own area of technical expertise is the most important.

The Data Architect thinks software solutions must be data-driven.

The integration architect thinks everything must be event-driven, or every interface must be a REST API.

The service management guys think that the CMDB is the center of the universe.

The cloud architect (if such a thing exists) thinks everything must be deployed in the cloud because the cloud is heaven.

We all forget that successful architectures are based on best practices. Quite universal best practices: do not tie everything tightly together (layering, loose coupling), do not make things complex (simplicity), etcetera. Technologies are not a goal. They are just a means. At best. Technologies are a necessary evil. You want as little of them as possible.

A brief GraphQL vs REST investigation


People around me are talking about using GraphQL, positioning it next to or opposed to REST APIs. I was not sure how the two compare, so I needed to find out.

In short, from 10,000 feet: GraphQL is an alternative to REST APIs for application programming interfaces. GraphQL provides more flexibility from several perspectives. Read more about that in the link below.

However, it requires specific GraphQL server-side infrastructure. This is probably also going to be a problem for large-scale adoption. You can build a GraphQL client in a number of programming languages, but to serve as a GraphQL provider you need one of the few server-side implementations.
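As a sketch of that client side (with a hypothetical endpoint and schema, purely for illustration): a GraphQL client POSTs a query that describes exactly the fields it wants, in a single round trip.

import requests

# Hypothetical GraphQL endpoint and schema, for illustration only.
url = "https://api.example.com/graphql"

# One POST retrieves exactly the fields the client asks for.
query = """
{
  customer(id: "1042") {
    name
    orders(last: 3) { id total }
  }
}
"""
response = requests.post(url, json={"query": query})
print(response.json()["data"]["customer"])

# A comparable REST design typically needs multiple, fixed calls:
#   GET https://api.example.com/customers/1042
#   GET https://api.example.com/customers/1042/orders?last=3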

So, a big benefit of REST APIs is that REST is an implementation-independent interface specification that is easy to implement on your server-side middleware. GraphQL requires your middleware to integrate one of the GraphQL implementations, or to build one natively. This could be a matter of time and adoption, but currently I do not see broad adoption.

Apollo, graphql-ruby, Juniper, gqlgen, and Lacinia are examples of GraphQL implementations.

The article that best describes what GraphQL is, I found on the AWS blog: Comparing API Design Architectures – AWS (amazon.com). (I am not an AWS aficionado; it is just that this article best addressed what I wanted to know.)

Provisioning z/OS

This week I gave a little talk about provisioning automation for z/OS. IBM has created a provisioning tool for z/OS that is part of the z/OS base. I talked about our experiences with the tool. It is changing to Ansible technology now. The next technology hop. Let's talk tech again to refrain from doing things.

Later that same day I started a course called Google Cloud Platform Essentials.

Ouch!!

We are somewhat behind on z/OS. That is a major understatement.

We don't do tapes anymore, or MVSGENs, but it still feels like an upgrade from a horse to a steam engine.

Yet I believe that, if done well, the z/OS tools available today should allow us to catch up quickly.

There’s no technical obstacle. It’s only mindset.

Technical debt


Technical debt is a well-understood and well-ignored reality.

We love to build new stuff, with new technologies. Which we are sure will soon replace the old stuff.

We borrow time by writing quick (and dirty) code, building up debt.

Eventually we have to pay back — with interest. There’s a high interest on quick and dirty code.

Making changes to the code becomes more and more cumbersome. Business change then becomes painfully hard.

That is why continuous renovation is a business concern.

Organisations run into trouble when they continue ignoring technical debt, and keep building new stuff while neglecting the old stuff.

Techies like new stuff, but if they are professional, they also warn you about the old stuff still out there. You often see them getting frustrated with overly pragmatic business or project management pushing away the renovation needs.

Continuous renovation must be part of an IT organisation's Continuous Delivery and application lifecycle management practice.

Making renovation a priority requires courage. Renovation is unsexy. It requires a perspective that extends beyond the short-term horizon.

But the alternative is a monstrous project every so many years to free an organisation from the concrete shoes of unmaintainable applications. At best. If you can afford such a project. Many organisations do not survive the neglect of technical debt.

The 80/20 principle


The 80/20 principle, also known as the Pareto principle, applies to many areas.

The most common application of the principle is in the assessment of project effort: 20% of the effort produces 80% of the job to be done. Or, the other way around: the last 20% of the work will take 80% of the effort.

In IT, the principle applies similarly to requirements versus functionality: 20% of the requirements determine 80% of the architecture. 20% of the requirements are the important ones for the core construction of a solution.

The principle thus tells you to focus on the 20% of requirements that determine the architecture. It helps you shrink the set of options you need to consider, and prioritize and focus on the important parts of the solution.

The question of course is: which of the requirements are the important ones? The experience of the architect helps here. But in general, you will realize while analysing requirements whether a requirement needs a significant change or addition to the solution.

A good book about the 80/20 principle is the book with the same name: The 80/20 Principle, by Richard Koch.

An example from a practitioner architecting an airline reservation system:

“The first time I (unconsciously) applied the 80/20 rule was in my early days as an architect. I was working with a team of architects on application infrastructure for completely new web applications. A wide variety of applications were planned to run on this infrastructure. However, it was not clear yet what the characteristics, the volumes, response time needs, concurrent users et cetera were for these applications, and that made it uncertain what we needed to cater for in this generic hosting solution.

So we decided to walk through the known use cases for the different applications.

We worked our way through four of the tens of applications. During the fourth, we somehow could not come up with additional requirements for the application infrastructure. We realized that the rest of the set of applications would ‘only’ be a variation of one of the apps we already looked at. So we had our 80% from looking at just 20%.”

Transition to obverse


This blog is now transitioning. When I started the blog I wanted to write about IBM mainframe technology, giving space to other readers, presenting a fresh view.

My intentions have changed, challenges have changed, and readers have changed.

After some posts expressing somewhat obverse standpoints of mine, readers reacted that they wanted more of that. Also, in an earlier blog I shared snippets called ‘Principles of doing IT’, which got positive feedback. In this blog I will now bring these together. I will categorize my posts so readers can easily filter what they want to see. Yet I give myself the freedom to keep posting in the order I like, and on the topic that I feel most urgently needs an obverse view.

I hope you enjoy. Please let me know what you think.

Niek

Test if a directory exists in z/OS Unix System Services


Snippet of shell script code to test if a directory exists in z/OS Unix System Services.

#!/bin/sh
# Test whether a directory exists in z/OS Unix System Services.
DEST="/app/somedirectory"
if test -d "$DEST"
then
    # It is a directory
    echo "Directory found ..."
else
    # It does not exist
    echo "Directory does not exist"
fi
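For comparison, a minimal sketch of the same test in Python, which is also available in z/OS Unix System Services (the path is a placeholder):

import os

# Placeholder path - replace with the directory to check.
dest = "/app/somedirectory"

if os.path.isdir(dest):
    print("Directory found ...")
else:
    print("Directory does not exist")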

Security on z/OS


Security has always been one of the strong propositions and differentiators of the mainframe and the z/OS operating system. In this post I will highlight a few of the differentiating factors of the mainframe hardware and the z/OS operating system.

The mainframe provides a number of distinguishing security features in its hardware. In z/OS, a centralized security facility is a mandatory, built-in part of the operating system, and z/OS exploits the security features that the mainframe hardware provides. Below I will explain what the central security facility in z/OS is, and how z/OS exploits unique hardware features of the mainframe.

Centralized security management

The central security management built into z/OS provides a standardized interface for security operations. A few software vendors have implemented this interface in commercial products, thus providing a security management solution for z/OS.

The SAF interface

The main security component of z/OS is the centralized security function called the System Authorization Facility, or SAF. This component provides authentication and authorisation functions.

The z/OS operating system itself and the middleware installed on z/OS make use of this central facility. With the SAF functions, z/OS and middleware tools can validate access to the resources that the middleware products need to protect.

A protected resource can be a dataset, a message queue, a database table, but also a special function or command that is part of the middleware software. By building in API calls to the SAF interface, the middleware product controls access to sensitive functions and resources.

Security products

The SAF interface of the z/OS operating system is just that: a standardized interface. The implementation of the interface is left to software vendors. The SAF interface does not prescribe how security definitions should be stored or administered.

There are three commercial solutions in the market that have implemented the SAF interface: IBM with its security product RACF, and CA/Broadcom with two different tools, ACF2 and Top Secret. All three products provide additional security management services such as administration, auditing and reporting. All three also define a special role in the organisation with the restricted ability to define and change security rules: the security administrator. The security administrator defines which users and/or groups of users are allowed to access which resources.

The SAF interface and security products

IBM Enterprise Key Management Foundation

The z/OS operating system is equipped with a tool that IBM calls the IBM Enterprise Key Management Foundation (EKMF). This is a tool that manages cryptographic keys. EKMF is a full-fledged solution for the centralized management of cryptographic keys that can be used on the mainframe, but also on other platforms.

Many organizations have a dedicated key management infrastructure for each platform. EKMF allows organizations to instead build one key management solution that can be used for all platforms.

Cryptographic facilities on the mainframe

EKMF and other cryptographic features in z/OS make use of the extensive cryptographic functions built into the mainframe hardware. Traditional encryption facilities have long been a core part of the mainframe hardware. Recently IBM has added innovative features such as pervasive encryption and Data Privacy Passports, now called Hyper Protect Data Controller.

Traditional encryption

The mainframe hardware and software are equipped with the latest encryption facilities, which allow for encryption of data and communications in the traditional manner.

What differentiates the mainframe from other platforms is that it is equipped with special processors that accelerate encryption and decryption operations and enable encryption of high volumes of data.

Pervasive encryption

Pervasive encryption is a new general feature facilitated by the mainframe hardware. With pervasive encryption, data is always encrypted: data is encrypted when stored on disk, but also during communication over networks and internal connections between systems. This encrypted data can only be used by users that are authorized to the right decryption keys.

Pervasive encryption gives an additional level of security. Even when a hacker has gained access to the system and to the files or datasets, she still cannot use the data because it is encrypted. Similarly, even if you could sniff the communication between systems and over the network, that is not sufficient, because the data flowing over communication networks is also always encrypted.

IBM Hyper Protect Data Controller

Another problem occurs when data is replicated from its source on the mainframe to other environments, typically for analysis or for aggregation with other data sources. The data that was so well protected on the mainframe has now become available in potentially less controlled environments. For this issue IBM has developed the IBM Hyper Protect Data Controller solution.

With the IBM Hyper Protect Data Controller solution, when a copy of the data is needed, the copy is encrypted, and a piece of information is included in the copy that administers who is authorized to access it. This access scheme can be as detailed as describing who can use which fields in the data, who can see the content of certain fields, and who can see only masked values. A new component on z/OS, the Trust Authority, maintains a registry of all data access definitions.

When the data copy is accessed, the so-called passport controller checks the identity of the person requesting access, and that person's authorizations for this copy of the data.

This way, a copy of the data can be centrally protected, while it can still be copied to different environments.

Multifactor Authentication

Traditional authentication on z/OS relies on a userID/password combination that is validated against the central security registry, as we have seen, in RACF, ACF2 or Top Secret.

However, userID/password authentication is nowadays not considered sufficiently safe anymore. To address this, multifactor authentication is broadly adopted. For the z/OS platform, IBM has developed the product called IBM Z Multi-Factor Authentication. Instead of using the normal password to log on to z/OS, a user must supply a token that is generated by a special authorized device. This device can be a SecurID token, a smartphone with a special app, or otherwise. The key thing is that next to a userID and password, PIN code or fingerprint, there is a second thing – the second factor – needed for users to prove their identity: the special device or an authorized app on their phone.

Multifactor authentication on z/OS

Testing the IBM Workload Scheduler API


We see REST API’s appearing on many middleware tools. In a previous post I have talked about the REST API on MQ. I have also been playing around with the IBM Workload Scheduler (IWS) REST API.

The API is very promising. You can use it for automation of IWS administration, but also in your daily business operation.

A major thing the API lacks is support for certificate-based authentication. This is incomprehensible, since the application that provides the API is a normal Liberty application, just like the MQ Web application providing the MQ APIs that I mentioned before. Apparently the people in Hursley do a more thorough programming job than their IWS brothers (not sure where they are located after IBM made the silly move to outsource IWS development to HCL).

Here is my Python program to do the most rudimentary test through the API: get engine info.

(I have left in some code commented out that I used to test certificate authentication.)

import requests

# Your IWS connector (Liberty) server.
host = "https://yourserver.com:1603"
# Base path of the IWS REST API.
baseapiurl = "/twsz"
# Your request URL - the engine name is yours instead of YRTW.
getrequest = "/v1/YRTW/engine/info"
api_url = host + baseapiurl + getrequest
print(api_url)

request_headers = {
    'Content-Type': 'application/json'
}

# Certificate authentication, for whenever the API supports it:
#cert_file_path = "/your/pythonprograms/dwc-client-ssl-xat.crt"
#key_file_path = "/your/pythonprograms/dwc-client-ssl-xat.privkey"
#cert = (cert_file_path, key_file_path)

data = """{
  "hasDatabasePlan": true,
  "locale": "string",
  "timezone": "string",
  "timezoneEnable": true,
  "roleBasedSecurityEnabled": true,
  "type": "string",
  "version": "string",
  "apiLevel": 0,
  "featureLevel": 0,
  "hasModel": true,
  "hasPlan": true,
  "enableRerunOpt": true,
  "engineType": "string",
  "ltpStartDate": "2022-02-16T13:48:01.978Z",
  "ltpEndDate": "2022-02-16T13:48:01.978Z",
  "dbTimezone": "string",
  "planTimezone": "string",
  "workstationName": "string",
  "domainName": "string",
  "synphonyRunNumber": 0,
  "synphonyScheduledDate": "2022-02-16T13:48:01.978Z",
  "synphonyBatchManStatus": "string",
  "synphonyStartOfDay": 0,
  "masterDomain": "string",
  "masterWorkstation": "string",
  "synphonyFileName": "string",
  "synphonyPlanStart": "2022-02-16T13:48:01.978Z",
  "synphonyPlanEnd": "2022-02-16T13:48:01.978Z",
  "synphonySize": 0,
  "synphonyStartTime": "2022-02-16T13:48:01.978Z",
  "synphonyFound": true,
  "enableLegacyStartOdDayEvaluation": true,
  "dbStartOfDay": "string",
  "rdbmsSchema": "string",
  "rdbmsUser": "string",
  "rdbmsType": "string",
  "rdbmsUrl": "string",
  "fipsEnabled": true,
  "regardlessOfStatusFilterEnabled": true,
  "executorList": [
    {
      "application": "string",
      "namespace": "string",
      "version": "string",
      "factory": "string",
      "supportedOS": "string",
      "stoppable": true,
      "restartable": true,
      "labels": {
        "additionalProp1": "string",
        "additionalProp2": "string",
        "additionalProp3": "string"
      },
      "id": "string",
      "xsdResourceName": "string",
      "cancelSupported": true,
      "supportedWorkstation": "string"
    }
  ],
  "auditStore": "string",
  "auditModel": "string",
  "auditPlan": "string",
  "licenseType": "string",
  "licenseJobNumber": 0,
  "licenseSendDate": 0,
  "wasFirstStartDate": 0,
  "licenseError": "string"
}"""



try:
    response = requests.get(api_url, auth=('ZOSUSER', 'PASSWORD'), verify=False, headers=request_headers)
    # Use this instead whenever they get certificates working:
    #response = requests.get(api_url, cert=cert, verify=False, headers=request_headers)
except requests.exceptions.RequestException as e:
    # Stop here: response would be undefined below if the request itself failed.
    print(e)
    raise SystemExit(1)

print('---------')
print(response)
print('---------')
print(response.json())
print('---------')
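A successful call prints <Response [200]> followed by a JSON document with the engine properties listed in the schema above, such as version, engineType and the synphony* plan fields.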