Managing open source software complexity with platforms?

The last couple of days I was working on a new setup for software development. I was surprised (actually, somewhat irritated) by the effort needed to get things working.

The components I needed did not seem to work together: Eclipse, the PHP plugin, the Git plugin, the HTML editor.

The same happened earlier when setting up for a Python project and some APIs (one based on Python 2, the other on Python 3).

I am still trying to think through what the core problem is. So far I can see that the components and the platform are designed to integrate, but the tools all depend on small open source components under the covers, and these turn out to be incompatible between the tools.

Maybe there should be a less granular approach to these things, and we should move to (application) platforms. Instead of picking components from GitHub while building our software, we would get an assembled platform of components. Somebody, or rather, some body, would assemble and publish these open source platforms periodically, say every 6 months.

AWS IaaS at first glance: my goodness, how cumbersome

I am currently getting myself up to speed with AWS through an AWS Certified Solutions Architect – Associate training.

No less detail than physical infrastructure

As an architect with quite a bit of background in infrastructure, I was surprised by the level of minute infrastructure detail that is still needed to design a simple virtualized environment, and even more so by the amount of subsequent manual configuration needed to set up an environment. The level of detail required is no less than what is needed for a physical environment. My ignorance, for sure, but I had expected the IaaS provider to hide much more detail from the user. What you see, however, is that server and network details, down to things like CIDR blocks, are still needed.
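For example, here is a minimal sketch with boto3, the AWS SDK for Python (the region, availability zone, and CIDR ranges are made up for illustration): even creating "just a network" starts with address-range choices.

import boto3

# Creating "just a network" still means choosing CIDR blocks yourself.
ec2 = boto3.client("ec2", region_name="eu-west-1")

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")   # pick an address range
subnet = ec2.create_subnet(                     # carve a subnet out of it, per availability zone
    VpcId=vpc["Vpc"]["VpcId"],
    CidrBlock="10.0.1.0/24",
    AvailabilityZone="eu-west-1a",
)
print(subnet["Subnet"]["SubnetId"])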

Boilerplates or similar could help

I guess there is an opportunity to create more high-level infrastructure boilerplates. Around 2000 I was involved in many projects needing similarly detailed designs for the e-business infrastructures clients needed at that time. After having done that a few times, we found that these infrastructures were in fact very similar. For a service like AWS I can envision a boilerplate that requires entering only a limited number of functional and non-functional requirements for a setup like a web application infrastructure with a user registry, a scalable set of application servers, and a highly available relational database management system. A sketch of what I mean follows below.

(I am assuming IBM, Google, Microsoft, and others are no different in this respect, but admittedly I have not checked all of them.)

To be fair, AWS does include quite an extensive documentation system, with reference architectures, technical guides, and whitepapers. Here you can find blueprints for solutions, and guidance on how to set up such solutions in AWS. Adding assets, tools, and automation to support users in setting up such best-practice configurations would probably be a great addition to AWS's Architecture Center and/or Marketplace.
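To sketch the boilerplate idea: the fragment below (an illustrative sketch of my own with made-up parameters, not a deployable template) turns a handful of requirements into a CloudFormation template skeleton, so the user states what is needed and the boilerplate decides details like CIDR blocks and instance classes.

import json

def web_app_boilerplate(app_name, min_servers, max_servers, db_highly_available):
    # Turn a handful of requirements into a CloudFormation template skeleton.
    # A sketch only: a real template also needs subnets, security groups,
    # launch templates, credentials, and so on.
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "Boilerplate web application infrastructure for " + app_name,
        "Resources": {
            # the kind of detail the boilerplate should decide for you
            "AppVpc": {"Type": "AWS::EC2::VPC",
                       "Properties": {"CidrBlock": "10.0.0.0/16"}},
            "AppServers": {"Type": "AWS::AutoScaling::AutoScalingGroup",
                           "Properties": {"MinSize": str(min_servers),
                                          "MaxSize": str(max_servers),
                                          "AvailabilityZones": {"Fn::GetAZs": ""}}},
            "AppDatabase": {"Type": "AWS::RDS::DBInstance",
                            "Properties": {"Engine": "postgres",
                                           "DBInstanceClass": "db.m5.large",
                                           "AllocatedStorage": "100",
                                           "MultiAZ": db_highly_available}},
        },
    }
    return json.dumps(template, indent=2)

print(web_app_boilerplate("webshop", 2, 10, True))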

What’s your experience?

25 wishes for the mainframe anno 2023

I listed my wishes for the mainframe. Can we have this, please?

For our software and hardware suppliers here is my wish list:

  1. Continuous testing, at reasonable cost.

Continuous testing is no longer optional. For the speed that modern agile software factories need, continuous functional, regression, and performance testing is mandatory. But on mainframes, the cost of continuous testing quickly becomes prohibitive. Currently all MSUs are the same, and the new testing MSUs drive the hardware and, more importantly, the MLC software costs through the roof.

And please, not through yet another impractical license model (see later).

  2. Real Z hardware for flexible Dev & Test.

For reliable and manageable regression and performance testing, multiple configurations of test environments are needed, on real Z hardware. The zPDT and zD&T emulation software is not fit for purpose, and not a manageable and maintainable solution for these needs.

  3. Same problem, same solution for our software suppliers.

Customers do not want to notice that the development teams of IBM / CA/Broadcom / Compuware / BMC / HCL / Rocket / … are geographically dispersed (pun intended). Please let all the software developers in your company work the same way, on shared problems.

  4. Sysplex is not an option but a given.

Everything sysplex-enabled by default, please. Meaning: ready for data sharing, queue sharing, file sharing, VIPA, etcetera.

  5. Cloud is not an option but a given.

I do not mean SaaS. I mean everything is scripted and ready to be automated. Everything can be engineered and parameterised once, and rerun many times.

  6. Open source z/OS sandbox for everyone (community managed; do we want to help with this?).

Want to boost innovation on the mainframe? Let’s have a publicly accessible mainframe for individual practitioners. And I mean for z/OS!   

  7. Open source code (parts) for extensions (radicalize ZOWE- and Zorow-like initiatives).

Give us more open source for z/OS. And the opportunity for the broad public to contribute. We need a community mainframe for z/OS open source initiatives.

  8. Open APIs on everything.

Extend what z/OSMF has given us: APIs on everything you create. Automatically. 

  9. Everything Unicode.

Yes, there are more characters than those in codepage 037, and they are really used on mainframes outside the US.

  10. Automate everything.

(everything!)

  11. Fast and easy 5-minute OS+MW installation (push button), like Linux and Windows.

OK, make it half an hour. Still too challenging? OK, half a day. (Hide SMP/E from customers?)

  12. Clean up old stuff.

There are a lot of things that are no longer useful nowadays. ISPF, for example, is full of them: options 1, 4, 5, and 7 of the primary ISPF screen ISR@PRIM can go. And so can many other things (print and punch utilities, really).

  13. Standardized z/OS images.

Remove customization options. Work on a standardized z/OS image. We don't need the option to rename SYS1.LINKLIB to OURCO.CUSTOM.LINKLIB. Define a standard. If customers want to deviate, it is their problem, not all of ours.

  14. Standardized software distribution.

My goodness, everyone has invented their own way to get code from the installation environment to their LPARs, because there is nothing there. Develop or define a standard. (Oh, and we do not need SMP/E on production systems.)

  15. Radically remove innovation hurdles.

For example, stop the (near) eternal backward compatibility (announce that everything must be 64-bit from 2025 forward). Abandon assembler. Force customers to clean up their stuff too.

  16. Radical new pricing.

OK if it applies to innovative/renovated applications. (But keep pricing SIMPLE, please. No 25 new license models, just 1.)

  17. Quality first, speed next.

Slower is not always worse, even in today's agile world. Fast is good enough.

  18. Support a real sharing community.

We need the next-gen CBT Tape.

  19. A radical innovation mindset.

Versus “this is how we have done things for the past 30 years, so it must be good”. Yawn.

  20. Everything radically dynamic by design.

Remove the need for (unnecessary) IPLs, rebinds, and restarts (unless in rippling clusters). Kill all exits (see later).

  21. Delete the assembler.

Remove assembler interfaces (anything is better than assembler). Replace with open APIs. Remove all (assembler) exits (see later).

  22. Dynamic SQL.

By default.

  23. More memory.

As much memory as we can get (at a reasonable price). (Getting there, kudos.)

  24. Cheap zIIPs.

Smaller z/OS sites crave zIIPs to run innovative tools like z/OSMF, ZOWE, Ansible, Python, etcetera, seamlessly.

  25. Remove exits.

Give us parameters, ini files, YAML, JSON, properties, anything other than exits. Better yet: give us no options for customization at all. Standardize.

For our mainframe users:

Kill old technology: assembler, custom-built ISPF interfaces, CLISTs, unsupported compilers, applications using VSAM or, worse, BDAM, …

Modernise your DevOps pipeline. Use modern tools with modern interfaces and integrations. They are available.

Re-architect applications. Break silos into (micro)services while rebuilding applications. (Use microservices where useful, not as a standard architecture.)

Retire old applications. Either revamp or retire. If you have not touched your application for more than 2 years, it has become an operational risk. 

Hire young people, teach them new tools, and forbid them to use old tools. Yes, I know we need 3270, but not to edit code.

Task your young people with technology modernization.

What am I missing?

A brief GraphQL vs REST investigation

People around me are talking about using GraphQL, positioning it next to, or opposed to, REST APIs. I was not sure how the two compare, so I needed to find out.

In short, from 10,000 feet, GraphQL is an alternative to REST APIs for application programming interfaces. GraphQL provides more flexibility from several perspectives. Read more about that in the link below.
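To make the flexibility point concrete, here is a minimal sketch of my own, with a hypothetical endpoint and schema: a REST GET returns whatever representation the server defined, while a GraphQL query lets the client ask for exactly the fields it needs.

import requests

# REST: the server decides the representation; you get the whole resource.
rest_response = requests.get("https://api.example.com/customers/42")
customer = rest_response.json()   # full object: name, address, orders, ...

# GraphQL: one generic endpoint; the client specifies the fields it wants.
query = """
{
  customer(id: 42) {
    name
    orders { orderDate totalAmount }
  }
}
"""
graphql_response = requests.post("https://api.example.com/graphql",
                                 json={"query": query})
print(graphql_response.json())    # only name, orderDate, totalAmount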

However, it requires specific GraphQL server-side infrastructure. This is probably also going to be a problem for large-scale adoption. You can build a GraphQL client in a number of programming languages, but to serve as a GraphQL provider you need one of the few server-side implementations.

So, a big benefit of REST APIs is that REST is an implementation-independent interface specification that is easy to implement on your server-side middleware. GraphQL would require your middleware to integrate one of the GraphQL implementations, or to build one natively. This could be a matter of time and adoption, but currently I do not see broad adoption.
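For illustration, here is what the provider side looks like with graphene, a Python GraphQL implementation (a minimal sketch of my own; graphene is not one of the implementations listed below).

import graphene

# The provider side: the schema and its resolvers must be implemented in a
# GraphQL server library; generic middleware cannot serve this natively.
class Query(graphene.ObjectType):
    hello = graphene.String()

    def resolve_hello(root, info):
        return "world"

schema = graphene.Schema(query=Query)

# Execute a query directly against the schema (no HTTP layer yet;
# that requires further integration with your web framework).
result = schema.execute("{ hello }")
print(result.data)  # {'hello': 'world'}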

Apollo, graphql-ruby, Juniper, gqlgen, and Lacinia are GraphQL implementations.

I found the article that best describes what GraphQL is on the AWS blog: Comparing API Design Architectures – AWS (amazon.com). (I am not an AWS aficionado; it is just that this article best addressed what I wanted to know.)

Provisioning z/OS

This week I gave a little talk about provisioning automation for z/OS. IBM has created a provisioning tool for z/OS that is part of the z/OS base, and I talked about our experiences with it. It is now moving to Ansible technology. The next technology hop. Let's talk tech again, to refrain from doing things.

Later that same day I started a course called Google Cloud Platform Essentials.

Ouch!

We are somewhat behind on z/OS. That is a major understatement.

We don't do tapes anymore, or MVSGENs, but it still feels like an upgrade from a horse to a steam engine.

Yet I believe that, if used well, the z/OS tools available today should allow us to catch up quickly.

There’s no technical obstacle. It’s only mindset.

Transition to obverse

This blog is now transitioning. When I started the blog I wanted to write about IBM mainframe technology, giving space to other readers, presenting a fresh view.

My intentions have changed, challenges have changed, and readers have changed.

After some posts expressing somewhat obverse standpoints of mine, readers reacted that they wanted more of that. Also, in an earlier blog I shared snippets called ‘Principles of doing IT’, which got positive feedback. In this blog I will now bring these together. I will categorize my posts so readers can easily filter what they want to see. Yet I give myself the freedom to keep posting in the order I like, and on the topic that I feel most urgently needs an obverse view.

I hope you enjoy. Please let me know what you think.

Niek

Test if a directory exists in z/OS Unix System Services

A snippet of shell script code to test whether a directory exists in z/OS Unix System Services.

#!/bin/sh
DEST="/app/somedirectory"
if test -d "$DEST"
then
    # it is a directory
    echo "Directory found ..."
else
    # it does not exist
    echo "Directory does not exist"
fi

Testing the IBM Workload Scheduler API

We see REST APIs appearing on many middleware tools. In a previous post I talked about the REST API on MQ. I have also been playing around with the IBM Workload Scheduler (IWS) REST API.

The API is very promising. You can use it for automation of IWS administration, but also in your daily business operations.

A major thing the API lacks is support for certificate-based authentication. This is incomprehensible, since the application that provides the API is a normal Liberty application, just like the MQ Web application providing the MQ APIs that I mentioned before. Apparently the people in Hursley do a more thorough programming job than their IWS brothers (not sure where those are located after IBM made the silly move to outsource IWS development to HCL).

Here is my Python program for the most rudimentary test through the API: get engine info.

(I have left in some code commented out that I used to test certificate authentication.)

import requests

host = "https://yourserver.com:1603"
# your server
baseapiurl = "/twsz" 
# your request url - engine name is your instead of YRTW
getrequest = "/v1/YRTW/engine/info"
api_url = host + baseapiurl + getrequest
print(api_url)

request_headers = {
    'Content-Type': 'application/json'
}

#cert_file_path = "/your/pythonprograms/dwc-client-ssl-xat.crt"
#key_file_path = "/your/pythonprograms/dwc-client-ssl-xat.privkey"
#cert = (cert_file_path, key_file_path)

data = """{
  "hasDatabasePlan": true,
  "locale": "string",
  "timezone": "string",
  "timezoneEnable": true,
  "roleBasedSecurityEnabled": true,
  "type": "string",
  "version": "string",
  "apiLevel": 0,
  "featureLevel": 0,
  "hasModel": true,
  "hasPlan": true,
  "enableRerunOpt": true,
  "engineType": "string",
  "ltpStartDate": "2022-02-16T13:48:01.978Z",
  "ltpEndDate": "2022-02-16T13:48:01.978Z",
  "dbTimezone": "string",
  "planTimezone": "string",
  "workstationName": "string",
  "domainName": "string",
  "synphonyRunNumber": 0,
  "synphonyScheduledDate": "2022-02-16T13:48:01.978Z",
  "synphonyBatchManStatus": "string",
  "synphonyStartOfDay": 0,
  "masterDomain": "string",
  "masterWorkstation": "string",
  "synphonyFileName": "string",
  "synphonyPlanStart": "2022-02-16T13:48:01.978Z",
  "synphonyPlanEnd": "2022-02-16T13:48:01.978Z",
  "synphonySize": 0,
  "synphonyStartTime": "2022-02-16T13:48:01.978Z",
  "synphonyFound": true,
  "enableLegacyStartOdDayEvaluation": true,
  "dbStartOfDay": "string",
  "rdbmsSchema": "string",
  "rdbmsUser": "string",
  "rdbmsType": "string",
  "rdbmsUrl": "string",
  "fipsEnabled": true,
  "regardlessOfStatusFilterEnabled": true,
  "executorList": [
    {
      "application": "string",
      "namespace": "string",
      "version": "string",
      "factory": "string",
      "supportedOS": "string",
      "stoppable": true,
      "restartable": true,
      "labels": {
        "additionalProp1": "string",
        "additionalProp2": "string",
        "additionalProp3": "string"
      },
      "id": "string",
      "xsdResourceName": "string",
      "cancelSupported": true,
      "supportedWorkstation": "string"
    }
  ],
  "auditStore": "string",
  "auditModel": "string",
  "auditPlan": "string",
  "licenseType": "string",
  "licenseJobNumber": 0,
  "licenseSendDate": 0,
  "wasFirstStartDate": 0,
  "licenseError": "string"
}"""



try:
    response = requests.get(api_url, auth=('ZOSUSER', 'PASSWORD'), verify=False, headers=request_headers)
    # use this whenever they get certificates working
    #response = requests.get(api_url, cert=cert, verify=False, headers=request_headers)
except requests.exceptions.RequestException as e:
    # without a response there is nothing to report, so stop here
    raise SystemExit(e)

print('---------')
print(response)
print('---------')
print(response.json())
print('---------')

Testing the MQ REST API

I have been playing around with the MQ REST API. It works very well. Certificate-based authentication also works out of the box.

Of course, you are doing something that MQ fanatics might find horrific: reliable messaging over an unreliable protocol. They are somewhat right. By no means can MQ provide assured message delivery over an unreliable HTTP protocol. When using this in an application, make sure you handle all error situations. For example, when you do not get an HTTP response, you do not know whether the message was delivered successfully or not. Your application has to cater for such situations. Some call this idempotence.
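As a sketch of what catering for this could look like (a hypothetical helper of my own, assuming the consuming side de-duplicates messages):

import requests

def post_with_unknown_state(post_fn, retries=3):
    """Call post_fn (which performs the HTTP POST) and deal with the case
    where no HTTP response arrives at all. A sketch only: retrying blindly
    is safe only if the consumer de-duplicates messages (idempotence)."""
    for attempt in range(retries):
        try:
            response = post_fn()
        except requests.exceptions.RequestException:
            # No response: the message may or may not have been queued.
            continue
        if response.status_code == 201:   # the message was queued
            return response
    raise RuntimeError("delivery state unknown after %d attempts" % retries)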

Here is my small Python program that illustrates how you can use the MQ REST API.

import requests
import sys

class MQWebManager:
    baseapiurl = "/ibmmq/rest/v1/messaging"
    
    def __init__(self, ep, ak, cert_file_path, key_file_path):
        self.endpoint = ep
        self.apikey = ak
        self.cert = (cert_file_path, key_file_path) 

    def apideleterequest(self, qmgr, queue, msgid):
        # DELETE performs a destructive get: it removes the message from the queue
        resourceurl = self.endpoint + self.baseapiurl + "/qmgr/" + qmgr + "/queue/" + queue + "/message"
        request_headers = {
            'messageId': "'" + msgid + "'",
            'Content-Type' : 'text/plain;charset=utf-8' ,
            'ibm-mq-rest-csrf-token' : 'somevalue',
            'correlationId' : ''
            }
        data = {}
        response = requests.delete(resourceurl, data=data, cert=self.cert, verify=False, headers=request_headers)
        return response

    def apipostrequest(self, qmgr, queue):
        # POST puts a new message on the queue
        resourceurl = self.endpoint + self.baseapiurl + "/qmgr/" + qmgr + "/queue/" + queue + "/message"
        request_headers = {
            'Content-Type' : 'text/plain;charset=utf-8' ,
            'ibm-mq-rest-csrf-token' : 'somevalue'
            }
        data = 'hello from apipostrequest'
        print('resource url: ', resourceurl)
        response = requests.post(resourceurl, data=data, cert=self.cert, verify=False, headers=request_headers)
        return response



print('---------')

#cert_file_path = "/yourpath/yourcert.crt"   
#key_file_path = "/yourpath/yourcert.privkey"

cert_file_path = sys.argv[1]
key_file_path = sys.argv[2]

m1 = MQWebManager("https://mqweb.yourzos.com:12345","", cert_file_path, key_file_path)

#put a message on the queue
response = m1.apipostrequest("QMGR","YOUR.Q.NAME") 
print(">>>", response.status_code, response.json)

print(response.headers) 

print(response)
#retrieve msgid from the message we just put there
msgid = response.headers['ibm-mq-md-messageId']
print(response.headers['ibm-mq-md-messageId'])

#delete that message we just put there
response = m1.apideleterequest("QMGR","YOUR.Q.NAME", msgid) 
print(">>>", response.status_code, response.json)



print('---------')

Try out your IBM Z Open Automation Utilities

Please find a mini Python program to check out your IBM Z Open Automation Utilities installation and explore its possibilities.

Find more information in the IBM documentation.

Be aware of the particular way parameters are passed to a Python program through the command line: sys.argv[0] is the program name itself, so the first real parameter is sys.argv[1].

import sys
from zoautil_py import datasets


# total arguments
n = len(sys.argv)
print("Total arguments passed: ", n)

def printname(dsn):
    print("Testing python and IZOAU")
    print("Dataset " + dsn + " exists: " + str(datasets.exists(dsn)))

if n > 2:
    print("Pass 1 parameter (dataset name) or none (to use the default dataset name)")
elif n == 1:
    printname("MYDATA.DATASET")
else:
    # n == 2: the dataset name is the first command line parameter
    printname(sys.argv[1])

Invoke with:

python testibmzoau.py DATASET.NAME