People around me are talking about using GraphQL, positioning it next to or opposed to REST APIs. I was not sure how these compare so I needed to find out.
In short, from 10,000 feet, GraphQL is an alternative to REST for building application programming interfaces. GraphQL provides more flexibility from several perspectives; read more about that in the link below.
However, it requires specific GraphQL server-side infrastructure. This is probably also going to be a hurdle for large-scale adoption. You can build a GraphQL client in a number of programming languages, but to act as a GraphQL provider you need one of the few server-side implementations.
So, a big benefit of REST APIs is that REST is an implementation-independent interface specification that is easy to implement in your server-side middleware. GraphQL would require your middleware to integrate one of the GraphQL implementations, or to build one natively. This could be a matter of time and adoption, but currently I do not see broad adoption.
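To make the flexibility difference concrete, here is a small sketch of what a GraphQL request looks like compared to the REST calls it replaces. The schema and field names are invented for illustration; they are not from any real API.

```python
import json

# With REST you typically hit fixed endpoints and receive fixed payloads:
#   GET /customers/42        -> the full customer record
#   GET /customers/42/orders -> the full list of order records
# With GraphQL the client sends one query naming exactly the fields it wants.
# The customer/orders schema below is hypothetical, for illustration only.
query = """
query {
  customer(id: 42) {
    name
    orders {
      id
      total
    }
  }
}
"""

# A GraphQL request is usually an HTTP POST with a JSON body holding the query.
payload = json.dumps({"query": query})
print(payload)
```

The server resolves this single request and returns only the requested fields; that is the flexibility referred to above. The price is that the server must run one of the GraphQL server-side implementations.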
I found the article that best describes what GraphQL is on the AWS blog: Comparing API Design Architectures – AWS (amazon.com). (I am not an AWS aficionado; it is just that this article best addressed what I wanted to know.)
This week I gave a short talk about provisioning automation for z/OS. IBM has created a provisioning tool for z/OS that is part of the z/OS base, and I talked about our experiences with that tool. The tooling is now moving to Ansible technology: the next technology hop. Let’s talk tech again, to keep ourselves from actually doing things.
Technical debt is a well-understood and well-ignored reality.
We love to build new stuff, with new technologies. Which we are sure will soon replace the old stuff.
We borrow time by writing quick (and dirty) code, building up debt.
Eventually we have to pay back — with interest. There’s a high interest on quick and dirty code.
Making changes to the code becomes more and more cumbersome. Then business change becomes more painstakingly hard.
That is why continuous renovation is a business concern.
Organisations run into trouble when they continue ignoring technical debt, and keep building new stuff while neglecting the old stuff.
Techies like new stuff, but if they are professional they also warn you about the old stuff still out there. You often see them getting frustrated with overly pragmatic business or project management pushing away the renovation needs.
Continuous renovation must be part of an IT organisation’s Continuous Delivery and application lifecycle management practice.
Making renovation a priority requires courage. Renovation is unsexy. It requires a perspective that extends the short-term horizon.
But the alternative is a monstrous project every so many years to free an organisation from the concrete shoes of unmaintainable applications. At best, that is, if you can afford such a project. Many organisations do not survive the neglect of technical debt.
The 80/20 principle, also known as the Pareto principle, applies to many areas.
The most common application of the principle is in the assessment of project effort: 20% of the effort produces 80% of the job to be done. Or, the other way around: the last 20% of the work will take 80% of the effort.
In IT, the principle similarly applies to requirements versus functionality: 20% of the requirements determine 80 percent of the architecture. That 20% of the requirements are the important ones for the core construction of a solution.
The principle thus tells you to focus on the 20% of important requirements that determine the architecture. It helps you shrink the options you need to consider, and prioritize and focus on the important parts of the solution.
The question is, of course: which of the requirements are the important ones? The experience of the architect helps here. But in general you will realize, while analysing requirements, whether a requirement needs a significant change or addition to the solution.
A good book about the 80/20 principle is the one carrying the same name: The 80/20 Principle, by Richard Koch.
An example from a practitioner architecting an airline reservation system:
“The first time I (unconsciously) applied the 80/20 rule was in my early days as an architect. I was working with a team of architects on application infrastructure for completely new web applications. A wide variety of applications were planned to run on this infrastructure. However, it was not clear yet what the characteristics, the volumes, response time needs, concurrent users et cetera were for these applications, and that made it uncertain what we needed to cater for in this generic hosting solution.
So we decided to walk through the known use cases for the different applications.
We worked our way through four of the tens of applications. During the fourth we somehow could not come up with additional requirements for the application infrastructure. We realized that each of the remaining applications would ‘only’ be a variation of one of the apps we already looked at. So we had our 80% from looking at just 20%.”
This blog is now transitioning. When I started the blog I wanted to write about IBM mainframe technology, giving space to other readers, presenting a fresh view.
My intentions have changed, challenges have changed, and readers have changed.
After some posts expressing somewhat contrarian standpoints of mine, readers reacted that they wanted more of that. Also, in an earlier blog I shared snippets called ‘Principles of doing IT’, which got positive feedback. In this blog I will now bring these together. I will categorize my posts so readers can easily filter what they want to see. Yet I give myself the freedom to keep posting in the order I like, on the topic that I feel most urgently needs a contrarian view.
I hope you enjoy. Please let me know what you think.
Snippet of shell script code to test if a directory exists in z/OS Unix System Services.
#!/bin/sh
DEST="/app/somedirectory"
if test -d "$DEST"
then
    # the path exists and is a directory
    echo "Directory found ..."
else
    # the path does not exist, or is not a directory
    echo "Directory does not exist"
fi
Security has always been one of the strong propositions and differentiators of the mainframe and the z/OS operating system. In this post I will highlight a few of the differentiating factors of the mainframe hardware and the z/OS operating system.
The mainframe provides a number of distinguishing security features in its hardware. In z/OS a centralized security facility is a mandatory and built-in part of the operating system. Also, z/OS exploits the security features that the mainframe hardware provides. This article will highlight what the central security facility in z/OS is, and how z/OS exploits unique hardware features of the mainframe.
Centralized security management
The central security management built into z/OS provides a standardized interface for security operations. A few software vendors have implemented this interface in commercial products, thus providing a security management solution for z/OS.
The SAF interface
The main security component of z/OS is the centralized security function called the System Authorization Facility, or SAF. This component provides authentication and authorization functions.
The z/OS operating system itself and the middleware installed on z/OS make use of this central facility. With the SAF functions, z/OS and middleware tools can validate access to the resources that the middleware products need to protect.
A protected resource can be a dataset, a message queue, a database table, but also a special function or command that is part of the middleware software. By building in API calls to the SAF interface, the middleware product controls access to sensitive functions and resources.
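As an illustration of the pattern (not the real interface: on z/OS the actual call is the RACROUTE interface to SAF, and the decision is made by RACF, ACF2 or Top Secret), the access check a middleware product embeds conceptually looks like this. All names and rules below are invented.

```python
# Conceptual model of a SAF-style authorization check, for illustration only.
# The real z/OS interface is the RACROUTE call into SAF; names are invented.

# A tiny stand-in for the security product's rule database:
# (resource class, resource name) -> {user: highest permitted access}
RULES = {
    ("DATASET", "PROD.PAYROLL.DATA"): {"ALICE": "READ", "BOB": "UPDATE"},
    ("MQQUEUE", "PAY.REQUEST.QUEUE"): {"BOB": "UPDATE"},
}

# Access levels in increasing order of privilege.
ACCESS_LEVELS = ["NONE", "READ", "UPDATE", "CONTROL", "ALTER"]

def saf_check(user, resource_class, resource, requested):
    """Return True if the user holds at least the requested access level."""
    permitted = RULES.get((resource_class, resource), {}).get(user, "NONE")
    return ACCESS_LEVELS.index(permitted) >= ACCESS_LEVELS.index(requested)

# Middleware embeds checks like this before touching a protected resource:
print(saf_check("ALICE", "DATASET", "PROD.PAYROLL.DATA", "READ"))    # True
print(saf_check("ALICE", "DATASET", "PROD.PAYROLL.DATA", "UPDATE"))  # False
```

The point of the SAF design is that the middleware only asks the question; how the rules are stored and administered is entirely up to the security product behind the interface.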
Security products
The SAF interface of the z/OS operating system is just that: a standardized interface. The implementation of the interface is left to software vendors. The SAF interface does not prescribe how security definitions should be stored or administered.
There are three commercial solutions on the market that implement the SAF interface: IBM with its security product RACF, and CA/Broadcom with two different tools, ACF2 and Top Secret. All three products provide additional security management services, such as administration, auditing and reporting. All three also define a special role in the organisation with the restricted ability to define and change the security rules: the security administrator. The security administrator defines which users and/or groups of users are allowed to access certain resources.
The SAF interface and security products
IBM Enterprise Key Management Foundation
The z/OS operating system is equipped with a tool that IBM calls the IBM Enterprise Key Management Foundation (EKMF). This tool manages cryptographic keys. EKMF is a full-fledged solution for centralized management of cryptographic keys that can be used on the mainframe, but also on other platforms.
Many organizations have dedicated key management infrastructures for different platforms. The EKMF solution allows organizations to instead build a single key management solution that can be used for all platforms.
Cryptographic facilities on the mainframe
EKMF and other cryptographic features in z/OS make use of the extensive cryptographic functions built into the mainframe hardware. Traditional encryption facilities have long been a core part of the mainframe hardware. Recently IBM has added innovative features such as pervasive encryption and Data Privacy Passports, now called Hyper Protect Data Controller.
Traditional encryption
The mainframe hardware and software are equipped with the latest encryption facilities that allow for encryption of data and communications in the traditional manner.
What differentiates the mainframe from other platforms is that it is equipped with special processors that accelerate encryption and decryption operations and can enable encryption of high volumes of data.
Pervasive encryption
Pervasive encryption is a new general feature facilitated in the mainframe hardware. With pervasive encryption, data is always encrypted: when stored on disk, but also during communication over networks and internal connections between systems. This encrypted data can only be used by users that are authorized to the right decryption keys.
Pervasive encryption gives an additional level of security. Even when hackers have gained access to the system and to the files or datasets, they still cannot use the data because it is encrypted. Similarly, even if you could “sniff” the communications between systems and over the network, that would not help you, because the data flowing over communication networks is always encrypted as well.
IBM Hyper Protect Data Controller
Another problem occurs when data is replicated from its source on the mainframe to other environments, typically for analysis or for aggregation with other data sources. The data that was so well protected on the mainframe has now become available in potentially less controlled environments. For this issue IBM has developed the IBM Hyper Protect Data Controller solution.
With the IBM Hyper Protect Data Controller solution, when a copy of the data is needed, the copy is encrypted, and a piece of information is included in the copy that administers who is authorized to access it. This access scheme can be as detailed as describing who can use which fields in the data, who can see the content of certain fields, and who can see only masked values. A new component on z/OS, the Trust Authority, maintains a registry of all data access definitions.
When the data copy is accessed, the so-called passport controller checks the identity of the person requesting the data access, and authorizations of that person for this copy of the data.
In this manner a copy of the data remains centrally protected, while it can still be made available in different environments.
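The field-level access scheme described above can be modelled in a few lines. This is purely a conceptual sketch of the idea, not IBM’s implementation; the roles, fields and masking rules are invented.

```python
# Conceptual sketch of a field-level access scheme travelling with a data copy.
# Not IBM's implementation; roles, fields and masking rules are invented.

record = {"name": "J. Jansen", "account": "NL91ABNA0417164300", "balance": 1250}

# Per role: which fields are visible in clear, masked, or hidden entirely.
ACCESS_SCHEME = {
    "analyst": {"name": "mask", "account": "mask", "balance": "clear"},
    "auditor": {"name": "clear", "account": "clear", "balance": "clear"},
}

def view(record, role):
    """Return the view of the record this role is entitled to."""
    scheme = ACCESS_SCHEME.get(role, {})
    result = {}
    for field, value in record.items():
        rule = scheme.get(field, "hidden")
        if rule == "clear":
            result[field] = value
        elif rule == "mask":
            result[field] = "****"
        # "hidden": the field is omitted altogether
    return result

print(view(record, "analyst"))  # balance in clear, name and account masked
```

In the real product the copy itself is encrypted and the scheme is enforced by the passport controller against the Trust Authority registry; the sketch only shows what such an access scheme expresses.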
Multifactor Authentication
Traditional authentication on z/OS relies on a userID/password combination that is validated against the central security registry, as we have seen, in RACF, ACF2 or Top Secret.
However, userID/password authentication is nowadays not considered sufficiently safe anymore. To address this safety issue, multifactor authentication is broadly adopted. For the z/OS platform, IBM has developed the product Multi-Factor Authentication for z/OS. Instead of using the normal password to log on to z/OS, a user must supply a token that is generated by a special authorized device. This device can be a SecurID token device, a smartphone with a special app, or something similar. The key thing is that next to a userID and password, PIN code or fingerprint, a second thing – the second factor – is needed for the user to prove his identity: the special device or authorized app on your phone.
Multifactor authentication on z/OS
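IBM’s product handles the generation and validation of such tokens; to show the kind of mechanism behind time-based tokens in general, here is a generic TOTP sketch following RFC 6238. It illustrates the second-factor idea and is not necessarily the algorithm IBM’s product uses.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, now=None, period=30, digits=6):
    """Generate a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    # The moving factor is the number of elapsed time steps since the epoch.
    counter = int((time.time() if now is None else now) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation as defined in RFC 4226.
    offset = digest[-1] & 0x0F
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", time 59 s, 8 digits
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59, digits=8))  # 94287082
```

Because the code depends on a shared secret and the current time step, a stolen password alone is not enough: the attacker would also need the device holding the secret.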
Why mainframe security still leads
The security features described in this article are not just technically impressive; they represent decades of refinement in protecting the world’s most critical data. While other platforms are still adopting zero-trust principles and pervasive encryption as new concepts, the mainframe has had these capabilities built in for years.
For organizations handling sensitive financial, healthcare, or government data, the mainframe’s security architecture remains unmatched. The question is not whether it is secure enough, but whether your organization is using these capabilities to their full potential.
2025 Update: Quantum-Safe Encryption
Since this article was written, IBM has added quantum-safe encryption to the mainframe platform. With quantum computing emerging as a future threat to traditional encryption, IBM Z was among the first platforms to integrate post-quantum cryptography standards. This makes the mainframe not just secure today, but ready for tomorrow’s threats.
z/OS security is covered in depth in Chapter 11 of my book Don’t Be Afraid of the Mainframe, including how mainframe security compares to other platforms and why it remains the gold standard in enterprise IT.
The API is very promising. You can use it for automation of IWS administration, but also in your daily business operation.
A major thing the API lacks is support for certificate-based authentication. This is incomprehensible, since the application that provides the API is a normal Liberty application, just like the MQ Web application providing the MQ APIs that I mentioned before. Apparently the people in Hursley do a more thorough programming job than their IWS brethren (I am not sure where they are located after IBM made the silly move to outsource IWS development to HCL).
Here is my Python program to do the most rudimentary test through the API: get engine info.
(I have left in some code commented out that I used to test certificate authentication.)
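What follows is a minimal sketch of such a test, assuming basic authentication. The host, port, resource path and credentials are placeholders, not the real IWS endpoint; consult the IWS REST API documentation for the actual engine-info resource path on your level.

```python
import requests

# Placeholder values: host, port, path and credentials are examples only,
# not the real IWS endpoint; look up the engine-info resource in your docs.
BASEURL = "https://yourhost.yourzos.com:12345"
ENGINE_INFO_PATH = "/your/iws/api/path/engine/info"  # placeholder path

def get_engine_info(baseurl, user, password):
    """Issue the most rudimentary IWS API call: retrieve engine info."""
    url = baseurl + ENGINE_INFO_PATH
    # Certificate authentication is not supported by the IWS API (see above),
    # so we fall back to basic authentication. The commented line shows the
    # client certificate I used while testing certificate authentication:
    # cert = ("/yourpath/yourcert.crt", "/yourpath/yourcert.privkey")
    # verify=False skips TLS server validation; do not use in production.
    return requests.get(url, auth=(user, password), verify=False)

if __name__ == "__main__":
    response = get_engine_info(BASEURL, "youruser", "yourpassword")
    print(response.status_code)
    print(response.text)
```

A 200 response with a JSON body describing the engine confirms the API is reachable and the credentials are accepted.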
I have been playing around with the MQ REST API. It works very well. Also, certificate-based authentication works out of the box.
Of course, you are doing something that MQ fanatics might find horrific: reliable messaging over an unreliable protocol. They are somewhat right. By no means can MQ provide assured message delivery over the unreliable HTTP protocol. When using this in an application, make sure you handle all error situations. For example, when you do not get an HTTP response, you do not know whether the message was successfully delivered or not. Your application has to cater for such situations, for instance by designing the operations so that they can safely be retried; this property is called idempotence.
Here is my small Python program that illustrates how you can use the MQ REST API.
import requests
import sys

class MQWebManager:
    baseapiurl = "/ibmmq/rest/v1/messaging"

    def __init__(self, ep, ak, cert_file_path, key_file_path):
        self.endpoint = ep
        self.apikey = ak
        self.cert = (cert_file_path, key_file_path)

    def apideleterequest(self, qmgr, queue, msgid):
        # DELETE destructively gets the message with this msgid off the queue
        resourceurl = self.endpoint + self.baseapiurl + "/qmgr/" + qmgr + "/queue/" + queue + "/message"
        request_headers = {
            'messageId': "'" + msgid + "'",
            'Content-Type': 'text/plain;charset=utf-8',
            'ibm-mq-rest-csrf-token': 'somevalue',
            'correlationId': ''
        }
        data = {}
        # verify=False skips TLS server validation; do not use in production
        response = requests.delete(resourceurl, data=data, cert=self.cert, verify=False, headers=request_headers)
        return response

    def apipostrequest(self, qmgr, queue):
        # POST puts a new message on the queue
        resourceurl = self.endpoint + self.baseapiurl + "/qmgr/" + qmgr + "/queue/" + queue + "/message"
        request_headers = {
            'Content-Type': 'text/plain;charset=utf-8',
            'ibm-mq-rest-csrf-token': 'somevalue'
        }
        data = 'hello from apipostrequest'
        print('resource url: ', resourceurl)
        response = requests.post(resourceurl, data=data, cert=self.cert, verify=False, headers=request_headers)
        return response

print('---------')
#cert_file_path = "/yourpath/yourcert.crt"
#key_file_path = "/yourpath/yourcert.privkey"
cert_file_path = sys.argv[1]
key_file_path = sys.argv[2]

m1 = MQWebManager("https://mqweb.yourzos.com:12345", "", cert_file_path, key_file_path)

# put a message on the queue
response = m1.apipostrequest("QMGR", "YOUR.Q.NAME")
print(">>>", response.status_code, response.text)
print(response.headers)
print(response)

# retrieve the msgid of the message we just put there
msgid = response.headers['ibm-mq-md-messageId']
print(msgid)

# delete the message we just put there
response = m1.apideleterequest("QMGR", "YOUR.Q.NAME", msgid)
print(">>>", response.status_code, response.text)
print('---------')