In or Out: let’s get the mainframe legacy over with now

In many mainframe shops – organizations using applications that run on the mainframe – senior management struggles with their isolated, expensive, complicated mainframe environment. 

  • The mainframe investment is a significant part of the IT budget, requiring board-level decision making.
  • It is unclear whether the cost of the mainframe investment is in line with its value.
  • Mainframe applications are often core applications, deeply rooted in organizational processes.
  • Legacy applications and technology limit business innovation capacity.
  • Too much is spent on maintenance and continuity, too little on innovation.

At the same time, the misalignment increases.

The organization is moving to a cloud model for its IT – where does the mainframe fit in?

The mismatch between the Enterprise Architecture and the mainframe landscape is increasing.

Organizational and technical debt is building up in the mainframe environment, maintenance and modernization are postponed and staff is aging.

A decision must be made whether to throw out the mainframe legacy or to revitalize the environment, but management has far from complete information to make a sound judgment.

Divestment options

Let’s look at the divestment options you have when you are stuck in this situation.

Rehost the platform, meaning: move the infrastructure to another platform or to a vendor.

This solves a small part of the problem, namely the infrastructure management part. Everything else remains the same. 

Retire all applications on the mainframe platform. Probably the cleanest solution in the divestment strategy. However, this option is only viable if replacement applications are available and a speedy migration to them is possible.

Replace through repurchase, meaning replace your custom solution with an off-the-shelf solution, whether on-premises or in a SaaS model. This is only an option if, from a business perspective, you can live with the standard functionality of the package solution.

Replace through refactoring is an option for applications whose special functionality supports distinguishing business features that cannot be provided by off-the-shelf applications. Significant development and migration efforts may be needed for this approach.

A total solution will likely be a combination of these options, depending on application characteristics and business needs.

The investment option

The investment option is a stepwise improvement process, in multiple areas, depending on the state of the mainframe applications and platform. Areas include application portfolio readjustments, architecture alignments, application and infrastructure technology updates, and modernization of processes and tools.

Depending on the state of the environment, investments may be significant. Some organizations have neglected their mainframe environment for a decade or longer and have a massive backlog to address. In some cases the backlog is so big that divestment is the only realistic option. (As an example, one organization needed to support multiple languages, including Chinese and Russian, in their business applications. After 10 years of maintenance neglect of the middleware, the only option they had was to abandon their strategic application platform. This brings excessive costs, but more importantly for the organization’s competitiveness, technical debt hits at the most inconvenient moments.)

To find the best option for your organisation you should consider at least the following aspects:

  • Cost versus value.
  • Alignment of business goals, enterprise architecture, and mainframe landscape.
  • Position of the mainframe landscape in your application portfolio.
  • Mainframe application portfolio lifecycle status and functional and strategic fit.
  • Technical vitality of your mainframe environment.
  • The operational effectiveness of DevOps and infra teams.
  • Cloud strategy and mainframe alignment.

A thorough analysis of these aspects funnels into a comprehensive improvement plan for business alignment, architectural adjustments, and operational fit. Execution of this plan must be not just agreed upon, but actively controlled by senior business and IT management. A steering body is needed to address challenges quickly. Senior business and IT management, together with the responsible business and enterprise architects, should be represented in that steering body to make sure the agreed goals remain on target.

Thus, you seize control over your mainframe again.

IT Architecture: a mini business case

Organizations heavily depend on software. Millions, billions of lines of code are produced every year.

This will only accelerate in the future. (It may even be a self-accelerating process. I tried to find an estimate of the number of lines of code produced every year, but could not find firm figures. Nevertheless, who would doubt that the number of software-based solutions is growing?)

Architecture provides a sense of how all the software pieces fit together.

For organisations this means that (enterprise, business, IT) architecture defines and assures the organization’s ability to serve customer needs.

Architecture is the organizing principle. 

You can survive for some time without an explicit architecture, but at some point you will hit a wall. Inefficiencies in business processes and the supporting software systems will bring an organization to a halt. Problems with functional alignment, business process and application integration, application support, scalability, changeability and what have you (yes, I am being lazy now) will need to be addressed to get back on track.

It is better to get things organized.

(Image: architecture or mess.)

On efficiency (what’s a nanosecond, what’s a microsecond)

I heard Marc Andreessen predict that programmers will need to get very efficient at programming again (at about 17:30 in this very interesting interview with Kevin Kelly):

https://a16z.com/2019/12/12/why-we-should-be-optimistic-about-the-future/

If we do not get more efficient in programming, things might get stuck.

Another interesting perspective was provided long ago by Grace Hopper, the lady who invented COBOL, amongst other things.

See this video: How long is a nanosecond?
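For a feel of the scale, here is Hopper’s point as a quick back-of-the-envelope calculation in Python (a minimal sketch; the numbers follow directly from the speed of light):

    # Grace Hopper's visual aid as arithmetic: the distance light travels
    # in a nanosecond and in a microsecond.
    c = 299_792_458                # speed of light in metres per second

    nanosecond = c * 1e-9          # ~0.30 m: Hopper's famous ~11.8 inch wire
    microsecond = c * 1e-6         # ~300 m: a coil of nearly a thousand feet

    print(f"1 nanosecond  -> {nanosecond:.3f} m ({nanosecond / 0.0254:.1f} inches)")
    print(f"1 microsecond -> {microsecond:.1f} m")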

This all reminds me of a small test we did recently to check the resource consumption of programming languages, by writing just a very small Hello World program: one in COBOL, one in Java, and one in Groovy.

The following summarizes how much CPU time these programs needed to run:

  • COBOL: 0.01 ms (basically unmeasurable).
  • Java: 1 second.
  • Groovy: 3 seconds.
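For the record, a minimal sketch of how such a measurement can be taken on a Unix-like system (the "java Hello" command is a placeholder for whichever of the three programs is under test):

    import subprocess
    import resource

    # Run the program under test as a child process.
    subprocess.run(["java", "Hello"], check=True)

    # RUSAGE_CHILDREN aggregates the CPU time of all terminated children.
    usage = resource.getrusage(resource.RUSAGE_CHILDREN)
    print(f"CPU seconds (user + system): {usage.ru_utime + usage.ru_stime:.2f}")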

And that is only looking at the efficiency of programming languages. Much more could be gained by looking at application architectures. Microservices architectures, especially when applied radically, are incredibly inefficient compared to traditional tightly coupled applications in C, COBOL, or even Java.

Of course I do not want to advertise stovepipe applications; history has proven their maintenance issues to be prohibitive. But a more balanced architecture with more attention to efficiency seems inevitable.

Eggheads not coaches for winning soccer

(Image: egghead models for soccer.)

In the Dutch publication De Correspondent I found an interesting article from 2019 about data and soccer. It seems that a scientific approach to managing a soccer team is more promising than hiring a charismatic coach.

https://decorrespondent.nl/10683/wat-krijg-je-als-je-hardcore-betas-hun-gang-laat-gaan-het-interessantste-voetbalexperiment-ter-wereld/2507919553755-aa40ad9f?mc_cid=dcff0ee106&mc_eid=ba609013f0

(Image: the scientist Sumpter at work.)

AWS IaaS first glance: my goodness how cumbersome

I am currently getting myself up to speed with AWS through an AWS Certified Solutions Architect – Associate training.

No less detail than physical infrastructure

As an architect with quite a bit of background in infrastructure, I was surprised by the level of minute infrastructure detail that is still needed to design a simple virtualized environment, and even more so by the amount of subsequent manual configuration required to set up that environment. The level of detail required is no less than what is needed for a physical environment. My ignorance, for sure, but I had expected the IaaS provider to hide much more detail from the user. What you see, however, is that server and network details, down to things like CIDR blocks, are still needed.
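To give a flavour, a minimal sketch with the boto3 SDK (the region and address ranges are arbitrary examples): even the smallest VPC requires you to choose and administer CIDR blocks yourself.

    import boto3

    # Even a minimal VPC needs explicitly chosen address ranges.
    ec2 = boto3.client("ec2", region_name="eu-west-1")

    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
    vpc_id = vpc["Vpc"]["VpcId"]

    # Each subnet again needs its own CIDR block, carved out of the
    # VPC range by hand.
    ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")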

Boilerplates or similar could help

I guess there is an opportunity to create more high-level infrastructure boilerplates. Around 2000 I was involved in many projects needing similarly detailed designs for the e-business infrastructures clients needed at that time. After having done that a few times, we found that these infrastructures were in fact very similar. For a service like AWS I can similarly envision a boilerplate that only requires entering a limited number of functional and non-functional requirements for a setup like a web application infrastructure with a user registry, a scalable set of application servers, and a highly available relational database management system.
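As a sketch of the idea (the helper, its names, and the sizing rule are invented for illustration; this is not an existing AWS feature):

    # Hypothetical boilerplate: derive a full infrastructure specification
    # from a handful of functional and non-functional requirements.
    def web_app_stack(name: str, expected_users: int, ha_database: bool) -> dict:
        app_servers = max(2, expected_users // 10_000)  # crude sizing rule
        return {
            "vpc_cidr": "10.0.0.0/16",
            "user_registry": f"{name}-user-pool",
            "app_servers": app_servers,
            "database": {"engine": "postgres", "multi_az": ha_database},
        }

    print(web_app_stack("webshop", expected_users=50_000, ha_database=True))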

(I am assuming IBM, Google, Microsoft, and others are no different in this respect, but admittedly I have not checked all of them.)

To be fair, AWS does offer a quite extensive documentation system with reference architectures, technical guides, and whitepapers. There you can find blueprints for solutions, and guidance on how to set up such solutions in AWS. Adding assets, tools, and automation to support users in setting up such best-practice configurations would probably be a great addition to AWS’s Architecture Center and/or Marketplace.

What’s your experience?

25 wishes for the mainframe anno 2023

I listed my wishes for the mainframe. Can we have this, please?

For our software and hardware suppliers, here is my wish list:

  1. Continuous testing, at reasonable cost.

Continuous testing is no longer an option. For the speed that modern agile software factories need, continuous functional, regression, and performance testing are mandatory. But with mainframes, the cost of continuous testing quickly becomes prohibitive. Currently, all MSUs are the same, and the new testing MSUs are driving the hardware and, more importantly, the MLC software costs through the roof.

Please, not through yet another impractical license model (see later).

  2. Real Z hardware for flexible Dev & Test.

For reliable and manageable regression and performance testing, multiple configurations of test environments are needed, on real Z hardware. The emulation software zPDT or zD&T is not fit for purpose and not a manageable and maintainable solution for these needs.

  3. Same problem, same solution for our software suppliers.

Customers do not want to notice that the development teams of IBM / CA/Broadcom / Compuware / BMC / HCL / Rocket / … are geographically dispersed (pun). Please let all the software developers in your company work the same way, on shared problems.

  4. Sysplex is not an option but a given.

Everything sysplex-enabled by default please. Meaning, ready for data sharing, queue sharing, file sharing, VIPA, etcetera.

  5. Cloud is not an option but a given.

I do not mean SaaS. I mean everything is scripted and ready to be automated. Everything can be engineered and parameterised once and rerun many times.

  6. Open source z/OS sandbox for everyone (community managed – do we want to help with this?).

Want to boost innovation on the mainframe? Let’s have a publicly accessible mainframe for individual practitioners. And I mean for z/OS!   

  7. Open source code (parts) for extensions (radicalize ZOWE- and Zorow-like initiatives).

Give us more open source for z/OS. And the opportunity for the broad public to contribute. We need a community mainframe for z/OS open source initiatives.

  8. Open APIs on everything.

Extend what z/OSMF has given us: APIs on everything you create. Automatically. 
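As a taste of what that already looks like today, a minimal sketch against the z/OSMF REST jobs service (host and credentials are placeholders):

    import requests

    # List batch jobs for a user via the z/OSMF REST jobs API.
    resp = requests.get(
        "https://zosmf.example.com/zosmf/restjobs/jobs",
        params={"owner": "IBMUSER"},
        headers={"X-CSRF-ZOSMF-HEADER": "true"},  # required by z/OSMF REST services
        auth=("IBMUSER", "secret"),
    )
    resp.raise_for_status()
    for job in resp.json():
        print(job["jobname"], job["jobid"], job["status"])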

  9. Everything Unicode.

Yes, there are more characters than those in codepage 037, and they are really used on mainframes outside the US.

  10. Automate everything.

(everything!)

  11. Fast and easy 5-minute OS+MW installation (push button), like Linux and Windows.

OK, make it half an hour. Still too challenging? OK, half a day. (Hide SMP/E from customers?)

  12. Clean up old stuff.

There are a lot of things that are no longer useful nowadays. ISPF, for example, is full of them: options 1, 4, 5, and 7 of the primary ISPF screen ISR@PRIM can go, as can many other things (print and punch utilities, really).

  13. Standardized z/OS images.

Remove customization options. Work on a standardized z/OS image. We don’t need the option to rename SYS1.LINKLIB to OURCO.CUSTOM.LINKLIB. Define a standard. If customers want to deviate, that is their problem, not everyone’s.

  14. Standardized software distribution.

My goodness, everyone has invented their own way to get code from the installation environment to their LPARs, because there is nothing there. Develop/define a standard. (Oh, and we do not need SMP/E on production systems.)

  15. Radically remove innovation hurdles.

For example, stop the (near) eternal backward compatibility (announce that everything must be 64-bit from 2025 onward). Abandon assembler. Force customers to clean up their stuff too.

  16. Radical new pricing.

OK if it applies to innovative/renovated applications. (But keep pricing SIMPLE, please. Not 25 new license models, just one.)

  17. Quality first, speed next.

Slower is not always worse, even in today’s agile world… Fast enough is good enough.

  18. Support a real sharing community.

We need the next-gen CBT Tape.

  19. A radical innovation mindset.

Versus “this is how we have done things the past 30 years so it must be good”. Yawn.

  20. Everything radically dynamic by design.

Remove the need for (unnecessary) IPLs, rebinds, and restarts (unless in rippling clusters)… Kill all exits (see later).

  21. Delete the assembler.

Remove assembler interfaces (anything is better than assembler). Replace with open APIs. Remove all (assembler) exits (see later).

  22. Dynamic SQL.

By default.

  23. More memory.

As much memory as we can get (at a reasonable price). (Getting there, kudos.)

  24. Cheap zIIPs.

Smaller z/OS sites crave zIIPs to run innovative tools like z/OSMF, ZOWE, Ansible, Python, etcetera seamlessly.

  25. Remove exits.

Give us parameters, INI files, YAML, JSON, properties files, anything other than exits. Better even: give us no customization options at all. Standardize.

For our mainframe users:

Kill old technology: assembler, custom-built ISPF interfaces, CLISTs, unsupported compilers, applications using VSAM or, worse, BDAM, …

Modernise your DevOps pipeline. Use modern tools with modern interfaces and integrations. They are available.

Re-architect applications. Break silos into (micro)services while rebuilding applications. (Use microservices where useful, not as a standard architecture.)

Retire old applications. Either revamp or retire. If you have not touched your application for more than 2 years, it has become an operational risk. 

Hire young people, teach them new tools, and forbid them to use old tools. Yes, I know we need 3270, but not to edit code.

Task your young people with technology modernization.

What am I missing?

Self-centered IT architecture and technology as a necessary evil

Everyone thinks their own area of technical expertise is the most important.

The Data Architect thinks software solutions must be data-driven.

The integration architect thinks everything must be event-driven, or every interface must be a REST API.

The service management guys think that the CMDB is the center of the universe.

The cloud architect (if such a thing exists) thinks everything must be deployed in the cloud because the cloud is heaven.

We all forget that successful architectures are based on best practices. Quite universal best practices: don’t tie everything tightly together (layering, loose coupling), do not make things complex (simplicity), etcetera. Technologies are not a goal. They are just a means. At best. Technologies are a necessary evil: you want as little of them as possible.

A brief GraphQL vs REST investigation

People around me are talking about using GraphQL, positioning it next to or opposed to REST APIs. I was not sure how these compare, so I needed to find out.

In short, from 10,000 feet, GraphQL is an alternative to REST APIs for application programming interfaces. GraphQL provides more flexibility from several perspectives. Read more about that in the link below.

However, it requires a specific GraphQL server-side infrastructure. This is probably also going to be a problem for large-scale adoption. You can build a GraphQL client in a number of programming languages, but to act as a GraphQL provider you need one of the few server-side implementations.

So, a big benefit of REST APIs is that it is an implementation-independent interface specification that is easy to implement on your server-side middleware. GraphQL would require your middleware to integrate one of the GraphQL implementations, or to build one natively. This could be a matter of time and adoption, but currently I do not see broad adoption.
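A minimal sketch of the difference from the client side, against a hypothetical books service (both endpoints are invented for illustration):

    import requests

    # REST: you GET a resource; the server decides which fields come back.
    book = requests.get("https://api.example.com/books/42").json()

    # GraphQL: one endpoint; the client states exactly which fields it needs.
    query = """
    query {
      book(id: 42) {
        title
        author { name }
      }
    }
    """
    result = requests.post(
        "https://api.example.com/graphql", json={"query": query}
    )
    data = result.json()["data"]["book"]  # only title and author.name come back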

Apollo, graphql-ruby, Juniper, gqlgen, and Lacinia are GraphQL implementations.

I found the article that best describes what GraphQL is on the AWS blog: Comparing API Design Architectures – AWS (amazon.com). (I am not an AWS aficionado; it is just that this article best addressed what I wanted to know.)

Provisioning z/OS

This week I gave a little talk about provisioning automation for z/OS. IBM has created a provisioning tool for z/OS that is part of the z/OS base, and I talked about our experiences with it. It is moving to Ansible technology now. The next technology hop. Let’s talk tech again so we can refrain from actually doing things.

Later that same day I started a course called Google Cloud Platform Essentials.

Ouch!!

We are somewhat behind on z/OS. That is a major understatement.

We don’t do tapes anymore, nor MVSGENs, but it still feels like an upgrade from a horse to a steam engine.

Yet I believe that, if used well, the z/OS tools available today should allow us to catch up quickly.

There’s no technical obstacle. It’s only mindset.

Technical debt

Technical debt is a well-understood and well-ignored reality.

We love to build new stuff, with new technologies. Which we are sure will soon replace the old stuff.

We borrow time by writing quick (and dirty) code, building up debt.

Eventually we have to pay back — with interest. There’s a high interest on quick and dirty code.

Making changes to the code becomes more and more cumbersome. Then business change becomes painstakingly hard.

That is why continuous renovation is a business concern.

Organisations run into trouble when they continue ignoring technical debt, and keep building new stuff while neglecting the old stuff.

Techies like new stuff, but if they are professional they also warn you about the old stuff still out there. You often see them getting frustrated with overly pragmatic business or project management pushing away renovation needs.

Continuous renovation must be part of an IT organisation’s Continuous Delivery and application lifecycle management practice.

Making renovation a priority requires courage. Renovation is unsexy. It requires a perspective that extends beyond the short-term horizon.

But the alternative is a monstrous project every so many years to free an organisation from the concrete shoes of unmaintainable applications. At best. If you can afford such a project. Many organizations do not survive the neglect of technical debt.