The Lindy effect and technology

Stuff that has been around for x years can be expected to be around for another x years.

That is what the Lindy effect tells us.

Read about Lindy in Nassim Taleb’s Skin in the Game.
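
As a rough illustration of the rule of thumb (a minimal sketch, not Taleb's formal treatment), the expectation can be written down directly; the technologies and introduction years below are only examples:

```python
from datetime import date

# Lindy rule of thumb: expected remaining lifetime is roughly the current age.
# The technologies and introduction years below are only illustrative examples.
technologies = {
    "COBOL": 1959,
    "SQL": 1974,
    "Java": 1995,
}

this_year = date.today().year
for name, introduced in technologies.items():
    age = this_year - introduced
    print(f"{name}: about {age} years old, so Lindy expects roughly {age} more years")
```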

This informs how we should approach legacy technology: 

  • Maintain it with motherly care, or
  • Decommission it aggressively

The Lindy effect also informs us how to approach the adoption of new technology: with care.

An approach to settling technical debt in your application portfolio

A short summary of key aspects of an approach to fixing the technical debt in your legacy application portfolio.

Risks of old technology in your software portfolio typically are:

  • The development and operations teams have little or no knowledge of the old technologies and/or programming languages.
  • Program sources have not been compiled for decades; modern compilers cannot handle the old program sources without (significant) updates*.
  • The source code for runtime programs is missing, or the version of the source code is not in line with the version of the runtime. The old practice of static calls (statically including every called program in the runtime module) makes things even more unreliable.
  • Programs use deprecated or undocumented low-level interfaces, making every technology upgrade a risky operation for breaking these interfaces.

A business case for a project to update your legacy applications can then be based on the risk identified in an assessment of your portfolio:

  • An assessment of the technical debt in your application portfolio, in technical terms (what technologies), and volume (how many programs).
  • An assessment of the technical debt against the business criticality and application lifecycle of the applications involved.
  • An assessment of the technical knowledge gap in your teams in the area of technical debt.
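
One way to combine these three assessments into something comparable across applications is a simple scoring model, so that the riskiest applications surface first. The sketch below only illustrates the idea; the field names, scales, and weights are assumptions, not a prescribed method:

```python
from dataclasses import dataclass

@dataclass
class Application:
    name: str
    tech_debt: int       # 0 (none) .. 5 (severe), from the technical assessment
    criticality: int     # 0 (disposable) .. 5 (business critical)
    knowledge_gap: int   # 0 (well known) .. 5 (nobody left who knows it)

def risk_score(app: Application) -> int:
    # Naive weighting: debt only matters in proportion to criticality,
    # and a knowledge gap amplifies the problem.
    return app.tech_debt * app.criticality + 2 * app.knowledge_gap

portfolio = [
    Application("payments-batch", tech_debt=5, criticality=5, knowledge_gap=4),
    Application("hr-reporting", tech_debt=3, criticality=2, knowledge_gap=1),
]

for app in sorted(portfolio, key=risk_score, reverse=True):
    print(f"{app.name}: risk score {risk_score(app)}")
```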

The legacy renovation project

Then, how to approach a legacy renovation project.

  • Make an inventory of your legacy.
  • With the inventory, for every application make explicit what the business risk is, in the context of the expected application lifecycle and the criticality of the application.
  • Clean up everything that is not used.
  • Migrate strategic applications.

The inventory

Make an inventory of the artifacts in your application portfolio:

  • Source code: what old technology source program do you have in your source code management tools?
  • Load modules: which load modules do you have in your runtime environment, and in which libraries do they reside?
  • Runtime usage: which load modules are actually used, and by which batch jobs or application servers?
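
A lightweight way to start the source-code part of this inventory is to scan your source repositories and count programs per legacy technology. A minimal sketch, assuming the sources are available on a file system and that file extensions roughly identify the technology (adjust the mapping to your own conventions):

```python
import os
from collections import Counter

# Map file extensions to (legacy) technologies; adjust to your own conventions.
LEGACY_EXTENSIONS = {
    ".asm": "Assembler",
    ".cbl": "COBOL",
    ".cob": "COBOL",
    ".pli": "PL/I",
    ".clist": "CLIST",
}

def inventory(root: str) -> Counter:
    counts: Counter = Counter()
    for dirpath, _dirnames, filenames in os.walk(root):
        for filename in filenames:
            ext = os.path.splitext(filename)[1].lower()
            if ext in LEGACY_EXTENSIONS:
                counts[LEGACY_EXTENSIONS[ext]] += 1
    return counts

if __name__ == "__main__":
    for tech, count in inventory("./src").most_common():
        print(f"{tech}: {count} programs")
```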

Assess the business risk

Consult the business owners of the applications. You may find they do not even realize that they own the application, or that there is such a risk in their application. The application owner then must decide to invest in updating the application, expedite the retirement of the application, or accept the risk in the application. In highly regulated environments, and for business-critical applications in general, the risks described above are seldom acceptable.

Clean up

Next, unclutter your application portfolio. Artifacts that are not used anymore must be removed from the operational tools, throughout the entire CI/CD pipeline. It is ok to move things to some archive, but they must be physically removed from your source code management tools, your runtime libraries, your asset management tools, and any other supporting tool you may have. 
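
Cross-referencing the load-module inventory with the runtime-usage data gives you the list of removal candidates. A minimal sketch of that cross-reference, assuming you have exported both lists to plain text files (one module name per line), for example from your asset management tooling or SMF-based usage reporting:

```python
def read_names(path: str) -> set[str]:
    # One module name per line; ignore blanks and comments.
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip() and not line.startswith("#")}

# Hypothetical export files: all load modules found in the libraries, and the
# modules actually observed in use (batch jobs, application servers).
all_modules = read_names("load_modules.txt")
used_modules = read_names("used_modules.txt")

unused = sorted(all_modules - used_modules)
print(f"{len(unused)} of {len(all_modules)} load modules appear unused:")
for name in unused:
    print(f"  candidate for archiving/removal: {name}")
```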

Migrate

Then, do the technical migration for the remaining applications. If the number of applications that must be updated is high, organizations often set up a “migration factory”: a team combining business and technical expertise that develops tools and methodologies for the required technology migrations. Be aware that experience shows more than 50% of the effort of such migrations goes into testing, and even more if test environments and test automation for the applications do not exist.

*Note:

Most compilers in the 1990s required modifications to the source programs to make them compilable. The runtime modules built with the old compilers, however, kept functioning. Many sites chose not to invest in the recompilation and testing effort.

Nowadays we accept that we have to modify our code when a new version of our compiler or runtime becomes available. For Java, for example, this has always been a pain, but an accepted one.

For the mainframe, backward compatibility has always been a strong principle. This has its advantages, but certainly also its disadvantages: being an obstacle to technological progress, or in other words building up technical debt, is often severely underestimated.

In or Out: let’s get the mainframe legacy over with now

In many mainframe shops – organizations using applications that run on the mainframe – senior management struggles with an isolated, expensive, complicated mainframe environment.

  • The mainframe investment is a significant part of your IT budget, requiring board-level decision making.
  • It is unclear whether the cost of your mainframe investment is in line with its value.
  • Mainframe applications are often core applications, deeply rooted in organizational processes.
  • Business innovation capacity is limited by legacy applications and technology.
  • Too much is spent on maintenance and continuity, too little on innovation.

At the same time, the misalignment increases.

The organization moves to a cloud model for their IT – what is the mainframe position in that respect?

The mismatch between Enterprise Architecture and mainframe landscape is increasing.

Organizational and technical debt is building up in the mainframe environment, maintenance and modernization are postponed and staff is aging.

Senior management must decide whether to throw out the mainframe legacy or revitalize the environment, but they have far from complete information on which to base a good judgment.

Divestment options

Let’s look at the divestment options you have when you are stuck in this situation.

Rehost the platform, meaning, move the infrastructure to another platform or to a vendor. 

This solves a small part of the problem, namely the infrastructure management part. Everything else remains the same. 

Retire all applications on the mainframe platform. This is probably the cleanest solution in the divestment strategy. However, this option is only viable if replacement applications are available and a speedy migration to these applications is possible.

Replace through repurchase, meaning replace your custom solution with another off-the-shelf solution, whether on-premise or in a SaaS model. This is only an option if, from a business perspective, you can live with the standard functionality of the package solution. 

Replace through refactoring is an option for applications whose special functionality supports distinguishing business features that cannot be provided by off-the-shelf applications. Significant development and migration efforts may be needed for this approach.

A total solution will likely be a combination of these options, depending on application characteristics and business needs.

The investment option

The investment option is a stepwise improvement process, in multiple areas, depending on the state of the mainframe applications and platform. Areas include application portfolio readjustments, architecture alignments, application and infrastructure technology updates, and processes and tools modernization.

Depending on the state of the environment, investments may be significant. Some organizations have neglected their mainframe environment for a decade or longer, and have a massive backlog to address. In some cases the backlog is so big that divestment is the only realistic option. (As an example, one organization needed to support multiple languages, including Chinese and Russian, in their business applications. After 10 years of maintenance neglect of the middleware, the only option they had was to abandon their strategic application platform. This brings excessive costs, but more importantly for the organization’s competitiveness, technical debt hits at the most inconvenient moments.)

To find the best option for your organisation you should consider at least the following aspects:

  • Cost versus value.
  • Alignment of business goals, enterprise architecture, and mainframe landscape.
  • Position of the mainframe landscape in your application portfolio.
  • Mainframe application portfolio lifecycle status and functional and strategic fit.
  • Technical vitality of your mainframe environment.
  • The operational effectiveness of DevOps and infra teams.
  • Cloud strategy and mainframe alignment.

A thorough analysis of these aspects funnels into a comprehensive improvement plan for business alignment, architectural adjustments, and operational fit. Execution of this plan must not just be agreed upon, but actively controlled by senior business and IT management. A steering body is needed to address challenges quickly. Senior business and IT management, as well as the controlling business and enterprise architects, should be represented in the steering body to make sure the agreed goals remain on target.

Thus, you seize control over your mainframe again.

25 wishes for the mainframe anno 2023

I listed my wishes for the mainframe. Can we have this, please?

For our software and hardware suppliers:

  1. Continuous testing, at reasonable cost.

Continuous testing is no longer optional. For the speed that modern agile software factories need, continuous functional, regression, and performance testing are mandatory. But with mainframes, the cost of continuous testing quickly becomes prohibitive. Currently all MSUs are the same, and the additional testing MSUs drive the hardware and, more importantly, the MLC software costs through the roof.

And please, not through yet another impractical license model (see later).

  2. Real Z hardware for flexible Dev & Test.

For reliable and manageable regression and performance testing, multiple configurations of test environments are needed, on real Z hardware. The emulation software (zPDT or zD&T) is not fit for purpose and not a manageable, maintainable solution for these needs.

  3. Same problem, same solution for our software suppliers.

Customers do not want to notice that the development teams of IBM / CA/Broadcom / Compuware / BMC / HCL / Rocket / … are geographically dispersed (pun intended). Please let all the software developers in your company work the same way, on shared problems.

  4. Sysplex is not an option but a given.

Everything sysplex-enabled by default please. Meaning, ready for data sharing, queue sharing, file sharing, VIPA, etcetera.

  5. Cloud is not an option but a given.

I do not mean SaaS. I mean everything is scripted and ready to be automated. Everything can be engineered and parameterised once, and rerun many times.

  6. Open source z/OS sandbox for everyone (community managed – do we want to help with this?).

Want to boost innovation on the mainframe? Let’s have a publicly accessible mainframe for individual practitioners. And I mean for z/OS!   

  7. Open source code (parts) for extensions (radicalize ZOWE and Zorow-like initiatives).

Give us more open source for z/OS. And the opportunity for the broad public to contribute. We need a community mainframe for z/OS open source initiatives.

  8. Open APIs on everything.

Extend what z/OSMF has given us: APIs on everything you create. Automatically. 

  9. Everything Unicode.

Yes, there are more characters than those in codepage 037, and they are really used on mainframes outside the US.

  10. Automate everything.

(everything!)

  11. Fast and easy 5-minute OS+MW installation (push button), like Linux and Windows.

OK, make it half an hour. Still too challenging? OK, half a day. (Hide SMP/E from customers?)

  12. Clean up old stuff.

There are a lot of things that are not useful anymore. ISPF, for example, is full of them: options 1, 4, 5, and 7 of the primary ISPF screen ISR@PRIM can go, and so can many other things (print and punch utilities, really).

  13. Standardized z/OS images.

Remove customization options. Work on a standardized z/OS image. We don’t need the option to rename SYS1.LINKLIB to OURCO.CUSTOM.LINKLIB. Define a standard. If customers want to deviate, it is their problem, not everyone’s.

  14. Standardized software distribution.

My goodness, everyone has invented something of their own to get code from the installation to their LPARs, because there is nothing there. Develop or define a standard. (Oh, and we do not need SMP/E on production systems.)

  15. Radically remove innovation hurdles.

For example, stop the (near) eternal downward compatibility (announce that everything must be 64-bit from 2025 onward). Abandon assembler. Force customers to clean up their stuff too.

  16. Radical new pricing.

OK if it applies to innovative/renovated applications. (But keep pricing SIMPLE, please. Not 25 new license models, just 1.)

  17. Quality first, speed next.

Slower is not always worse, even in today’s agile world… Fast is good enough.

  18. Support a real sharing community.

We need the next-gen CBT Tape.

  19. A radical innovation mindset.

Versus “this is how we have done things for the past 30 years, so it must be good”. Yawn.

  20. Everything radically dynamic by design.

Remove the need for (unnecessary) IPLs, rebinds, and restarts (unless in rippling clusters), … Kill all exits (see later).

  21. Delete the assembler.

Remove assembler interfaces (anything is better than assembler). Replace with open APIs. Remove all (assembler) exits (see later).

  22. Dynamic SQL.

By default.

  23. More memory.

As much memory as we can get (at a reasonable price). (Getting there, kudos.)

  24. Cheap zIIPs.

Smaller z/OS sites crave zIIPs to run innovative tools like z/OSMF, ZOWE, Ansible, Python, etcetera seamlessly.

  25. Remove exits.

Give us parameters, ini files, YAML, JSON, properties, anything other than exits. Better even: give us no options for customization. Standardize.

For our mainframe users:

Kill old technology: assembler, custom-built ISPF interfaces, CLISTs, unsupported compilers, applications using VSAM or, worse, BDAM, …

Modernise your DevOps pipeline. Use modern tools with modern interfaces and integrations. They are available.

Re-architect applications. Break silos into (micro)services while rebuilding applications. (Use microservices where useful, not as a standard architecture.)

Retire old applications. Either revamp or retire. If you have not touched your application for more than 2 years, it has become an operational risk. 

Hire young people, teach them new tools, forbid them to use old tools. Yes I know we need 3270, but not to edit code.

Task your young people with technology modernization.

What am I missing?

Technical debt

Technical debt is a well-understood and well-ignored reality.

We love to build new stuff, with new technologies. Which we are sure will soon replace the old stuff.

We borrow time by writing quick (and dirty) code, building up debt.

Eventually we have to pay back — with interest. There’s a high interest on quick and dirty code.

Making changes to the code becomes more and more cumbersome. Business change then becomes painstakingly hard.

That is why continuous renovation is a business concern.

Organisations run into trouble when they continue ignoring technical debt, and keep building new stuff while neglecting the old stuff.

Techies like new stuff, but if they are professional they also warn you about the old stuff still out there. You often see them getting frustrated with overly pragmatic business or project management pushing renovation needs aside.

Continuous renovation must be part of an IT organisation’s Continuous Delivery and application lifecycle management practice.

Making renovation a priority requires courage. Renovation is unsexy. It requires a perspective that extends beyond the short-term horizon.

But the alternative is a monstrous project every so many years to free an organisation from the concrete shoes of unmaintainable applications. At best, and only if you can afford such a project. Many organizations do not survive the neglect of technical debt.

Getting started with Ansible – Bill Pereira

Bill Pereira’s YouTube channel is great and very informative. He adopts new tech for the mainframe, tries it all out, and explains his experiments on his channel.

John Mertic on the importance of open-source for the mainframe

An interesting podcast, in which Reg Harbeck talks with John Mertic about the history, future role, and community impact of open-source technology for mainframe clients and in general.

… their ability to have a technology stack that enables them to execute and serve their customers better, is a competitive advantage. We see open source as kind of as little bit of that leveling appeal. It’s enabling people to get to that point faster than they ever had before. You don’t need a vendor to be that person. Even legacy organizations and companies have turned themselves into software companies because open source has opened that door for them.

Go fix it while it ain’t broken yet – modernize the mainframe

Reg Harbeck wrote an excellent article in Destination z, Overcoming IBM Z Inertia, in which he encourages IBM Z (mainframe) users to take action on modernizing their mainframe.

The path of action Harbeck describes is to task new mainframers (RePros) with finding and documenting what the current mainframe solutions in place are expected to do, and with working with the organisation to see what needs to be retired, replaced, or contained.

In other words, task a new generation with the mainframe portfolio renewal, and do not leave this to the old generation, who are “surviving until they retire while rocking the boat as little as possible” (hard words from Harbeck, but it is time to get people into action).

In addition to the general approach Harbeck describes, I think it is important to secure senior management support at the highest possible level. Doing so prevents the priority of this program from being watered down too easily by day-to-day priorities, and assures that perceived or real “impossibilities” and roadblocks are resolutely moved out of the way. So:

  • Make modernization a senior management priority. Separate it organizationally from the KSOR (Keep the Show On the Road) activities, to make sure modernization priorities compete as little as possible with KSOR work.
  • Appoint a senior management and a senior technical exec with a strong Z affiliation to mentor, support, and guide the young team from an organisational and strategic perspective.
  • Have a forward-thinking, strong, and respected senior mainframe specialist support the team with education and coaching (not do it for them).

It will be an investment and, according to the “survivors”, it will never be as efficient as before, but one very important thing it will be: fit for the future.

The new mainframe is full of APIs (and has an extension for Docker Containers)

IBM has announced the new mainframe box, not unexpectedly called the z15. The z15 is a huge machine, and I would summarize this evolution as “bigger and better” without being disrespectful.
Nevertheless, the accompanying announcement of the new release of the flagship operating system for the z15, z/OS 2.4, is massively interesting.

The most eye-catching new feature in z/OS 2.4 is z/OS Container Extensions. With this feature you can run Linux Docker containers under z/OS, next to your existing z/OS applications. Making your containers part of your z/OS infrastructure in this way has some powerful advantages.

  • Very easy integration of containers into the continuous availability and disaster recovery solution for your z/OS platform provided by GDPS. I think this is by far the most powerful benefit. It lets you provide Linux on z solutions even more easily than through a native Linux on z deployment.
  • Exploit the capabilities of network virtualization of z/OS.
  • Optimized integration with z/OS workloads.
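
To illustrate what this means in practice: from the application side, a zCX instance behaves like any other Docker host. A minimal sketch using the Docker SDK for Python, assuming DOCKER_HOST (or an SSH tunnel) points at the Docker Engine inside a zCX instance and that an s390x build of the image exists; the image name and port mapping are placeholders.

```python
import docker  # Docker SDK for Python (pip install docker)

# Assumes DOCKER_HOST (or an SSH tunnel) points at the Docker Engine
# running inside a z/OS Container Extensions (zCX) instance.
client = docker.from_env()

# Run an s390x container image next to the existing z/OS workloads;
# the image and port mapping are illustrative only.
container = client.containers.run(
    "nginx:latest",
    detach=True,
    ports={"80/tcp": 8080},
    name="zcx-demo-web",
)
print(container.name, container.status)
```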

The second important item to look at is the evolution of z/OSMF. Contrary to what IBM Marketing tells us, in my opinion it is not Container Pricing, and not Open Data Analytics on z/OS. The development of z/OSMF is more important because it is fundamentally changing the consumability of the platform, turning it into the most open computing platform in the market through the availability of REST APIs on basically any resource on the platform. And it is fundamentally transforming the way the platform is managed.
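
To make that concrete: the same REST pattern applies to jobs, data sets, files, consoles, and more. Below is a minimal sketch that lists cataloged data sets under a qualifier through the z/OSMF files REST interface; the host name, credentials, and qualifier are placeholder assumptions, and it uses the Python requests library.

```python
import requests

ZOSMF = "https://zosmf.example.com"  # hypothetical z/OSMF host

# List cataloged data sets matching a qualifier via the z/OSMF files REST interface.
resp = requests.get(
    f"{ZOSMF}/zosmf/restfiles/ds",
    params={"dslevel": "IBMUSER.**"},          # hypothetical high-level qualifier
    auth=("IBMUSER", "secret"),                # hypothetical credentials
    headers={"X-CSRF-ZOSMF-HEADER": "true"},   # CSRF header expected by z/OSMF REST services
    timeout=30,
)
resp.raise_for_status()
for ds in resp.json().get("items", []):
    print(ds["dsname"])
```

The same style of call works for the jobs interface (/zosmf/restjobs/jobs) and other resources, which is exactly what makes the platform scriptable from any modern toolchain.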

It is exciting to see how the mainframe is changing at the moment. The movements of the last five years are really turning the mainframe from a self-oriented platform into one of the most open and accessible technologies on the market.

The only thing missing, in my opinion, is affordable, public, and easily accessible access to the platform, to boost the collaborative community around it.