Best practices, theories, and grandmother

Best practices stem from the practical, not from the theoretical. 

A theory explains reality. The current theory explains reality best. A theory is valid as long as there is no theory explaining reality better.

Best practices are ways of doing things. The practice is based on years of experience in the real world. Grandmother told us how she did it. It is not a theory. It is not proven formally by mathematics. It is proven by action and results.

Best practices are perennial. They change very infrequently. Theories change frequently.

In IT, best practices are independent of technologies. Examples are: separation of concerns, layering, encapsulation, decoupling.

Best practices exist for a reason: they work.

A theory may explain why they work. But it is not necessary.

Best practices have been around for years. They were not invented half a year ago. Things invented half a year ago may be theories; more often than not, theories about the applicability of technologies.

I think we need to question “new best practices”.

Instead we must rely on grandmother’s wisdom. 

*All of this is very likely inspired by (or rather, stolen from) Nassim Taleb’s Antifragile writings and the Lindy effect.

An approach to settling technical debt in your application portfolio

A short summary of key aspects of an approach to fixing the technical debt in your legacy application portfolio.

Risks addressed

Risks of old technology in your software portfolio typically are:

  • The development and operations teams have little or no knowledge of the old technologies and/or programming languages.
  • Program sources have not been compiled for decades; modern compilers cannot handle the old program sources without (significant) updates.
  • The source code for runtime programs is missing, or the version of the source code is not in line with the version of the runtime. The old practice of static calls (statically linking every called program into the runtime module) makes things even more unreliable.
  • Programs use deprecated or undocumented low-level interfaces, making every technology upgrade a risky operation for breaking these interfaces.

Business case development

A business case for a project to update your legacy applications can then be based on the risks identified in an assessment of your portfolio (a small scoring sketch follows the list):

  • An assessment of the technical debt in your application portfolio, in technical terms (what technologies), and volume (how many programs).
  • An assessment of the technical debt against the business criticality and application lifecycle of the applications involved.
  • An assessment of the technical knowledge gap in your teams in the area of technical debt.
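To make this concrete, here is a minimal sketch, in Python, of how the three assessments could feed a risk score per application. All names, scales, and weights are hypothetical; a real assessment model would be richer.

```python
# A minimal sketch, not a real assessment model: combine technical debt,
# business criticality, and the team knowledge gap into one risk score.
# Names, the 1-5 scales, and the scoring formula are all hypothetical.

applications = [
    # (name, technical_debt, business_criticality, knowledge_gap), 1 = low, 5 = high
    ("PAYROLL", 4, 5, 3),
    ("OLDRPT",  5, 1, 5),
]

for name, debt, criticality, gap in applications:
    score = debt * criticality + gap  # criticality amplifies the debt; the gap adds on top
    print(f"{name}: risk score {score}")
```

Even a crude score like this helps rank applications and focus the business case on the riskiest ones.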

The legacy renovation project

Then, how to approach a legacy renovation project.

  • Make an inventory of your legacy.
  • With the inventory, for every application make explicit what the business risk is, in the context of the expected application lifecycle and the criticality of the application.
  • Clean up everything that is not used.
  • Migrate strategic applications.

The inventory

Make an inventory of the artifacts in your application portfolio (a small cross-referencing sketch follows the list):

  • Source code: what old technology source program do you have in your source code management tools?
  • Load modules: what load modules do you have in your runtime environment, and in which libraries do these reside?
  • Runtime usage: what load modules are used, and by which batch jobs or application servers?
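A minimal sketch, in Python, of how these three inventories can be cross-referenced. The example members are hypothetical; in practice the sets would come from your source code management, library, and scheduling tools.

```python
# A minimal sketch, not a real tool: cross-reference the three inventories
# to find the gaps that represent risk. The example members are hypothetical.

source_members = {"PAYROLL1", "PAYROLL2", "HRCALC"}  # from source code management
load_modules   = {"PAYROLL1", "HRCALC", "OLDRPT"}    # from the runtime libraries
used_modules   = {"PAYROLL1", "HRCALC"}              # from batch job / server logs

missing_source = load_modules - source_members  # runtime, but no source: high risk
orphan_sources = source_members - load_modules  # source, but no runtime: cleanup candidates
unused_modules = load_modules - used_modules    # never invoked: cleanup candidates

print("No source for:", missing_source)   # {'OLDRPT'}
print("Orphan sources:", orphan_sources)  # {'PAYROLL2'}
print("Unused modules:", unused_modules)  # {'OLDRPT'}
```

The gaps this exposes feed directly into the business risk assessment and the clean-up step below.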

Assess the business risk

Consult the business owners of the applications. You may find they do not even realize that they own the application, or that there is such a risk in their application. The application owner must then decide to invest in updating the application, to expedite its retirement, or to accept the risk. In highly regulated environments, and for business-critical applications in general, the risks described above are seldom acceptable.

Clean up

Next, unclutter your application portfolio. Artifacts that are not used anymore must be removed from the operational tools, throughout the entire CI/CD pipeline. It is ok to move things to some archive, but they must be physically removed from your source code management tools, your runtime libraries, your asset management tools, and any other supporting tool you may have. 
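A minimal sketch, in Python, of the archive-then-remove discipline. The paths and artifact names are hypothetical, and real removal would go through your source code management and library tools rather than the file system.

```python
# A minimal sketch, not a pipeline integration: archive an unused artifact,
# then physically remove the live copy. Paths and names are hypothetical.

import shutil
from pathlib import Path

ARCHIVE = Path("/archive/decommissioned")

def archive_and_remove(artifact: Path) -> None:
    """Keep a recoverable copy in the archive, then delete the live artifact."""
    ARCHIVE.mkdir(parents=True, exist_ok=True)
    shutil.copy2(artifact, ARCHIVE / artifact.name)
    artifact.unlink()

# e.g. the unused modules found during the inventory step
for name in ["OLDRPT"]:
    artifact = Path("/runtime/libs") / name
    if artifact.exists():
        archive_and_remove(artifact)
```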

Migrate

Then, do the technical migration for the remaining applications. If the number of applications that must be updated is high, you often see organizations set up a “migration factory”. This team combines business and technical expertise, and develops tools and methodologies for the required technology migrations. Experience shows that more than 50% of the effort of such migrations goes into testing, and maybe more if test environments and test automation for the applications do not exist.
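Since testing dominates the effort, much of a migration factory’s tooling is regression harnesses. A minimal sketch, in Python, of the output-comparison idea, assuming hypothetical old and new program binaries and recorded production inputs:

```python
# A minimal sketch of an output-comparison regression harness: run the old
# and the migrated version of a batch program on the same recorded inputs
# and compare the results. Program names and input files are hypothetical.

import subprocess

def run(cmd: list[str], input_file: str) -> bytes:
    """Run a program with the given input file on stdin and capture stdout."""
    with open(input_file, "rb") as f:
        return subprocess.run(cmd, stdin=f, capture_output=True, check=True).stdout

for case in ["case1.dat", "case2.dat"]:    # recorded production inputs
    old = run(["./payroll_old"], case)     # legacy version
    new = run(["./payroll_new"], case)     # migrated version
    print(f"{case}: {'OK' if old == new else 'DIFF'}")
```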

*Note:

Most compilers in the 1990s required modifications to the source programs to make them compilable. The runtime modules built with the old compiler, however, kept functioning. Many sites chose not to invest in the recompilation and testing effort.

Nowadays we accept that we have to modify our code when a new version of our compiler or runtime becomes available. For Java, for example, this has always been a pain in the neck, but it is accepted.

For the mainframe, backward compatibility has always been a strong principle. That has its advantages, but certainly also its disadvantages: the cost of being an obstacle to technological progress, in other words of building up technical debt, is often severely underestimated.

In or Out: let’s get the mainframe legacy over with now

In many mainframe shops – organizations using applications that run on the mainframe – senior management struggles with their isolated, expensive, complicated mainframe environment. 

  • The mainframe investment is a significant part of your IT budget, requiring board-level decision making.
  • It is unclear whether the cost of your mainframe investment is in line with its value.
  • Mainframe applications are often core applications, deeply rooted in organizational processes.
  • Legacy applications and technology limit business innovation capacity.
  • Too much is spent on maintenance and continuity, too little on innovation.

At the same time, the misalignment increases.

The organization moves to a cloud model for their IT – what is the mainframe’s position in that respect?

The mismatch between Enterprise Architecture and mainframe landscape is increasing.

Organizational and technical debt is building up in the mainframe environment, maintenance and modernization are postponed and staff is aging.

A decision must be made whether to throw out the mainframe legacy or to revitalize the environment, but senior management has far from complete information to make a sound judgment.

Divestment options

Let’s look at the divestment options you have when you are stuck in this situation.

Rehost the platform, meaning, move the infrastructure to another platform or to a vendor. 

This solves a small part of the problem, namely the infrastructure management part. Everything else remains the same. 

Retire all applications on the mainframe platform. Probably the cleanest solution in the divestment strategy. However, this option is only viable if replacement applications are available and a speedy migration to these applications is possible.

Replace through repurchase, meaning replace your custom solution with an off-the-shelf solution, whether on-premises or in a SaaS model. This is only an option if, from a business perspective, you can live with the standard functionality of the package solution.

Replace through refactoring is an option for applications whose special functionality supports distinguishing business features that cannot be delivered by off-the-shelf applications. Significant development and migration efforts may be needed for this approach.

A total solution will likely be a combination of these options, depending on application characteristics and business needs.
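As an illustration only, a minimal sketch, in Python, of how such a combination could be reasoned about per application. The characteristics and the decision rules are hypothetical and far simpler than a real portfolio analysis.

```python
# A minimal, hypothetical decision sketch: map a few application
# characteristics to a divestment option. Real decisions weigh many
# more factors (cost, risk, lifecycle, regulation, ...).

def divestment_option(strategic: bool, standard_fit: bool, replaceable: bool) -> str:
    if not strategic and replaceable:
        return "retire"       # cleanest: retire and migrate to the replacement
    if standard_fit:
        return "repurchase"   # off-the-shelf or SaaS replacement
    if strategic:
        return "refactor"     # rebuild the distinguishing functionality
    return "rehost"           # move the infrastructure, buy time

print(divestment_option(strategic=True, standard_fit=False, replaceable=False))  # refactor
```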

The investment option

The Investment option is a stepwise improvement process, in multiple areas, depending on the state of the mainframe applications and platform. Areas include application portfolio readjustments, architecture alignments, application and infrastructure technology updates, and processes and tools modernization.

Depending on the state of the environment, investments may be significant. Some organizations have neglected their mainframe environment for a decade or longer, and have a massive backlog to address. In some cases the backlog is so big that divestment is the only realistic option. (As an example, one organization needed to support multiple languages, including Chinese and Russian, in their business applications. After 10 years of maintenance neglect of the middleware, the only option they had was to abandon their strategic application platform. This brings excessive costs, but, more importantly for the organization’s competitiveness, technical debt hits at the most inconvenient moments.)

To find the best option for your organisation you should consider at least the following aspects:

  • Cost versus value.
  • Alignment of business goals, enterprise architecture, and mainframe landscape.
  • Position of the mainframe landscape in your application portfolio.
  • Mainframe application portfolio lifecycle status and functional and strategic fit.
  • Technical vitality of your mainframe environment.
  • The operational effectiveness of DevOps and infra teams.
  • Cloud strategy and mainframe alignment.

A thorough analysis of these aspects funnels into a comprehensive improvement plan for business alignment, architectural adjustments, and operational fit. Execution of this plan must be not just agreed upon, but actively controlled by senior business and IT management. A steering body is needed to address challenges quickly. Senior business and IT management, and controlling business and enterprise architects, should be represented in the steering body to make sure agreed goals remain on target.

Thus, you seize control over your mainframe again.

IT Architecture: a mini business case

Organizations heavily depend on software. Millions, billions of lines of code are produced every year.

This will only accelerate in the future. (This may even be a self-accelerating process. I was looking for an estimate of the number of lines of code produced every year, but could not find firm figures. Nevertheless, who would doubt that the number of software-based solutions is growing?)

Architecture provides a sense of how all the software pieces fit together.

For organisations this means that (enterprise, business, IT) architecture defines and assures the organisation’s ability to serve customer needs.

Architecture is the organizing principle. 

You can survive for some time without an explicit architecture, but at some point you will hit a wall. Inefficiencies in business processes and the supporting software systems will bring an organization to a halt. Problems with functional alignment, business process and application integration, application support, scalability, changeability and what have you (yes, I am being lazy now) will need to be addressed to get back on track.

It is better to get things organized.

(Image: architecture or mess)

On efficiency (what’s a nanosecond, what’s a microsecond)

I heard Marc Andreessen predict that programmers need to get very efficient at programming again (at about 17:30) in this very interesting interview with Kevin Kelly.

https://a16z.com/2019/12/12/why-we-should-be-optimistic-about-the-future/

If we do not get more efficient in programming, things might get stuck.

Another interesting perspective was already provided by Grace Hopper, the lady who invented COBOL, amongst other things.

See this video: How long is a nanosecond 

This all reminds me of a small test we did recently to check the resource consumption of programming languages, by writing just a very small Hello World program: one in COBOL, one in Java, and one in Groovy.

The following summarizes how many CPU seconds these programs needed to run:

  • COBOL: 0.01 msec (basically it was unmeasurable).
  • Java: 1 second.
  • Groovy: 3 seconds.
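For those who want to repeat such a test: a minimal sketch, in Python for a Unix-like system, of measuring the CPU time a program consumes, including interpreter or JVM startup. The program names are hypothetical, and the COBOL case is omitted because running it depends on your environment.

```python
# A minimal measurement sketch (Unix only): run each Hello World and report
# the user+system CPU time consumed by the child process, startup included.

import resource
import subprocess

def cpu_seconds(cmd: list[str]) -> float:
    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    subprocess.run(cmd, check=True, capture_output=True)
    after = resource.getrusage(resource.RUSAGE_CHILDREN)
    return (after.ru_utime - before.ru_utime) + (after.ru_stime - before.ru_stime)

for name, cmd in [("Java", ["java", "Hello"]), ("Groovy", ["groovy", "hello.groovy"])]:
    print(f"{name}: {cpu_seconds(cmd):.2f} CPU seconds")
```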

And this is only looking at the inefficiency of programming languages. Much more could be gained when looking at application architectures. Microservices architectures, especially when applied radically, are incredibly inefficient compared to traditional tightly coupled applications in C, COBOL, or even Java.

Of course I do not want to advertise stovepipe applications, as history has proven their maintenance issues to be prohibitive, but a more balanced architecture with more attention to efficiency seems inevitable.

Eggheads not coaches for winning soccer

Egghead models for soccer.

In the Dutch Correspondent I found an interesting article from 2019 about data and soccer. It seems that a scientific approach to managing a soccer team is more promising than hiring a charismatic coach.

https://decorrespondent.nl/10683/wat-krijg-je-als-je-hardcore-betas-hun-gang-laat-gaan-het-interessantste-voetbalexperiment-ter-wereld/2507919553755-aa40ad9f?mc_cid=dcff0ee106&mc_eid=ba609013f0

(Image: the scientist Sumpter at work)

Self-centered IT architecture and technology as a necessary evil

Everyone thinks their own area of technical expertise is the most important.

The Data Architect thinks software solutions must be data-driven.

The integration architect thinks everything must be event-driven, or every interface must be a REST API.

The service management guys think that the CMDB is the center of the universe.

The cloud architect (if such a thing exists) thinks everything must be deployed in the cloud because the cloud is heaven.

We all forget that successful architectures are based on best practices. Quite universal best practices. Don’t tie everything tightly together (layering, loose coupling), do not make things complex (simplicity), etcetera. Technologies are not a goal. They are just a means. At best. Technologies are a necessary evil. You want as little of them as possible.

Technical debt

Technical debt is a well-understood and well-ignored reality.

We love to build new stuff, with new technologies. Which we are sure will soon replace the old stuff.

We borrow time by writing quick (and dirty) code, building up debt.

Eventually we have to pay back — with interest. There’s a high interest on quick and dirty code.

Making changes to the code becomes more and more cumbersome. Then business change becomes more painstakingly hard.

That is why continuous renovation is a business concern.

Organisations run into trouble when they continue ignoring technical debt, and keep building new stuff while neglecting the old stuff.

Techies like new stuff, but if they are professional they also warn you about the old stuff still out there. You often see them getting frustrated with overly pragmatic business or project management pushing away the renovation needs.

Continuous renovation must be part of an IT organisation’s Continuous Delivery and application lifecycle management practice.

Making renovation a priority requires courage. Renovation is unsexy. It requires a perspective that extends the short-term horizon.

But the alternative is a monstrous project every so many years to free an organisation from the concrete shoes of unmaintainable applications. At best. If you can afford such a project. Many organisations do not survive the neglect of technical debt.

Transition to obverse

This blog is now transitioning. When I started the blog I wanted to write about IBM mainframe technology, giving space to other readers, presenting a fresh view.

My intentions have changed, challenges have changed, and readers have changed.

After some posts expressing somewhat obverse standpoints of mine, readers reacted that they wanted more of that. Also, in an earlier blog I shared snippets called ‘Principles of doing IT’, which got positive feedback. In this blog I will now bring these together. I will categorize my posts so the reader can easily filter what he wants to see. Yet, I give myself the freedom to keep posting in the order I like, and on the topic that I feel most urgently needs an obverse view.

I hope you enjoy. Please let me know what you think.

Niek

John Mertic on the importance of open-source for the mainframe

Interesting podcast, in which Reg Harbeck talks with John Mertic about the history, future role, and community impact of open-source technology for mainframe clients and in general.

https://soundcloud.com/ibm-systems-magazine/ztalk-with-harbeck-john-mertic-06-12-20

… their ability to have a technology stack that enables them to execute and serve their customers better, is a competitive advantage. We see open source as kind of as little bit of that leveling appeal. It’s enabling people to get to that point faster than they ever had before. You don’t need a vendor to be that person. Even legacy organizations and companies have turned themselves into software companies because open source has opened that door for them.