Managing open source software complexity with platforms?

Over the last couple of days I was working on a new setup for software development. I was surprised (actually somewhat irritated) by the effort needed to get things working.

The components I needed did not seem to work together: Eclipse, the PHP plugin, the Git plugin, the HTML editor.

The same thing happened earlier when setting up a Python project with some APIs (one based on Python 2, the other on Python 3).
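As a minimal illustration of the kind of incompatibility involved (a generic sketch of my own, not the actual APIs in question):

```python
# Python 2: print is a statement, and strings are byte strings by default.
#   print "hello"        # valid in Python 2, a SyntaxError in Python 3

# Python 3: print is a function, and strings are Unicode.
print("hello")

# A library written for Python 2 may hand back bytes where Python 3
# code expects str, so the two cannot be mixed without explicit conversion:
data = b"hello"
text = data.decode("utf-8")
print(text.upper())
```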

I am still trying to think through what the core problem is. So far I can see that the components and the platform are designed to integrate, but the tools all depend on small open source components under the hood, and those dependencies turn out to be incompatible across components.

Maybe there should be a less granular approach to these things, and we should move to (application) platforms. Instead of picking components from GitHub while building our software, we would get an assembled platform of components. Somebody – or rather, some organization – would assemble and publish these open source platforms periodically, say every 6 months.

Status quo discomfort

A thought:

The status quo should feel more uncomfortable than the uncertainty of the future.

Best practices, theories, grandmother

Best practices stem from the practical, not from the theoretical. 

A theory explains reality. The current theory is the one that explains reality best. A theory remains valid as long as there is no theory that explains reality better.

Best practices are ways of doing things. A practice is based on years of experience in the real world. Grandmother told us how she did it. It is not theory. It is not proven formally, by mathematics. It is proven by action and results.

Best practices are perennial. They change very infrequently. Theories change frequently.

In IT, best practices are independent of technologies. Examples are: separation of concerns, layering, encapsulation, decoupling.
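To make one of these concrete, here is a minimal, hypothetical sketch of encapsulation and separation of concerns (all names invented for illustration):

```python
class Account:
    """Encapsulation: the balance is internal state, reachable only
    through methods that guard the business rules."""

    def __init__(self):
        self._balance = 0  # internal; not part of the public interface

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    @property
    def balance(self):
        return self._balance


# Separation of concerns: presentation lives elsewhere, so Account
# never needs to know how it is displayed.
def render(account):
    return f"Balance: {account.balance}"


acct = Account()
acct.deposit(100)
print(render(acct))  # Balance: 100
```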

Best practices exist for a reason: they work.

A theory may explain why they work, but that is not necessary.

Best practices have been around for years. They were not invented half a year ago. Things that were invented half a year ago may be theories – more often than not theories about the applicability of technologies.

I think we need to question “new best practices”.

Instead we must rely on grandmother’s wisdom. 

*All of this very likely inspired by (or rather, stolen from) Nassim Taleb’s Antifragile writings and the Lindy effect.

An approach to settling technical debt in your application portfolio

A short summary of key aspects of an approach to fixing the technical debt in your legacy application portfolio.

Typical risks of old technology in your software portfolio are:

  • The development and operations teams have little or no knowledge of the old technologies and/or programming languages.
  • Program sources have not been compiled for decades; modern compilers cannot handle the old program sources without (significant) updates*.
  • The source code for runtime programs is missing, or the version of the source code is not in line with the version of the runtime. The old technology to do static calls (meaning, including every called program statically in the runtime module) makes things even more unreliable.
  • Programs use deprecated or undocumented low-level interfaces, making every technology upgrade a risky operation that may break these interfaces.

A business case for a project to update your legacy applications can then be based on the risks identified in an assessment of your portfolio:

  • An assessment of the technical debt in your application portfolio, in technical terms (what technologies), and volume (how many programs).
  • An assessment of the technical debt against the business criticality and application lifecycle of the applications involved.
  • An assessment of the technical knowledge gap in your teams in the area of technical debt.

The legacy renovation project

Then, how to approach a legacy renovation project.

  • Make an inventory of your legacy.
  • With the inventory, for every application make explicit what the business risk is, in the context of the expected application lifecycle and the criticality of the application.
  • Clean up everything that is not used.
  • Migrate strategic applications.

The inventory

Make an inventory of the artifacts in your application portfolio:

  • Source code: what old-technology source programs do you have in your source code management tools?
  • Load modules: what load modules do you have in your runtime environment, and in which libraries do they reside?
  • Runtime usage: which load modules are used, and by which batch jobs or application servers?
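As an illustration, a minimal sketch of what a source code inventory script might look like, assuming simple file-extension conventions (the extensions, mappings, and path are hypothetical):

```python
import os
from collections import Counter

# Hypothetical conventions: map file extensions to legacy technologies.
LEGACY_EXTENSIONS = {".cbl": "COBOL", ".asm": "Assembler", ".pli": "PL/I"}

def inventory(root):
    """Walk a source tree and count source programs per legacy technology."""
    counts = Counter()
    for _dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            tech = LEGACY_EXTENSIONS.get(os.path.splitext(name)[1].lower())
            if tech:
                counts[tech] += 1
    return counts

for tech, count in inventory("./src").items():
    print(f"{tech}: {count} source programs")
```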

Assess the business risk

Consult the business owners of the applications. You may find they do not even realize that they own the application, or that it carries such a risk. The application owner must then decide whether to invest in updating the application, expedite its retirement, or accept the risk. In highly regulated environments, and for business-critical applications in general, the risks described above are seldom acceptable.

Clean up

Next, unclutter your application portfolio. Artifacts that are no longer used must be removed from the operational tools, throughout the entire CI/CD pipeline. It is OK to move things to some archive, but they must be physically removed from your source code management tools, your runtime libraries, your asset management tools, and any other supporting tool you may have.

Migrate

Then, do the technical migration for the remaining applications. If the number of applications that must be updated is high, you often see organizations set up a “migration factory”: a team combining business and technical expertise that develops tools and methodologies for the required technology migrations. Experience shows that more than 50% of the effort of such migrations goes into testing – maybe more if test environments and test automation for the applications do not exist.

*Note:

Most compilers introduced in the 1990s required modifications to old source programs before they would compile. The runtime modules produced by the old compilers, however, kept functioning. Many sites chose not to invest in the recompilation and testing effort.

Nowadays we accept that we have to modify our code when a new version of our compiler or runtime becomes available. For Java, for example, this has always been a pain in the neck, but one that is accepted.

For the mainframe, backward compatibility has always been a strong principle. That has its advantages, but certainly also its disadvantages. The disadvantage of being an obstacle to technological progress – in other words, of building up technical debt – is often severely underestimated.

In or Out: let’s get the mainframe legacy over with now

In many mainframe shops – organizations using applications that run on the mainframe – senior management struggles with an isolated, expensive, complicated mainframe environment.

  • The mainframe investment is a significant part of your IT budget, requiring board-level decision making.
  • It is unclear whether the value your mainframe delivers is in line with the investment.
  • Mainframe applications are often core applications, deeply rooted in organizational processes.
  • Business innovation capacity is limited by legacy applications and technology.
  • Too much is spent on maintenance and continuity, too little on innovation.

At the same time, the misalignment increases.

The organization moves to a cloud model for its IT – what is the mainframe’s position in that respect?

The mismatch between Enterprise Architecture and mainframe landscape is increasing.

Organizational and technical debt is building up in the mainframe environment, maintenance and modernization are postponed, and staff is aging.

A decision must be made whether to throw out the mainframe legacy or to revitalize the environment, but decision makers have far from complete information to make a good judgment.

Divestment options

Let’s look at the divestment options you have when you are stuck in this situation.

Rehost the platform, meaning, move the infrastructure to another platform or to a vendor. 

This solves a small part of the problem, namely the infrastructure management part. Everything else remains the same. 

Retire all applications on the mainframe platform. Probably the cleanest solution in the divestment strategy. However, this option is only viable if replacement applications are available and a speedy migration to them is possible.

Replace through repurchase, meaning replace your custom solution with an off-the-shelf solution, whether on-premises or in a SaaS model. This is only an option if, from a business perspective, you can live with the standard functionality of the package solution.

Replace through refactoring is an option for applications whose special functionality supports distinguishing business features that cannot be covered by off-the-shelf applications. Significant development and migration efforts may be needed for this approach.

A total solution will likely be a combination of these options, depending on application characteristics and business needs.

The investment option

The investment option is a stepwise improvement process in multiple areas, depending on the state of the mainframe applications and platform. Areas include application portfolio readjustments, architecture alignment, application and infrastructure technology updates, and modernization of processes and tools.

Depending on the state of the environment, investments may be significant. Some organizations have neglected their mainframe environment for a decade or longer and have a massive backlog to address. In some cases the backlog is so big that divestment is the only realistic option. (As an example, one organization needed to support multiple languages, including Chinese and Russian, in its business applications. After 10 years of maintenance neglect of the middleware, the only option it had was to abandon its strategic application platform. This brings excessive costs, but more importantly for the organization’s competitiveness, technical debt hits at the most inconvenient moments.)

To find the best option for your organisation you should consider at least the following aspects:

  • Cost versus value.
  • Alignment of business goals, enterprise architecture, and mainframe landscape.
  • Position of the mainframe landscape in your application portfolio.
  • Mainframe application portfolio lifecycle status and functional and strategic fit.
  • Technical vitality of your mainframe environment.
  • The operational effectiveness of DevOps and infra teams.
  • Cloud strategy and mainframe alignment.

A thorough analysis of these aspects funnels into a comprehensive improvement plan for business alignment, architectural adjustments, and operational fit. Execution of this plan must be not just agreed upon, but actively controlled by senior business and IT management. A steering body is needed to address challenges quickly. Senior business and IT management, as well as the controlling business and enterprise architects, should be represented in the steering body to make sure the agreed goals remain on target.

Thus, you regain control over your mainframe.

IT Architecture: a mini business case

Organizations heavily depend on software. Millions, billions of lines of code are produced every year.

This will only accelerate in the future. (It may even be a self-accelerating process. I tried to find an estimate of the number of lines of code produced every year, but could not find firm figures. Nevertheless, who would doubt that the number of software-based solutions is growing?)

Architecture provides a sense of how all the software pieces fit together.

For organisations this means that (enterprise, business, IT) architecture defines and assures the organization’s ability to serve customer needs.

Architecture is the organizing principle. 

You can survive for some time without an explicit architecture, but at some point you will hit a wall. Inefficiencies in business processes and the supporting software systems will bring an organization to a halt. Problems with functional alignment, business process and application integration, application support, scalability, changeability, and what have you (yes, I am being lazy now) will need to be addressed to get back on track.

It is better to get things organized.

(Image: architecture or mess)

On efficiency (what’s a nanosecond, what’s a microsecond)

I heard Marc Andreessen predict that programmers will need to get very efficient at programming again (at about 17:30 in this very interesting interview with Kevin Kelly).

https://a16z.com/2019/12/12/why-we-should-be-optimistic-about-the-future/

If we do not get more efficient in programming, things might get stuck.

Another interesting perspective was provided long ago by Grace Hopper, the lady who invented COBOL, amongst other things.

See this video: How long is a nanosecond 

This all reminds me of a small test we did recently to check the resource consumption of programming languages, by writing just a very small Hello World program: one in COBOL, one in Java, and one in Groovy.

The following summarizes how many CPU seconds these programs needed to run:

  • COBOL: 0.01 msec (basically it was unmeasurable).
  • Java: 1 second.
  • Groovy: 3 seconds.
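For what it is worth, a rough sketch of how such a comparison could be scripted on a Unix-like system (the commands are placeholders for your own compiled Hello World programs):

```python
import resource
import subprocess

# Placeholder commands; substitute your own Hello World programs.
# Note: the resource module is Unix-only.
commands = ["./hello-cobol", "java HelloWorld", "groovy hello.groovy"]

for cmd in commands:
    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    subprocess.run(cmd, shell=True, check=True)
    after = resource.getrusage(resource.RUSAGE_CHILDREN)
    # CPU time consumed by the child process, user plus system.
    cpu = (after.ru_utime - before.ru_utime) + (after.ru_stime - before.ru_stime)
    print(f"{cmd}: {cpu:.3f} CPU seconds")
```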

And that is only looking at the choice of (rather inefficient) programming languages. Much more could be gained by looking at application architectures. Microservices architectures, especially when applied radically, are incredibly inefficient compared to traditional, tightly coupled applications in C, COBOL, or even Java.

Of course I do not want to advertise stovepipe applications – history has proven their maintenance issues to be prohibitive – but a more balanced architecture with more attention to efficiency seems inevitable.

Eggheads not coaches for winning soccer

Egghead models for soccer.

In the Dutch publication De Correspondent I found an interesting article from 2019 about data and soccer. It seems that a scientific approach to managing a soccer team is more promising than hiring a charismatic coach.

https://decorrespondent.nl/10683/wat-krijg-je-als-je-hardcore-betas-hun-gang-laat-gaan-het-interessantste-voetbalexperiment-ter-wereld/2507919553755-aa40ad9f?mc_cid=dcff0ee106&mc_eid=ba609013f0

(Image: the scientist Sumpter at work)

AWS IaaS first glance: my goodness how cumbersome

I am currently getting myself up to speed with AWS through an AWS Certified Solutions Architect – Associate training.

As an architect with quite a bit of background in infrastructure, I was surprised by the level of minute infrastructure detail that is still needed to design a simple virtualized environment, and even more so by the amount of subsequent manual configuration needed to set it up. The level of detail is no less than what is needed for a physical environment. My ignorance, for sure, but I had expected the IaaS provider to hide much more detail from the user. What you see, however, is that server and network details, down to things like CIDR blocks, are still needed.
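To give an idea of that level of detail, here is a sketch using the boto3 library (region, CIDR ranges, and availability zone are arbitrary example values):

```python
import boto3

# Arbitrary example values; the point is that a real design still has
# to choose and specify every one of these low-level details.
ec2 = boto3.client("ec2", region_name="eu-west-1")

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Every subnet needs its own CIDR block and availability zone.
ec2.create_subnet(
    VpcId=vpc_id,
    CidrBlock="10.0.1.0/24",
    AvailabilityZone="eu-west-1a",
)

# Internet access is yet another set of explicit resources and steps.
igw = ec2.create_internet_gateway()
ec2.attach_internet_gateway(
    InternetGatewayId=igw["InternetGateway"]["InternetGatewayId"],
    VpcId=vpc_id,
)
```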

I guess there is an opportunity to create more high-level infrastructure boilerplates. Around 2000 I was involved in many projects doing similarly detailed designs for the e-business infrastructures clients needed at that time. After having done that a few times, we found that these infrastructures were in fact very similar. For a service like AWS I can similarly envision a boilerplate that needs only a limited number of functional and non-functional requirements to be entered for a setup like a web application infrastructure with a user registry, a scalable set of application servers, and a highly available relational database management system.

(I am assuming IBM, Google, Microsoft, and others are no different in this respect, but admittedly I have not checked all of them.)

To be fair, AWS does include quite an extensive documentation system with reference architectures, technical guides, and whitepapers. Here you can find blueprints for solutions, and guidance on how to set up such solutions in AWS. Adding assets, tools, and automations to support users in setting up such best-practice configurations would probably be a great addition to AWS’ Architecture Center and/or Marketplace.

What’s your experience?

25 wishes for the mainframe anno 2023

I listed my wishes for the mainframe. Can we have this, please?

For our software and hardware suppliers:

  1. Continuous testing, at reasonable cost.

Continuous testing is no longer optional. For the speed that modern agile software factories need, continuous functional, regression, and performance testing are mandatory. But with mainframes, the cost of continuous testing quickly becomes prohibitive. Currently all MSUs are the same, and the new testing MSUs drive the hardware and, more importantly, the MLC software costs through the roof.

And please, not through yet another impractical license model (see later).

  2. Real Z hardware for flexible Dev & Test.

For reliable and manageable regression and performance testing, multiple configurations of test environments are needed, on real Z hardware. The emulation software zPDT or zD&T is not fit-for-purpose and not a manageable and maintainable solution for these needs.

  3. Same problem, same solution for our software suppliers.

Customers do not want to notice that the development teams of IBM / CA/Broadcom / Compuware / BMC / HCL / Rocket / … are geographically dispersed (pun). Please let all the software developers in your company work the same way, on shared problems.

  4. Sysplex is not an option but a given.

Everything sysplex-enabled by default please. Meaning, ready for data sharing, queue sharing, file sharing, VIPA, etcetera.

  5. Cloud is not an option but a given.

I do not mean SaaS. I mean everything is scripted and ready to be automated. Everything can be engineered and parameterised once and rerun many times.

  6. Open source z/OS sandbox for everyone (community managed – do we want to help with this?).

Want to boost innovation on the mainframe? Let’s have a publicly accessible mainframe for individual practitioners. And I mean for z/OS!   

  7. Open source code (parts) for extensions (radicalize Zowe- and Zorow-like initiatives).

Give us more open source for z/OS. And the opportunity for the broad public to contribute. We need a community mainframe for z/OS open source initiatives.

  8. Open APIs on everything.

Extend what z/OSMF has given us: APIs on everything you create. Automatically. 
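As an illustration of what that enables, a minimal sketch that lists jobs through the z/OSMF jobs REST interface (host and credentials are made up):

```python
import requests

# Hypothetical host and credentials.
ZOSMF = "https://zosmf.example.com"

resp = requests.get(
    f"{ZOSMF}/zosmf/restjobs/jobs",
    params={"owner": "IBMUSER", "prefix": "*"},
    auth=("IBMUSER", "secret"),
    headers={"X-CSRF-ZOSMF-HEADER": ""},  # required by z/OSMF REST services
)
resp.raise_for_status()
for job in resp.json():
    print(job["jobname"], job["jobid"], job["status"])
```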

  9. Everything Unicode.

Yes, there are more characters than those in code page 037, and they are really used on mainframes outside the US.

  10. Automate everything.

(everything!)

  11. Fast and easy 5-minute OS+MW installation (push button), like Linux and Windows.

OK, make it half an hour. Still too challenging? OK, half a day. (Hide SMP/E from customers?)

  12. Clean up old stuff.

There are a lot of things that are not useful anymore. ISPF, for example, is full of them: options 1, 4, 5, 7 of the primary ISPF screen ISR@PRIM can go, and so can many other things (print and punch utilities, really).

  13. Standardized z/OS images.

Remove customization options. Work on a standardized z/OS image. We do not need the option to rename SYS1.LINKLIB to OURCO.CUSTOM.LINKLIB. Define a standard. If customers want to deviate, it is their problem, not everyone’s.

  14. Standardized software distribution.

My goodness, everyone has invented their own way to get code from installation onto their LPARs, because there is nothing there. Develop or define a standard. (Oh, and we do not need SMP/E on production systems.)

  15. Radically remove innovation hurdles.

For example, stop the (near-)eternal backward compatibility (announce that everything must be 64-bit from 2025 onward). Abandon assembler. Force customers to clean up their stuff too.

  16. Radical new pricing.

OK if it applies to innovative/renovated applications. (But keep pricing SIMPLE, please. No 25 new license models, just one.)

  17. Quality first, speed next.

Slower is not always worse, even in today’s agile world… Fast is good enough.

  18. Support a real sharing community.

We need the next-gen CBT Tape.

  19. A radical innovation mindset.

Versus “this is how we have done things the past 30 years so it must be good”. Yawn.

  20. Everything radically dynamic by design.

Remove the need for (unnecessary) IPLs, rebinds, and restarts (unless in rippling clusters)… Kill all exits (see later).

  21. Delete the assembler.

Remove assembler interfaces (anything is better than assembler). Replace with open APIs. Remove all (assembler) exits (see later).

  22. Dynamic SQL.

By default.

  23. More memory.

As much memory as we can get (at a reasonable price). (Getting there – kudos.)

  24. Cheap zIIPs.

Smaller z/OS sites crave zIIPs to run innovative tools like z/OSMF, Zowe, Ansible, Python, et cetera seamlessly.

  25. Remove exits.

Give us parameters, ini files, YAML, JSON, properties – anything other than exits. Better yet: give us no options for customization at all. Standardize.
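To illustrate the direction, a hypothetical sketch (all names and settings invented) of exit-style policy expressed as declarative configuration instead of site-written code:

```python
import json

# Policy that a site once hid in an assembler exit, now plain data.
CONFIG = json.loads("""
{
  "allowed_job_classes": ["A", "B"],
  "reject_jobs_without_accounting": true
}
""")

def admit_job(job_class, has_accounting):
    """Enforce the same policy an exit would have enforced, from config."""
    if CONFIG["reject_jobs_without_accounting"] and not has_accounting:
        return False
    return job_class in CONFIG["allowed_job_classes"]

print(admit_job("A", has_accounting=True))   # True
print(admit_job("C", has_accounting=False))  # False
```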

For our mainframe users:

Kill old technology: assembler, custom-built ISPF interfaces, CLISTs, unsupported compilers, applications using VSAM or, worse, BDAM, …

Modernise your DevOps pipeline. Use modern tools with modern interfaces and integrations. They are available.

Re-architect applications. Break silos into (micro)services while rebuilding applications. (Use microservices where useful, not as the standard architecture.)

Retire old applications. Either revamp or retire. If you have not touched your application for more than 2 years, it has become an operational risk. 

Hire young people, teach them new tools, forbid them to use old tools. Yes, I know we need 3270, but not to edit code.

Task your young people with technology modernization.

What am I missing?