I often realize I’m doing repetitive tasks that could easily be automated. Money transfers, reminders, invoices: these are simple, low-effort activities that don’t deserve to consume my time. Every time this happens, I tell myself, “I should automate this to save time and mental effort.” And yet, somehow, I don’t. I tell myself I have no time to automate.
Automation frees up time
In IT, especially when automating mainframe processes, we encounter the same hesitation:
“We don’t need to automate this; once it’s done, we’ll never do it again.”
Which almost never turns out to be true.
Repetitive tasks, whether personal or IT-related, are often simple to automate but remain manual due to perceived time constraints.
In IT, automation is critical. It reduces manual errors, improves consistency, and frees up time for more strategic work.
A Shift in Mindset
Automation requires a different engineering mindset. Instead of the familiar cycle:
Do → Fix → Fix → Fix
We move to:
Engineer process → Run process / Fix process → Run process → Run process
Once engineered, automated processes run with minimal intervention, saving both time and effort.
When to Automate
If you find yourself performing a task more than twice, consider automating it. Whether through shell scripting, JCL, utilities, or tools like Ansible, automation quickly pays off.
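To make this concrete, here is a minimal sketch in shell of what such an automation could look like. The generate_invoice command, the paths, and the mail address are purely illustrative assumptions, not existing tools; substitute whatever your own repetitive task requires.

```sh
#!/bin/sh
# Minimal sketch: wrap a recurring manual task in a script once,
# then schedule it (cron, a job scheduler, or an Ansible playbook).
# generate_invoice, the paths, and the mail address are illustrative assumptions.

set -e                                   # stop at the first error

MONTH=$(date +%Y-%m)                     # e.g. 2025-01
OUTDIR="$HOME/invoices/$MONTH"
mkdir -p "$OUTDIR"

# The repetitive work itself: a hypothetical invoice generator.
generate_invoice --period "$MONTH" --out "$OUTDIR/invoice.pdf"

# Confirm it ran, so it never needs to be remembered again.
echo "Invoice for $MONTH written to $OUTDIR" | mail -s "Invoice $MONTH done" me@example.com

# Scheduled once, for example with crontab -e:
#   0 7 1 * * $HOME/bin/monthly_invoice.sh
```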
Automation is not optional; it is essential for efficient IT operations and professional growth. Start automating today to work smarter, not harder.
Don’t waste time doing things more than twice. If you do something for the third time, automate it; you will likely have to do it a fourth and fifth time as well.
The interviewers, Herbert Blankesteijn and Ben van der Burg, were surprised to learn that COBOL is not a bad language and is in fact very well suited to programming administrative automation processes. Legacy is not the issue; not allowing time for maintenance is a management issue. The Lindy effect was mentioned, which tells us that the life expectancy of old code increases with its age: established code is antifragile.
Anyone in the product chain can pull the Andon Cord to stop production when they notice that the product’s quality is poor.
Stopping a system when a defect is suspected goes back to Toyota. The idea is that by stopping the line, you get an immediate opportunity to find the root cause and improve, instead of letting the defect move further down the line unresolved.
A crucial aspect of Toyota’s “Andon Cord” process was that when the team leader arrived at the workstation, they thanked the team member who pulled the Cord.
The incident would not become a paper report or a drawn-out bureaucratic process. The problem would be addressed immediately, and the team member who pulled the cord would fix it.
For software systems, this practice is beneficial as well. In our drive for quick results, however, we often see the opposite.
We do not stop the process when issues arise. We apply a quick fix, and ‘we will resolve it later’.
The person who notices an issue is regarded as a whistle-blower. In such a culture issues get covered up, leading to even more severe problems later.
When serious issues do occur, we start a bureaucratic process that quickly becomes political, resulting in watered-down solutions that cover up the fundamental problems.
In software systems, backward compatibility is both a blessing and a curse. While backward compatibility relieves users of mandatory software updates, it is also an excuse to ignore maintenance. For software vendors, dropping backward compatibility is a means to get users to buy new stuff: “enjoy our latest innovations!”
1980s software on 64-bit hardware
Backward compatibility
You cannot run Windows 95 software on Windows 11.
You cannot run Mac OS X software built for a 2006 PowerBook G4 on a modern Mac.
You cannot use Java 5 software on a Java 11 runtime.
You can, however, run mainframe software compiled in 1980 for 24-bit addressing on the latest z/OS 64-bit operating system and the latest IBM Z hardware. This compatibility is one of the reasons for the success of the IBM mainframe.
Backward compatibility in software has significant benefits. The most important one is that you do not need to change applications with every technology upgrade. This saves large amounts of effort, and thus money, on changes that bring no business benefit.
The dangers of backward compatibility
Backward compatibility also has very significant drawbacks:
Because you do not need to fix software for technology upgrades, backward compatibility leads to laziness in maintenance. Just because it keeps running, the software drops out of sight entirely. Development teams lose knowledge of its functionality and sometimes even of the business processes it supports. Minor changes are made haphazardly, slowly increasing the complexity of the code. Horrific additions are bolted on with tools like screen scraping, further complicating the IT landscape. Then, significant changes suddenly become necessary, and you are in big trouble.
Backward compatibility hinders innovation. Not only can you not take advantage of modern hardware capabilities, you also get stuck with the programming and interfacing paradigms of the past. You cannot exploit the functionality trapped inside old programs, and it is tough to integrate them through modern technologies like REST APIs.
The problem may be even bigger. Because you never touch your code, other issues creep in.
Over the years, you will switch source code management tools. During these transitions, code can get lost, or the insight into which source versions match the programs in production gets lost.
Compilers are also upgraded all the time, and the specifications of programming languages change. Consequently, the source code belonging to the programs running in your production environment may no longer compile. When a change finally becomes necessary, your code suddenly has to catch up with all of these changes at once, and that makes the change a lot riskier.
How to avoid backward compatibility complacency?
Establish a policy to recompile, test, and deploy programs every two or three years, even if the code needs no functional change. This prevents technical debt from piling up.
Is that a lot of work? It does not need to be. You can automate most, if not all, of the compilation and testing process. Since nothing changes functionally, modern test tools can support this: they run the tests automatically, compare the results with the expected output, and pinpoint issues.
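As an illustration, such a periodic recompile-and-compare run could be sketched in shell roughly as follows. The compile_program and run_regression_test commands, the directory layout, and the .cbl suffix are placeholders for your own compiler invocation (JCL-driven or otherwise) and test tooling.

```sh
#!/bin/sh
# Minimal sketch of a periodic "recompile, test, compare" run.
# compile_program, run_regression_test, the directories, and the .cbl
# suffix are placeholders; substitute your own compiler and test tooling.

SRC_DIR=./src          # program sources under source control
BASELINE=./baseline    # expected outputs captured from the current production modules
WORK=./work            # outputs of the freshly recompiled modules
mkdir -p "$WORK"

for src in "$SRC_DIR"/*.cbl; do
    name=$(basename "$src" .cbl)

    # 1. Recompile with the current compiler release (placeholder command).
    compile_program "$src" -o "$WORK/$name" || { echo "COMPILE FAILED: $name"; continue; }

    # 2. Run the recompiled program against a fixed regression input (placeholder command).
    run_regression_test "$WORK/$name" > "$WORK/$name.out"

    # 3. Nothing changed functionally, so the output must match the baseline exactly.
    if ! diff -q "$BASELINE/$name.out" "$WORK/$name.out" > /dev/null; then
        echo "MISMATCH: $name needs a closer look"
    fi
done
```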
This process also has a benefit: your recompiled code will run faster because it can use the latest hardware features. You can save money if your software bill is based on CPU consumption.
Don’t let backward compatibility make you backward.
My proverbial neighbor asked me some time ago whether he could have a zero data loss ransomware recovery solution for his IT shop. He is not a very technical guy, yet he is responsible for the IT in his department, and he is wise enough to seek advice on such matters. My man next door could very well be your boss, prompted by a salesperson from your software vendor.
What is a zero data loss ransomware recovery solution?
A ransomware recovery solution is a tool that gives you the ability to recover your IT systems from an incident in which a ransomware criminal has encrypted a crucial part of them. A zero data loss solution promises such a recovery without the loss of any data. That promise must be approached with the necessary skepticism: zero data loss requires you to decrypt the data the criminal has encrypted, using the keys he offers to give you for a nice sum of money. To get these keys you have two options:
Pay the criminal and hope he will send you the keys.
Create the keys yourself. This would require some highly advanced algorithm, possibly using a tool based on quantum computing technology. This is a fantasy, of course. The first person to know about the practical application of such technology would be the ransomware criminal himself, and he would already have applied it in his encryption tooling.
So getting the keys is not an option, unless you are in a position to save up a lot of money, or to find an insurer that will carry your ransomware risk, though I expect that will come at an excruciating premium.
The next best option is to recover your data from a point in time just before the ransomware attack. This requires a significant investment in advanced backup technology and complex recovery procedures, while giving you little guarantee as to the state your systems can be recovered to. And, to set expectations: it comes with the loss of all data that the ransomware criminal managed to encrypt. We cannot make it prettier than it is.
Best practices stem from the practical, not from the theoretical.
A theory explains reality. The current theory explains reality best. A theory is valid as long as there is no theory explaining reality better.
Best practices are ways of doing things. The practice is based on years of experience in the real world. Grandmother told us how she did it. It is not a theory. It is not proven formally by mathematics. It is proven by action and results.
Best practices are perennial. They change very infrequently. Theories change frequently.
In IT, best practices are independent of technology. Examples are separation of concerns, layering, encapsulation, and decoupling.
Best practices exist for a reason: they work.
A theory may explain why they work. But it is not necessary.
Best practices have been around for years. They were not invented half a year ago. Things invented half a year ago may be theories, more often than not theories about the applicability of technologies.
I think we need to question “new best practices”.
A short summary of some key aspects of an approach to fixing the technical debt in your legacy application portfolio.
Risks addressed
The risks of old technology in your software portfolio typically include:
The development and operations teams have little or no knowledge of the old technologies and/or programming languages.
Program sources have not been compiled for decades; modern compilers cannot handle the old program sources without (significant) updates.
The source code for runtime programs is missing, or the version of the source code does not match the version of the runtime module. The old technique of static calls (statically linking every called program into the runtime module) makes things even more unreliable.
Programs use deprecated or undocumented low-level interfaces, making every technology upgrade a risky operation because it may break these interfaces.
Business case development
A business case for a project to update your legacy applications can then be based on the risks identified in an assessment of your portfolio:
An assessment of the technical debt in your application portfolio, in technical terms (what technologies), and volume (how many programs).
An assessment of the technical debt against the business criticality and application lifecycle of the applications involved.
An assessment of the technical knowledge gap in your teams in the area of technical debt.
The legacy renovation project
Then, how to approach a legacy renovation project:
Make an inventory of your legacy.
With the inventory in hand, make the business risk explicit for every application, in the context of its expected lifecycle and criticality.
Clean up everything that is not used.
Migrate strategic applications.
The inventory
Make an inventory of the artifacts in your application portfolio; a small sketch for cross-referencing these inventories follows the list:
Source code: which old-technology source programs do you have in your source code management tools?
Load modules: which load modules do you have in your runtime environment, and in which libraries do they reside?
Runtime usage: which load modules are actually used, and by which batch jobs or application servers?
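As a minimal sketch, assuming each inventory can be exported as a plain-text list of program or module names (the file names and the way the lists are produced are assumptions), the three lists can be cross-referenced with standard utilities:

```sh
#!/bin/sh
# Minimal sketch: cross-reference the three inventories.
# Assumed input files, one name per line:
#   sources.txt  - program names present in source code management
#   modules.txt  - load modules found in the runtime libraries
#   usage.txt    - load modules actually invoked (e.g. from SMF or scheduler data)

sort -u sources.txt > s.tmp
sort -u modules.txt > m.tmp
sort -u usage.txt   > u.tmp

echo "Load modules without matching source (missing-source risk):"
comm -13 s.tmp m.tmp

echo
echo "Load modules never used (clean-up candidates):"
comm -23 m.tmp u.tmp

echo
echo "Used load modules (to assess with the business owners):"
comm -12 m.tmp u.tmp

rm -f s.tmp m.tmp u.tmp
```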
Assess the business risk
Consult the business owners of the applications. You may find that they do not even realize they own the application, or that it carries such a risk. The application owner must then decide to invest in updating the application, to expedite its retirement, or to accept the risk. In highly regulated environments, and for business-critical applications in general, the risks described above are seldom acceptable.
Clean up
Next, unclutter your application portfolio. Artifacts that are no longer used must be removed from the operational tools, throughout the entire CI/CD pipeline. It is fine to move things to an archive, but they must be physically removed from your source code management tools, your runtime libraries, your asset management tools, and any other supporting tool you may have.
Migrate
Then, do the technical migration for the remaining applications. If the number of applications that must be updated is high, organizations often set up a “migration factory”: a team combining business and technical expertise that develops tools and methodologies for the required technology migrations. Note that experience shows that more than 50% of the effort of such migrations goes into testing, and perhaps more if test environments and test automation for the applications do not yet exist.
*Note:
Most compilers in the 1990s required modifications to the source programs before they would compile. The runtime modules produced by the old compilers, however, kept functioning. Many sites chose not to invest in the recompilation and testing effort.
Nowadays we accept that we have to modify our code when a new version of our compiler or runtime becomes available. For Java, for example, this has always been a pain in the neck, and it is accepted.
For the mainframe, backward compatibility has always been a strong principle. That has its advantages, but certainly also its disadvantages. The disadvantage of being an obstacle to technological progress, in other words the build-up of technical debt, is often severely underestimated.