The myth of zero data loss ransomware recovery

My proverbial neighbor asked me some time ago whether he could have a zero data loss ransomware recovery solution for his IT shop. He is not a very technical guy, yet he is responsible for the IT in his department, and he is wise enough to seek advice on such matters. My man next door could very well be your boss, prompted by a salesperson from your software vendor.

What is a zero data loss ransomware recovery solution?

A ransomware recovery solution is a tool that gives you the ability to recover your IT systems from an incident in which a ransomware criminal has encrypted a crucial part of them. A zero data loss solution promises to provide such a recovery without the loss of any data. That promise must be approached with the necessary skepticism. A zero data loss solution requires you to be able to decrypt the data that the ransomware criminal has encrypted, using the keys that he offers to give you for a nice sum of money. To get these keys you have two options:

  1. Pay the criminal and hope he will send you the keys.
  2. Create the keys yourself. This would require some highly advanced algorithm, possibly using a tool based on quantum computing technology. This is a fantasy, of course. The first person to know about the practical application of such technology would be your ransomware criminal himself, and he would have applied it in his encryption tooling.

So getting the keys is not an option, unless you are in a position to save up a lot of money, or find an insurer that will carry your ransomware risk, although I expect that will come at an excruciating premium.

The next best option is to recover your data from a point in time just before the ransomware attack. This requires a significant investment in advanced backup technology and complex recovery procedures, while giving you little guarantee as to the state in which your systems can be recovered. And, to set expectations, it will come with the loss of all data that your ransomware criminal managed to encrypt. We cannot make it prettier than it is.
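To make the trade-off concrete, here is a minimal sketch of what selecting a recovery point looks like, assuming you keep immutable backup snapshots and have an estimate of when the encryption started. The timestamps, file names and the very idea of a clean snapshot catalog are illustrative assumptions, not a reference to any particular backup product.

```python
from datetime import datetime

# Hypothetical catalog of immutable snapshot timestamps (from your backup tooling).
snapshots = [
    datetime(2023, 5, 1, 2, 0),
    datetime(2023, 5, 2, 2, 0),
    datetime(2023, 5, 3, 2, 0),
]

# Estimated moment the ransomware started encrypting (from monitoring/forensics).
attack_start = datetime(2023, 5, 3, 14, 30)

# The best you can do: the most recent snapshot taken before the attack.
clean = [s for s in snapshots if s < attack_start]
if not clean:
    raise RuntimeError("No clean snapshot available; recovery from backups is not possible")

restore_point = max(clean)
print(f"Restore from {restore_point}; everything written after that point is lost "
      f"({attack_start - restore_point} of data loss).")
```

Whatever the tooling, the data written between the last clean snapshot and the attack is gone; that window is exactly the data loss the "zero data loss" promise glosses over.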

Programming languages and what’s next

My review of the programming languages I learned during my years in IT.

BASIC

On the Texas Instruments TI-99/4A.

Could do everything with it. Especially in combination with PEEK and POKE. Nice for building small games.

Impossible to maintain.

GOTO is unavoidable.

Assembler

In various variants.

Z80, 6802, PDP-11, System/390.

Fast, furious, unreadable, unmaintainable.

Algol 68

Loved this language. REF!

I have only seen it run on the DEC-10. Mainly used in academic environments (in the Netherlands?).

Pascal

Well. Structured. Pretty popular in the early 90s. 

Then again, was it ever widely adopted?

COBOL

Old. Never programmed extensively in it – just for year 2000.

Totally readable.

Funny (ridiculous) numbering scheme.

Seems to require GOTO in some cases, which I do not believe.

Smalltalk

Beautiful language.

Should have become the de facto OO programming language but failed for unclear reasons.

Probably because it was way ahead of its time with its OO base.

Java

Totally nitty gritty programming language.

Productivity is based on frameworks, and no one knows which ones to use.

Never understood why this language was so widely adopted, besides its openness and platform independence.

Should never have become the de facto OO programming language but did so because Sun made it open (good move).

Far too many frameworks needed. J(2)EE adds more complexity than it resolves.

Always upgrade issues. (Proud programmer: We run the application in Java! Fed up IT manager: Which Java?)

Rexx

Can do everything quickly.

But nothing in a structured way.

Ugly code. Readable but ugly.

Some very very strong concepts.

PHP

Hodgepodge language of programming concepts and HTML.

Likely high programmer productivity if you maintain strict discipline around programming standards. Stark danger of creating an unmaintainable crap mix of HTML and PHP code.

Python

Nice structured language.

Difficult to set up and reuse.

Can be productive if nitty gritty setup issues can be overcome.

Ruby (on Rails or off-track)

Nice, probably the most elegant OO language. Still too nitty gritty for my taste. Like it, though.

I would start with this language if I had to start today.

What is next

Visual programming? Clicking building blocks together?

In programming we should maybe separate the construction of applications from the coding of functions (or objects, or whatever you call the lower-level blocks of code).

Programming complex algorithms (efficiently) will probably always remain a craft for specialists.

Constructing applications from the pieces should be brought to a higher level.

The industry (well, the software selling industry) is looking at microservices, but that brings operational issues and becomes too distributed.

We need a way to build a house from software bricks and doors and windows and roof elements.

Probably we need more standards for that. 
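As a toy sketch of what "constructing from blocks" could mean in code: the application below is a declarative wiring of prebuilt blocks, while the blocks themselves are coded (once) by specialists. All block names, the registry, and the data file are hypothetical, purely to illustrate the separation.

```python
# Hypothetical catalog of prebuilt, reusable blocks (written once by specialists).
BLOCKS = {
    "read_lines": lambda path: open(path).read().splitlines(),
    "keep":       lambda rows, word: [r for r in rows if word in r],
    "count":      lambda rows: len(rows),
}

# "Constructing" the application: only wiring, no low-level code.
pipeline = [
    ("read_lines", {"path": "orders.txt"}),
    ("keep",       {"word": "NL"}),
    ("count",      {}),
]

def run(pipeline):
    result = None
    for name, args in pipeline:
        block = BLOCKS[name]
        result = block(**args) if result is None else block(result, **args)
    return result

# run(pipeline) would return the number of Dutch orders in the (hypothetical) file.
```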

Another bold statement.

AI systems “programming” themselves is nonsense (I have not seen a shred of evidence). 

AI systems are stochastic systems.

Programming is imperative.

In summary, up to today you cannot build software without getting into the nitty gritty very quickly.

It’s like building a house, but having to find your own trees and rocks first to cut wood and bricks from.

And then construct nails and screws.

A better approach to that would help.

What do you think is the programming language of the future? What need should it address?

The Internet of Everything – from toilet seats to human bodies


I walked into the restroom. A mechanic stood at the sink fixing something. I saw him holding a toilet seat. He was fooling around with the wiring of the apparatus. Then he replaced some electronic components and rewired the seat.

Toilet sensors

It never occurred to me that even toilets could be usefully equipped with electronic features, so I asked the mechanic about it. He explained that the toilets in the building are all connected to the Internet. If there is something wrong with the antiseptic fluid produced by the toilet, it starts calling out for help. He told me that the towel dispenser was also connected to the Internet, so that when it runs out, a maintenance operator is called in. Makes sense.

Never has technology helped so much to improve the loo.

To cell sensors

So all things will be supplied with sensors. And it looks like these sensorized things are getting smaller and smaller, reaching the nano scale.

Sensors are getting so small that they can flow through our blood and mend our bodies. And maybe fix cancer cells in the future. Or detect issues with blood vessels. Or measure the chemistry in our bodies. They can be injected into plants to protect them from diseases. Or be used in constructions to measure stability at smaller scales than we had ever assumed possible. Possibilities beyond imagination.

Neb sensors surveilling the body 

Imagine what it would mean if we could instrument every cell we like. I would like a surveillance team of bots swimming through my body, like the Nebuchadnezzar in The Matrix flowing through the sewers and tunnels of the abandoned cities.

To signal when my internals run out of supplies.

The Lindy effect and technology

Stuff that has been around for x years can be expected to be around for another x years.

That is what the Lindy effect tells us.
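Written as a rough formula (no claim of statistical precision), with t the current age of the technology:

$$\mathbb{E}[\,\text{remaining lifetime} \mid \text{age} = t\,] \approx t$$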

Read about Lindy in Nassim Taleb’s Skin in the Game.

This informs how we should approach legacy technology: 

  • Maintain it motherly, or
  • Decommission it aggressively

The Lindy effect also informs us how to approach the adoption of new technology: with care.

Simple, complex, quality


How to set an incentive to create/buy simple solutions.

The problem is that complex solutions are perceived as better than simple solutions.

“It can’t be that simple”.

And complex solutions have more features. 

And new technologies make complex solutions even more attractive (reverse grandmother and Lindy effect), and intellectually more interesting. 

We can wrap a complex solution and new technology in Newspeak.

A solution based on existing technology can’t beat that.

But simpler solutions can beat on quality: fit-for-purpose. Simpler means cheaper, easier to design and develop, and easier to use and maintain.

Managing the open source software complexity with platforms?

The last couple of days I was working on a new setup for software development. I was surprised (actually somewhat irritated) by the efforts needed to get things working.

The components I needed did not seem to work together: Eclipse, the PHP plugin, the Git plugin, the HTML editor.

The same happened earlier when setting up for a Python project and some APIs (one based on Python 2, the other on Python 3).

I am still trying to think through what the core problem is. So far I can see that the components and platform are designed to integrate, but the tools all depend on small open-source components under the hood, and those turn out to be incompatible with one another.

Maybe there should be a less granular approach to these things, and we should move to (application) platforms. Instead of picking components from GitHub while building our software, we would get an assembled platform of components. Somebody, or rather, some body (an organization or community), would assemble and publish the open-source platforms periodically, say every six months.
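As a small illustration of what such an assembled platform could look like at project level, here is a hypothetical pinned manifest: one published file that fixes a set of components tested together, refreshed on a cadence instead of assembled ad hoc per project. All package names and versions are made up for the example.

```text
# platform-2024-04.txt - hypothetical curated platform release, published every 6 months.
# Assembled and tested as a whole against Python 3.11; names/versions are illustrative.
requests==2.31.0
sqlalchemy==2.0.29
jinja2==3.1.3
pytest==8.1.1
```

A project would then install the platform as a whole (for example with `pip install -r platform-2024-04.txt`) and only add project-specific components on top.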

Status quo discomfort


A thought:

The status quo should feel more uncomfortable than the uncertainty of the future.

Best practices, theories, grandmother


Best practices stem from the practical, not from the theoretical. 

A theory explains reality. The current theory explains reality best. A theory is valid as long as there is no theory explaining reality better.

Best practices are ways of doing things. The practice is based on years of experience in the real world. Grandmother told us how she did it. It is not theory. It is not proven formally, by mathematics. It is proven by action and results.

Best practices are perennial. They change very infrequently. Theories change frequently.

In IT best practices are independent of technologies. Examples are: separation of concerns, layering, encapsulation, decoupling. 
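To make one of these concrete, here is a minimal sketch of decoupling and encapsulation: the application depends on a small interface, and the technology behind it is hidden and replaceable. The class and function names are invented for the illustration.

```python
from abc import ABC, abstractmethod

class MessageStore(ABC):
    """The interface the application depends on, not a specific technology."""
    @abstractmethod
    def save(self, message: str) -> None: ...

class FileMessageStore(MessageStore):
    """One encapsulated implementation; could later be swapped for a database."""
    def __init__(self, path: str):
        self._path = path  # internal detail, hidden from callers

    def save(self, message: str) -> None:
        with open(self._path, "a") as f:
            f.write(message + "\n")

def register_order(store: MessageStore, order_id: str) -> None:
    # Application logic is decoupled from how and where messages are stored.
    store.save(f"order received: {order_id}")

register_order(FileMessageStore("orders.log"), "A-42")
```

The point is not the code itself but the shape: grandmother's separation of concerns survives every technology swap behind the interface.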

Best practices exist for a reason: they work.

A theory may explain why they work. But it is not necessary.

Best practices have been around for years. They were not invented half a year ago. Things invented half a year ago may be theories, more often than not theories about the applicability of technologies.

I think we need to question “new best practices”.

Instead we must rely on grandmother’s wisdom. 

*All of this very likely inspired by (or rather, stolen from) Nassim Taleb’s Antifragile writings and the Lindy effect.

An approach to settling technical debt in your application portfolio

A small summary of some key aspects of the approach to fixing the technical debt in your legacy application portfolio.

Risks of old technology in your software portfolio typically are:

  • The development and operations teams have little or no knowledge of the old technologies and/or programming languages.
  • Program sources have not been compiled for decades; modern compilers can not handle the old program sources without (significant) updates*.
  • The source code for runtime programs is missing, or the version of the source code is not in line with the version of the runtime. The old technique of static calls (statically including every called program in the runtime module) makes things even more unreliable.
  • Programs use deprecated or undocumented low-level interfaces, making every technology upgrade a risky operation for breaking these interfaces.

A business case for a project to update your legacy applications can then be based on the risk identified in an assessment of your portfolio:

  • An assessment of the technical debt in your application portfolio, in technical terms (what technologies), and volume (how many programs).
  • An assessment of the technical debt against the business criticality and application lifecycle of the applications involved.
  • An assessment of the technical knowledge gap in your teams in the area of technical debt.

The legacy renovation project

Then, how to approach a legacy renovation project.

  • Make an inventory of your legacy.
  • With the inventory, for every application make explicit what the business risk is, in the context of the expected application lifecycle and the criticality of the application.
  • Clean up everything that is not used.
  • Migrate strategic applications.

The inventory

Make an inventory of the artifacts in your application portfolio:

  • Source code: what old-technology source programs do you have in your source code management tools?
  • Load modules: what load modules do you have in your runtime environment, and in which libraries do they reside?
  • Runtime usage: which load modules are used, and by which batch jobs or application servers? (A cross-check sketch follows this list.)
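A minimal sketch of what cross-checking such an inventory could look like, assuming you have exported three plain-text lists (source members, load modules, used load modules) from your tooling; the file names and formats are hypothetical.

```python
# Hypothetical exports from SCM, runtime libraries, and usage monitoring: one name per line.
def read_names(path):
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

sources      = read_names("source_members.txt")   # from source code management
load_modules = read_names("load_modules.txt")     # from runtime libraries
used_modules = read_names("used_modules.txt")     # from job / usage monitoring

print("Load modules without matching source:", sorted(load_modules - sources))
print("Source members without a load module:", sorted(sources - load_modules))
print("Load modules that are never used:",     sorted(load_modules - used_modules))
```

The lists of missing sources and unused modules feed directly into the risk assessment and the clean-up steps below.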

Assess the business risk

Consult the business owners of the applications. You may find they do not even realize that they own the application, or that there is such a risk in their application. The application owner then must decide to invest in updating the application, expedite the retirement of the application, or accept the risk in the application. In highly regulated environments, and for business-critical applications in general, the risks described above are seldom acceptable.

Clean up

Next, unclutter your application portfolio. Artifacts that are not used anymore must be removed from the operational tools, throughout the entire CI/CD pipeline. It is ok to move things to some archive, but they must be physically removed from your source code management tools, your runtime libraries, your asset management tools, and any other supporting tool you may have. 

Migrate

Then, do the technical migration for the remaining applications. If the number of applications that must be updated is high, you often see organizations set up a “migration factory”. This team combines business and technical expertise, and develops tools and methodologies for the required technology migrations. Note that experience shows that more than 50% of the effort of such migrations goes into testing, and maybe more if test environments and test automation for the applications do not exist.

*Note:

Most compilers in the 1990s required modifications to the source programs before they would compile. The runtime modules of the old compiler, however, kept functioning. Many sites chose not to invest in the recompilation and testing effort.

Nowadays we accept that we have to modify our code when a new version of our compiler or runtime becomes available. For Java, for example, this has always been a pain in the neck, which is accepted.

For the mainframe, backward compatibility has always been a strong principle. Which has its advantages, but certainly also its disadvantages. The disadvantage of being an obstacle to technological progress, or in other words, the building up of technical debt, is often severely underestimated.

In or Out: let’s get the mainframe legacy over with now

In many mainframe shops – organizations using applications that run on the mainframe – senior management struggle with their isolated, expensive, complicated mainframe environment. 

  • The mainframe investment is a significant part of your IT budget, requiring board-level decision making.
  • It is unclear whether the cost of your mainframe investment is in line with its value.
  • Mainframe applications are often core applications, deeply rooted in organizational processes.
  • Business innovation capacity is limited by legacy applications and technology.
  • Too much is spent on maintenance and continuity, too little on innovation.

At the same time, the misalignment increases.

The organization moves to a cloud model for their IT – what is the mainframe position in that respect?

The mismatch between Enterprise Architecture and mainframe landscape is increasing.

Organizational and technical debt is building up in the mainframe environment, maintenance and modernization are postponed and staff is aging.

Senior management must decide whether to throw out the mainframe legacy or revitalize the environment, but they have far from complete information to make a good judgment.

Divestment options

Let’s look at the divestment options you have when you are stuck in this situation.

Rehost the platform, meaning, move the infrastructure to another platform or to a vendor. 

This solves a small part of the problem, namely the infrastructure management part. Everything else remains the same. 

Retire all applications on the mainframe platform. Probably the cleanest solution in the divestment strategy. However, this option is only viable if replacement applications are available and a speedy migration to these applications is possible.

Replace through repurchase, meaning replace your custom solution with another off-the-shelf solution, whether on-premise or in a SaaS model. This is only an option if, from a business perspective, you can live with the standard functionality of the package solution. 

Replace through refactoring is an option for applications whose special functionality supports distinguishing business features that cannot be provided by off-the-shelf applications. Significant development and migration efforts may be needed for this approach.

A total solution will likely be a combination of these options, depending on application characteristics and business needs.

The investment option

The Investment option is a stepwise improvement process, in multiple areas, depending on the state of the mainframe applications and platform. Areas include application portfolio readjustments, architecture alignments, application and infrastructure technology updates, and processes and tools modernization.

Depending on the state of the environment, investments may be significant. Some organizations have neglected their mainframe environment for a decade or longer, and have a massive backlog to address. In some cases the backlog is so big that divestment is the only realistic option. (As an example, one organization needed to support multiple languages, including Chinese and Russian, in their business applications. After 10 years of neglected middleware maintenance, the only option they had was to abandon their strategic application platform. This brings excessive costs, but more importantly for the organization’s competitiveness, technical debt hits at the most inconvenient moments.)

To find the best option for your organisation you should consider at least the following aspects:

  • Cost versus value.
  • Alignment of business goals, enterprise architecture, and mainframe landscape.
  • Position of the mainframe landscape in your application portfolio.
  • Mainframe application portfolio lifecycle status and functional and strategic fit.
  • Technical vitality of your mainframe environment.
  • The operational effectiveness of DevOps and infra teams.
  • Cloud strategy and mainframe alignment.

A thorough analysis of these aspects funnels into a comprehensive improvement plan for business alignment, architectural adjustments, and operational fit. Execution of this plan must be not just agreed upon, but actively controlled by senior business and IT management. A steering body is needed to address challenges quickly. Senior business and IT management and the controlling business and enterprise architects should be represented in the steering body to make sure the agreed goals remain on target.

Thus, you seize control over your mainframe again.