The Internet of Everything – from toilet seats to human bodies

  • Post category:General
  • Reading time:3 mins read

I walked into the restroom. A mechanic stood at the sink fixing something. I saw him holding a toilet seat, fiddling with the wiring of the apparatus. Then he replaced some electronic components and rewired the seat.

Toilet sensors

It never occurred to me that even toilets could be usefully equipped with electronic features. I asked the mechanic. He explained that the toilets in the building are all connected to the Internet. If there is something wrong with the antiseptic fluid produced by the toilet, it starts calling out for help. He told me that the towel dispenser was also connected to the Internet, so that when it runs out, a maintenance operator is called in. Makes sense.

Never has technology done so much to improve The Loo.

To cell sensors

So all things will be supplied with sensors. And it looks like these sensorized things are getting smaller and smaller, reaching the nano scale.

Sensors are getting so small that they can flow through our blood and mend our bodies. And maybe fix cancer cells in the future. Or detect issues with blood vessels. Or measure the chemistry in our bodies. They can be injected into plants to protect them from diseases. Or be used in structures to measure stability at smaller scales than we ever thought possible. Possibilities beyond imagination.

Neb sensors surveilling the body 

Imagine what it would mean if we could instrument every cell we like. I would like a surveillance team of bots swimming through my body, like the Nebuchadnezzar in The Matrix flowing through the sewers and tunnels of the abandoned cities.

To signal when my internals run out of supplies.

The Lindy effect and technology

  • Post category:Modernization
  • Reading time:1 mins read

Stuff that has been around for x years can be expected to be around for another x years.

That is what the Lindy effect tells us.
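Written loosely as a formula (my own shorthand, not Taleb’s notation), for non-perishable things like technologies and ideas:

    \mathbb{E}[\,\text{remaining lifetime} \mid \text{current age} = x\,] \approx x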

Read about Lindy in Nassim Taleb’s Skin in the Game.

This informs how we should approach legacy technology: 

  • Maintain it motherly, or
  • Decommission it aggressively

The Lindy effect also informs us how to approach the adoption of new technology: with care.

Simple, complex, quality

  • Post category:General
  • Reading time:1 mins read

How do we set an incentive to create or buy simple solutions?

The problem is that complex solutions are perceived as better than simple solutions.

“It can’t be that simple”.

And complex solutions have more features. 

And new technologies make complex solutions even more attractive (the reverse of the grandmother and Lindy effects), and intellectually more interesting.

We can wrap a complex solution and new technology in Newspeak.

A solution based on existing technology can’t beat that.

But simpler solutions can win on quality: they are fit for purpose. Simpler also means cheaper, easier to design and develop, and easier to use and maintain.

Managing the open source software complexity with platforms?

  • Post category:Uncategorized
  • Reading time:2 mins read

Over the last couple of days I was working on a new setup for software development. I was surprised (actually somewhat irritated) by the effort needed to get things working.

The components I needed did not seem to work together: Eclipse, the PHP plugin, the Git plugin, the HTML editor.

The same happened earlier when setting up for a Python project and some APIs (one based on Python 2, the other on Python 3).

I am still trying to think through what the core problem is. So far I can see that the components and the platform are designed to integrate, but the tools all depend on small open-source components under the hood, and those turn out to be incompatible with one another.
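A trivial illustration of the kind of incompatibility I mean (a made-up snippet, not taken from the actual APIs I was using): the same one-line program is valid Python 2 but a syntax error in Python 3, so libraries written against different Python versions simply cannot share one runtime.

    # Valid in Python 2, a SyntaxError in Python 3:
    print "hello from a Python 2 library"

    # Valid in Python 3 (and also accepted by Python 2.7):
    print("hello from a Python 3 library")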

Maybe there should be a less granular approach to these things, and we should move to (application) platforms. Instead of picking components from GitHub while building our software, get an assembled platform of components. Somebody, or rather some organization, would assemble and publish these open source platforms periodically, say every six months.

Status quo discomfort

  • Post category:General
  • Reading time:1 mins read

A thought:

The status quo should feel more uncomfortable than the uncertainty of the future.

Best practices, theories, grandmother

  • Post category:General
  • Reading time:2 mins read

Best practices stem from the practical, not from the theoretical. 

A theory explains reality. The current theory explains reality best. A theory is valid as long as there is no theory explaining reality better.

Best practices are ways of doing things. The practice is based on years of experience in the real world. Grandmother told us how she did it. It is not theory. It is not proven formally, by mathematics. It is proven by action and results.

Best practices are perennial. They change very infrequently. Theories change frequently.

In IT, best practices are independent of technologies. Examples are separation of concerns, layering, encapsulation, and decoupling.
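As a minimal sketch of what encapsulation and decoupling look like in code (Python here, but the principle is technology-independent; all names are made up): the report function depends on a small abstraction rather than on a concrete data store, so either side can change without touching the other.

    from abc import ABC, abstractmethod

    class CustomerStore(ABC):
        """The abstraction the rest of the code depends on."""
        @abstractmethod
        def count(self) -> int: ...

    class InMemoryCustomerStore(CustomerStore):
        """One concrete implementation; a database-backed one could replace it."""
        def __init__(self, names):
            self._names = list(names)  # encapsulated: callers never touch the list directly
        def count(self) -> int:
            return len(self._names)

    def customer_report(store: CustomerStore) -> str:
        # Decoupled: works with any CustomerStore implementation.
        return "We have " + str(store.count()) + " customers"

    print(customer_report(InMemoryCustomerStore(["Ada", "Grace"])))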

Best practices exist for a reason: they work.

A theory may explain why they work. But it is not necessary.

Best practices have been around for years. They were not invented half a year ago. Practices invented half a year ago may be theories, more often than not theories about the applicability of technologies.

I think we need to question “new best practices”.

Instead we must rely on grandmother’s wisdom. 

*All of this is very likely inspired by (or rather, stolen from) Nassim Taleb’s Antifragile writings and the Lindy effect.

An approach to settling technical debt in your application portfolio

A small summary of some key aspects of the approach to fixing the technical debt in your legacy application portfolio.

Risks of old technology in your software portfolio typically are:

  • The development and operations teams have little or no knowledge of the old technologies and/or programming languages.
  • Program sources have not been compiled for decades; modern compilers can not handle the old program sources without (significant) updates*.
  • The source code for runtime programs is missing, or the version of the source code is not in line with the version of the runtime. The old technique of static calls (statically including every called program in the runtime module) makes things even more unreliable.
  • Programs use deprecated or undocumented low-level interfaces, making every technology upgrade a risky operation for breaking these interfaces.

A business case for a project to update your legacy applications can then be based on the risk identified in an assessment of your portfolio:

  • An assessment of the technical debt in your application portfolio, in technical terms (what technologies), and volume (how many programs).
  • An assessment of the technical debt against the business criticality and application lifecycle of the applications involved.
  • An assessment of the technical knowledge gap in your teams in the area of technical debt.

The legacy renovation project

Then, how to approach a legacy renovation project.

  • Make an inventory of your legacy.
  • With the inventory, for every application make explicit what the business risk is, in the context of the expected application lifecycle and the criticality of the application.
  • Clean up everything that is not used.
  • Migrate strategic applications.

The inventory

Make an inventory of the artifacts in your application portfolio:

  • Source code: what old-technology source programs do you have in your source code management tools?
  • Load modules: what load modules do you have in your runtime environment, and in which libraries do they reside?
  • Runtime usage: what load modules are used, and by which batch jobs or application servers?

Assess the business risk

Consult the business owners of the applications. You may find they do not even realize that they own the application, or that there is such a risk in their application. The application owner then must decide to invest in updating the application, expedite the retirement of the application, or accept the risk in the application. In highly regulated environments, and for business-critical applications in general, the risks described above are seldom acceptable.

Clean up

Next, unclutter your application portfolio. Artifacts that are not used anymore must be removed from the operational tools, throughout the entire CI/CD pipeline. It is ok to move things to some archive, but they must be physically removed from your source code management tools, your runtime libraries, your asset management tools, and any other supporting tool you may have. 
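As a sketch of how the inventory can drive this clean-up, assuming you have exported two plain lists (the file names are made up): all load modules present in the runtime libraries, and the load modules actually referenced by batch jobs and application servers. The difference is your candidate list for archiving and removal.

    # Hypothetical input files, one module name per line:
    #   modules_in_libraries.txt - everything found in the runtime libraries
    #   modules_in_use.txt       - everything referenced by jobs or application servers

    def read_names(path):
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}

    in_libraries = read_names("modules_in_libraries.txt")
    in_use = read_names("modules_in_use.txt")

    # Candidates for archiving and physical removal:
    for name in sorted(in_libraries - in_use):
        print(name)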

Migrate

Then, do the technical migration for the remaining applications. If the number of applications that must be updated is high, you often see that organizations set up a “migration factory”. Such a team combines business and technical expertise and develops tools and methodologies for the required technology migrations. Note that experience shows that more than 50% of the effort of such migrations goes into testing, and maybe more if test environments and test automation for the applications do not exist.

*Note:

Most compilers released in the 1990s required modifications to older source programs before they would compile. The runtime modules produced by the old compilers, however, kept functioning. Many sites therefore chose not to invest in the recompilation and testing effort.

Nowadays we accept that we have to modify our code when a new version of our compiler or runtime becomes available. For Java, for example, this has always been a pain, but an accepted one.

For the mainframe, backward compatibility has always been a strong principle. This has its advantages, but certainly also its disadvantages. The disadvantage of being an obstacle to technological progress, or in other words, of building up technical debt, is often severely underestimated.

In or Out: let’s get the mainframe legacy over with now

In many mainframe shops – organizations using applications that run on the mainframe – senior management struggle with their isolated, expensive, complicated mainframe environment. 

  • The mainframe investment is a significant part of your IT budget, requiring board-level decision making.
  • It is unclear whether the value of your mainframe investment is in line with its cost.
  • Mainframe applications are often core applications, deeply rooted in organizational processes.
  • Business innovation capacity is limited by legacy applications and technology.
  • Too much is spent on maintenance and continuity, too little on innovation.

At the same time, the misalignment increases.

The organization moves to a cloud model for its IT – what is the mainframe’s position in that respect?

The mismatch between Enterprise Architecture and mainframe landscape is increasing.

Organizational and technical debt is building up in the mainframe environment, maintenance and modernization are postponed and staff is aging.

Management must decide whether to throw out the mainframe legacy or revitalize the environment, but they have far from complete information to make a good judgment.

Divestment options

Let’s look at the divestment options you have when you are stuck in this situation.

Rehost the platform, meaning, move the infrastructure to another platform or to a vendor. 

This solves a small part of the problem, namely the infrastructure management part. Everything else remains the same. 

Retire all applications on the mainframe platform. Probably the cleanest solution in the divestment strategy. However, this option is only viable if replacement applications are available and a speedy migration to them is possible.

Replace through repurchase, meaning replace your custom solution with another off-the-shelf solution, whether on-premise or in a SaaS model. This is only an option if, from a business perspective, you can live with the standard functionality of the package solution. 

Replace through refactoring is an option for applications whose special functionality supports distinguishing business features that cannot be covered by off-the-shelf applications. Significant development and migration efforts may be needed for this approach.

A total solution will likely be a combination of these options, depending on application characteristics and business needs.

The investment option

The Investment option is a stepwise improvement process, in multiple areas, depending on the state of the mainframe applications and platform. Areas include application portfolio readjustments, architecture alignments, application and infrastructure technology updates, and processes and tools modernization.

Depending on the state of the environment, investments may be significant. Some organizations have neglected their mainframe environment for a decade or longer and have a massive backlog to address. In some cases the backlog is so big that divestment is the only realistic option. (As an example, one organization needed to support multiple languages, including Chinese and Russian, in their business applications. After 10 years of maintenance neglect of the middleware, the only option they had was to abandon their strategic application platform. This brings excessive costs, but, more importantly for the organization’s competitiveness, technical debt hits at the most inconvenient moments.)

To find the best option for your organisation you should consider at least the following aspects:

  • Cost versus value.
  • Alignment of business goals, enterprise architecture, and mainframe landscape.
  • Position of the mainframe landscape in your application portfolio.
  • Mainframe application portfolio lifecycle status and functional and strategic fit.
  • Technical vitality of your mainframe environment.
  • The operational effectiveness of DevOps and infra teams.
  • Cloud strategy and mainframe alignment.

A thorough analysis of these aspects funnels into a comprehensive improvement plan for business alignment, architectural adjustments, and operational fit. Execution of this plan must not just be agreed upon, but actively controlled by senior business and IT management. A steering body is needed to address challenges quickly. Senior business and IT management, and the controlling business and enterprise architects, should be represented in the steering body to make sure the agreed goals stay on target.

Thus, you seize control over your mainframe again.

IT Architecture: a mini business case

  • Post category:IT architecture
  • Reading time:2 mins read

Organizations heavily depend on software. Millions, billions of lines of code are produced every year.

This will only accelerate in the future. (It may even be a self-accelerating process. I was looking for an estimate of the number of lines of code produced every year, but could not find firm figures. Nevertheless, who would doubt that the number of software-based solutions is growing?)

Architecture provides a sense of how all the software pieces fit together.

For organisations this means that (enterprise, business, IT) architecture defines and assures the organization’s ability to serve customer needs.

Architecture is the organizing principle. 

You can survive for some time without an explicit architecture, but at some point you will hit a wall. Inefficiencies in business processes and the supporting software systems will bring an organization to a halt. Problems with functional alignment, business process and application integration, application support, scalability, changeability and what have you (yes, I am being lazy now) will need to be addressed to get back on track.

It is better to get things organized.

architecture or mess

On efficiency (what’s a nanosecond, what’s a microsecond)

  • Post category:IT architecture
  • Reading time:2 mins read

I heard Marc Andreessen predict that programmers need to get very efficient at programming again (at about 17:30 in this very interesting interview with Kevin Kelly).

https://a16z.com/2019/12/12/why-we-should-be-optimistic-about-the-future/

If we do not get more efficient in programming, things might get stuck.

Another interesting perspective was already provided by Grace Hopper, the lady who invented COBOL, amongst other things.

See this video: How long is a nanosecond 

This all reminds me of a small test we did recently to check the resource consumption of programming languages, by writing just a very small Hello World program. One in COBOL, one in Java and one in Groovy.

The following summarizes how many CPU seconds these programs needed to run:

COBOL: 0.01 milliseconds (basically it was unmeasurable).

Java: 1 second.

Groovy: 3 seconds.
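For what it is worth, the measurement itself was nothing fancy. A rough Python sketch of the kind of harness you could use to repeat it (the file names are made up, and it assumes HelloWorld has been compiled and that java and groovy are on the PATH; it measures elapsed time, which for such tiny programs is dominated by runtime start-up):

    import subprocess, time

    def wall_time(cmd):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        return time.perf_counter() - start

    # Assumed artifacts: HelloWorld.class and hello.groovy in the current directory.
    print("Java  :", round(wall_time(["java", "HelloWorld"]), 2), "s")
    print("Groovy:", round(wall_time(["groovy", "hello.groovy"]), 2), "s")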

And that is only looking at the efficiency of programming languages. Much more could be gained by looking at application architectures. Microservices architectures, especially when applied radically, are incredibly inefficient compared to traditional tightly coupled applications in C, COBOL or even Java.

Of course I do not want to advocate stovepipe applications (history has proven their maintenance issues to be prohibitive), but a more balanced architecture with more of an eye for efficiency seems inevitable.