Automation: From Repetition to Engineering

I often realize I’m doing repetitive tasks that could easily be automated. Money transfers, reminders, invoices: these are simple, low-effort activities that don’t deserve to consume my time. Every time this happens, I tell myself, “I should automate this to save time and mental effort.” And yet, somehow, I don’t. I tell myself I have no time to automate. In IT, especially when automating mainframe processes, we encounter the same hesitation: “We don’t need to automate this; once it’s done, we’ll never do it again.” Which almost never turns out to be true. Repetitive tasks, whether personal or IT-related, are often simple to automate but remain manual due to perceived time constraints. In IT, automation is critical. It reduces manual errors, improves consistency, and frees up time for more strategic work.

A Shift in Mindset

Automation requires a different engineering mindset. Instead of the familiar cycle:

Do → Fix → Fix → Fix

we move to:

Engineer process → Run process / Fix process → Run process → Run process

Once engineered, automated processes run with minimal intervention, saving both time and effort.

When to Automate

If you find yourself performing a task more than twice, consider automating it. Whether through shell scripting, JCL, utilities, or tools like Ansible, automation quickly pays off. Automation is not optional; it is essential for efficient IT operations and professional growth. Start automating today to work smarter, not harder. Don’t waste time doing things more than twice. If you do something a third time, automate it: you’ll likely have to do it a fourth and fifth time as well. Automate everything.

The Cathedral Effect: Designing Engineering Spaces for Creativity and Focus

  • Post category:Uncategorized
  • Reading time:3 mins read

In software engineering, as in many creative and technical fields, the environment shapes how we think and work. An intriguing psychological phenomenon known as the Cathedral Effect offers valuable insights into how physical and virtual workspaces can be designed to optimize both high-level creativity and detailed execution.

What Is the Cathedral Effect?

The Cathedral Effect describes how ceiling height influences cognition and behavior. High ceilings evoke a sense of freedom and openness, fostering abstract thinking, creativity, and holistic problem-solving. In contrast, low ceilings create a sense of enclosure that encourages focused, detail-oriented, and analytical work. Research shows that exposure to high ceilings activates brain regions associated with visuospatial exploration and abstract thought, and confirms that people in high-ceiling environments engage in broader, more creative thinking, while low ceilings prime them for concrete, detail-focused tasks.

Applying the Cathedral Effect to Software Engineering

Software development involves both high-level architectural design and detailed coding and testing. The Cathedral Effect suggests that these phases benefit from different environments:

  • High-level work (system architecture, brainstorming, innovation) thrives in “high ceiling” spaces, whether physical rooms with tall ceilings or metaphorical spaces that encourage free-flowing ideas and open discussion.
  • Detailed work (analysis, programming, debugging) benefits from “low ceiling” environments that support concentration, precision, and deep focus.

Matching the workspace to the task helps teams think and perform at their best.

Practical Suggestions for IT Teams and Organizations

Create Dedicated Physical and Virtual Spaces

If possible, design your office with distinct zones:

  • High-ceiling rooms for architects and strategists to collaborate and innovate. These spaces should be open, well-lit, and flexible.
  • Low-ceiling or enclosed rooms for developers and analysts to focus on detailed work without distractions.

For remote or hybrid teams, replicate this by:

  • Holding open, informal video sessions and collaborative whiteboard meetings for high-level ideation.
  • Scheduling “deep work” periods with minimal interruptions, supported by quiet virtual rooms or dedicated communication channels.

Match People to Their Preferred Environments

Recognize that some team members excel at abstract thinking, while others thrive on details. Assign roles and tasks accordingly, and respect their preferred workspace to maximize productivity and job satisfaction.

Facilitate Transitions Between Modes

Switching between big-picture thinking and detailed work requires mental shifts. Encourage physical or virtual “room changes” to help reset focus and mindset, reducing cognitive fatigue.

Foster Cross-Pollination

While separation is beneficial, occasional collaboration between high-level thinkers and detail-oriented workers ensures ideas remain practical and grounded.

Why This Matters

Ignoring the Cathedral Effect can lead to mismatched environments that stifle creativity or undermine focus. For example, forcing detail-oriented developers into open brainstorming sessions can cause distraction and frustration. Conversely, confining architects to cramped spaces can limit innovation. By consciously designing workspaces and workflows that respect the Cathedral Effect, organizations can foster both creativity and precision, leading to better software and more engaged teams.

zopen community – open source tools for Z

  • Post category:Utilities
  • Reading time:1 min read

I must shamefully admit I was not aware of the zopen community initiative before it recently became part of the Open Mainframe Project. The zopen community provides a great set of open source tools ported to Z, such as the dos2unix utility I wrote about earlier here.

dos2unix on z/OS

  • Post category:Utilities
  • Reading time:1 min read

On z/OS UNIX, the dos2unix utility is not included. You can achieve similar functionality with other tools available on z/OS UNIX, such as sed or tr, which can convert DOS-style line endings (CRLF) to Unix-style line endings (LF). For example, you can use sed to remove carriage return characters:

sed 's/\r$//' inputfile > outputfile

Or you can use tr:

tr -d '\r' < inputfile > outputfile
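As an illustrative extension (my own sketch, not from the original post), the same conversion can be done in a few lines of Python where a script fits better than sed or tr:

```python
def dos2unix(data: bytes) -> bytes:
    """Convert DOS-style CRLF line endings to Unix-style LF.

    Mirrors the sed variant above: only a CR that immediately
    precedes an LF is removed; lone CRs are left untouched.
    """
    return data.replace(b"\r\n", b"\n")
```

Working on bytes rather than text sidesteps any codepage questions, which matters on a platform where EBCDIC and ASCII files coexist.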

The 4 Eyes Principle

  • Post category:Principles
  • Reading time:2 mins read

Another principle today. In the realm of software development, the four-eyes principle dictates that an action can only be executed when it is approved by two individuals, each providing a unique perspective and oversight. This principle is designed to safeguard against errors and misuse, ensuring the integrity and quality of the software.

The four-eyes principle can help during the construction of software systems by finding weaknesses in architecture, design, or code, and can thereby improve quality. It can be applied in every phase of the software development cycle, from requirements analysis to detailed coding. Software architecture, design, and code can be co-developed by two people or peer-reviewed. In the design of software systems, the four-eyes principle applies to the process of validating design decisions at various levels. Pair programming is a software development technique in which two programmers work together on code, one usually doing the coding and the other the validation. In other engineering industries, dual or duplicate inspection is a common practice.

In regulated environments such as financial institutions, compliance requirements may dictate that code is always peer-reviewed to prevent backdoors in code. In software systems themselves, the four-eyes principle may be implemented when supporting business processes that require it for security or quality-validation reasons.

Change management, a critical aspect of software development, often relies on the four-eyes principle. When code changes are transitioned into production, a formal change board may mandate a signed-off peer review, ensuring that all changes meet the required standards. Change and configuration management tools for software systems are often designed to support this four-eyes process, further enhancing the quality and security of the production environment.
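As a toy illustration (my sketch, not from the post), a system enforcing the four-eyes principle might refuse to execute an action until two distinct individuals have approved it; the class and names below are invented:

```python
class FourEyesAction:
    """Toy four-eyes check: run an action only after two *distinct*
    individuals have approved it."""

    def __init__(self, description: str):
        self.description = description
        self.approvers: set[str] = set()

    def approve(self, person: str) -> None:
        # A set silently ignores duplicate approvals by the same person.
        self.approvers.add(person)

    def execute(self) -> str:
        if len(self.approvers) < 2:
            raise PermissionError("four-eyes: two distinct approvals required")
        return f"executed: {self.description}"
```

Note that the same person approving twice does not satisfy the check; requiring two distinct identities is the essence of the principle.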
Further assurance can be added with a (random) rotation scheme of authorized individuals serving as the second pair of eyes. This provides additional assurance because it is not known beforehand which two individuals will be dealing with a given decision.

Related / similar: Dual Inspection, Code Review.

System Z Enthusiasts Discord

  • Post category:Uncategorized
  • Reading time:1 min read

Quick one. Just joined the System Z Enthusiasts Discord community. They are a pretty awesome group.

Continuous availability presentation in 2006, updated

  • Post category:Uncategorized
  • Reading time:9 mins read

Continuous availability

The slide deck tells me that it was in 2006 that I created a set of slides for "Kees" with an overview of the continuous availability features of an IBM mainframe setup. The deck's content was interesting enough to share here, with some enhancements.

What is availability?

First, let's talk a little bit about availability. What do we mean when we talk about availability? A highly available computing setup should provide for the following:

  • A highly available, fault-tolerant infrastructure that enables applications to run continuously.
  • Continuous operations to allow for non-disruptive backups and infrastructure and application maintenance.
  • Disaster recovery measures that protect against unplanned outages due to disasters caused by factors that cannot be controlled.

Definitions

Availability is the state of an application service being accessible to the end user. An outage (unavailability) is when a system is unavailable to an end user. An outage can be planned, for example, for software or hardware maintenance, or unplanned.

What causes outages?

A research report from the Standish Group from 2005 showed the various causes of outages.

Causes of outages

It is interesting to see that (cyber) security was not part of this picture, while more recent research published by Uptime Intelligence shows this growing concern. More on this later.

Causes of outages 2020 - 2021 - 2022

The myth of the nines

The table below shows the availability figures for an IBM mainframe setup versus Unix and LAN availability. Things have changed. Unix (now: Linux) server availability has gone up. Server quality has improved, and so has software quality. Unix, however, still does not provide a capability similar to a z/OS sysplex. Such a sysplex simply beats any clustering facility by providing built-in, operating-system-level availability.
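To make "the nines" concrete, here is a small helper (my own illustration, not from the slide deck) that converts an availability percentage into the downtime per year it allows:

```python
def downtime_minutes_per_year(availability_pct: float) -> float:
    """Maximum downtime per (365-day) year implied by an
    availability percentage."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes
    return (1 - availability_pct / 100) * minutes_per_year

# "Five nines" (99.999%) allows roughly 5.3 minutes of downtime a year;
# "seven nines" (99.99999%) allows only about 3 seconds.
```

Each extra nine cuts the allowed downtime by a factor of ten, which is why the jump from five to seven nines is such a strong claim.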
Availability figures for an IBM mainframe setup versus Unix and LAN

At the time of writing, IBM publishes updated figures for a sysplex setup as well (see https://www.ibm.com/products/zos/parallel-sysplex): 99.99999% application availability for the footnote configuration: "... IBM Z servers must be configured in a Parallel Sysplex with z/OS 2.3 or above; GDPS data management and middleware recovery across Metro distance systems and storage and DS888X with IBM HyperSwap. Other resiliency technology and configurations may be needed."

Redundant hardware

The following slides show the redundant hardware of a z9 EC (Enterprise Class), the flagship mainframe of that time.

The redundant hardware of a z9 EC

Contrasting this with today's flagship, the z16 (source: https://www.vm.ibm.com/library/presentations/z16hwov.pdf), is interesting. Since the mainframe is now mounted in a standard rack, the interesting views have moved to the rear of the apparatus. (iPDUs are the power supplies in this machine.)

The redundant hardware of a z16

Redundant IO configuration

A nice, highly fault-tolerant server is insufficient for an ultimately highly available setup. The IO configuration, a.k.a. storage configuration, must also be highly available.

A redundant SAN setup

The following slide in the deck highlights how this can be achieved. Depending on your mood, what is amusing or annoying, and what triggers me today, are the "DASD CU" terms in…

System management

The z/OS operating system is designed to host many applications on a single platform. From the beginning, efficient management of the applications and their underlying infrastructure has been an essential part of the z/OS ecosystem. This chapter will discuss the regular system operations, monitoring processes, and tools you find on z/OS. I will also look at monitoring tools that ensure all our automated business, application, and technical processes are running as expected.

System operations

The z/OS operating system has an extensive operator interface that gives the system operator the tools to control the z/OS platform and its applications and to intervene when issues occur. You can compare these operations facilities with the operations of physical processes in factories or power plants. The operator is equipped with many knobs, buttons, switches, and meters to keep the z/OS factory running.

Operator interfaces and some history

By design, the mainframe performs operations on so-called consoles. Consoles originally were physical terminal devices directly connected to the mainframe server with special equipment. Everything happening on the z/OS system was displayed on the console screens. A continuous newsfeed of messages generated by the numerous components running on the mainframe streamed over the console display. Warnings and failure messages were highlighted so an operator could quickly identify issues and take the necessary actions.

Nowadays, physical consoles have been replaced by software equivalents. In the chapter on z/OS, I have already mentioned the tool SDSF from IBM, or similar tools from other vendors, available on z/OS for this purpose. SDSF is the primary tool system operators and administrators use to view and manage the processes running on z/OS. Additionally, z/OS has a central facility where information, warnings, and error messages from the hardware, operating system, middleware, and applications are gathered.
This facility is called the system log. The system log can be viewed from the SDSF tool.

SDSF options

Executing an operator command through SDSF

The system log viewed through SDSF

An operator can intervene with the running z/OS system and applications through operator commands. z/OS itself provides many of these operator commands for a wide variety of functions. The middleware tools installed on top of z/OS often also bring their own set of operator messages and commands. Operator commands are similar to Unix commands for Unix operating systems, and to the functions provided by the Windows Task Manager and other Windows system administration functions. Operator commands can also be issued through application programming interfaces, which opens possibilities for building software for automated operations of the z/OS platform.

Automated operations

In the past, a crew of operators managed the daily operations of the business processes running on a central computer like the mainframe. The operators were gathered in the control room, also called a bridge, from where they monitored and operated the processes running on the mainframe. Nowadays, daily operations have been automated. All everyday issues are handled through automated processes; special software oversees these operations. When the automation tools find issues they cannot resolve, an incident process is…

Noise reduction

  • Post category:Principles
  • Reading time:2 mins read

The principle of noise reduction in software systems improves software systems by removing inessential parts and options and/or making them invisible, or visible only to selected users. Reducing the options in a software solution increases usability. This goes for user interfaces as well as technical interfaces. We decide what an interface looks like and stick to it. All-too-famous examples of noise reduction are the Apple iPod and the Google search page. Adding features for selected users means adding features and under-the-hood complexities for all clients. Reducing options also makes the software more robust. If we build fewer interfaces, we can improve them; we can focus on doing that limited set of interfaces really well.

In practice, we see hardware and software tools with many options and features. That is not because software suppliers desperately want to give their customers all the options, but because we, their customers, are requesting these options. Software suppliers could view all these requests more critically. Some do. Let's aim to settle for less. We shouldn’t build more every time we can do with less, just because we can. Also, we shouldn’t ask our suppliers to create features that are merely nice to have. There are always more options, but let’s limit the options to 4 or, better, 1.

Hypes

  • Post category:Principles
  • Reading time:2 mins read

Some companies have made a business model out of technology hypes. These are the same companies that tell the market what it needs by asking the market. Of course, this comes with an invoice mentioning generous compensation. These companies write classy reports with colorful graphics in which they advise organizations to do what the organizations tell them to do.

But hypes are for techies. Techies may feast on technology, but for organizations, jumping on hypes can be a risky and costly pastime. There are two types of hypes. Some hypes are about something new. Others are just reformulations of existing things: recycled ideas. But hypes are hypes: they will go away. The vast majority of hypes disappear into thin air. The techie may have learned from them. Some remain. A technology might be valuable if it is still around after a few years. But usually, the stuff will not be as groundbreaking and revolutionary as predicted when announced by the hype cycle company, that is, by the market itself. Blockchain, anyone?

Some hypes are recycled ideas. We have no memory, and we don’t read textbooks. SOA, AI, microservices, and other technical advancements are wrapped in shiny new names and gift paper, so they appear to be a gift from your software supplier or consultancy company.