Parallel Sysplex

One of the most distinguishing features of the z/OS operating system is the way you can cluster z/OS systems in a Parallel Sysplex. Parallel Sysplex, or sysplex for short, is a feature of z/OS, built in the 1990s, that enables extreme scalability and availability.

In the previous post we highlighted the z/OS Unix part. Here we will dive into the z/OS Parallel Sysplex.

A cluster of z/OS instances

With Parallel Sysplex you can configure a cluster of z/OS operating system instances. In such a sysplex you can combine the computing power of multiple z/OS instances on multiple mainframe boxes into a single logical z/OS server.

When you run your application on a sysplex, it actually runs on all the instances of the sysplex. If you need more processing power for your applications in a sysplex, you can add CPUs to the instances, but you can also add a new z/OS system to the sysplex.

This makes a z/OS infrastructure extremely scalable. A sysplex also isolates your applications from failures of software and hardware components. If a system or component in a Parallel Sysplex fails, the software signals this; the failed part is isolated while your application continues processing on the surviving instances in the sysplex.

Special sysplex components: the Coupling Facility

For a Parallel Sysplex configuration, a special piece of software is used: a Coupling Facility. The Coupling Facility functions as shared memory and a communication vehicle for all the z/OS members forming the sysplex.

The z/OS operating system and the middleware can share data in the Coupling Facility. The data that is shared consists of the things that members of a cluster should know about each other, since they are acting on the same data: status information, lock information about resources that are accessed concurrently by the members, and cached shared data from databases.

A Coupling Facility runs a dedicated special operating system, in an LPAR of its own, to which even system administrators do not need access. In that sense it is a sort of appliance.

A sysplex with Coupling Facilities is depicted below. There are multiple Coupling Facilities to avoid a single point of failure. The members of the sysplex connect to the Coupling Facilities. I have not included all the required connections in this picture, as that would clutter the view.

A parallel sysplex

Middleware exploits the sysplex functions

Middleware components can make use of the sysplex features provided by z/OS to create clusters of middleware software.

Db2 can be clustered into a so-called Data Sharing Group. In a Data Sharing Group you can create a database that can process queries on multiple Db2 for z/OS instances on multiple z/OS systems.

Similarly, WebSphere MQ can be configured in a Queue Sharing Group, CICS in a CICSplex, and IMS in an IMSplex, and software like WebSphere Application Server, IDMS, Adabas and other middleware uses Parallel Sysplex functions to build highly available and scalable clusters.

This concept is illustrated in the figure below. Here you see a cluster setup of CICS and Db2 in a sysplex. Both CICS and Db2 form one logical middleware instance.

A parallel sysplex cluster with Db2 and CICS

You can see that the big benefit of Parallel Sysplex lies in its generic facilities to build scalable and highly available clusters of middleware solutions. You can achieve similar solutions on other operating systems, but there every middleware component needs to supply its own clustering features to achieve such a scalable and highly available configuration. This often needs additional components and leads to more complex solutions.

How is this different from other clustering technologies?

What is unique about a parallel sysplex is that it is a clustering facility that is part of the operating system.

On other platforms you can build clusters of middleware tools as well, but these are always solutions and technologies specific to that piece of middleware: the clustering facilities are part of the middleware. With Parallel Sysplex, clustering is solved in a central facility, in the z/OS operating system.

GDPS

An extension to Parallel Sysplex is Geographically Dispersed Parallel Sysplex, GDPS for short. GDPS provides an additional solution to assure your data remains available in case of failures. With GDPS you can make sure that even in the case of a severe hardware failure, or even a whole data centre outage, your data remains available in a secondary data centre, with minimal to no disruption of the applications running on z/OS.

In a GDPS configuration, your data is mirrored between storage systems in the two data centres. One site has the primary storage system; the storage system in the other data centre receives a copy of all updates. If the primary storage system or even the whole data centre fails, GDPS automatically makes the secondary storage system the primary, usually without disrupting any running applications.

The Unix parts of z/OS

In the previous DBAOTM post I introduced you to the z/OS operating system, the flagship operating system for the mainframe. In this post I will introduce you to the Unix side that z/OS has been equipped with over the past two decades.

Since the 1990s IBM has added Unix functionality to z/OS. The first extension was z/OS Unix System Services – z/OS Unix in short – and more recently IBM has added z/OS Container Extensions.

z/OS Unix

The Unix part of z/OS is a runtime environment that is an integral part of the z/OS operating system. z/OS Unix is fully Unix (POSIX) compliant. z/OS Unix provides an interactive command interface, called a shell in Unix terminology.

IBM developed this part in the 1990s to make it easier to port applications from other platforms to z/OS. Many off-the-shelf and open source middleware and application packages that are available on z/OS make use of z/OS Unix. Examples are IBM’s own WebSphere Application Server and IBM Integration Bus, the banking product BASE24-eps from ACI, and open source tools like PHP and Git.

z/OS Unix has regular files, the same as other Unix systems. In the z/OS Unix shell you can use standard Unix commands like ls, cd and more. You can set up a telnet or SSH session with z/OS Unix and do many of the things you can also do in other Unix environments.
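
The MVS and Unix sides of z/OS also meet: for example, with the BPXBATCH utility you can run a z/OS Unix shell command from a traditional batch job. Below is a minimal sketch; the directory name is made up for illustration.

//* Run a z/OS Unix shell command from batch via BPXBATCH
//UNIXCMD  EXEC PGM=BPXBATCH,PARM='SH ls -l /u/myuser'
//STDOUT   DD SYSOUT=*
//STDERR   DD SYSOUT=*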

z/OS Container Extensions

A very recent development on z/OS (it came with z/OS 2.4, at the end of 2019) is the possibility to run Linux Docker containers on z/OS.

Docker containers are a hardware-independent and lightweight way to run many applications in a virtualized configuration. The Docker technology has been available on x86 platforms for a long time. With Docker containers you get a virtualization solution that does not need a complete virtual machine with an operating system running in it for every application. Instead, your application runs in a small container that provides only a limited set of virtualization facilities – just the things the application needs. You can run many Docker containers – so, applications – on a single real operating system image.

The interesting thing is that conceptually Docker is quite like z/OS as we have seen it in section Address Spaces are processes. On a z/OS operating system you can run many applications in Address Spaces. With Docker you run many container processes on a single real operating system image.

I will talk a bit more about Docker in section Linux in z/OS Container Extensions.

z/OS Address Spaces versus Docker containers

All Unix variants

A small elaboration, as you may be getting confused by all the Unix variants on the mainframe. I mentioned Linux for the mainframe, and now I talk about z/OS Unix and Linux in containers.

It is important to understand the difference between z/OS Unix, z/OS Container Extensions and Linux for Z.

z/OS Unix and z/OS Container Extensions are an integral part of z/OS. You get these options with z/OS.

z/OS Unix applications use the Unix flavour that z/OS provides, which is not Linux.

In z/OS Container Extensions you get the option to run applications built for the Linux flavour of Unix, in a containerized setup.

Linux for Z is an operating system that is in no way related to z/OS. Applications running on Linux for Z use the Linux flavour, and run in an LPAR or virtual machine of their own.

I have tried to put all the Unix variants on the mainframe in the picture below. You see z/OS Unix as part of a z/OS operating system, you see a z/OS container process running a Linux Docker container, and separate from z/OS there is an LPAR running Linux for Z.

The Unix flavours of z/OS and Linux for Z

What’s next

Now that we have seen the basics of z/OS, we can turn to the more specialized and specific parts. In the next post I will discuss the unique clustering feature of z/OS, called Parallel Sysplex.

DBAOTM – z/OS – the mainframe flagship operating system

In the previous DBAOTM post I described the operating systems available for the IBM mainframe, and we have seen that z/OS is the flagship in this category. In this post I will introduce you to the z/OS operating system concepts and terminology. The goal of this piece is to give you a good idea of what is special about z/OS and how its peculiarities relate to more commonly known concepts in Unix and Windows operating systems.

z/OS: MVS and Unix combined

I will describe the z/OS operating system in two parts, and discuss these separately. The traditional part is MVS; this part deviates most from what we know from Windows and Unix. The second part is z/OS Unix, an extension that can run Unix applications, much like other Unix flavours.

Next in my discussion, in upcoming posts about z/OS, I will talk about the unique clustering facility that z/OS brings, called Parallel Sysplex. Finally I will cover the green screens that the mainframe is often associated with, and discuss where these are still used. I will discuss modern tools and IDEs for z/OS that come with modern user interfaces, replacing the old green-screen based tools.

The MVS part of z/OS

The MVS side of z/OS is the traditional mainframe side. This is the part that has its roots in the 1960s. MVS and its predecessors were built in the era in which batch processing with punch cards was the basic way of interacting with the mainframe. In MVS we find the features that look a bit awkward today.

Basic operation of the MVS part with batch and JCL

First let’s have a look at the basic operation of a batch process in z/OS. The batch process is core to the MVS part of z/OS. In essence, it still works today the same way as when it was designed in the 1960s.

To run a program on z/OS you need to have a means to tell it what to do. You do this by creating a script, which is usually quite small. The language for this script is called JCL – Job Control Language.

With the JCL script you tell the computer which program to run, what the input for that program is, and where the output must be written. It looks like this:

//RUNPROG  EXEC PGM=PROGRAM1
//INPUT    DD DISP=SHR,DSN=MY.INPUT.DATA
//OUTPUT   DD DISP=(NEW,CATLG),DSN=MY.OUTPUT.DATA,
//            SPACE=(TRK,(10,10))

This code looks awful of course, despite its simplicity. That is because the language was designed for punch cards. Punch cards can carry 80 characters per line, and every line also needed some special positions to control the operation of the punch card reader. Also, to make things as simple as possible for the punch card reader, everything is in uppercase. All in all, JCL is probably easily readable for a punch card reader device, but from an aesthetic and ergonomic perspective it is horrendous.

Anyway, with the above snippet of JCL you tell the computer to run PROGRAM1, with MY.INPUT.DATA as input file and MY.OUTPUT.DATA as output file (the SPACE parameter tells z/OS how much disk space to reserve for the new output dataset). If you feed this into a mainframe running z/OS, it will try to execute it as such.

Figure – JCL to describe work to be done

In the old days the JCL was punched by the programmer on paper punch cards, one card for every line of JCL. The stack of cards was then inserted into a punch card reader attached to the mainframe.

Nowadays card reader devices do not exist anymore. They have been replaced by a piece of software that is part of z/OS, called the internal reader.

The user starts his program by typing a “Submit” command with the JCL as a parameter, or by selecting the Submit option in the user interface of a modern IDE, as illustrated in the figure below.

Figure – Submit JCL from a modern IDE

Now z/OS – the reader software – will read the JCL, find the program and the files needed, and start the execution of the program.
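
To make the internal reader concrete: a batch job can even submit another job itself, by copying JCL to the internal reader. Below is a sketch that uses the standard IEBGENER copy utility; the JCL library name is made up.

//* Copy a JCL member to the internal reader, which submits it as a job
//SUBJOB   EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DISP=SHR,DSN=MY.JCL.LIB(RUNPROG)
//SYSUT2   DD SYSOUT=(A,INTRDR)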

Address Spaces are processes

The execution of the program is performed by a dedicated process for the task at hand. In Unix and Windows this execution concept is simply called a process. In the MVS part of z/OS, it is called an Address Space. You can imagine that many tasks are running at the same time for many users. All these tasks run in Address Spaces on z/OS.

Although the name Address Space seems to indicate this just has something to do with memory, it is actually the same concept as a process in Windows and Unix.

Virtual storage is memory

The Address Space gives the program the memory it needs to execute. This is called virtual storage on z/OS.

The odd thing is that mainframers often refer to memory as storage. They could do this without confusion with disk storage, because disk storage was called DASD – an abbreviation of Direct Access Storage Device.

Take care: when a mainframer talks about storage, he might mean memory, but also disk storage.

Datasets are files, and catalogs organize files like a directory structure does

On the MVS side data is stored in datasets. Datasets are files, but with some quirks I will discuss in a minute.

Datasets are administered in catalogs. Catalogs are designed to keep a record of dataset names and the place where these datasets are stored.

Datasets are record-oriented and can have different structures

Datasets are so-called record-oriented: a dataset is defined with details that prescribe a structure for how the data is organized.

When you define a dataset in z/OS you must define how big the records are and how z/OS must organize the data in the dataset.

The organization of a dataset can be a linear layout. We then call these sequential datasets. A dataset can also have a more complex hierarchical layout. We call these VSAM datasets (I will spare you the meaning of the abbreviation).

In sequential datasets the data is ordered from first record to last record. To read or write a sequential dataset you start at the beginning and proceed with the next record until you are done.

VSAM datasets have a sort of database structure. The records you store in a VSAM file have a key value. You do not usually browse through a VSAM dataset; instead, you read a specific record based on its key value, and z/OS finds the record in the file with that key value. To do this, VSAM keeps indexes of key values and records. The benefit of VSAM files is that they can provide very fast access to the records they store.
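
To give you an impression, this is a sketch of how such a keyed VSAM dataset is defined with the IDCAMS utility (the dataset name and numbers are made up). KEYS(10 0) defines a key of 10 bytes at offset 0 in each record; RECORDSIZE gives the average and maximum record length.

//* Define a keyed (KSDS) VSAM cluster with IDCAMS
//DEFVSAM  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER (NAME(MY.VSAM.DATA) -
    INDEXED -
    KEYS(10 0) -
    RECORDSIZE(100 100) -
    TRACKS(10 10))
/*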

When you create an MVS dataset you also have to specify a number of physical characteristics: things like record size, the format of the records, the size of the blocks on disk that will hold the records, and more. These settings were mostly aimed at optimizing access speed and storage on disk, but with modern storage hardware they have become obsolete. Many of these details could be hidden from the user of z/OS, but unfortunately this often is not yet done.
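
As an illustration of these physical characteristics, here is a sketch of JCL that creates a sequential dataset with fixed-length records of 80 bytes (RECFM=FB, LRECL=80), blocks of 27920 bytes, and space allocated in tracks; the dataset name and numbers are made up. IEFBR14 is a program that does nothing; it is traditionally used to let the JCL itself perform the allocation.

//* Allocate a new sequential dataset; IEFBR14 itself does nothing
//ALLOC    EXEC PGM=IEFBR14
//NEWDS    DD DSN=MY.NEW.DATA,DISP=(NEW,CATLG),
//            RECFM=FB,LRECL=80,BLKSIZE=27920,
//            SPACE=(TRK,(10,10))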

The applications you build on z/OS nowadays would use sequential files only. The things that VSAM solved for applications in the past can now be done much more easily in a database management system like Db2. The main use that remains for VSAM is for special operating system and middleware datasets where very fast access to data is required – under the hood of z/OS and the middleware.

Figure – Sequential and VSAM file structures

When you compare a Unix or Windows file to a dataset, you observe that a file in Unix and Windows does not have the prescribed structure that an MVS dataset has. From the user program’s perspective a file is just a big string of bytes. The data in a file can be organized internally; often this is done with control characters, like Carriage Return (CR) and Linefeed (LF). The applications themselves have to provide that structure to the files.

Figure – File structure in Unix

The EBCDIC character set

In the traditional part of z/OS, data in the datasets is encoded in the Extended Binary Coded Decimal Interchange Code (EBCDIC) character set. This is a character set that was developed before ASCII (American Standard Code for Information Interchange) became commonly used.

Most systems that you are familiar with use ASCII. By the way, we will see that z/OS Unix files are encoded in ASCII.

Normally you will not notice anything different when you read files on z/OS, and you will be unaware of the EBCDIC representation of data. However, if you want to transfer files between Windows or Unix and z/OS datasets, you will need to convert the characters from their ASCII representation to their EBCDIC representation, and vice versa. Typically, file transfer programs will do this operation for you, called code page conversion, as part of the transfer action.
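
If you ever need to do such a conversion yourself, the iconv command of z/OS Unix translates between code pages. Below is a sketch that runs iconv from batch with the BPXBATCH utility; the file names are made up, IBM-1047 is a common EBCDIC code page and ISO8859-1 an ASCII-based one.

//* Convert an ASCII file to EBCDIC with the z/OS Unix iconv command
//CONVERT  EXEC PGM=BPXBATCH
//STDPARM  DD *
SH iconv -f ISO8859-1 -t IBM-1047 /tmp/in.txt > /tmp/out.txt
/*
//STDOUT   DD SYSOUT=*
//STDERR   DD SYSOUT=*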

Catalogs are like directories

In Unix and Windows, files are organized in a directory structure, with a hierarchy of file systems to manage the various disks on which the files may be stored. File systems are defined on hard disks and hold the physical files; the directory structure provides a hierarchical organizing mechanism for the files in the file systems. The root file system holds the operating system’s core files; other file systems can be mounted to extend the file storage over multiple storage devices.

The operating system can find a file if you give it the name of the file and the directory in which it is defined.

Figure – File system and directory structures

In the MVS part of z/OS, datasets are organized in catalogs. A catalog is in essence an index of dataset names in the z/OS system and the names of the disks on which the datasets are stored. With the catalog mechanism, if the user knows the name of the dataset, the catalog can tell on which disk it can be found.

To make the mechanism of catalogs a bit easier for the system administrator, catalogs are divided into a master catalog and user catalogs. There is one master catalog, and there can be many user catalogs. The master catalog contains the system datasets; the user catalogs contain the user and application datasets. The system administrator can define which datasets are administered in which catalog. When the system administrator decides to administer the datasets for application APP2 in a certain user catalog, he puts a little piece of information in the master catalog to inform z/OS about that. This information is called an ALIAS.

To find a dataset with a certain name, z/OS first looks in the master catalog. There it either finds the dataset itself, or it finds an ALIAS reference. The ALIAS reference tells z/OS to look in the associated user catalog for a dataset with that name. When z/OS has found the entry for a dataset, it uses the disk name in the index entry to access the file on disk.
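
As a sketch of what this administration looks like in practice: with the IDCAMS utility the system administrator defines an ALIAS that relates the APP2 name to a user catalog, and with a LISTCAT command you can ask the catalog what it knows about a dataset. The catalog and dataset names below are made up.

//* Define an ALIAS in the master catalog and list a dataset entry
//CATADM   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE ALIAS (NAME(APP2) RELATE(UCAT.APP2))
  LISTCAT ENTRIES(APP2.PAYROLL.DATA) ALL
/*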

Again, comparing this to Unix, you can say that the master catalog has a similar function as the root filesystem: it is meant for the operating system and middleware datasets. The user catalogs are like mounted file systems: for application and user datasets.

A catalog is, however, a registration that is separate from the physical storage. In Unix, the registration of files is done on the file systems themselves. With a catalog this is not the case. In fact, a catalog is just a dataset itself – a VSAM dataset, to be precise. It can be stored on any disk.

Figure – The structure of master and user catalogs

This concludes the introduction to the MVS side of the z/OS operating system. In the next post I will turn to the Unix side of z/OS.

DBAOTM – Operating systems for the big mainframe box

In the previous posts I have given an overview of the most important mainframe hardware components. In this article I will summarize what operating systems you can run on this hardware. But first…

This post appears as part of a number of articles in the category “Don’t Be Afraid Of The Mainframe”.

What actually is a mainframe?

A little late to answer this question, but I thought it was good to address this here.

A mainframe is a large computer that is designed to run many different computing tasks at the same time. The mainframe stems from the time when hardware was expensive, and a single central solution for computing was the only way to run computing tasks economically.

A lot of characteristics of that design point are still prevalent today. Hence it is good to understand this background.

z/OS and Linux, and then some…

A number of operating systems can run on the mainframe. I will give a short description here of the operating systems you can run on a mainframe.

For the rest of this series of articles I will focus on the two most important ones today. z/OS is the most important mainframe operating system, but also the most different from today’s mainstream operating systems; I will discuss z/OS most extensively. Linux for the mainframe is the second most important operating system and has gained popularity over the past decade; I will discuss it in the separate chapter Linux for the mainframe.

z/OS

IBM often calls z/OS its flagship mainframe operating system. The roots of z/OS date back to 1964, when the operating system OS/360 was designed for the System/360 computer, the predecessor of IBM’s modern mainframe computers. In the early 70s the successor of the OS/360 operating system was developed, named MVS (it stands for Multiple Virtual Storage, but you can forget that immediately). MVS evolved further into OS/390, and now it is called z/OS. The name changes may suggest fundamentally different systems, but these are in fact only marketing-driven name changes for MVS; the technology base is still the same, although it has evolved very significantly.

z/VM, the mother of all hypervisors

z/VM, or VM (VM stands for Virtual Machine) as it was originally named, used to be a full-fledged operating system that was designed to run business applications. The operating system included a unique technology that allowed users to virtualize the mainframe hardware and split it up into small virtual machines. Nowadays we have VMware, KVM, Xen, Hyper-V and others that do the same for x86 and other platforms. But the technology in VM was developed in the 1960s; it was far ahead of its time. z/VM can be considered the mother of all hypervisors.

Nowadays z/VM is only used as a hypervisor for the mainframe, and no longer as an operating system for business applications.

z/VSE

The z/VSE operating system is the smaller brother of z/OS. It was developed in parallel with MVS, targeted at smaller customers. Nowadays it is not used very much anymore, but it is still developed and supported by IBM.

z/TPF

The operating system z/TPF (Transaction Processing Facility) was developed especially for real-time computing. Organizations that needed to process large volumes of small transactions very fast, such as credit card companies, airline and hotel reservation systems, and banks have deployed this specialized mainframe operating system.

Linux

IBM has ported Linux to the IBM mainframe architecture; the first release was formally announced in the year 2000. Since that time many other software products, commercial as well as open source, have been made available on Linux for Z, as it is now called.

Configurations of Linux for the mainframe are most often virtualized with z/VM, the hypervisor for the mainframe we saw above. With z/VM you can create and manage many virtual machines on the mainframe, in each of which you can run a Linux instance. I will discuss Linux for the mainframe separately in a number of special posts.

DBAOTM – Hardware – Specialty engines

A mainframe has a large number of CPUs. The CPUs can be configured in different modes; some of these modes are called specialty engines.

In this post I will discuss what these specialty engines are, and how they are used.

This post appears as part of a number of articles in the category “Don’t Be Afraid Of The Mainframe”.

General purpose CPUs versus specialty engines

The normal setup for a CPU is as a general-purpose CPU; alternatively, CPUs can be configured as so-called specialty engines. This is a special sort of setup that only an IBM engineer can do, because the CPU configuration is agreed when you acquire the mainframe hardware. We will see why this is.

A general-purpose CPU can be used for, well, everything. But to keep it simple: in reality a general-purpose CPU is used only for the traditional workload types, like your COBOL and PL/I programs.

When a CPU is configured as a specialty engine, the CPU can only be used for a special function. So a specialty engine is not special in the sense that it is designed for a particular function. Rather, it is a regular CPU that is configured so that it can be used only for a particular function.

The purpose of specialty engines is to make it cheaper for organizations to run particular functions on the mainframe. For traditional workloads that run on the general-purpose CPUs you pay your normal software bill based on MSUs, as we have seen in section Understanding the cost of software on z/OS, MLC and OTC. Functions that run on a specialty engine, however, are not accounted for in the MSU numbers. It is therefore up to your software vendor to decide whether to enable software to run on a specialty engine. IBM has enabled certain functions for specialty engines, and other vendors have done so similarly for selected mainframe software components.

What is also important to realize is that even though the general-purpose CPUs and the specialty engines use exactly the same hardware, the acquisition cost of a specialty engine is significantly lower than the price of a general-purpose CPU.

Types of specialty engines

The most important types of specialty engines are called zIIP and IFL. I will spare you what the abbreviations mean – they are never used.

zIIP

A zIIP is a specialty engine that is only usable in a z/OS environment. The special functions you can run on a zIIP are: all Java programs, certain Db2 functions, and z/OS Containers (more on that in separate posts). A number of other mainframe software vendors also enable their software for zIIPs, and they have special conditions for these software products.

IFL

An IFL is a specialty engine used to run Linux on the mainframe. IFLs have nothing to do with z/OS. The CPUs configured as IFLs can only be used for Linux applications, in LPARs running Linux. I will discuss Linux on the mainframe in a separate section Linux for the mainframe.

ICF and SAP

The other types of specialty engines let the pool of processors in a mainframe be used for the computing needed to support the “real” business workloads: they run “technical processing” tasks. Their CPU usage is therefore also not accounted for in the software bill. These processor types are the following:

The Integrated Coupling Facility (ICF) processor is used for a Coupling Facility. A Coupling Facility is a special LPAR that provides special operating system functions in a sysplex. We will discuss the Coupling Facility concept briefly in section Special sysplex components: the Coupling Facility.

The System Assistance Processor (SAP) specialty engine is used to run I/O operations independently from your central CPUs, in a mainframe component called the I/O Subsystem. This type of processor is used when moving data between memory and storage. This not only makes sure that this processing does not add to your software bill; the I/O Subsystem construct also gives the mainframe its extremely fast, high-volume I/O processing capability.

DBAOTM – Big hardware, but partitioned in smaller parts

In the previous post I highlighted the most common peripherals. In this post I will describe how such a big piece of equipment is chopped up into smaller logical parts.

This post appears as part of a number of articles in the category “Don’t Be Afraid Of The Mainframe”.

Logical partitions

We have seen that the mainframe machine can contain a huge amount of computing capacity. You will run all of your development, test, acceptance and production environments on this large box, so you need a way to spread all this computing capacity over these environments. The mainframe technology provides many facilities to achieve this.

One of the main tools to divide the hardware into logical and physical parts is called PR/SM (pronounced “prism”). With this tool you can chop up the large mainframe box into smaller virtual parts called logical partitions, or LPARs. These LPARs are smaller versions of the big hardware box. In an LPAR you can run your test or production system.

A common way to split the mainframe computing capacity is to create separate LPARs for development activities, for testing activities, for acceptance activities, and of course for production.

In larger computing environments, there may be separate LPARs for different types of applications, or for different business units. A bank may have separate LPARs for its wholesale business and for its retail business. Other organisations may have separate LPARs for their business analytics applications and their logistics applications.

More technical information on PR/SM can be found here:

https://www.ibm.com/support/knowledgecenter/zosbasics/com.ibm.zos.zmainframe/zconc_mfhwsyspart.htm

DBAOTM – Hardware – Peripherals and other quirks

In the previous post I introduced the mainframe server hardware. In this post I will highlight the most common peripherals – other hardware like disk storage, tape and printers.

This post appears as part of a number of articles in the category “Don’t Be Afraid Of The Mainframe”.

Mainframe peripherals

There are no hard disks in a mainframe server. This is also the case for some larger x86 servers. In our laptops and PCs we always have a hard disk – or SSD nowadays – to store our data. Storage of the data for the mainframe is external to the mainframe box. Data is stored in separate equipment, called storage controllers or storage (sub)systems.

A mainframe needs a lot more disk storage than our laptops: normal amounts of storage easily exceed 1000 TB.

By the way, a quirky thing: disk storage on the mainframe is often referred to as DASD. This is an old abbreviation for Direct Access Storage Device.

To confuse you further, when mainframers refer to “storage”, they may actually mean memory – the RAM in the box. So be careful with the term storage, and make sure you know what is meant when it is used.

For backup and archiving of data, many organizations still use storage on tape. Tape units, often including a “library” to register and store the tape cartridges used, are also supplied in separate boxes.

A special fibre-optic network connects the mainframe servers with the storage hardware. For this connection mainframes use a proprietary SAN protocol called FICON.

You may still find printers connected to a mainframe, but not so often anymore. Most printing facilities have been replaced by online applications. Where printing is still needed, it is often done by dedicated printing facilities or printing firms.

DBAOTM – The mainframe box, a big box

In this post and subsequent ones, I will discuss the main hardware concepts of mainframe environments. I will not go into the tiniest detail, but I must be a bit technical. To make things easier to understand, I will compare the mainframe technology with mainstream x86 and Unix technology. You will see there is often a difference in terminology.

The mainframe has a long history. Some hardware terminology is different from what we know. To get some understanding of this hardware we need to talk a little bit about mainframe jargon.

This post appears as part of a number of articles in the category “Don’t Be Afraid Of The Mainframe”.

A box full of CPU and memory

A mainframe is a large, refrigerator-sized box with computing capacity. The box houses the computing units, the CPUs. These are not x86 CPUs like in your PC; a mainframe uses CPUs built according to the processor architecture called IBM z/Architecture.

In your PC, the CPU, the memory and other chips are soldered onto a motherboard. Like in your PC, you find a sort of motherboard in the big mainframe box. The mainframe motherboard is called a drawer. The drawer is a bit bigger than your PC motherboard because it carries more components.

On the drawer, the CPU and memory chips for the mainframe are soldered, plus some more components. A drawer can hold a number of CPU chips. In the z14 model the number of CPU chips in a drawer can be 6.

Each CPU chip on the drawer has a number of processor cores, the actual CPUs. The number of processor cores varies per mainframe model. In the z14 mainframe model there are 10 cores on a chip.

Finally you can have multiple drawers in a mainframe box. In the z14 there can be 4 drawers.

Now let’s count. You can have a maximum of 4 drawers, each with a maximum of 6 CPU chips, each chip with 10 cores. Thus, you can have 240 processor cores in a mainframe box – the z14 model to be precise. The mainframe uses a number of these 240 cores for internal processing. For you as a mainframe user, up to 170 processor cores are available in a single mainframe box.

You also need memory. In the z14, every drawer can have a maximum of 8 TB of memory. So in total you can have 32 TB of memory in your z14 mainframe.

In short, a lot of computing power.

What else is in the box

Besides the main computing elements, CPU and memory, the mainframe server contains almost everything else needed: power supplies, network cards, cooling devices, I/O cards, and more.

To make sure the mainframe can continue running when one of these components fails, you find at least two of each of these components in a mainframe.

In the picture below you can see the following components:

  • Processor drawers – as we saw, the motherboards of the mainframe. There can be multiple processor drawers in a machine, depending on the number of CPUs you have ordered.
  • PCIe Input/Output drawers, in which cards are configured for networking equipment, I/O interfaces (disk, tape, server-to-server connections) and additional facilities such as encryption and compression. PCIe is a standard for interfaces in a computer.
  • Cooling components to regulate the temperature. A mainframe box can be water-cooled or air-cooled, by the way.
  • Power supplies to provide power for the components in the machine.

All in all, it looks very much like a normal computer, but a little bigger.

In the picture you also see two laptops. As we will see later, the big box needs to be configured. The two laptops are so-called support elements. With these support elements you can configure the hardware, and also monitor the state of the hardware.

More technical information on mainframe hardware can be found here:

https://www.ibm.com/support/knowledgecenter/zosbasics/com.ibm.zos.zmainframe/zconc_mfhardware.htm