Mainframe: (Still) the King of Computing

IBM’s recent troubles: it’s certainly not about the $^%#@! Mainframe


Way back in April 1964, IBM invented ‘the Cloud’ – it’s only taken the rest of the world 50+ years to get excited about it!

Mainframe computing is built on the same concepts the Cloud now trades on: large-scale compute and communications infrastructure, delivered in an integrated and secure fashion for varied and independent workloads. Does this sound like what some IT Cloud ‘innovators’ are selling?

It’s the Cloud issue that is allegedly giving IBM headaches as it once again, in its 100-year history, reinvents itself to retain and advance its global leadership position in the now very competitive business services arena. This week brought a little more doom and gloom as IBM announced its second-quarter results, but let’s reflect a little on the facts as they relate to our good old friend the mainframe.

The mainframe didn’t die while you weren’t watching

The vast majority of the world’s truly great companies – the ones that touch each of our lives every day in banking, insurance, retail, airlines, and oil and gas – rely on mainframes at their core. As a consequence, the industry leaders continue to invest heavily in this sector. IBM itself invested over $1 billion developing the latest-generation z13 processor, announced in January 2015.

These machines have massively scalable virtualisation and the world’s most advanced microprocessor design, and they exploit software-defined technologies. They have the fastest processing capability of any commercial computer, and when it comes to capacity the latest z13 can deliver a jaw-dropping 111,500 Million Instructions Per Second (MIPS) in a single footprint. That’s an amazing figure, but was IBM’s $1bn R&D investment worth it? Consider this humbling fact: in the year 2000, the maximum capacity of a single footprint was just 3,000 MIPS.
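Taking those two figures at face value, a quick back-of-the-envelope calculation (illustrative arithmetic only, not an IBM benchmark) shows just how steep that growth curve is:

```python
# Single-footprint capacity figures cited above (illustrative arithmetic only)
mips_2000 = 3_000      # maximum single-footprint capacity in 2000, in MIPS
mips_2015 = 111_500    # z13 single-footprint capacity, in MIPS
years = 15             # 2000 -> 2015

multiple = mips_2015 / mips_2000          # overall capacity multiple
cagr = multiple ** (1 / years) - 1        # implied compound annual growth rate

print(f"{multiple:.0f}x capacity in {years} years")   # 37x capacity in 15 years
print(f"about {cagr:.0%} compound annual growth")     # about 27% compound annual growth
```

A sustained ~27% compound annual growth in single-box capacity, over a decade and a half, is the kind of figure that makes the R&D spend look like money well invested.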

To put this in perspective, a single z13 could now host almost the entire mainframe compute capacity requirements of Australia’s largest companies and government agencies.

That’s all the banks, telcos, airlines, power and water utilities, police and government departments – every mainframe in the country could today sit on a single box. Now that’s ADVANCED technology, and that is a mega cloud in anyone’s books.

MTBF & Security: still as important as ever

Then consider the fact that the Mean Time Between Failures (MTBF) of a mainframe is 40 years, and that its security credentials are unequalled.

So – with all that, why the negative sentiment towards the Big Iron? In my experience, it boils down to basically two things:

  1. Software cost
  2. A perceived skills shortage

Mainframe Software Cost

If you are running a mainframe with the z/OS operating system, software licences can get expensive. However, you have options: you can exploit innovative features that are inherent to the boxes and offload work to alternative processing engines; you can ‘cap’ your capacity to control your costs; you can introduce better software licence management techniques by exploiting third-party products like BMC’s Sub-Zero; or you can implement tailored load balancing. Better still, you can move to Linux! Yep – mainframes run large-scale Linux better than anything, so stop wasting money, improve reliability, increase security, reduce management overhead and get those Linux workloads onto a System z.
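To make the capping option concrete: IBM’s sub-capacity software charges are typically driven by the month’s peak rolling four-hour average (R4HA) of MSU consumption, so a ‘defined capacity’ soft cap on that average also caps the bill. Here is a minimal sketch of the idea in Python – the function name, sample data and the 450-MSU cap are all hypothetical illustrations, not vendor tooling:

```python
from collections import deque

def peak_r4ha(msu_samples, cap=None, interval_minutes=5):
    """Peak rolling four-hour average (R4HA) of MSU consumption.

    Sub-capacity licence charges are driven by the month's peak R4HA,
    so a 'defined capacity' soft cap on that average caps the bill.
    msu_samples: MSU readings taken every `interval_minutes`.
    cap: optional defined capacity in MSUs (hypothetical value).
    """
    window_len = (4 * 60) // interval_minutes  # samples in a 4-hour window
    window = deque(maxlen=window_len)          # oldest samples drop out
    peak = 0.0
    for msu in msu_samples:
        window.append(msu)
        r4ha = sum(window) / len(window)
        if cap is not None:
            r4ha = min(r4ha, cap)  # the soft cap limits the billable average
        peak = max(peak, r4ha)
    return peak

# A two-hour spike to 900 MSUs on a 300-MSU baseline:
samples = [300] * 48 + [900] * 24 + [300] * 48
print(peak_r4ha(samples))           # 600.0 – uncapped billable peak
print(peak_r4ha(samples, cap=450))  # 450   – capped billable peak
```

The point of the sketch: because billing follows a four-hour average rather than instantaneous usage, short workload spikes are diluted, and a cap trades some peak throughput for a predictable ceiling on the licence charge.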

Is there a mainframe skills shortage?

It is true that fewer graduates are entering or advancing in the workforce with mainframe skills. The lure of Java, WebSphere and mobile app development sounds dead sexy alongside the likes of COBOL, CICS or IMS.

The reality, though, is that the mainframe skill set is manageable. Australia has plenty of skills available; it’s how these skills are organised that is creating the challenge.

For too long, outsourcers have soaked up the vast majority of the mainframe skill set, or large corporations have cloistered their mainframe teams into dedicated silos of expertise deep within their companies. Recent developments in how highly coveted skill sets are deployed in the enterprise technology space are challenging the status quo.

With long-term outsourcing contracts now on the nose and full-time employee (FTE) head counts being challenged in the corporate sector, highly specialised technology experts are gravitating towards companies like ISI, where they can perform a range of varied and interesting work on a contract, casual, part-time or roaming basis. As a result, companies are now actively looking to shift the bulk of their operational tasks to third parties who commit to service level agreements that guarantee delivery of outcomes, thereby eliminating primary concerns around skills shortages.

Like all good business ideas, if it’s hard to do something yourself then consider shifting that task to somebody better able to do it and obligate them to deliver: make it their problem, not yours.

The Mainframe is Dead: Long Live the Mainframe!

And so the Mainframe will remain the King of Computing for a long time to come. Long live the king. The king is far from dead: it’s just some attitudes that are obsolete.

ISI has a team of experts well versed in System z, Power Systems, IBM Storage and a range of enterprise technology services. Please contact us if you would like more information.

Steven is the CEO and Chairman of ISI. He has over 25 years of senior management experience in the Asia Pacific region in the Enterprise Technology sector.

2 thoughts on “Mainframe: (Still) the King of Computing”

  1. Yes, a single dominant player is hard to imagine regarding something that would need to be this far reaching. IBM might fit the bill though, with their dedication to the mainframe and Linux, as you would probably need many small VMs operating from the mainframe serving different clients with data from a single datastore (so when a new gadget arrives it comes with a VM; you install this VM on your mainframe, it connects to the datastore and network on the mainframe, and you’re up and running, no settings necessary). It would be ironically poetic to have IBM kill both Google and Microsoft. Regarding where the hardware would be located, I’m convinced that the ideal place for it is the home. Of course an online backup is needed (preferably distributed, so your data is never available from a single provider) but I would still side with the hardware operating from the home. The home computers 10 years from now will be powerful enough for this, and as the internet gets faster this sort of streaming will be easy to have from your home, as providers would love to bill you for high-throughput VDSL2 connections. When this happens, and when providers shorten their lag, the computer spreads out across the internet. Many see this happening already with the construction of huge datacenters which will offer online apps and much of the CPU cycles needed, but this can also turn out the other way around. Instead of huge processing nodes in the center of the internet, we could just as well have a multitude of processing nodes on the outer ridge of the internet: the home connections. Large datacenters can never keep up with the demand for cycles in the long run, but the home computers will, with their huge amount of unused capacity.

  2. Hi,
    I want to know whether the mainframe will become outdated in the future, because I want to do a mainframe course. Please advise.

Comments are closed.