
May 2009

InformationWeekAnalytics.com

Informed CIO Series

Mainframes and Virtualization


5 Questions to Ask About Unification

Our InformationWeek Analytics Informed CIO series arms business technology chiefs with the questions they must ask before dropping big bucks. In this installment, we examine what part today’s mainframes—and the IT professionals who manage them—play in developing an enterprisewide unified virtualization strategy.
By Mike Healey
CONTENTS

Author’s Bio
Tech Basics
Bridge the Divide
Figure 1: Data Center Consolidation Primary Power-Saving Method
Figure 2: Most Use Tools to Manage Configuration
Your First Mainframe: Do the Math

5 Questions to Ask:
1 / How can we most effectively address organizational issues?
2 / What’s our long-term OS outlook?
3 / What are our uptime requirements and options to achieve them?
4 / What virtualization-aware monitoring and management tools do we use?
5 / What are the staffing implications?
2 May 2009 © 2009 InformationWeek, Reproduction Prohibited


Michael Healey is the chief technology officer at GreenPages Technology Solutions (GreenPages.com). He has more than 20 years’ experience in technology and software integration. Mike previously served as president of network integrator TENCorp, which was acquired by GreenPages in 2006. As lead technologist, he oversees both the application development and implementation teams. GreenPages provides technology consulting, engineering, development and support services to commercial, education and health care clients throughout the United States. Prior to founding TENCorp, Mike was an international project manager for Nixdorf Computer and a Notes consultant for Sandpoint Corp.

Mike has taught courses at MIT Lowell Institute and Northeastern University and has served on the Educational Board of Advisers for several schools and universities throughout New England. He has a BA in operations management from the University of Massachusetts Amherst and an MBA from Babson College.

Mike is a regular contributor for InformationWeek, focusing on the business challenges related to implementing technology. His work includes analysis of the SaaS market, the challenge of green IT and operational readiness related to virtualized environments.

Tech Basics
Mainframes are built around the core concept of virtualization. IBM dominates the current mainframe landscape and supports two virtualization levels. Logical Partitions (LPARs) enable IT to slice the system up into multiple virtual mainframes that communicate with one another using internal socket connections. Several core operating systems are supported, including z/OS, the most common OS for the mainframe, as well as legacy systems like z/VSE and z/TPF. LPARs can also run Linux or IBM’s hypervisor, z/VM.

Most x86 folks will find z/VM familiar territory: it lets IT run multiple virtual machines within an LPAR. Reed Mullen, IBM System z virtualization manager, says the most common configuration he sees is one or more LPARs for the traditional z/OS and associated applications, and one or more LPARs for the Linux configurations running under z/VM.
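The two-level layout Mullen describes—LPARs carving up the hardware, with z/VM inside one of them hosting further guests—can be sketched as a simple data model. This is purely illustrative: the partition names, guest counts and the rule enforced here are hypothetical, not IBM’s actual configuration interface.

```python
# Illustrative model of IBM's two virtualization levels: LPARs divide the
# physical machine, and z/VM inside an LPAR hosts nested guests.
# All names and counts below are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Guest:
    name: str
    os: str  # e.g. "Linux"

@dataclass
class LPAR:
    name: str
    os: str  # the OS (or hypervisor) the LPAR itself runs
    guests: list = field(default_factory=list)

    def add_guest(self, guest: Guest):
        # In this sketch, only a z/VM LPAR carries nested guests.
        if self.os != "z/VM":
            raise ValueError("only a z/VM LPAR hosts nested guests")
        self.guests.append(guest)

# The configuration Mullen calls most common: one LPAR for traditional
# z/OS work, one z/VM LPAR carrying the Linux guests.
prod = LPAR("PROD1", "z/OS")
linux_farm = LPAR("LNX1", "z/VM")
for i in range(3):
    linux_farm.add_guest(Guest(f"web{i}", "Linux"))

total_guests = sum(len(l.guests) for l in (prod, linux_farm))
```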

The enabler is IBM’s Integrated Facility for Linux (IFL) engine. The IFL is part technology, part change in licensing rules that lets a Linux OS run on the mainframe, either natively or on top of z/VM. The IFL has the same performance design as a system running z/OS and can access the same resources, but it’s about one-fourth the price of a traditional z/OS configuration. In addition, the price doesn’t increase as you add Linux instances. An IFL is priced based on the setup of CPUs you license it for; IT can carve the capacity up as it likes.
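The IFL licensing model above reduces to simple arithmetic: roughly a quarter of the z/OS engine price, flat regardless of how many Linux instances you pile on. The dollar figures in this sketch are placeholders, not IBM list prices.

```python
# Back-of-the-envelope look at the IFL licensing model: an IFL engine
# costs roughly one-fourth of a z/OS engine, and the price is flat no
# matter how many Linux instances it carries. Figures are placeholders.
ZOS_ENGINE_COST = 400_000                # hypothetical per-engine z/OS price
IFL_ENGINE_COST = ZOS_ENGINE_COST / 4    # "about one-fourth the price"

def ifl_cost(engines: int, linux_instances: int) -> float:
    # Priced on the engines you license; the instance count is irrelevant.
    return engines * IFL_ENGINE_COST

# Ten Linux instances or a hundred: same bill for the same engines.
assert ifl_cost(2, 10) == ifl_cost(2, 100)
```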

These aren’t new features—IBM introduced the IFL engine way back in 2000, probably to counter the “mainframes are dead” media rhetoric being slung around at the time. Anyone investing in mainframes was thought to have a severe case of “Concorde Effect” sunk-cost psychosis. To combat this perception, IBM announced the program as part of a broader Linux initiative it began in the late ’90s. The company has continued to expand its support for both IFL and z/VM, adding Sun’s OpenSolaris to the list of supported operating systems late last year.

Is it working? Over 80% of Fortune 1,000 organizations still have mainframes in place. IBM owns about 90% of this market, with a few smaller players such as Fujitsu and Hitachi supporting mainly legacy clients. And according to IBM, 25% of the world’s mainframes are now running Linux, accounting for 15% of the overall workload (measured in MIPS).

Back in the distributed world, there’s been a groundswell of activity around virtualizing x86 servers using tools from VMware, Citrix and Sun to maximize ROI, reduce overall server footprint and cut associated expenses. Meanwhile, mainframes have chugged quietly along, expanding their footprint in the enterprise. The main growth hasn’t been in legacy applications, but in Linux and Java apps running virtualized on the mainframe.

Unfortunately, these two groups haven’t always compared notes on their projects: A recent survey by Unisphere Research showed only 22% of organizations are looking at their virtualization adoption plans at the enterprise level. Most are doing it within departments and other silos.

Now, we’re certainly not saying that a company with hundreds of x86 boxes should replace them with a mainframe to make virtualization more efficient. However, if an organization has a mainframe as part of its infrastructure, it should certainly look at the two platforms together when considering its long-term virtualization strategy. That sounds obvious, but these are typically separate groups. CIOs must take the lead in breaking down the walls and reminding x86 jockeys that mainframes originated the concept of virtualization; as virtualization continues to transform the way we run our networks, mainframe-centric technology has great value as part of their core system designs.

It’s time to think about a unified enterprise virtualization strategy.

Bridge the Divide
Does your data center house hundreds of virtualized x86 boxes, yet you’re still maxing out on cooling capacity and space while facing increasing demand? If yours is one of the over 80% of Fortune 1,000 companies that run big iron, consider that not only do mainframes provide a rock-solid platform for virtualization, they can yield consolidation ratios that dwarf the more popular x86 virtualization path.
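The consolidation arithmetic behind that claim is easy to sketch. The ratios below are illustrative assumptions only (virtualized x86 hosts commonly carry tens of VMs, while a single IFL engine can carry far more lightweight Linux guests), not benchmark results.

```python
# Rough consolidation arithmetic for the scenario above. The per-host
# ratios are assumptions for illustration, not measured benchmarks.
def hosts_needed(vm_count: int, vms_per_host: int) -> int:
    return -(-vm_count // vms_per_host)  # ceiling division

VMS = 300              # virtual servers to place
X86_RATIO = 15         # assumed VMs per virtualized x86 host
MAINFRAME_RATIO = 100  # assumed Linux guests per IFL engine

x86_hosts = hosts_needed(VMS, X86_RATIO)          # 20 boxes to power and cool
ifl_engines = hosts_needed(VMS, MAINFRAME_RATIO)  # 3 engines in one frame
```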

Today’s mainframes typically host many of the back-end systems we’ve come to take for granted over the years. The newer, sexier stuff—ERP and BI, e-commerce and Web apps, e-mail—has tended to grow in the distributed world. But when your goal is virtualization nirvana, mainframe teams are the Zen masters. They’ve been provisioning and dicing up their capacity for years, even before the rise of z/VM and Linux. In this realm, best practices around operational policies and procedures, capacity planning discipline and resource allocation have matured through time, experience and necessity.

So as you expand your virtualization practice within distributed systems and face the challenges around virtual server sprawl, operational control and provisioning, it just makes sense to tap mainframe knowledge and management resources.

Beyond expertise, should mainframes be part of your distributed virtualization plan? Short answer: yes. They offer a potentially more efficient and compact way to virtualize while leveraging existing investments. Whether you use the mainframe’s virtualization capabilities will depend on the specifics of your core applications, performance requirements, DR plan and the staffing options you have available.

Speaking of staffing, in all likelihood, the mainframe team and your distributed networking/x86 group are islands unto themselves. In this report we’ll discuss best practices to overcome the divide and address other considerations involved in developing a unified virtualization strategy.

1 / How can we most effectively address organizational issues?


“We’d welcome the opportunity to work closer with the x86 group,” says one mainframe engineer at a pharmaceutical manufacturer, who asked not to be identified. “However, I expect that there are serious misgivings on both sides about the marriage of the architectures. This ‘social and political divide’ needs to be mitigated or mediated by management, not us.”

Mainframe and x86 teams must agree on SLAs and decide who has rights for budgeting, provisioning and security decisions. This is the biggest challenge when developing a unified virtualization strategy. Make no mistake—these are two separate professional communities. While interviewing business IT professionals for this report, I was often asked with a mix of disdain, skepticism and confusion, “Are you a mainframe guy?” Like most of us who’ve been in this industry for a while, I’m “mainframe reformed.” Anyone with 20+ years in technology started with, or was heavily influenced by, midrange and mainframe systems. They were IT. When the old order was usurped by the distributed world, most of us moved with the industry, splitting mainframes and their keepers into a separate ecosystem.

Even vendors reflect this. Major mainframe players like BMC, CA, Compuware and IBM have

Figure 1: Data Center Consolidation Primary Power-Saving Method

Please rate the likelihood that your organization will take the following measures to save power.

Consolidate IT equipment through virtualization or other efforts: 4.2
Rearrange IT equipment in data center for optimal cooling use: 3.8
Upgrade to high-efficiency UPSes: 2.9
Rearrange CRAC units for optimal cooling: 2.9
Upgrade to high-efficiency PDUs: 2.9
Use self-cooled racks for high-output equipment: 2.7
Upgrade to high-efficiency chillers: 2.6
Upgrade to high-efficiency CRAC units: 2.5
Employ row-based cooling systems for high-demand systems: 2.5
Enclose hot aisles to eliminate air mixing: 2.4

Note: Mean averages based on a five-point scale where 1 is “won’t do it” and 5 is “planned/already doing it”
Data: InformationWeek Analytics Data Center Survey of 279 business technology professionals
evolved with the industry and offer separate suites of distributed products. Most have separate and distinct staffing, development and channel programs matching their customers’ organizational split identities.

Should CIOs take this on? Given the economy, the time may be right.

“There are a lot of forces in play that will bring teams together: technology advances, performance benefits and the ability to reduce costs,” says Reed Mullen, IBM System z virtualization manager. “Linux on the mainframe is the perfect bridge to link the groups.” Mullen cites the case of an insurance company that migrated its growing set of Linux apps to the mainframe. The justification was more about letting the Windows team focus on its core systems, and less about the virtualization platform itself. The Linux-based apps weren’t a core strength, and since Windows doesn’t have a home on the mainframe, it was a more natural way to focus the teams.

Expect a lot of debate and handwringing here, especially when it comes to bridging the gap
between different SLAs and deciding who will control the actual provisioning process. But the
stakes are too high to let divisiveness prevail.

2 / What’s our long-term OS outlook?


The core OS to focus on for mainframe virtualization is Linux, with Novell SuSE and Red Hat leading the pack. Linux has a nine-year track record on big iron and multiple options for deployment. I/O volume and integration between Linux and legacy mainframe apps are two of the main considerations as you look at migrating functions to the mainframe as part of a unified virtualization strategy. Running demanding, critical apps on the mainframe lets you leverage the speed benefits of LPAR connectivity and resource sharing without the traditional bottlenecks incurred if Linux hosts are on a separate box.

Going up the stack, focus on your critical business functions and applications to analyze what will provide the best use of mainframe capacity. An Apache server hosting a few thousand users is not a great use of your mainframe. A cluster of servers that deliver the front end of an e-commerce system linked to your back-end DB2 application? Now we’re talking.

Mainframe-centric development is continuing for other platforms as well. Novell recently announced its SuSE Linux v11 “mono extension” that lets users run .NET-based applications
on System z mainframes. Sun and IBM announced recently that OpenSolaris is supported under z/VM. But these are test and pilot options at present. The core ROI will be in larger-scale Linux applications that have heavy I/O requirements or require communication with other mainframe-based systems.

Any Linux engine that ties back to the mainframe database is an easy target—remember, the
LPAR essentially has its own internal network and direct connections to other LPARs, providing
higher-speed throughput.

One note: IT pros love to tinker and figure, “Maybe old reliable mainframes will bring some stability to Windows.” Nice idea, but let’s clear this up once and for all: Windows doesn’t run on the mainframe in production mode—never has, and probably won’t in our lifetimes. Mainframe automation vendor Mantissa did lay out a framework for Windows support at the mainframe SHARE conference this year, but the product is targeted at development groups. IBM has publicly stated it has no plans in this regard.

3 / What are our uptime requirements and options to achieve them?


A core reason enterprises look to mainframes is reliability. Every machine has hiccups, but big
iron has an enviable track record and battle scars from multiple decades of system development
and failure. In short, the more painful downtime is, the more appealing the mainframe becomes.

However, don’t necessarily assume you’re getting an apples-to-apples model for uptime when moving VMs from distributed systems to the mainframe. For example, two of VMware’s neatest tricks are VMotion and High Availability: the ability to migrate virtual machines between hosts as conditions dictate, or to restart them in the event of a failure. Citrix Xen and others have similar features, but the option isn’t in z/VM.

Why not? IBM touts the reliability stats of the mainframe itself, and there are failover processes and systems core to the z Series. Most organizations with mainframes in operation have multiple systems. However, for organizations with a single mainframe, the capacity-on-demand option is a smart way to plan failover. The program is reminiscent of the days when midrange systems shipped with their total drive capacity installed; customers that wanted an “upgrade” started paying once an engineer formatted the drives. Capacity on demand works the same way: The mainframe ships with processors installed, but not active. They can be activated at

any time, either in an emergency or as part of planned expansion. You pay a discounted price
for the hardware, then settle up as you increase capacity. Try getting Dell to ship you a few
spare servers to have on hand, just in case.
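The capacity-on-demand mechanics described above can be modeled in a few lines: processors ship installed but dark at a discounted price, and you settle up per processor when you switch one on. All prices and counts in this sketch are hypothetical.

```python
# Sketch of the capacity-on-demand model: processors ship installed but
# inactive at a discount; you pay the balance as you activate them.
# Prices and counts are hypothetical, not vendor figures.
class CapacityOnDemand:
    def __init__(self, installed: int, active: int,
                 upfront_per_cpu: float, activation_fee: float):
        self.installed = installed
        self.active = active
        self.activation_fee = activation_fee
        self.paid = installed * upfront_per_cpu  # discounted hardware price

    def activate(self, n: int = 1):
        if self.active + n > self.installed:
            raise ValueError("no dark capacity left")
        self.active += n
        self.paid += n * self.activation_fee     # settle up as you grow

box = CapacityOnDemand(installed=8, active=4,
                       upfront_per_cpu=10_000, activation_fee=25_000)
box.activate(2)  # emergency failover or planned expansion
```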

4 / What virtualization-aware monitoring and management tools do we use?


One of the biggest challenges plaguing virtualized distributed networks is finding a set of tools
that can help IT monitor and manage the entire system—a true single pane of glass. Adding
mainframes to the mix won’t make this easier; there’s no single set of tools that covers all levels
and all platforms. However, there are robust applications on the mainframe side that can help
wrangle a Linux environment.

While there are fewer vendors with tools for mainframe management than you’ll find in the distributed space, these vendors’ products are mature and fairly deep in terms of reporting and functionality. For this you can thank the cost of running a mainframe—it’s such an expensive asset that these systems have always been under a cost-control microscope. In response, mainframe pros want all the meters they can get to make sure their systems are running properly,

Figure 2: Most Use Tools to Manage Configuration

Are you using tools to manage configuration and reliability of the applications running on your virtual environment?

Yes: 66%
No: 34%

Base: 226 respondents using virtualization technology
Data: InformationWeek Analytics Virtualization Survey of 348 business technology professionals
and to justify their existence. The result is a very established set of applications from vendors like CA, Compuware, IBM and others. Smaller vendors like Velocity Software (which assists with Linux provisioning) have grown with the market.

The distributed side may grouse about the price of these virtualization management and monitoring tools, but reality is now rearing its ugly head in their environments. Enterprises are currently scrambling for manufacturer and third-party management suites that will help them get control over their x86 virtualization infrastructures.

A nice benefit is these larger vendors have naturally migrated to the distributed world over the
years. While none has made the full leap to integrate all tools across all platforms, there’s a
keen focus on application performance monitoring and management, which helps force some
level of interoperability and standard performance metrics. We’d bet on mainframe vendors
incorporating x86 virtualization management functionality before the inverse scenario.

For now, you’ll likely need a suite of tools to manage the distributed network as well as a
smaller set for the mainframe. The key is to make sure any tool you have on either platform
can be recreated or migrated to the other side.

5 / What are the staffing implications?


CA issued a widely cited survey last year, reporting that 72% of organizations have mainframe pros eligible for retirement. This created a groundswell of questions, reports and hand-wringing about the future of big iron. Shocking? Not really—the graying of the baby boom generation is hitting the ranks of teachers and civil engineers, too. What’s proven in IT is that if there’s demand and associated pay, folks will migrate. Think about it: There were no x86 virtualization engineers six years ago; now you can’t swing a cat without hitting one. Advanced management tools and the rise of Linux on the mainframe take some heat off the need for z/OS and/or COBOL staff; we’re talking hardware engineers, admins and, yes, virtualization engineers.

So how hard is it to migrate and retrain staff? We’d equate it to moving from Novell NetWare
to Microsoft Active Directory. VMware or Citrix Xen virtualization engineers require a unique
skill set that combines OS, infrastructure and hardware chops, regardless of platform.

In theory, these people can make the migration over to working with big iron virtualization
relatively painlessly.

What if you need someone now? A quick check of the rates for a senior virtualization engineer and a senior mainframe engineer in New York and Los Angeles reveals that the average consulting fee for both is $175 to $200 per hour, and people are available. Finally, once the IT pro is in the core OS or application, the underlying hardware doesn’t matter. Vince Re, senior VP at CA, described an interesting experiment with its engineering teams. “We set up two workstations, one running mainframe Linux, one running x86,” Re says. “Challenged them to tell which was which. Most couldn’t tell the difference.”

Your First Mainframe: Do the Math


Most of the growth in mainframe virtualization has come from organizations that already had a mainframe infrastructure in place—easy pickings since more than 80% of the world’s government and corporate data is stored on big iron. Once these organizations work through the technical, operational and organizational challenges, they can leverage the investment and gain new performance.

But what if you don’t have a mainframe but want to go all in on virtualization?

This is where it gets a little tricky. Let’s do the math. The starting point for mainframe hardware is $100,000; once you add software, tools and backup, you’ll be over $250,000. Debates rage as to the comparable number of distributed x86 servers, but we’ll set the bar at 100. Assume a $2,500 average cost per virtual machine in an enterprise-level configuration, including high-end servers, tools, backup system, licensing and infrastructure. Your 100 Linux servers could be added to an enterprise-level virtual farm for, you guessed it, $250,000.

We love fuzzy math.
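The back-of-the-envelope comparison above, made explicit. Every figure here comes straight from the text: a mainframe loaded cost of about $250,000 versus roughly $2,500 per VM on the distributed side.

```python
# The "fuzzy math" from the text: where does the mainframe's loaded cost
# break even against per-VM costs on the distributed side?
MAINFRAME_LOADED_COST = 250_000  # hardware + software + tools + backup
COST_PER_X86_VM = 2_500          # enterprise-grade, all-in

def breakeven_vms() -> int:
    # Number of x86-hosted VMs at which the two paths cost the same.
    return MAINFRAME_LOADED_COST // COST_PER_X86_VM

# The bar the text sets: about 100 virtual servers either way.
assert breakeven_vms() == 100
```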

The truth is, in this case, we’re changing “lies, damn lies and statistics” to “lies, damn lies and cost calculators.” Basic ROI and costing models are easy when you’re looking at moving a physical server and virtualizing it. By the time you’re considering a mainframe, you’ve moved beyond basic ROI models.

Factor in the considerations we’ve discussed and work with your preferred vendors to see what a truly unified virtualization infrastructure will cost. Start with the core applications and functions on the mainframe. Can any legacy applications be phased out? Next, do the same with the Windows farm. Can any of these applications be eliminated? If not, their costs exist on either path and drop out of the comparison.

Now you’re looking at the delta cost of adding particular servers on one platform or the other. The grand arguments of power, cooling, green IT and the ability to turn water into wine are no longer on the table. You need both platforms, so what’s the best place to add another Linux server?

Suddenly, you’ve got an easier list for comparison:

> Application performance needs
> Uptime requirements
> Incremental licensing costs, including OSes, management tools and backup software
> DR plan
> Staffing capabilities
> Organizational changes, including potential cost savings with a more consolidated approach
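One way to structure the delta-cost comparison in the list above is to tally only the incremental cost of the next Linux server on each platform, since sunk platform costs are off the table. Every number in this sketch is a placeholder to be replaced with your vendors’ actual quotes.

```python
# Delta-cost comparison for the next Linux server: incremental costs
# only, per the checklist above. All dollar figures are placeholders.
FACTORS = ["os_license", "mgmt_tools", "backup", "staffing", "dr"]

def delta_cost(costs: dict) -> float:
    # Sum only the incremental factors; sunk platform costs are excluded.
    return sum(costs.get(f, 0) for f in FACTORS)

x86_next = {"os_license": 800, "mgmt_tools": 1_200, "backup": 500,
            "staffing": 2_000, "dr": 900}
mainframe_next = {"os_license": 800, "mgmt_tools": 600, "backup": 300,
                  "staffing": 2_500, "dr": 400}

cheaper = min(("x86", delta_cost(x86_next)),
              ("mainframe", delta_cost(mainframe_next)),
              key=lambda t: t[1])
```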

Both the x86 and mainframe platforms have proven their value over the years; neither is going anywhere. CIOs need to move beyond the either/or argument to find the right investment level for each platform, then take the lead in bridging the operational, organizational and technical differences that exist between these teams.
