
G00214402

Hype Cycle for IT Operations Management, 2011
Published: 18 July 2011

Analyst(s): Milind Govekar, Patricia Adams

Adoption of new technology and the increasing complexity of the IT environment continue to attract IT operations vendors that provide management technology. Use this Hype Cycle to manage your expectations regarding these management technologies and processes.

Table of Contents

Analysis
    What You Need to Know
    The Hype Cycle
        Changes and Updates
    The Priority Matrix
    Off The Hype Cycle
    On the Rise
        DevOps
        IT Operations Analytics
        Social IT Management
        Cloud Management Platforms
        Behavior Learning Engines
        Application Release Automation
        SaaS Tools for IT Operations
        Service Billing
        Release Governance Tools
        IT Workload Automation Broker Tools
        Workspace Virtualization
    At the Peak
        Capacity Planning and Management Tools
        IT Financial Management Tools
        IT Service Portfolio Management and IT Service Catalog Tools
        Open-Source IT Operations Tools
        VM Energy Management Tools
        IT Process Automation Tools
        Application Performance Monitoring
        COBIT
    Sliding Into the Trough
        IT Service View CMDB
        Real-Time Infrastructure
        IT Service Dependency Mapping
        Mobile Device Management
        Business Service Management Tools
        Configuration Auditing
        Advanced Server Energy Monitoring Tools
        Network Configuration and Change Management Tools
        Server Provisioning and Configuration Management
        ITIL
        Hosted Virtual Desktops
        PC Application Streaming
        IT Change Management Tools
        IT Asset Management Tools
        PC Application Virtualization
        Service-Level Reporting Tools
    Climbing the Slope
        IT Event Correlation and Analysis Tools
        Network Performance Management Tools
        IT Service Desk Tools
        PC Configuration Life Cycle Management
    Entering the Plateau
        Network Fault-Monitoring Tools
        Job-Scheduling Tools
    Appendixes
        Hype Cycle Phases, Benefit Ratings and Maturity Levels
Recommended Reading

List of Tables

Table 1. Hype Cycle Phases
Table 2. Benefit Ratings
Table 3. Maturity Levels

List of Figures

Figure 1. Hype Cycle for IT Operations Management, 2011
Figure 2. Priority Matrix for IT Operations Management, 2011
Figure 3. Hype Cycle for IT Operations Management, 2010

Analysis
What You Need to Know
This document was revised on 18 August 2011. For more information, see the Corrections page on
gartner.com.
As enterprises continue to increase adoption of dynamic technologies and styles of computing,
such as virtualization and cloud computing, the IT operations organization faces several challenges:
implementing, administering and supporting these new technologies to deliver business value, and
continuing to manage the complex IT environment. Consequently, the IT operations organization
plays a key role in becoming a trusted service provider to the business. This, however, is a journey
of service orientation and continuous improvement, which will result in favorable business
outcomes.
The expectations of the business in terms of agility, low fixed costs and service orientation have
risen due to increased visibility of offerings, such as cloud computing-based services. The promise
of new technology to deliver these business expectations continues; therefore, the hype associated
with IT operations technology used to ensure service quality, agility and customer satisfaction
continues. Thus, making the right choices and investments in this technology becomes important.
This Hype Cycle provides information and advice on the most important IT operations tools,
technologies and process frameworks, as well as their level of visibility and market adoption. Use
this Hype Cycle to review your IT operations portfolio, and to update expectations for future
investments, relative to your organization's desire to innovate and its willingness to assume risk.

The Hype Cycle


As the global economy has shown signs of recovery during the last 12 months, investments in IT
reflect this change. Cost optimization continues to be a key focus for many enterprises; yet there is
a strong drive to make incremental investments in innovation and best practices, especially in the area of IT service management. According to Gartner CIO polling, IT budgets have increased very modestly; therefore, IT ops and cloud management platforms are seen as opportunities to reduce cost through automation, thus enabling IT to invest in growth and transformation projects. Achieving this, in conjunction with reducing their spending, helps IT ops to "run the business."

Changes and Updates


Along with economic changes, IT infrastructure and operations has experienced significant disruption, which is represented in this Hype Cycle. IT organizations have continued to build upon their virtualization and cloud computing strategies. The value proposition of, and the risks associated with, private, hybrid and external cloud are gaining traction in some organizations. The disruption is occurring in several areas, most notably the emergence of the DevOps arena and its intersection with release management. DevOps represents a fundamental shift in thinking that recognizes the importance and intersection of application development with IT ops and the production environment. Just as silos were broken down with the adoption of IT service management that crossed IT operations disciplines, the shift is now at a higher level, i.e., across IT application development and IT operations, to build a more agile and responsive IT organization. In conjunction with cloud and virtualization, this shift is resulting in new approaches that make IT more business-aligned and remove many roadblocks. However, this is a cultural change that requires employees to be willing and able to accept, adapt, implement and build upon these new approaches.
Other changes that have occurred on the 2011 Hype Cycle are in the process and tool areas. With
respect to process, ITIL version 2 (ITIL V2) was retired from the Hype Cycle, as ITIL V3 began to outweigh ITIL V2 in relevance in some of the process design guidelines. The adoption of ITIL V3 has
grown since its introduction four years ago and has almost become mainstream, with a large
number of Gartner clients having adopted at least one component of the most recent books. Thus,
we have just one combined entry (ITIL) for the two ITIL versions.
In 2011, the application performance monitoring (APM) entry became a consolidation of various related technologies, such as end-user performance monitoring, application transaction profiling and application management, which are typically part of the APM market. We have also added behavior learning engines (BLEs). When used with other IT operations tools, BLEs improve the proactivity and predictability of the IT operations environment. Furthermore, we have combined virtual resource capacity planning and resource capacity planning tools into capacity planning and management tools, which reflects the market and the evolution of tools and technologies toward an increasingly holistic and consolidated approach.
As IT operations increasingly is run like a business, IT financial management becomes important,
beyond just chargeback. Thus, we have introduced IT financial management tools to match this
change and subsume chargeback tools. Furthermore, we have added service billing, because it also
is important in this financial context. We have renamed run book automation to IT process automation to reflect the precise process-to-process integration and the associated IT operations tool integrations these products provide. We also have renamed network monitoring tools to network fault-monitoring tools to more accurately describe their functions.

To capture the "wisdom of IT operations specialists," social IT management tools that enable and
capture collaboration have emerged, particularly in the IT service desk area, and thus are on this
Hype Cycle. Energy management is always a concern for IT operations organizations and, therefore,
virtual machine (VM) energy management and advanced server energy monitoring tools, for use
specifically within data centers, have been positioned here. PC energy management is already
represented in the PC configuration life cycle management tools. Last, but not least, we are seeing the emergence of IT operations analytics tools, which more-advanced and mature IT operations organizations are beginning to use to combine financial metrics and business value metrics, forming the platform for business intelligence (BI) for IT.
While the recovery from the recession is still under way, IT infrastructure and operations continues
to be a big part of IT budgets. Well-run IT operations organizations take a service management view
across their technology silos, and strive for excellence and continuous improvement. Thus, to
manage IT operations like a business requires a strong combination of business management
(added to model the ITScore dimensions), organization, processes and tools. This journey toward
business alignment, while managing cost, needs to be managed through a methodical, step-by-step approach. Gartner's ITScore for infrastructure and operations is a maturity model that has been developed to provide this guidance.
Virtualization is ubiquitous in production environments to improve agility and resource utilization,
and to lower costs. Cloud computing has the potential for improving agility, and lowering capital
expenditure (capex) and fixed costs in the IT environment, as well as lowering operating expenditure
(opex) through automation with private clouds. However, both these approaches are increasing the
complexity of the IT environment. Managing end-to-end service delivery in this dynamic
environment is challenging, and has led to new management technology entering the marketplace,
with the promise to manage these dynamic, yet complex, environments. Running IT operations as a
business means having a balanced and adaptable combination of organization, processes and
technologies. Therefore, we position organizational governance methodologies (such as COBIT) and
process frameworks (such as ITIL) on the Hype Cycle and look at a wide range of IT operations
technologies, from new technologies (such as IT operations analytics and cloud management
platforms) to mature technologies (such as end-user monitoring and job-scheduling tools).
Most of the technologies and frameworks have moved gradually along the Hype Cycle. IT asset
management tools have dropped into the Trough of Disillusionment, because they have not
delivered expected benefits. Many of the technologies are climbing the Slope of Enlightenment, as
opposed to landing on the Plateau of Productivity, despite having 20% to 50%, or even more than 50%, adoption. The reason is that they have not fully delivered the benefits expected by users
who have implemented them.
This Hype Cycle should benefit most adoption profiles (early adopters of technology, mainstream,
etc.). For example, enterprises that are leading adopters of technology should begin testing
technologies that are still early in the Hype Cycle. However, risk-averse clients may delay adoption
of these technologies. The earlier or higher the technology is positioned on the Hype Cycle, the
higher the expectations and marketing hype; therefore, manage down your expectations and
implement specific plans to mitigate any risk from using that technology.

We note three important considerations for using this Hype Cycle:

- Creating a business case for new technologies driven by ROI is important for organizations with a low tolerance for risk; whereas highly innovative organizations that have increased their IT operations budgets likely will gain a competitive advantage from a technology's benefits.

- Innovative technologies often come from smaller vendors with questionable viability. It is likely that these vendors will be acquired, exit the market or go out of business, so plan carefully.

- While budget constraints are slowly easing, organizations should consider the risks they're willing to take with new, unproven technologies, as well as the timing of their adoption. Weigh risks against needs and the technology's potential benefits.

Figure 1 depicts technologies on the Hype Cycle for IT Operations Management, 2011.

Figure 1. Hype Cycle for IT Operations Management, 2011

[Figure: Hype Cycle curve plotting expectations against time, positioning the technologies covered in this document along the phases Technology Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment and Plateau of Productivity. Legend (years to mainstream adoption): less than 2 years; 2 to 5 years; 5 to 10 years; more than 10 years; obsolete before plateau. As of July 2011.]

Source: Gartner (July 2011)


The Priority Matrix


The Priority Matrix maps each technology's and process's time to maturity on a grid, in an easy-to-read format that answers two questions:

- How much value will an organization get from a technology?

- When will the technology be mature enough to provide this value?

However, technology and processes have to be in lock-step to achieve full benefits; thus, we also
have process frameworks, such as ITIL and COBIT, on this Hype Cycle.
Virtualization, software as a service (SaaS) and cloud computing continue to broaden the service
delivery channels for IT operations. Business users are demanding more transparency for the
services they receive, but they also want improved service levels that provide acceptable availability
and performance. They are looking to understand the fixed and variable costs that form the basis of
the service they want and receive. They are also looking at data security, increased service agility
and responsiveness. Many business customers have circumvented IT to acquire public cloud
services, which has led many IT organizations to invest in private cloud services for some of their
most-used and highly standardized sets of services.
IT organizations often have sought a harmonious mix of organization, processes and technology that will deliver IT operational excellence to run IT operations like a business. There are a few truly
transformational technologies and processes, such as DevOps, ITIL and real-time infrastructure
(RTI). Most technologies provide incremental business value by lowering the total cost of ownership
(TCO), improving quality of service (QoS) and reducing business risks.
Fairly mature technologies, such as job-scheduling tools and network fault-monitoring tools, enable IT operations organizations to keep IT production workloads running through continuous automation and by lowering network mean time to repair (MTTR). However, the complexity of IT
operations environments continues to rise, as the implementation and adoption of service-oriented
architecture (SOA), Web services, virtualization technologies and cloud computing increase. This
presents challenges in such areas as IT financial management and improving IT operations
proactive responses to deliver business benefits.
Meanwhile, due to cost pressures, open-source visibility has increased in IT operations
management. These tools provide basic monitoring capabilities, and most of the implementation of
automated actions is done by script writing, resulting in fairly high maintenance efforts and longer
implementation times. The focus of these tools has changed from monitoring individual
infrastructure components, including networks and servers, to monitoring activities across
infrastructure components, from an end-to-end IT services perspective. Furthermore, these tools
have evolved to automate processes such as incident management. Such tools have the potential
to lower the license costs of commercial tools, but pure-play open-source tools may increase the
total cost of support.
IT financial management, along with IT asset management, and automation technologies will take
center stage through 2011 as IT operations' quest for continuous improvement and lower costs
continues. The focus on the cost of data will provide cost transparency for costs associated with the service portfolio, and we expect interest in service catalog tools and APM tools to rise through 2011. The integration of IT
operations tools is continuing at the data level, using technologies such as the configuration
management database (CMDB), and at the process level, using technologies such as IT workload
automation broker and IT process automation (ITPA).
Overall, there is no technology on this Hype Cycle that will mature in more than 10 years, and there
is only one technology (social IT management) that has relatively low benefit, but this could change
in the coming years as it matures (see Figure 2). The advantages of implementing IT operations management technologies continue to be lowering the TCO of managing the complex IT environment, improving the quality of service and lowering business risk.


Figure 2. Priority Matrix for IT Operations Management, 2011

[Figure: Priority Matrix plotting benefit (transformational, high, moderate, low) against years to mainstream adoption (less than 2 years; 2 to 5 years; 5 to 10 years; more than 10 years). Entries by benefit level:

Transformational: DevOps; ITIL; Real-Time Infrastructure.

High: Application Performance Monitoring; Behavior Learning Engines; Business Service Management Tools; Capacity Planning and Management Tools; Cloud Management Platforms; Configuration Auditing; Hosted Virtual Desktops; IT Change Management Tools; IT Financial Management Tools; IT Operations Analytics; IT Process Automation Tools; IT Service Dependency Mapping; IT Service Desk Tools; IT Service Portfolio Management and IT Service Catalog Tools; IT Service View CMDB; IT Workload Automation Broker Tools; Open-Source IT Operations Tools; PC Application Virtualization; Server Provisioning and Configuration Management; Service Billing; VM Energy Management Tools; Workspace Virtualization.

Moderate: Advanced Server Energy Monitoring Tools; Application Release Automation; COBIT; IT Asset Management Tools; IT Event Correlation and Analysis Tools; Job-Scheduling Tools; Mobile Device Management; Network Configuration and Change Management Tools; Network Fault-Monitoring Tools; Network Performance Management Tools; PC Application Streaming; PC Configuration Life Cycle Management; Release Governance Tools; SaaS Tools for IT Operations; Service-Level Reporting Tools.

Low: Social IT Management.]

As of July 2011
Source: Gartner (July 2011)


Off The Hype Cycle


Some of the technologies on this Hype Cycle have been subsumed or replaced by other technologies. For example, virtual server resource capacity planning tools and resource capacity planning tools have been combined into one entry called capacity planning and management tools; chargeback tools are now part of IT financial management tools; end-user monitoring tools, application management tools and application transaction profiling tools are now part of APM tools; ITIL V2 and ITIL V3 process frameworks have been combined into the ITIL entry; and mobile service-level management tools and business consulting services are off the Hype Cycle, due to their lack of direct relevance to the IT operations area.

On the Rise
DevOps
Analysis By: Cameron Haight; Ronni J. Colville; Jim Duggan
Definition: Gartner's working definition of DevOps is "an IT service delivery approach rooted in agile philosophy, with an emphasis on business outcomes, not process orthodoxy." The DevOps philosophy (if not the term itself) was born primarily from the activities of cloud service providers and Web 2.0 adopters as they worked to address scale-out problems due to increasing online service adoption. DevOps is bottom-up-based, with roots in the Agile Manifesto and its guiding principles. Because it doesn't have a concrete set of mandates or standards, or a known framework (e.g., ITIL, CMMI), it is subject to a more liberal interpretation.
In Gartner's view, DevOps has two main focuses. First is the notion of a DevOps "culture," which
seeks to establish a trust-based foundation between development and operations teams. In
practice, this is often centered on the release management process (i.e., the managed delivery of
code into production), as this can be a source of conflict between these two groups often due to
differing objectives. The second is the leveraging of the concept of "infrastructure as code,"
whereby the increasing programmability of today's data centers provides IT the ability to be
more agile in response to changing business demands. Here, again, the release management
process is often targeted through the increasing adoption of automation to improve overall
application life cycle management (ALM). Practices like continuous integration and automated
regression testing should also be mastered to increase release frequency, while maintaining service
levels.
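
As a rough illustration of the "infrastructure as code" idea, the sketch below declares an environment as version-controllable data and converges the running estate toward it. This is a minimal sketch, not any particular product's interface: the tier names, image labels and provision/deprovision routines are hypothetical stand-ins for a real configuration management or cloud API.

    # Minimal sketch of "infrastructure as code": the environment is declared
    # as data (kept in version control), and a routine converges the running
    # estate toward it. All names, images and API calls are hypothetical.
    DESIRED_STATE = {
        "web": {"image": "web-baseline-v42", "count": 3},
        "app": {"image": "app-baseline-v17", "count": 2},
    }

    def provision(tier, spec):
        # Stand-in for a real configuration management or cloud API call.
        print("creating %s instance from %s" % (tier, spec["image"]))
        return {"tier": tier, "image": spec["image"]}

    def deprovision(instance):
        print("retiring %s instance" % instance["tier"])

    def converge(state, inventory):
        # Idempotent: brings each tier to its declared instance count.
        for tier, spec in state.items():
            running = inventory.setdefault(tier, [])
            while len(running) < spec["count"]:   # scale up to declared count
                running.append(provision(tier, spec))
            while len(running) > spec["count"]:   # retire any excess
                deprovision(running.pop())

    converge(DESIRED_STATE, {})

Because the routine is idempotent, rerunning it against an already-correct inventory changes nothing, which is what makes such declared definitions safe to keep under version control and apply repeatedly.
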
Position and Adoption Speed Justification: While DevOps has its roots in agile methodologies,
and, from that perspective, is not totally new, its adoption within traditional enterprises is still very
limited, and, hence, the primary reason for our positioning. For IT organizations, the early focus is
on streamlining release deployments across the application life cycle from development through to
production. Tools are emerging that address building out a consistent application or service model
to reduce the risks stemming from customized scripting while improving deployment success due
to more-predictable configurations. The adoption of these tools is not usually associated with
development or production support staff per se, but rather with groups that "straddle" both
development and production, typically requiring higher organizational maturity.


DevOps does not preclude the use of other frameworks or methodologies, such as ITIL, and, in fact,
the potential exists to incorporate some of these other best-practice approaches to enhance overall
service delivery. Enterprises that are adopting a DevOps approach often begin with one process
that can span both development and operations. Release management, while not mature in its
adoption, is becoming the pivotal starting point for many DevOps projects.
User Advice: While there is growing hype about DevOps, potential practitioners need to know that
the successful adoption or incorporation of this approach is contingent on an organizational philosophy shift, something that is not easy to achieve. Because DevOps is not prescriptive, it will likely result in a variety of different manifestations, making it more difficult to know whether one is "doing" DevOps. However, this lack of a formal process framework should not prevent IT organizations from developing their own repeatable processes that can give them both agility and control. Because DevOps is emerging in terms of definition and practice, IT organizations should approach it as a set of guiding principles, not as process dogma. Select a project involving development and operations teams to test the fit of a DevOps-based approach in your enterprise. Often, this is aligned with one application environment. If adopted, consider expanding DevOps to incorporate technical architecture. At a minimum, examine activities along the existing developer-to-operations continuum, and experiment with the adoption of more-agile communications processes and patterns to improve production deployments.
Business Impact: DevOps is focused on improving business outcomes via the adoption of agile methodologies. While agility often equates to speed (and faster time to market), there is a somewhat paradoxical impact as well: smaller, more-frequent updates to production can also work to improve overall stability and control, thus reducing risk.
Benefit Rating: Transformational
Market Penetration: Less than 1% of target audience
Maturity: Emerging

IT Operations Analytics
Analysis By: Milind Govekar; David M. Coyle
Definition: IT operations analytics tools enable CIOs and senior IT operations managers to monitor
their "business" operational data and metrics. The tools are similar to a business intelligence
platform utilized by business unit managers to drive business performance. IT operations analytics
tools provide users with the capabilities to improve efficiency, optimize IT investments, correlate trends,
and understand and maximize IT opportunities in support of the business. These tools are capable
of integrating data from various data sources (service desks, IT ECA, IT workload automation
brokers, IT financial management, APM, performance management, BSM, service-level reporting,
cloud monitoring tools, etc.), and processing it in real time to deliver operational improvements.
These tools are not the same as data warehouse reporting tools. IT operations analytics tools will typically send data to data warehouses and other IT operations tools (e.g., service-level reporting) for further non-real-time reporting. IT operations analytics tools may have engines, such as complex-event processing (CEP), at their core to enable them to collect, process and analyze

multidimensional business and IT data, and to identify the most meaningful events and assess their
impact in real time.
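
As a simplified illustration of the kind of real-time correlation such a CEP core performs, the sketch below joins a utilization metric and a cost metric over sliding windows and raises the "unused capacity" signal of the sort described above. The event shapes, source names and thresholds are invented for illustration and do not represent any particular product.

    # Simplified sketch of CEP-style correlation across IT data sources:
    # keep a sliding window per (source, metric) stream, and flag when low
    # utilization coincides with high cost. Names and thresholds hypothetical.
    from collections import defaultdict, deque

    WINDOW = 60  # samples retained per stream
    streams = defaultdict(lambda: deque(maxlen=WINDOW))

    def window_avg(key):
        window = streams[key]
        return sum(window) / len(window) if window else None

    def assess():
        # One illustrative rule: low average utilization plus high cost.
        util = window_avg(("capacity_tool", "cpu_utilization_pct"))
        cost = window_avg(("financial_tool", "monthly_cost_usd"))
        if util is not None and cost is not None and util < 20 and cost > 10000:
            print("signal: paying for unused capacity"
                  " (avg util %.1f%%, avg cost $%.0f)" % (util, cost))

    def ingest(event):
        # event: {"source": ..., "metric": ..., "value": float}
        streams[(event["source"], event["metric"])].append(event["value"])
        assess()  # evaluate correlation rules on every arrival

    ingest({"source": "capacity_tool", "metric": "cpu_utilization_pct", "value": 12.0})
    ingest({"source": "financial_tool", "metric": "monthly_cost_usd", "value": 15000.0})
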
There are many business intelligence and analytics tools on the market (see "Hype Cycle for
Business Intelligence, 2010"), but these tools are not focused on the specific needs of the IT
operations organization. IT operations analytics tools must have IT domain expertise and
connectors to IT operations management tools and data sources in order to facilitate IT-specific
analytics and decision making. IT operations management vendors often partner with established
business intelligence vendors (for example, BMC Software partnering with SAP BusinessObjects)
to offer their customers advanced analytics capabilities.
Position and Adoption Speed Justification: As IT operations analytics tools provide real-time
analytics that can improve IT operations performance, IT management can use these tools to make
adjustments and improvements to the IT operational services they deliver to their business
customers. For example, they can delay the acquisition of hardware by showing the cost of unused
capacity or help in consolidation and rationalization of applications based on utilization and cost
data. Analytics tools are well-established, but have experienced slow adoption in the IT organization
due to their expense and lack of IT domain knowledge, and because IT organizations often lack the
maturity to consider themselves a business. Organizations at a maturity of at least Level 3 or above
in the I&O ITScore Maturity Model will be able to take advantage of these tools (see "ITScore for
Infrastructure and Operations"):

Rationalization of applications and services based on their utilization and their cost to and value
for the business

Delaying the acquisition of hardware (e.g., server/computer and storage) by revealing the cost
of unused capacity

Determining more-cost-effective platforms or architectures through what-if and scenario


analytics

Optimizing the vendor portfolio by redefining contractual terms and conditions based on cost
and utilization trends

User Advice: Users who are at least at ITScore Level 3 or above in IT operations maturity should
investigate where and how these tools can be used to run their operations more efficiently, provide
detailed trend analysis of issues and deliver highly visible and improved customer service. Most of
the tools available today take a "siloed" approach to IT operations analytics; i.e., the data sources
they support are rather limited to their domains, e.g., severs, databases, applications, etc., or are
specific to agents from a single vendor.
Additionally, their ability to integrate data from multiple vendor sources and to process large amounts of real-time data is also limited. However, the IT operations analytics tools that have emerged from
specific IT operations areas have the potential to extend their capabilities more broadly. Most of
these tools rely on manual intervention to identify the data sources, understand the business
problem they are trying to solve and build expertise in the tools for interpreting events and providing
automated actions or recommendations.


Investments in these tools are likely to be disruptive for customers, particularly as newer, innovative
vendors get acquired. This means that the product must have significant value for the customer
today to mitigate the risk of acquisition and subsequent disruptions to product advancements, or
changes to product strategy. A critical requirement for choosing a tool is understanding the data
sources it can integrate with, the amount of manual effort required to run analytics and the training
needs of the IT staff.
Business Impact: IT operations analytics tools will provide CIOs and senior IT operations managers radical benefits in running their own businesses more efficiently, and at the same time enable their business customers to maximize opportunities. These tools will provide a more accurate
assessment of the correlations among the business processes, applications and infrastructures in a
complex and multisourced environment.
Benefit Rating: High
Market Penetration: Less than 1% of target audience
Maturity: Embryonic
Sample Vendors: Appnomic; Apptio; Hagrid Solutions; SL; Splunk; Sumerian

Social IT Management
Analysis By: David M. Coyle; Jarod Greene; Cameron Haight
Definition: Social IT management is the ability to leverage existing and new social media
technologies and best practices to foster the support of end users, capture collaboration among IT
staff members and collaborate with the business, promoting the value of the IT organization.
Position and Adoption Speed Justification: The impact of social media as a mechanism for collaboration, communication, marketing and engaging people is a well-known concept in personal and business life (see "Business Gets Social"). It is only natural that social media would find its way into the IT organization; however, the business value is only starting to be conceptualized.
We have identified three areas where social media can assist the IT organization with increasing end-user satisfaction, increasing efficiencies and fostering closer collaboration among IT staff:

- Social networking acts as a medium for peer-to-peer support. End users have for years relied on their immediate coworkers to better understand how to leverage internal systems, and for help in solving technology issues. Often, this end-user-to-end-user support was done by email, instant message or walking to the co-worker's cubicle. Social media tools now allow end users to crowdsource assistance from other end users throughout the entire organization. This allows end users to become more productive in using business technologies. The IT support team has the opportunity to be part of this collaboration to offer improved IT service and support.

- Social media tools can also facilitate the capture of collaboration among IT staff that would not typically be captured via traditional communication methods. For example, the discussion regarding the risk of an upcoming infrastructure change is often done via email and IM, but that information is not captured in the change management ticket. These ad hoc and informal collaborations and knowledge-sharing instances can now be turned into reusable and audited work assets (see "Collaborative Operations Management: A Next-Generation Management Capability"). Collaboration technology will also become increasingly important in the emerging DevOps arena as development and operations begin to work more closely to coordinate planning, build, test and release activities.

- Social software suites and text-based posts can allow the IT organization to promote the value of IT services to the business. Typically, IT organizations unidirectionally inform the business of planned and unplanned outages, releases and new services via email or through an intranet portal. This type of communication is often disregarded or ignored. Social media enables dynamic communications whereby users can generate conversation within these notifications to understand the specific impact of the message, in a forum open to the wider community of business end users. End users can follow the IT organization announcements, services and configuration items that are important to them through social media tools.

User Advice: It is important to develop a strategy for social media for the IT organization, because end users increasingly expect this type of collaboration, which is so prevalent in other aspects of their lives. If IT doesn't embrace and create a social media initiative, the risk is that end users and IT staff will create many, often-conflicting, social media tools and processes themselves. IT should not expect the use of social media tools to be high, and investments should be tempered, since the ROI will be difficult to quantify in the beginning. Therefore, social media initiatives should be accompanied by the development of project plans, usage guidelines and governance. Also, be prepared to change tools and strategies as new technologies and collaborative methods emerge, since social media usage within businesses is still immature.
Business Impact: Social media tools and processes will enable the IT organization to increase end-user productivity by leveraging knowledge and best practices across the entire organization, to communicate more easily with the business, and to capture ad hoc and informal interactions that can be leveraged in the future. Additionally, a social media strategy will give the business the perspective that the IT organization is progressive and forward-thinking.
Benefit Rating: Low
Market Penetration: Less than 1% of target audience
Maturity: Emerging
Sample Vendors: BMC Software; CA Technologies; Hornbill; ServiceNow; Zendesk
Recommended Reading: "Collaborative Operations Management: A Next-Generation
Management Capability"


Cloud Management Platforms


Analysis By: Cameron Haight; Milind Govekar
Definition: Cloud management platforms are integrated products that provide for the management
of public, private and hybrid cloud environments. The minimum requirements to be included in this
category are products that incorporate self-service interfaces, provision system images, enable
metering and billing, and provide for some degree of workload optimization through established
policies. More-advanced offerings may also integrate with external enterprise management
systems, include service catalogs, support the configuration of storage and network resources,
allow for enhanced resource management via service governors and provide advanced monitoring
for improved "guest" performance and availability. A key ability of these products is the insulation of
administrators of cloud services from proprietary cloud provider APIs, and thus they help IT
organizations prevent lock-in by any cloud service or technology provider. The distinction between
cloud management platforms (CMPs) and private clouds is that the former primarily provide the
upper levels of a private cloud architecture, i.e., the service management and access management
tiers, and thus do not always provide an integrated cloud "stack." Another way of saying this is that
cloud management platforms are an enabler of a cloud environment, be it public or private, but by
themselves they may not contain all the necessary components for it (such as virtualization and/or pooling capabilities). This distinction holds true for public clouds as well.
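
To make the insulation point concrete, the sketch below shows one way a CMP might expose a single self-service provisioning entry point over provider-specific drivers, with a metering hook for billing. It is a minimal sketch under stated assumptions: the driver classes and their create_instance method are hypothetical placeholders for real public and private cloud APIs.

    # Sketch of how a CMP can insulate administrators from proprietary
    # provider APIs: one facade dispatching to provider-specific drivers.
    # The driver classes and methods below are hypothetical placeholders.
    class PublicCloudDriver:
        def create_instance(self, image, size):
            return "public:%s:%s" % (image, size)   # would call provider API

    class PrivateCloudDriver:
        def create_instance(self, image, size):
            return "private:%s:%s" % (image, size)  # would call internal API

    DRIVERS = {"public": PublicCloudDriver(), "private": PrivateCloudDriver()}
    usage_meter = []  # metering/billing record, per the minimum requirements

    def provision(cloud, image, size):
        # Single self-service entry point, independent of the target cloud.
        instance_id = DRIVERS[cloud].create_instance(image, size)
        usage_meter.append({"instance": instance_id, "size": size})
        return instance_id

    print(provision("private", "web-baseline-v42", "medium"))

Swapping providers then means adding a driver, not rewriting the administrator-facing workflow, which is the lock-in protection described above.
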
Position and Adoption Speed Justification: Although the demand for cloud computing services is
increasing, the market for managing the applications and services running within these environments
is still modest, including for infrastructure as a service (or IaaS) clouds, be they private or public.
This is likely because IT organizations adopting cloud-style technologies are not running many
mission-critical applications and services on these infrastructures, which might necessitate greater
monitoring and control.
User Advice: Enterprises looking at investing in cloud management platforms should be aware that
the technology is still maturing. Although some cloud providers are beginning to offer management
tools to provide more insight and control into their infrastructures, these are limited in functionality
and generally offer no support for managing other cloud environments. A small (but growing)
number of cloud-specific management platform firms is emerging; however, the traditional market-leading IT operations management vendors, aka the Big Four (BMC Software, CA Technologies,
HP and IBM Tivoli) are also in the process of extending their cloud management capabilities. In
addition, virtualization platform vendors (e.g., Citrix Systems, Microsoft and VMware), not to
mention public cloud providers such as Amazon, are also broadening their management portfolios
to enhance the support of cloud environments.
Business Impact: Enterprises will require cloud management platforms to maximize the value of
cloud computing services, irrespective of external (public), internal (private) or hybrid environments; i.e., lowering the cost of service delivery, reducing the risks associated with these providers and
potentially preventing lock-in.
Benefit Rating: High
Market Penetration: Less than 1% of target audience
Maturity: Emerging
Sample Vendors: Abiquo; BMC Software; CA Technologies; Cloud.com; Cloupia; Gale
Technologies; HP; IBM Tivoli; Kaavo; Microsoft; Platform Computing; Red Hat; RightScale;
ServiceMesh
Recommended Reading: "The Architecture of a Private Cloud Service"

Behavior Learning Engines


Analysis By: Will Cappelli; Jonah Kowall
Definition: Behavior learning engines (BLEs) are platforms intended to enable the discovery,
visualization and analysis of recurring, complex, multiperiod patterns in large operational
performance datasets. If such engines are to realize their intent, they must support four layers of
cumulative functionality:

- Variable designation: supports the selection of the system properties that are to be tracked by the platform, and how those properties are to be measured.

- Variable value normalization: the automated ability to determine (usually by an algorithm that regresses measurements) what constitutes the normal or usual values assumed by the system-property-measuring variables.

- Observational dependency modeling: a set of tools for linking the individual property-measuring variables to one another, where the links represent some kind of dependency among the values taken by the linked variables. Commercially available BLEs differ significantly with regard to the degree to which the dependencies must be pre-established by the vendor or user, and the degree to which the dependencies are themselves discovered by BLE algorithms working on the performance datasets being considered.

- Assessment: the means by which, once the normalized values and dependency map are determined, the resulting construct can be used to answer questions about the values assumed by some variables, based on the values assumed by others (a simplified sketch of the normalization and assessment layers follows this list).
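
The following is a deliberately simple sketch of the normalization and assessment layers only: a rolling mean and standard deviation per variable, and a deviation test against that learned baseline. Real BLEs use far richer statistics and also model the cross-variable dependencies described above; the three-standard-deviation threshold and sample counts here are arbitrary choices for illustration.

    # Minimal sketch of behavior learning: learn a per-variable baseline
    # (rolling mean and standard deviation), then flag deviations from
    # "normal". Real BLEs also model cross-variable dependencies.
    import math
    from collections import deque

    class Baseline:
        def __init__(self, history=500):
            self.samples = deque(maxlen=history)

        def learn(self, value):
            self.samples.append(value)

        def is_abnormal(self, value, sigmas=3.0):
            n = len(self.samples)
            if n < 30:          # refuse to judge until enough history exists
                return False
            mean = sum(self.samples) / n
            std = math.sqrt(sum((s - mean) ** 2 for s in self.samples) / n)
            return abs(value - mean) > sigmas * (std or 1e-9)

    cpu = Baseline()
    for v in [48.0, 52.0, 50.0, 49.0, 51.0] * 10:   # 50 "normal" observations
        cpu.learn(v)
    print(cpu.is_abnormal(51))   # False: within the learned baseline
    print(cpu.is_abnormal(95))   # True: a subtle-state deviation worth a look

Note how the check refuses to judge until enough history has accumulated; this mirrors the learning period that the User Advice below warns about.
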

Position and Adoption Speed Justification: When using IT operations availability and
performance management tools, many IT organizations struggle to deliver a proactive monitoring
capability, due to the vast amounts of disparate data that needs to be collected, analyzed and
correlated. BLEs gather event and performance data from a wide range of sources, identifying
behavioral irregularities, which allows IT operations to understand the state of the IT infrastructure in
a more holistic way.
BLEs detect subtle deviations from normal activity and how these deviations may be correlated
across multiple variables. This enables more effective and rapid root-cause analysis of system
problems and encourages a more proactive approach to event and performance management. The
adoption of BLEs is being driven by the emergence of virtualization, an increased focus on cross-silo application performance monitoring and the need to gain a holistic understanding of the virtual IT infrastructure state; that is, the ability to understand the health of a dynamic virtual server environment based on behavior patterns, rather than static thresholds.
User Advice: IT organizations that continue to struggle to achieve a proactive monitoring state
using traditional availability and performance tools should investigate augmenting and enhancing
their investments with emerging BLEs. BLEs supply a consolidated, holistic view of the IT
infrastructure that provides early warning of potential outages, based on trends and patterns. A
focus on specific, defined objectives allows BLEs to quickly establish behavior patterns and to
associate "normal" with "good" behaviors. It is important to understand the challenges for using this
type of tool, because they require time to learn (through time spent collecting data or time spent
consuming data) to establish a normal state. It may also require time and effort to associate normal
behavior with good behavior.
Behavior patterns must be established on good data; poor data will produce spurious patterns.
These tools work well in IT infrastructures with a degree of "normalcy," because erratic or constant IT infrastructure change will cause the behavior learning tool to constantly alert on abnormal behaviors, forcing it to relearn what is normal. However, with the appropriate expectations, approach
and preparation, enterprises will gain the value promised from a BLE. The more accurate data the
tools collect, the better the behavior analysis will be.
Business Impact: BLEs can improve business service performance and availability by establishing
normal behavior patterns, and allowing subtle deviations of IT normalcy to be detected before the
individual availability and performance tool thresholds are exceeded. This capability is focused on IT
services and applications, and ensures that potential issues can be investigated and remediated
before they affect the business, which is a key requirement when moving toward a real-time
infrastructure (RTI) state. IT organizations with increasingly dynamic virtual IT environments will
benefit from BLEs, especially when there are many performance and event sources to track and
understand, because they provide IT operations with a new way to comprehend the overall state of
the IT infrastructure.
Benefit Rating: High
Market Penetration: Less than 1% of target audience
Maturity: Emerging
Sample Vendors: BMC Software (ProactiveNet); Netuitive; Prelert; VMware (Integrien)
Recommended Reading: "An Introduction to IT Operations Behavioral Learning Tools"
"Behavior Learning Software Enables Proactive Management at One of World's Largest Telecom
Companies"
"Tools Alone Won't Enable Proactive Management of IT Operations"

Application Release Automation


Analysis By: Ronni J. Colville; Donna Scott

Definition: Application release automation (ARA) tools focus on the deployment of custom
application software releases and their associated configurations, often for Java Platform,
Enterprise Edition (Java EE) and .NET applications for middleware. These tools also offer versioning
to enable best practices in moving related artifacts, applications, configurations and even data
together across the application life cycle. ARA tools support continuous deployment of large
numbers of small releases. Often, these tools include workflow engines to assist in automating and
tracking human activities across various tasks associated with application deployment for
auditability. Some tools focus on the build and development phases and are now adding the
capability to deploy to production, though many still provide only partial solutions.
Position and Adoption Speed Justification: IT organizations are often very fragmented in their
approach to application releases. Sometimes, the process is led by operations, although it can also
be managed from the development side of the organization. This means that different buyers will
look at different tools, rather than comprehensive solutions, which results in tool fragmentation. The
intent of these tools is fivefold:

- Eliminate the need to build and maintain custom scripts for application updates, by adding more reliability to the deployment process with less custom scripting, and by documenting variations across environments (see the sketch after this list).

- Reduce configuration errors and downtime.

- Coordinate releases between people and the process steps that are otherwise maintained manually in spreadsheets, email or both.

- Move the skill base from expensive, specialized script programmers to less costly resources.

- Speed the time to market associated with agile development, by reducing the time it takes to deploy and configure across all environments.
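
As a minimal sketch of the first and third intents, the routine below deploys one versioned artifact across environments, with per-environment variations captured as documented data rather than as per-environment scripts, and with every step recorded for auditability. The environment names, settings and deployment steps are hypothetical.

    # Sketch of script-free release automation: one deployment routine,
    # with per-environment variations captured as documented data, not code.
    # Environment names, settings and steps here are hypothetical.
    ENVIRONMENTS = {
        "test":       {"hosts": ["test01"],         "db_url": "db://test"},
        "staging":    {"hosts": ["stg01", "stg02"], "db_url": "db://stg"},
        "production": {"hosts": ["prd01", "prd02"], "db_url": "db://prd"},
    }

    def deploy(artifact, version, env_name, audit_log):
        # Deploy one versioned artifact to every host in an environment.
        env = ENVIRONMENTS[env_name]
        for host in env["hosts"]:
            # Stand-in for real copy/configure/restart steps.
            step = "install %s-%s on %s with %s" % (
                artifact, version, host, env["db_url"])
            audit_log.append(step)  # each workflow step tracked for audit
        return audit_log

    log = []
    deploy("orderservice", "1.4.2", "staging", log)
    print("\n".join(log))

The same routine promotes the same artifact from test to production, so variations between environments are visible in the data rather than buried in scripts.
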

Adoption and penetration of these tools are still emerging (1% to 5%), because they are being used
for only a small percentage of all applications. For large enterprises with mission-critical websites,
adoption is becoming more significant (5% to 20%), because of the criticality of improving agility
and reducing disruption. Even in this scenario, we still see the largest "competitor" of these tools
being in-house scripts and manual processes.
User Advice: Clients must remember that processes for ARA are not standardized. Assess
application life cycle management, specifically deployment processes, and seek a tool or tools
that can help automate the implementation of these processes. Organizational change issues
remain significant and can't be addressed solely by a tool purchase.
Understand your specific application and platform requirements; these will narrow the scope of
candidate tools and determine whether one tool or multiple tools will be required. Some vendors
have products that address both application provisioning (of binaries: middleware, databases, etc.)
and ARA. When evaluating tools, consider your workflows and the underlying software you are
deploying; this will shorten the tools' time to value. Determine whether life cycle tools can address
the needs of multiple teams (such as development and testing) while meeting broad enterprise
requirements, as this will reduce costs. Organizations that want to extend the application life cycle beyond
development to production environments using a consistent application model should evaluate
development tools or point solutions that provide out-of-the-box integration with development tools.
Business Impact: ARA tools can eliminate the need to build and maintain time-consuming custom
scripts for application updates. They can add more reliability to the deployment process with less
custom scripting and by documenting variations across environments, to reduce configuration
errors and downtime. In addition, with more consistency, there will be an increase in
standardization, which will enable all the benefits standardization brings in terms of economies of
scale, cross-training, documentation, backups, etc. ARA tools can supplement or enhance the
coordination of releases among people, as well as the process steps that are maintained manually
in spreadsheets, email or both. By reducing the scripts and manual interactions with automation, IT
organizations can move the skills base from expensive, specialized script programmers to less
costly resources. The most significant business benefit of ARA tools is that they improve the speed
associated with agile development by reducing the time it takes to deploy and configure
applications across all environments; the consistent application models they create also improve
the likelihood of successful deployments to production.
Benefit Rating: Moderate
Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: BMC Software; HP; MidVision; Nolio; Urbancode; XebiaLabs
Recommended Reading: "Cool Vendors in Release Management, 2011"
"Managing Between Applications and Operations: The Vendor Landscape"
"Best Practices in Change, Configuration and Release Management"
"MarketScope for Application Life Cycle Management"
"Are You Ready to Improve Release Velocity?"

SaaS Tools for IT Operations


Analysis By: David M. Coyle; Milind Govekar; Patricia Adams
Definition: Software as a service (SaaS) is software owned, delivered and managed remotely by
one or more providers, and purchased on a pay-for-use basis or as a subscription based on usage
metrics. All IT operations management tools that can be 100% managed through a Web client have
the potential of being licensed as SaaS. SaaS enables the IT organization to have new pricing,
delivery and hosting options when acquiring IT operations management tools, in addition to the
traditional perpetual model. Not all IT operations SaaS tools use multitenancy and elasticity, which
are the hallmarks of a cloud computing delivery model.

Position and Adoption Speed Justification: SaaS has been a viable pricing model for many
software products, such as CRM and HR, for several years; however, it is newer to the IT operations
management tool market. Fewer than 5% of IT operations management tools are bought utilizing
the SaaS licensing model. Few traditional software vendors have licensed their tools as SaaS, but
this model is increasingly being added to product road maps. However, growth in the acceptance of
SaaS as a licensing model in the software industry as a whole, plus the use of the Web-only client
for IT operations management tools (especially IT service desk, resource capacity planning and
end-user monitoring tools), will ensure that this model becomes pervasive in IT operations
management.
Mature resource capacity planning tools have been offered as a service since the mainframe days,
and can be good candidates to fit the SaaS delivery model in today's computing environment. For
IT organizations that are experiencing budget constraints, SaaS solutions that are paid for monthly
or quarterly and come from the operating budget can be viable alternatives to a large capital
software purchase.
In particular, small or midsize businesses (SMBs) have begun to favor SaaS-based capacity
management and planning. SaaS IT service desk vendors currently have 8% to 9% of the overall
market; however, by 2015, 50% of all new IT service desk tool purchases will use the SaaS model
(see "SaaS Continues to Grow in the IT Service Desk Market"). Because many of the IT service desk
vendors also offer IT asset management (ITAM) tools, which are tightly integrated, we expect to
begin to see SaaS extend to ITAM implementations where software license models aren't overly
complex.
Similarly, end-user monitoring tools have been consumed as a SaaS delivery model for more than
five years, and we are beginning to see increased interest in this delivery model. In addition,
chargeback tools are being sold as a SaaS model. However, security, high availability and stability
of the vendor infrastructure, heterogeneous support, integration and long-term cost are some of
enterprises' primary concerns. In some cases, the tool's architecture may not lend itself to the
functionality being delivered in a SaaS model. Customers that need to customize software to meet
unique business requirements should be aware that there is risk associated with customizing
SaaS-delivered software, which may cause conflicts when new versions become available.
User Advice: Clients should evaluate the inherent advantages and disadvantages of SaaS before
selecting this model. Trust in the vendor, customer references, security compliance, contracts and
SLAs should be foundational when buying SaaS. Clients should compare the SaaS vendors'
products with the more-traditional, perpetually licensed products in terms of features, functionality,
adherence to best practices, total cost of ownership through the life cycle of the tool, high
availability, manageability and security. They should ensure that they don't choose one product over
the other, based solely on licensing model or cost. If choosing to operate the SaaS tool within your
data center, it is important to understand the hardware required, the architecture models and the
differences in pricing.
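
As a rough illustration of the total-cost comparison recommended above, the sketch below accumulates subscription fees against a perpetual license with annual maintenance. Every figure is a hypothetical placeholder, not benchmark data; substitute quoted prices from your vendors.

```python
# Hypothetical cost comparison: SaaS subscription vs. perpetual license.
# All figures are illustrative placeholders.

SAAS_PER_USER_PER_MONTH = 60.0       # subscription fee
PERPETUAL_LICENSE_PER_USER = 1500.0  # one-time capital cost
MAINTENANCE_RATE = 0.20              # annual maintenance as share of license
USERS = 50

def saas_cost(years: int) -> float:
    return SAAS_PER_USER_PER_MONTH * 12 * years * USERS

def perpetual_cost(years: int) -> float:
    license_fee = PERPETUAL_LICENSE_PER_USER * USERS
    return license_fee + license_fee * MAINTENANCE_RATE * years

for years in (1, 3, 5):
    print(f"Year {years}: SaaS ${saas_cost(years):,.0f} "
          f"vs. perpetual ${perpetual_cost(years):,.0f}")
# With these placeholder numbers, SaaS costs less through year 3 but more
# by year 5; hence the advice to compare TCO over the tool's life cycle
# rather than the first-year price alone.
```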
IT operations tools do not exist as an island, and some level of information, data and process
integration is required, which should be an important SaaS evaluation criterion. Finally,
organizations that have unique requirements may find that the one-size-fits-all approach to SaaS
might not be a good fit where business requirements demand tool customization to increase the
tool's value.
Business Impact: SaaS offers new options for IT organizations to purchase, implement and
maintain the IT operations management tools in their environment. More choices enable IT
organizations to become more agile, use IT budgets more appropriately and gain flexibility in
vendor negotiations. Implementation time frames should also be shorter, potentially leading to
faster payback.
Benefit Rating: Moderate
Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: Apptio; BMC; CA; Cherwell Software; Compuware; InteQ; Keynote Systems;
Service-now.com
Recommended Reading: "SaaS Continues to Grow in the IT Service Desk Market"

Service Billing
Analysis By: Milind Govekar
Definition: Service-billing tools differ from IT chargeback tools in that they use resource usage data
(on computing and people) to calculate the costs for chargeback and aggregate them for a service.
Alternatively, they may offer service-pricing options (such as per employee or per transaction)
independent of resource usage. When pricing is based on usage, these tools can gather
resource-based data across various infrastructure components, including servers, networks, storage,
databases and applications. Service-billing tools perform proportional allocation, based on the
amount of resources (including virtualized and cloud-based) allocated to and used by the service,
for accounting and chargeback purposes.
Service-billing costs include infrastructure and other resource use (such as people) costs, based on
service definitions. As a result, they usually integrate with IT financial management tools and IT
chargeback tools. These tools will be developed to work with service governors to set a billing
policy that uses cost as a parameter, and to ensure that the resource allocation is managed based
on cost and service levels.
Due to their business imperative, these tools will first emerge and be deployed in service provider
environments and by IT organizations that are at a high level of maturity. These tools are also being
deployed for cloud computing environments, where usage-based service billing is a key
requirement.
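
A minimal sketch of the proportional-allocation idea follows. The services, resource pools and shares are hypothetical; real tools meter usage continuously across servers, networks, storage, databases and applications.

```python
# Illustrative proportional allocation of shared-pool costs to services.
# Pool costs and usage shares are hypothetical placeholders.

POOL_MONTHLY_COST = {"compute": 100_000.0, "storage": 40_000.0}

# Metered usage per service, as a share of each pool's consumed capacity.
USAGE = {
    "order-entry": {"compute": 0.45, "storage": 0.30},
    "reporting":   {"compute": 0.20, "storage": 0.55},
    "intranet":    {"compute": 0.35, "storage": 0.15},
}

def service_bill(service: str) -> float:
    """Allocate each pool's cost in proportion to the service's usage."""
    return sum(POOL_MONTHLY_COST[pool] * share
               for pool, share in USAGE[service].items())

for svc in USAGE:
    print(f"{svc}: ${service_bill(svc):,.2f}")
# order-entry: 0.45 * $100,000 + 0.30 * $40,000 = $57,000; the aggregated,
# service-level figure is what feeds IT financial management and
# chargeback tools.
```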
Position and Adoption Speed Justification: Shared sets of resources for on-premises or off-premises implementations require the ability to track usage for service-billing purposes. As the IT
infrastructure moves to a shared-service model, resource-oriented chargeback models will evolve
into end-to-end collection and reporting approaches. Furthermore, the service-billing tools will work
with service governors to proactively manage the costs of services and their associated service
levels.
User Advice: This is a new and emerging area, and its visibility has risen sharply. Most commercial
tools are being developed by cloud computing service providers for their own environments or by
cloud infrastructure fabric vendors for private or public cloud computing environments; commercial
off-the-shelf tools are also emerging. Service-billing tools will take a life cycle approach to services,
will perform service cost optimization based on optimizing the usage of underlying technology
resources during the entire life cycle, and will provide granular cost allocation.
As these tools emerge to accommodate the vision of dynamic automation of real-time
infrastructure, they will integrate with virtualization automation tools that dynamically move virtual
environments based on resource or performance criteria, so that the cost effects of such movement
can be assessed. For example, enterprises that want to incorporate virtual server movement
automation in their environments should assess these tools as they emerge, to assist with
controlling costs in their data centers.
The available service chargeback tools aggregate costs mainly for static environments, where there
is no automation or dynamic control of resources. These tools will emerge from startups, as well as
traditional chargeback vendors, asset management vendors, virtualization management vendors
and software stack vendors.
Business Impact: These tools are critical to running IT as a business, by determining the financial
effect of sharing IT and other resources in the context of services. They also feed billing data back
to IT financial management tools and chargeback tools to help businesses understand the costs of
IT and to budget appropriately.
Benefit Rating: High
Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: Apptio; Aria Systems; BMC Software; Comsci; Digital Fuel; IBM Tivoli; MTS;
Unisys; VMware

Release Governance Tools


Analysis By: Ronni J. Colville; Kris Brittain
Definition: Release governance tools enable coordinated, cross-organizational capability that
manages the introduction of changes into the production environment, including those affecting
applications, infrastructures, operations and facilities. The two main focuses are release planning
and release scheduling. Planning orchestrates the control of changes into a release by governing
the operational requirements definition (designing with operations in mind, versus just building to
business functional specifications alone early on) and integrated build, test, install and deploy
activities, including coordination with other processes required prior to rollout (such as system
management and documentation). Scheduling activities are focused on ensuring that prerequisite
and corequisite requirements are met and that rollback planning is coordinated for the execution
of changes and release bundles, to ensure that no conflicts occur prior to production deployments.
Release governance tools have tended to develop as an extension to existing IT operations tools.
For example, change management tools have similar workflows that IT service desk vendors have
extended for release workflows. Sometimes, run book automation tools, which can supplement or
augment automation processes' and activities' interactions, can have release workflow templates.
In addition, application life cycle management release tools for tracking release requests are
sometimes being extended to include broader release workflows beyond an application focus.
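
The scheduling side of this discipline is, at its core, conflict detection over planned release windows. The sketch below shows that core check; the release names, windows and shared configuration items are hypothetical.

```python
# Illustrative release-schedule conflict check: two releases conflict if
# their deployment windows overlap and they touch a shared component.
# All release data is hypothetical.
from datetime import datetime
from itertools import combinations

RELEASES = [
    {"name": "ERP 4.2",    "start": datetime(2011, 8, 5, 22), "end": datetime(2011, 8, 6, 2), "cis": {"erp-db", "app-tier"}},
    {"name": "Portal 1.7", "start": datetime(2011, 8, 5, 23), "end": datetime(2011, 8, 6, 1), "cis": {"app-tier", "web-tier"}},
    {"name": "CRM patch",  "start": datetime(2011, 8, 7, 22), "end": datetime(2011, 8, 7, 23), "cis": {"crm-db"}},
]

def conflicts(a: dict, b: dict) -> bool:
    """Overlapping windows plus a shared configuration item means conflict."""
    overlap = a["start"] < b["end"] and b["start"] < a["end"]
    return overlap and bool(a["cis"] & b["cis"])

for a, b in combinations(RELEASES, 2):
    if conflicts(a, b):
        print(f"Conflict: {a['name']} and {b['name']} share {a['cis'] & b['cis']}")
# Flags the ERP/Portal clash on the shared app tier; the CRM patch is clear.
```

Commercial tools layer calendars, change freezes, approvals and rollback plans on top of this basic check, but the planning question being answered is the same.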
Position and Adoption Speed Justification: There continues to be an increase in focus on
developing a comprehensive release management process across development and production
control, as well as on automating various aspects of release management (e.g., planning for
governance and release provisioning and deployment). This interest in formalizing the process and
supplementing manual efforts has four main drivers and approaches: (1) ITIL process improvement
initiatives are focused on several processes (such as problem, incident, change and configuration),
where release is often one of the processes being addressed, though often later in the timeline
of the initiative; (2) the increase in the number of composite applications being deployed (such as Java
Platform, Enterprise Edition [Java EE] and .NET) into production; (3) continuous deployment
techniques (e.g., agile and DevOps) for the introduction and maintenance of applications in the
production environment; and (4) the rise in the frequency of changes and the need to bundle, align
and control them more efficiently. Release process adoption in ITIL initiatives tends to come at a later
phase of the project, and most often will force a reshaping of early change and configuration
management process work.
Release governance tools become critical in ensuring that the appropriate planning and scheduling
has been done to reduce disruption to the production environment, along with appropriate
communication and reporting mechanisms. These tools can also reduce costs, but only when
viewed across the entire organization, from project management to application development to
release management to operational support. Release governance tools also are becoming more
commonly integrated with project and portfolio management (PPM), application life cycle
management, change management, incident/problem management, and IT operations management
tools. Configuration management works hand in hand with release management for distribution
activities of software packaging, deployment and rollback, if required.
The main benefits of release management include:

- More consistent service quality as a result of better planning, coordination and testing of
releases, as well as automated deployment through configuration management tools

- The ability to support higher throughput, frequency and success of releases

- Reduced release risk through improved planning

- Reduced cost of release deployment due to fewer errors

Despite these benefits, fewer than 25% of large enterprises will formalize release management
through 2014, up from less than 5% today. The reason that so few organizations will achieve
success with release governance tools is twofold. First, success with formal release management
requires that implementations of change and configuration management be in place to integrate
with; thus, release management is normally implemented in the latter phases of an ITIL or IT
process improvement implementation. During the past five to 10 years, most of the focus on
process improvement has been on change management, and, more recently, on configuration
management surrounding configuration management databases (CMDBs). Release management
has only recently become a focus in ITIL programs as progress has been made on change and
configuration, and because there has been an increase in the number of changes.
Second, release governance tools require a coordinated release management process and
organizational role integration across business processes, applications, project management and
operations. As a result, it is one of the more-difficult disciplines to implement, requiring significant
top management commitment and, often, organizational realignment and/or the establishment of
competency centers. Contributing to the inhibitors is that organizations will need to build labs to
perform integration testing to ensure the validity and integrity of the release in the production
environment. Today, testing is done in the application life cycle management (ALM) process, but
these test environments rarely mirror the production environment. Therefore, it is critical to plan for
and establish an integration test lab prior to production rollouts. In some cases, organizations can
effectively use the existing independent test organization, which is usually affiliated with the
development team. Such independent quality assurance (QA) functions are a hallmark of Maturity
Level 3 development organizations, specifically in their use of methodologies such as Capability
Maturity Model Integration (CMMI). Funding process projects in this way is often easier to justify
than the inclusion of new hardware, software and resources to support a test lab.
User Advice: IT organizations need to ensure that solid objectives based on the needs of the
business are established for release planning and release distribution activities, and that those are
mapped to stakeholders' specifications. Because releases occur throughout and across the IT
application and infrastructure organization, release management will require the participation of
many IT groups, and may be considered part of development, operational change management,
production acceptance/control, and the tail end of the IT delivery life cycle. With the addition of
SOA-based applications, increased granularity and frequency will add to the pace at which releases
need to occur. Successful release management will require a competency center approach to
enable cross-departmental skills for release activities.
Release management takes on greater significance, much as production control departments did in
the mainframe era. Planned changes to applications, infrastructure and operations (such as system
and application management, documentation and job scheduling) processes are integration-tested
with the same rigor that occurs on the development side. Working with architecture groups, the release management group puts more-rigorous policies in place for maintenance planning (such as patches and upgrades) and the
retirement of software releases and hardware platforms in adherence to standard policies. The
release management group also will be responsible for preproduction testing and postproduction
validation and adjustments to policies related to dynamic capacity management and priorities for
real-time infrastructure.

IT organizations should:

- Establish a project team composed of application development and operations resources to get
the foundation of a release management policy in place.

- Be prepared to invest in infrastructure to establish integration testing for production control (as
is done for preproduction and development).

Organizations that have already implemented change and configuration management processes
should:

- Look to formalizing a release process as one of your next investments, and one that will greatly
improve overall service quality (in an additive way).

- Implement release governance tools to provide a mechanism for tracking the life of a release
(singular or a bundle) for planning and scheduling.

Business Impact: Because new releases are the first opportunity for IT customers to experience
IT's capabilities, the success or failure of a release management process will greatly affect the
business-IT relationship. Release governance tools will provide automation that facilitates the many
stakeholders required to ensure successful deployments into the production environment. It is
important, therefore, that releases are managed effectively from inception to closure, and that all IT
groups work in concert to deliver quality releases as consistently as possible using release
governance tools.
Benefit Rating: Moderate
Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: BMC Software; CA Technologies; HP; IBM Tivoli; Service-Now

IT Workload Automation Broker Tools


Analysis By: Milind Govekar
Definition: IT workload automation broker (ITWAB) technology is designed to overcome the static
nature of scheduling batch jobs. It manages mixed batch and other non-real-time workloads based
on business policies, in which resources are assigned and deassigned in an automated fashion to
meet service-level objectives (for example, performance, throughput and completion-time
requirements). These tools automate processing requirements based on events, workloads,
resources and schedules. They manage dependencies (potentially across multiple on-premises
and off-premises data centers) across applications and infrastructure platforms. ITWAB technology
optimizes resources (e.g., working with physical, virtual and cloud-based resource pools)
associated with non-real-time or batch workloads, and is built on architectural patterns that
facilitate easy, standards-based integration (for example, using SOA principles) across a wide
range of platforms and applications.
Position and Adoption Speed Justification: Some characteristics of ITWAB were defined in "IT
Workload Automation Broker: Job Scheduler 2.0." A federated policy engine that takes business
policies and converts them into a technology SLA is over two years away from being deployed in
production environments. ITWAB can emerge in vertical industry segments (for example,
insurance) where a set of standardized risk model calculation processes is driven by a common
definition of business policies. Alternatively, ITWAB is emerging in situations where decisions need
to be made on use and deployment of computing resources; for example, to finish processing
workloads associated with business processes by a certain deadline, ITWAB may make decisions
to use cloud-based computing resources, as needed in addition to on-premises resources. An
intermediate stage of tool development will be the manual conversion of business policies into
technology SLAs.
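
The deadline-driven bursting decision described above can be reduced to a small sketch. The job volumes, slot counts and decision rule below are hypothetical simplifications of what a policy engine would evaluate.

```python
# Illustrative ITWAB-style decision: can the remaining batch work finish
# on on-premises capacity before the SLA deadline, or should some of it
# be placed on cloud-based resources? All figures are hypothetical.

REMAINING_JOB_HOURS = 120.0    # total compute-hours still to run
ONPREM_SLOTS = 10              # parallel batch slots available in-house
CLOUD_SLOT_HOURLY_COST = 0.50  # assumed cost per cloud slot-hour
HOURS_TO_DEADLINE = 8.0

def plan(job_hours: float, slots: int, deadline: float):
    """Return a placement decision and the estimated cloud cost."""
    onprem_capacity = slots * deadline           # slot-hours before deadline
    overflow = max(0.0, job_hours - onprem_capacity)
    if overflow == 0:
        return "run on-premises only", 0.0
    extra_slots = int(-(-overflow // deadline))  # ceiling division
    return f"burst {extra_slots} cloud slots", overflow * CLOUD_SLOT_HOURLY_COST

decision, cost = plan(REMAINING_JOB_HOURS, ONPREM_SLOTS, HOURS_TO_DEADLINE)
print(decision, f"(estimated cloud cost ${cost:.2f})")
# 10 slots * 8h = 80 slot-hours on-premises; the 40-hour overflow requires
# 5 additional cloud slots to meet the completion-time SLA.
```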
Visibility, discovery and optimization of resource pools across the entire physical, virtual and cloud
computing environment isn't possible today. Intermediate solutions based on targeted
environments, such as server resource pools through the use of virtualization management tools,
will emerge first. Some tools have integration with configuration management databases (CMDBs)
to maintain batch service data for better change and configuration management of the batch
service to support reporting for compliance requirements. Integration with run book automation
(RBA) tools, data center automation tools and cloud computing management tools to provide
end-to-end automation will also continue to evolve as an enterprise requirement during the next two
years, to facilitate growing or shrinking the pool of resources dynamically (i.e., in a real-time
infrastructure [RTI] environment) to meet batch-based SLAs. Furthermore, critical-path analysis
capabilities that identify jobs likely to breach SLA requirements are being added to many of these tools.
User Advice: Use these tools instead of traditional job scheduling tools where there is a need to
manage a batch or non-real-time environment using policies. Keep in mind that not all ITWAB
capabilities have been delivered yet (as highlighted above). To implement end-to-end automation,
favor tools that have developed automation capabilities, such as run book automation (or IT
operations process automation), and/or that can integrate with other IT operations tools.
Business Impact: ITWAB tools will have a big impact on the dynamic management of batch SLAs,
increasing batch throughput and decreasing planned downtime, and will play a role in end-to-end
automation. ITWAB will be involved in the initial stages of implementing the service governor
concept of real-time infrastructure.
Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Early mainstream
Sample Vendors: BMC Software; CA Technologies; Cisco (Tidal); IBM Tivoli; Orsyp; Redwood
Software; Stonebranch; UC4 Software
Recommended Reading: "IT Workload Automation Broker: Job Scheduler 2.0"
"Magic Quadrant for Job Scheduling"

Workspace Virtualization
Analysis By: Terrence Cosgrove; Mark A. Margevicius; Federica Troni
Definition: Workspace virtualization tools separate the user's working environment (i.e., data,
settings, personalization and applications) from the OS or any applications on the PC or Mac on
which it executes. This allows users to run a corporate-managed workspace (i.e., one that is
patched, provisioned, secured, etc.) on a corporate or user-owned PC or Mac. Because these tools
allow the workspace to execute on the local client (as opposed to hosted virtual desktops [HVDs],
which execute in the data center), users can have a secure, data-leakproof, device-independent
workspace, while leveraging local processing power and working offline.
Position and Adoption Speed Justification: Adoption of workspace virtualization tools was
originally driven by organizations that wanted to separate workspaces to prevent data leakage (e.g.,
financial services or government). Growing interest in this technology is driven by:

- Employee-owned PC programs (see "Checklist for an Employee-Owned Notebook or PC
Program")

- Organizations looking for new ways to manage mobile or distributed users (see "New
Approaches to Client Management")

- Organizations that want to give nonemployees temporary access to a corporate system (e.g.,
contractors)

The technology has matured enough to support thousands of users; however, the large vendors do
not have mature products, and the mature products come from small vendors. We are starting to
see this change as major client computing industry players, such as Intel, HP and Lenovo, start to
bring workspace virtualization tools to market. This will drive adoption and awareness of the
technology, and secure the viability of this market.
User Advice: Users should begin to evaluate workspace virtualization tools in proofs of concept,
where user workspace isolation is needed and other approaches, like server-based computing or
HVDs, cannot be applied (for example, when offline requirements or network issues prohibit the use
of HVDs). Workspace virtualization tools hold particular promise for mobile users who are
connected intermittently to enterprise networks.
Current workspace virtualization vendors are at risk of being acquired through 2013. This likely will
be disruptive for customers. Therefore, the vendor's product must have significant value for the
customer today to mitigate the risk of acquisition and subsequent disruptions to product
advancements or changes to product strategy.
Business Impact: Workspace virtualization tools offer some of the management benefits of HVDs
without requiring the necessary HVD infrastructure build out, while allowing the user to work offline,
leverage local processing power and separate workspaces to prevent data leakage. Therefore, there
is a wide range of users: Mac users, employee-owned PC users, remote users connecting over slow
links, contractors, knowledge workers and mobile users. This technology offers potentially high
benefits due to its ability to support user-owned IT initiatives and the separation of user and
corporate workspaces. Organizations that do not have a particular need for virtualization
capabilities and already manage their users effectively with current tools likely will see little value in
workspace virtualization tools.
Benefit Rating: High
Market Penetration: Less than 1% of target audience
Maturity: Emerging
Sample Vendors: Citrix Systems; MokaFive; Scense; Virtual Computer; VMware

At the Peak
Capacity Planning and Management Tools
Analysis By: Milind Govekar; Cameron Haight
Definition: Capacity planning tools help plan for optimal performance of business processes, based
on planned variations in demand. These tools are designed to assist IT organizations in achieving
performance goals and planning budgets, while potentially preventing the overprovisioning of
infrastructure or the purchase of excessive off-premises capacity. Capacity planning tools provide
value by enabling enterprises to build performance scenarios (models) that relate to business
demand, often by asking what-if questions and assessing the impact of the scenarios on various
infrastructure components. Capacity also has to be managed in real time in a production
environment. Thus, the technology has evolved from a purely planning perspective to provide
real-time information dissemination (and control) of workloads to meet organizational performance
objectives. This capacity management capability is becoming pervasive in virtual (and, to a lesser
extent, cloud) environments via embedded technologies, such as VMware's Distributed Resource
Scheduler (DRS). Increasingly, these technologies are being used to plan and manage capacity at
the IT service and business service levels.
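
As a minimal illustration of the what-if modeling these tools perform, the sketch below projects utilization under an assumed demand growth rate and reports when a headroom threshold would be breached. The growth rate and threshold are hypothetical inputs, and real products model far more than a single utilization figure.

```python
# Illustrative capacity what-if: months until a utilization threshold is
# breached under steady demand growth. All inputs are hypothetical.

CURRENT_UTILIZATION = 0.55  # 55% average busy today
MONTHLY_GROWTH = 0.04       # demand grows 4% per month
HEADROOM_THRESHOLD = 0.80   # plan to act before 80% utilization

def months_until_breach(util: float, growth: float, threshold: float) -> int:
    """Compound the growth month by month until the threshold is crossed."""
    months = 0
    while util < threshold:
        util *= 1 + growth
        months += 1
    return months

m = months_until_breach(CURRENT_UTILIZATION, MONTHLY_GROWTH, HEADROOM_THRESHOLD)
print(f"{HEADROOM_THRESHOLD:.0%} threshold breached in ~{m} months")
# 0.55 * 1.04**10 is roughly 0.81, so about 10 months of headroom remain;
# a real model would translate this into budget and procurement lead time.
```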
Position and Adoption Speed Justification: While physical infrastructure and primarily
component-focused capacity planning tools have been available for a long time, products that can
support an increasingly dynamic environment are not yet fully mature. Throughout 2010 and early
2011, we have seen a rise in interest in and implementation of these tools. These offerings are
increasingly being used for standard data center consolidation activities, as well as related planning
and management of virtual (and cloud) infrastructures. These tools have historically come with high
price tags and long learning curves due to their complexity, leading to additional costs for
adequately trained personnel. However, some of these products have evolved to the point where
individuals outside performance engineering teams can use them competently, and some capacity
management tools require little human intervention at all.
User Advice: Capacity planning and management is becoming especially critical due to the
increase in shared infrastructures, where the potential for resource contention may be greater.
Users should invest in capacity planning and management tools to lower costs and manage risk.
Although some tools are easier to use and implement than others, many can still require a high level
of skill, so ensure that adequate training is available to maximize the utility of these products.
Finally, determine your infrastructure and application management needs: some organizations
may only require support for virtual (and cloud) environments, while others will seek to include
support for what may still be a substantial legacy installed base.
Business Impact: Capacity planning and management tools should be used by organizations whose
business processes rely heavily on IT, to avoid performance issues, avoid excessive infrastructure
(including cloud service) costs and, hence, plan and forecast budgets
accurately. These tools are usually successfully implemented by organizations that have high IT
service management maturity and a dedicated performance management group.
Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Early mainstream
Sample Vendors: BMC Software; CA Technologies; CiRBA; Opnet Technologies; Quest Software;
Solution Labs; Sumerian; TeamQuest; Veeam; VKernel; VMTurbo; VMware
Recommended Reading: "Toolkit: Planning Your Performance Management and Capacity
Planning Implementation"
"Toolkit: Business and IT Operations Data for the Performance Management and Capacity Planning
Process"
"Toolkit: Sample Job Description for a Capacity Planner"
"Toolkit Sample Template: Server Performance Monitoring and Capacity Planning Tool RFI"
"Toolkit: Guidance for Preparing an RFI for End-to-End Monitoring Tools"

IT Financial Management Tools


Analysis By: Milind Govekar; Biswajeet Mahapatra; Jay E. Pultz
Definition: Most IT organizations have to transform themselves to become a trusted service
provider to the business; this means transforming to provide services as opposed to
technologies, understanding cost drivers in detail, providing transparency of IT costs and the value
delivered, and generally "running IT like a business." IT financial management tools provide CIOs
and senior IT leaders with IT cost data and analytics that best support strategic decision making.
For example, an organization with a strategy of operational excellence might use a view detailing
the unit costs that constitute its IT services. Conversely, a business-process-oriented organization
might want to see IT costs in relation to key business processes. Thus, these tools have the ability
to provide costing (transparency at a defined unit of service and of cost drivers, using multiple cost
allocation models), pricing, budgeting, forecasting (including what-if scenarios), benchmarking,
analytics, the ability to publish and/or subscribe to costing and pricing via a service catalog, reporting,
billing and reconciliation with enterprise financial systems, among other functionalities. These tools
have adapters to collect cost-related data from a heterogeneous and complex IT
environment, the ability to build a cost model, and reporting and allocation capabilities.
Most enterprises have traditionally used spreadsheets and other homegrown tools for IT financial
management. These spreadsheets are inadequate to run IT like a business in a multisourced IT
service environment. Vendors now provide software-based tools that are more powerful than these
traditional approaches and enable a quick start versus the time required to develop a do-it-yourself tool.
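
To ground the idea of driver-based cost allocation and showback, here is a small sketch. The business units, driver and rates are hypothetical; commercial tools add collection adapters, multiple allocation models and reconciliation with enterprise financial systems.

```python
# Illustrative driver-based showback: allocate a service's monthly cost to
# business units by their consumption of a cost driver. Data is hypothetical.

EMAIL_SERVICE_MONTHLY_COST = 90_000.0  # fully loaded service cost
MAILBOXES = {"sales": 1200, "finance": 300, "manufacturing": 1500}  # driver

total_driver = sum(MAILBOXES.values())
unit_cost = EMAIL_SERVICE_MONTHLY_COST / total_driver  # cost per mailbox

print(f"Unit cost: ${unit_cost:.2f} per mailbox per month")
for unit, count in MAILBOXES.items():
    print(f"{unit:>14}: {count:5d} mailboxes -> ${count * unit_cost:,.2f}")
# $90,000 / 3,000 mailboxes = $30 per mailbox: sales is shown $36,000,
# finance $9,000 and manufacturing $45,000, making consumption-driven
# variable costs visible without actual chargeback.
```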
Position and Adoption Speed Justification: Through 2010, and so far in 2011, we have seen a
significant rise in interest in the adoption of these tools, mainly for showback and cost transparency.
New IT financial management tools have emerged over the last five years, and some of the existing
commercial off-the-shelf (COTS) tools that have traditionally been used as substitutes for
spreadsheets have also transformed
to provide some of the key functionality needed for IT financial management. The demand for these
tools has also grown due to the increasing share of virtualization (shared infrastructure) in the
production environment, interest in cloud computing service delivery models and increased interest
in cost optimization and allocation. These tools will grow in capabilities, providing a wide range of
options, including simulations that give IT leaders visibility into the impact of their decisions. These
tools are also moving from being isolated and stand-alone tools to being a more integrated part of
traditional service management tools.
These tools will continue to gain visibility and capability as the pressure on enterprise IT increases
to run IT like a business. Furthermore, increased interest in cloud computing is putting additional
pressure on IT operations to provide transparency of costs, billing and chargeback information, thus
increasing the demand for the tools. Most organizations are beginning to see the benefits of
commercial chargeback tools and are using these tools to provide greater transparency of costs,
particularly in a multisourced environment where service costs from internal, outsourced, external
and cloud computing environments need to be managed to make sourcing decisions.
User Advice: As IT departments start their transformation toward investment centers or profit
centers, they need more transparency in their operations, must provide investment justifications and
reduce operational ambiguity, and must strive for better management and control of their operations. IT financial
management tools provide the business with improved cost transparency or showback in a
multisourced (internal, outsourced, cloud) IT service environment, allocate cost to the appropriate
drivers and help build cost as one of the key decision-making components. IT financial
management tools can help with this process, especially in showing where consumption drives
higher or lower variable costs. As IT moves toward a shared-service delivery model in a highly
complex computing environment, these tools will enable more-responsible and accountable
financial management of IT. However, users must have a functional financial management
organization in IT, along with a well-defined process for a successful implementation of these tools.
Business Impact: IT financial management tools mainly help run the IT organization as a business.
They affect the IT organization's ability to provide cost transparency and perform accurate cost
allocation, and have an impact on the value of the services it provides. The tools enable the
business and IT to manage and optimize the demand and supply of IT services. A major benefit is
that these tools enable enterprises to provide insight into IT costs and fair apportionment of IT
service costs, if needed, based on differentiated levels of business unit service consumption. They
also show how the IT organization contributes to business value.
Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: Apptio; BMC Software; ComSci; Digital Fuel; IBM Tivoli; Nicus Software; VMware

IT Service Portfolio Management and IT Service Catalog Tools


Analysis By: Debra Curtis; Kris Brittain
Definition: IT service portfolio management (ITSPM) products document the portfolio of
standardized IT services, described in business value terms, along with their standard supported
architectures and contracts with internal and external service providers. ITSPM tools simplify the
process of decomposing IT services into an IT service catalog of specific offerings that meet the
majority of customer requirements, and of creating the service request portal, so that an end user or a
business unit customer can purchase them. The service catalog portal format includes space for
easy-to-follow instructions on how to order or request services, as well as details on service pricing,
service-level commitments and escalation/exception-handling procedures. ITSPM tools provide IT
service request handling capabilities to automate, manage and track the process workflow from a
service request (a customer order) that comes into the portal, through to service delivery, including
task definition, dissemination and approval. ITSPM tools provide reporting and, sometimes, a real-time dashboard display of service demand, service delivery and service-level performance for IT
analysis and for customers to track their service requests. Finally, ITSPM financial management
capabilities help the IT operations group analyze service costs and service profitability, and address
functional requirements specific to accounting, budgeting and billing for IT services in the portfolio
and catalog.
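
The request-handling workflow described above can be pictured with a toy model. The offering fields and workflow states below are hypothetical simplifications of what commercial ITSPM tools track.

```python
# Toy model of a service catalog offering and the request workflow an
# ITSPM tool automates. Field names and states are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Offering:
    name: str
    monthly_price: float  # published pricing shown in the portal
    sla: str              # service-level commitment shown in the portal

@dataclass
class ServiceRequest:
    offering: Offering
    requester: str
    state: str = "submitted"
    history: list = field(default_factory=list)

    def advance(self, new_state: str) -> None:
        """Move the request through the workflow, keeping an audit trail."""
        self.history.append((self.state, new_state))
        self.state = new_state

catalog = [Offering("Standard virtual server", 120.0, "99.5% availability")]
request = ServiceRequest(catalog[0], "business-unit-a")
for step in ("approved", "provisioning", "delivered"):
    request.advance(step)
print(request.state, request.history)  # delivered, each transition recorded
```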
Position and Adoption Speed Justification: As IT organizations adopt a business-oriented IT
service management strategy, they seek greater efficiency in discovering, defining and documenting
IT services; automating the processes for delivering IT services; and managing service demand and
service financials. IT service catalogs are gaining new focus and impetus with the ITIL v.3 update.
However, the target market for ITSPM and IT service catalog tools is the 5% to 15% of IT
organizations that have attained the service-aligned level of the ITScore maturity model for
infrastructure and operations, a fact that slows adoption speed and lengthens the Time to Plateau.
IT organizations will proceed through a number of maturity steps, likely first documenting their
service catalog in a simple Microsoft Word document, then storing it in an Excel spreadsheet or a
homegrown database. A typical second stage of maturity appears with a homegrown IT service
catalog portal on the intranet, which is placed under change control. Finally, IT organizations mature
to using commercial, off-the-shelf ITSPM tools to present the IT service catalog online for
customers to place orders through a self-service portal.

User Advice: Enterprises that want to automate the process workflow for ordering and delivering IT
services, as well as to track the financial aspects of the "business of IT," should evaluate these tools
only when they have mature IT service management processes and documented IT
architecture standards already in place. Generally, ITSPM products do not directly measure
application availability or IT infrastructure components for service quality reporting. Instead, they
depend on data imports from service-level reporting, business service management or IT
component availability monitoring tools. Therefore, integration usually will be required with other
vendors/products to complete this function. Some functions of emerging ITSPM tools overlap with
more-mature IT service desk tools. There is a high potential for market consolidation and acquisition
as ITSPM features begin to blend with or disappear into other categories.
Business Impact: ITSPM products are intended to aid IT organizations in articulating the business
value of the IT services they offer, improve the customer experience by making it easier for
customers to do business with IT, increase IT operations efficiency through service delivery process
documentation and workflow automation, and help the IT operations group assess the profitability
and competitiveness of its services. By documenting its portfolio of value-based, business-oriented
IT services at different price points, the IT organization can present well-defined IT service offerings
to its business unit customers, which raises the organization's credibility with the business and
helps establish a foundation for service quality and IT investment negotiations that are based on
business value and results. Through standardization, along with a better understanding of customer
requirements and delivery costs (such as capital and labor requirements), the IT organization is in a
position to do an accurate cost/profit analysis of its service portfolio and to continually seek
methods to reduce delivery costs, while meeting customer service and quality requirements.
Once services are decomposed into standardized, orderable service catalog offerings, repeatable
process methodologies for service delivery can be documented and automated. This will reduce
errors in service delivery, help identify process bottlenecks and uncover opportunities for efficiency
improvements. In addition, service catalogs simplify the service request process for customers and
improve customer satisfaction by presenting a single "face" of IT to the customer for all kinds of IT
interactions (including incident logging, change requests, service requests, project requests and
new portfolio requests).
Benefit Rating: High
Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: BMC Software; CA Technologies; Cisco-newScale; Digital Fuel; HP; IBM Tivoli;
PMG; USU
Recommended Reading: "Case Study: Insurance Provider Improves Service Delivery via a Service
Catalog"
"ITSM Fundamentals: How to Create an IT Service Portfolio"
"ITSM Fundamentals: How to Construct an IT Service Catalog"

"The Fundamental Starter Elements for IT Service Portfolio and IT Service Catalog"

Open-Source IT Operations Tools


Analysis By: Milind Govekar; Cameron Haight
Definition: Open-source service management tools are products offered under several licensing
arrangements (GNU General Public License, Apache Software License, etc.), and designed to
provide similar IT service management (ITSM) and operations management capabilities as those
offered by traditional management providers. These capabilities include performance and
availability management, configuration management (including discovery, configuration
management databases [CMDBs] and provisioning), service desk, and service-level management.
The qualitative (feature richness) and quantitative (feature breadth) attributes of these products have
continued to improve, although they often require somewhat more-skilled users to maximize their
value. While these products often have been of interest to small or midsize organizations, due to
their cost sensitivity, we see more large enterprises viewing these tools as potential substitutes to
those from the larger enterprise management vendors. Accelerating this trend is the growing
adoption of cloud computing in large, public cloud organizations that have opted for open-source-based (or do-it-yourself) management products to support open-source server and networking
infrastructures. The rise of the DevOps movement is also creating interest in these products,
especially in application life cycle management (ALM), including release management and
configuration management.
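
To give a flavor of the availability-management capability these tools provide, below is a deliberately minimal service check of the kind a Nagios- or Zenoss-class product schedules, escalates and reports on at scale. The hosts and ports are placeholders.

```python
# Minimal availability check, illustrating the core of what open-source
# monitoring tools automate (scheduling, escalation and reporting omitted).
# Hosts and ports are hypothetical placeholders.
import socket

CHECKS = [
    ("web01.example.com", 80),    # HTTP front end
    ("db01.example.com", 1521),   # database listener
    ("mail01.example.com", 25),   # SMTP
]

def is_up(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in CHECKS:
    print(f"{host}:{port} {'UP' if is_up(host, port) else 'DOWN'}")
```

The value of the products lies in everything around this check: scheduling thousands of them, suppressing duplicate alerts, notifying on-call staff and retaining history, which is where the skill requirements noted above come in.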
Position and Adoption Speed Justification: Open-source service management products have
been available for several years, supporting a variety of IT requirements. Increasingly, many
organizations are beginning to use the tools in mission-critical environments, because their
functionality continues to improve, and standardized support and maintenance contracts have
become more widely available. The ongoing pressure to reduce IT costs is also continuing to
increase interest in these products. While we don't anticipate broad-based adoption in Global 2000
enterprises in the near term, we anticipate pockets of open-source management deployment
specific to the functions identified above to occur more frequently within these large firms, in
addition to their growing adoption in the cloud computing marketplace.
User Advice: Understand not only the potential feature limitations of some of these products, but
also the open-source licenses under which they may be offered. Be sure to assess the
supportability and maintenance provisions, especially for products being offered as "projects" or via
noncommercial concerns. You may need to plan for consulting services to address any feature
limitations requiring remedy. Potentially, the total cost of ownership (TCO) of implementing these
tools may be higher than commercial off-the-shelf (COTS) tools, depending on the maintenance,
integration, upgrades, customization and other requirements.
Business Impact: Open-source service management products can, in many cases, dramatically
reduce your IT service and operations management spending on license fees and associated
maintenance. However, be prepared to actively manage your TCO downward, as this may turn out
to be a bigger factor in cost terms.
Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Early mainstream
Sample Vendors: Cfengine; GroundWork Open Source; Nagios; Opscode; Puppet Labs; Zabbix;
Zenoss

VM Energy Management Tools


Analysis By: Rakesh Kumar; John R. Phelps; Philip Dawson
Definition: Virtual machine (VM) energy management tools enable users to understand and
measure the amount of energy that is used by each VM in a physical server. This will enable users
to control operational costs and associate energy consumption with the applications running in a
VM.
Position and Adoption Speed Justification: Data center infrastructure management (DCIM) tools
monitor and model the use of energy across the data center. Server energy management tools track
the energy consumed by physical servers. Although these tools are critical to show real-time
consumption of energy by IT and facilities components, measuring the energy consumed
at the VM level is also important. This will provide more-granular management of overall data center
energy costs and allow association of energy across the physical to the virtual environments. As the
use of server virtualization increases, this association will become more important, as will the
ability to associate energy with the applications running in the VM.
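
Because hypervisors do not meter watts per VM directly, per-VM figures are typically derived by apportioning measured host power. The sketch below uses one commonly cited model (idle power shared evenly, dynamic power split by CPU share); the wattages and shares are hypothetical, and vendor tools may use different models.

```python
# Illustrative per-VM energy apportionment from measured host power.
# Model: idle power split evenly; dynamic power split by CPU share.
# All wattages and utilization shares are hypothetical.

HOST_POWER_W = 320.0  # measured at the PDU or via the server's sensors
HOST_IDLE_W = 180.0   # measured with no VM load
CPU_SHARE = {"vm-erp": 0.50, "vm-web": 0.30, "vm-batch": 0.20}

def vm_power(vm: str) -> float:
    """Idle share plus this VM's proportion of the dynamic power."""
    idle_share = HOST_IDLE_W / len(CPU_SHARE)
    dynamic = (HOST_POWER_W - HOST_IDLE_W) * CPU_SHARE[vm]
    return idle_share + dynamic

for vm in CPU_SHARE:
    print(f"{vm}: {vm_power(vm):.1f} W")
# vm-erp: 60 W idle share + 70 W dynamic = 130.0 W; accumulated over time,
# such figures support energy-based chargeback and application prioritization.
```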
Distributed Power Management from VMware is available and is designed to consolidate workloads
and power down underutilized hosts to reduce energy consumption. Coupled with Distributed
Resource Scheduler, it can move workloads at different times of the day to get the most-efficient
energy consumption for a particular
set of VMs. Also, the core parking feature of Hyper-V R2 allows minimal use of cores for a given
application, even if multiple cores are predefined, keeping the nonessential cores in a suspend state
until needed, thus reducing energy costs.
The tools that provide VM energy management are at an early stage of development, but should
improve during the next few years.
User Advice: Start deploying appropriate tools to measure energy consumption in data centers at a
granular level. This includes information at the server, rack and overall site levels. Use this
information to manage data center capacity, including the floor space layout for new hardware, and
for managing costs through virtualization and consolidation programs.
Acquire energy management tools that report data center power and energy consumption efficiency
according to the power usage effectiveness (PUE) metric as a measure of data center efficiency. The
Green Grid's PUE metric is increasingly becoming one of the standard ways to measure the energy
efficiency of a data center.
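
For reference, PUE is defined by The Green Grid as total facility energy divided by the energy delivered to IT equipment, so a lower value (closer to 1.0) is better. A short worked example, with hypothetical figures:

```python
# PUE = total facility energy / IT equipment energy (The Green Grid metric).
# Example figures are hypothetical.
total_facility_kw = 1500.0  # IT load plus cooling, power distribution, lighting
it_equipment_kw = 1000.0    # servers, storage and network gear

pue = total_facility_kw / it_equipment_kw
print(f"PUE = {pue:.2f}")   # 1.50: half a watt of overhead per watt of IT load
```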
Where appropriate, evaluate and deploy VM energy management tools. Gartner also encourages
users to develop operational processes to maximize the relationships among the applications
running in VMs and the amount of energy that is used by VMs. For example, VM energy
management tools could be used for chargeback or as the basis for application prioritization.
Business Impact: VM energy management tools will provide better management of data center
operations costs, and more-granular energy-based and application-specific chargeback.
Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: 1E; Emerson Network Power-Aperture; HP; IBM; VMware
Recommended Reading: "Pragmatic Guidance on Data Center Energy Issues for the Next 18
Months"
"Green Data Centers: Guidance on Using Energy Efficiency Metrics and Tools"

IT Process Automation Tools


Analysis By: Ronni J. Colville
Definition: IT operations process automation (IT PA) tools (previously called run book automation
[RBA]) automate IT operations processes across different IT management disciplines. IT PA
products have:

- An orchestration interface to design, administer and monitor processes

- A workflow to map and support the processes

- Integration with IT elements and IT operations tools needed to support processes (e.g., fault to
remediation, configuration management and disaster recovery)

IT PA products are used to integrate and orchestrate multiple IT operations management tools to
support a process that may need multiple tools, and that may span many IT operations
management domains. IT PA tools can focus on a specific process (for example, server
provisioning), replacing or augmenting scripts and manual processes, or more broadly on processes
that cross different domains (cloud automation).
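
A run-book-style workflow of the kind these tools orchestrate can be sketched as ordered steps with embedded decision logic. The fault-to-remediation steps below are hypothetical, and each stub stands in for an integration with a monitoring, automation or service desk tool.

```python
# Illustrative run book workflow: ordered steps with a decision point.
# Each function is a stub for a tool integration; behavior is hypothetical.

def check_service() -> bool:
    return False  # stub: pretend the monitoring tool reports an outage

def restart_service() -> None:
    print("Restart issued via the automation tool")  # stub integration

def open_ticket(summary: str) -> None:
    print(f"Ticket opened: {summary}")  # stub service desk integration

def remediate() -> None:
    """Fault-to-remediation flow with an automatic decision point."""
    if check_service():
        print("Service healthy; nothing to do")
        return
    print("Outage detected; attempting automated restart")
    restart_service()
    if check_service():
        print("Restart verified; incident closed automatically")
    else:
        open_ticket("Automated restart failed; escalate to operations")

remediate()  # with these stubs, the flow ends by escalating to a ticket
```

The value an IT PA product adds is making such flows auditable and repeatable across domains, rather than leaving them buried in scripts owned by individual administrators.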
Position and Adoption Speed Justification: IT operations process automation continues to grow
as a key focus for IT organizations looking to improve IT operations efficiencies, and provide a
means to track and measure process execution. IT PA tools provide a mechanism to help IT
organizations integrate their disparate IT operations tool portfolios to improve process handoffs in
support of best practices. The IT PA market continues to attract new players from a wide range of
technology focus areas, including event and fault management, change management, configuration
management, job scheduling and business process management.

The combination of a growing awareness of tool benefits and large IT management tool vendors
embracing IT PA across their own portfolios and for specific scenarios (e.g., incident resolution) is
increasing the introduction of new product capabilities (e.g., release management) and new
vendors. Success with IT PA tools will continue to accelerate market awareness, and will spur
further innovation in vendor solutions and in client use cases. In addition, IT PA tools are being used
to support key IT initiatives, such as the management of virtual infrastructures and cloud computing.
IT PA tools continue to be enhanced, especially around scalability, performance and usability, with
the most advanced providing embedded decision-making logic in their workflows to allow
automatic decisions on process execution. There are no signs that the adoption and visibility of
these tools will diminish, as they continue to be used to address some of today's key IT challenges,
including reducing IT operational costs, the automation of virtual infrastructure and supporting
private cloud initiatives.
The two biggest inhibitors to more widespread adoption are:

The lack of out-of-the-box workflow templates or building blocks that would enable faster
implementation. Without specific content for various scenarios, IT resources are tasked with
building workflows and associated execution scripts, which is often time-consuming and requires
in-depth tool knowledge.

Limited knowledge of the tasks or activities being automated. Many organizations try to use these
tools without the necessary process knowledge, and developing this process design often requires
cross-domain expertise and coordination. IT organizations that don't have their processes and
task workflows documented often take longer to gain success with these tools.

User Advice: IT PA tools that have a specific orientation (e.g., user provisioning or server
provisioning) and provide a defined (out-of-the-box) process framework can aid in achieving
rapid value. When used in this way, the IT PA tools are focused on a specific set of IT operations
management processes. However, using a more general-purpose IT PA tool requires more-mature,
understood process workflows. Select IT PA tools with an understanding of your process maturity
and the tool's framework orientation. Clients should expect to see IT PA tools being positioned and
sold to augment and enhance existing IT management products within a single vendor's product
portfolio (e.g., cloud management platform solutions).
However, when used to support a broader range of process needs that cross domains and cross
multiple processes, clients should develop and document their IT operations management
processes before implementing IT PA tools. IT operations managers who understand the challenges
and benefits of using IT operations management tools should consider IT PA tools as a way to
reduce risk where handoffs occur or to improve efficiencies where multiple tool integrations can
establish repeatable best-practice activities. This can only be achieved after the issues of
complexity are removed through standardizing processes to improve repeatability and
predictability. In addition, IT operations processes that cross different IT management domain areas
will require organizational cooperation and support, and the establishment of process owners.
Business Impact: IT PA tools will have a significant effect on running IT operations as a business
by providing consistent, measurable and better-quality services at optimal cost. They will reduce
the human factor and associated risks by automating safe, repeatable processes, and will increase
IT operations efficiencies by integrating and leveraging the IT management tools needed to support
IT operations processes across IT domains.
Benefit Rating: High
Market Penetration: 1% to 5% of target audience
Maturity: Early mainstream
Sample Vendors: BMC Software; CA Technologies; Cisco; Citrix Systems; HP; iWave Software;
IBM; LANDesk; Microsoft; NetIQ; Network Automation; Singularity; UC4 Software; Unisys; VMware
Recommended Reading: "Best Practices for and Approaches to IT Operations Process
Automation"
"Run Book Automation Reaches the Peak of Inflated Expectations"
"RBA 2.0: The Evolution of IT Operations Process Automation"
"The Future of IT Operations Management Suites"
"IT Operations Management Framework 2.0"

Application Performance Monitoring


Analysis By: Jonah Kowall; Will Cappelli; Milind Govekar
Definition: Application performance monitoring (APM) tools were previously represented in the
Hype Cycle by three technologies: end-user monitoring tools, application management and
application transaction profiling. With this consolidation of APM products, we have selected the
position accordingly. Gartner's definition of APM is composed of five main functional dimensions:
1. End-user experience monitoring: Tracking the end-user application performance experience,
and the quality with which an application is performing, including the use of synthetic
transaction-based software robots, network-attached appliance-based packet capture and
analysis systems, endpoint instrumentation systems based on a classical manager-agent
approach, and special-purpose systems targeted at complex Internet Protocol (IP)-based services.

2. User-defined transaction profiling: Following a defined set of transactions as it traverses the
application stack and infrastructure elements that support the application. Possible methods
employed are automated transaction-centric event correlation and analysis, and transaction
tagging.

3. Application component discovery and modeling: Discovery of software and hardware
component interrelationships as user-defined transactions are executed.

4. Application component deep-dive monitoring: Technologies that allow for an in-depth view
of the critical elements that hold a modern, highly modular application stack together.

5. Application performance management database: Storage, correlation and analysis of
collected datasets to yield value.
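To make the first dimension concrete, the following sketch times a single synthetic transaction against a target URL, in the spirit of the "software robot" approach; the URL and response-time objective are assumptions for illustration, and a real product would schedule many such probes from multiple locations.

    # Sketch of synthetic end-user experience monitoring: one timed probe.
    # The target URL and SLO threshold are invented; network access is required.

    import time
    import urllib.request

    TARGET = "https://example.com/"   # hypothetical monitored service
    SLO_SECONDS = 2.0                 # assumed response-time objective

    def probe(url):
        """Issue one synthetic transaction; return (HTTP status, elapsed seconds)."""
        start = time.monotonic()
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
            status = resp.status
        return status, time.monotonic() - start

    status, elapsed = probe(TARGET)
    verdict = "within" if elapsed <= SLO_SECONDS else "breaches"
    print(f"HTTP {status} in {elapsed:.2f}s ({verdict} the assumed SLO)")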

Position and Adoption Speed Justification: Gartner has seen a rise in demand from clients
investing in these tools, as most enterprises continue their transformations from purely
infrastructure management to application management. In their journeys toward IT service
management, APM provides IT operations with valuable capabilities for the rapid isolation and
root cause analysis (RCA) of problems. The increasing adoption of private and public cloud computing is stimulating the
desire for more insight into application and user behavior. This journey will require collaboration
with, and coordination among, the application development teams and the IT infrastructure and
operations teams, which don't always have the same priorities and goals. The visibility of APM tools
within this segment increased significantly during the past several years, and is continuing to do so.
User Advice: Enterprises should use these tools to proactively measure application availability and
performance. Most enterprises will need to implement several types of technology from the
multidimensional model to satisfy the needs of different IT departments, as well as demands of
business users and management. This technology is particularly suited for highly complex
applications and infrastructure environments. Enterprises should take into consideration that, on
many occasions, they will need support from consultants and application development
organizations to implement these tools successfully. Many organizations start with the end-user
experience monitoring tools to first get a view of end-user or application-level performance.
Business Impact: APM tools are critical in the problem-isolation process, thus shortening mean
time to repair and improving service availability. They provide correlated performance data that
business users can utilize to answer questions about service levels and user populations, often in
the form of easily digestible dashboards. These tools are paramount to improving and
understanding service quality, as users interact with applications. They allow for multiple IT groups
to share captured data and assist users with advanced analysis needs.
Benefit Rating: High
Market Penetration: 20% to 50% of target audience
Maturity: Early mainstream
Sample Vendors: AppNeta; AppSense; Arcturus Technologies; ASG Software Solutions; Aternity;
BMC Software; CA Technologies; Compuware; Correlsense; dynaTrace; Endace; ExtraHop
Networks; HP; IBM; Idera; Inetco Systems; InfoVista; Keynote Systems; ManageEngine; Microsoft;
Nastel Technologies; NetScout Systems; Network Instruments; Opnet Technologies; OpTier;
Oracle; Precise; Progress Software; Quest Software; SL Corp; Triometric
Recommended Reading: "Magic Quadrant for Application Performance Monitoring"
"The Future of Application Performance Monitoring"
"APM in 2011: Top Trends"

"Keep the Five Functional Dimensions of APM Distinct"
"Magic Quadrant for IT Event Correlation and Analysis"

COBIT
Analysis By: George Spafford; Simon Mingay; Tapati Bandopadhyay
Definition: COBIT is an IT control framework used as part of IT governance to ensure that the IT
organization meets enterprise requirements. Although originally an IT audit tool, COBIT is
increasingly being used by business stakeholders and IT management to identify and create IT
control objectives that help mitigate risks, and for high-level self-assessment of IT organizations.
Process engineers in IT operations can leverage COBIT to identify controls to embed in processes
to better manage risks. Using COBIT may be part of an enterprise-level compliance program or an
IT process and quality program. COBIT is organized into four high-level domains (plan and organize,
acquire and implement, deliver and support, and monitor and evaluate) that are made up of 34 high-level processes, with a variable number of control objectives for each.
The focus of this high-level framework is on what must be done, not how to do it. For example, the
COBIT framework identifies a software release as a control objective, but it doesn't define the
processes and procedures associated with the software release. Therefore, IT operations
management typically uses COBIT as part of a mandated program in the IT organization, and to
provide guidance regarding the kind of controls needed to meet the program's requirements.
Process engineers can, in turn, leverage other standards, such as ITIL, for additional design details
to pragmatically use.
COBIT 4.1 was released in May 2007 by the Information Systems Audit and Control Association
(ISACA) to achieve greater consistency with its Val IT program, and to address some discrepancies
in COBIT v.4. Supplementary guidance consists primarily of spreadsheets and mappings to
other frameworks, such as ITIL. COBIT v.5 will be released in 2012 to integrate the COBIT, Val IT
and Risk IT frameworks, and to provide additional guidance.
Position and Adoption Speed Justification: COBIT will have a steadily increasing effect on IT
operations as IT operations organizations increasingly realize its benefits, such as more-predictable operations, and as more enterprises adopt it and issue mandates for IT operations to
comply with it. As a control framework, COBIT is well-established. Its indirect effect on IT
operations can be significant, but it's unlikely to be a frequent point of reference for IT operations
management. As typical IT operations groups become more familiar with the implications of COBIT,
and awareness and adoption increase, the framework will progress slowly along the Hype Cycle.
We saw an increase in client inquiry calls in 2010, and expect interest to increase as IT operations
process engineers increasingly understand how to leverage the framework.
User Advice: IT operations managers who want to assess their controls to better mitigate risks and
reduce variations, and are aiming toward clearer business alignments of IT services, should use
COBIT in conjunction with other frameworks, including ITIL and ISO 20000. Those IT operations
managers who want to gain insight into what auditors will look for, or into the potential implications
for compliance programs, should also take a closer look at COBIT. Any operations team facing a
demand for wholesale implementation should push back and focus its application in areas where
there are specific risks, in the context of its operation.
COBIT is better-positioned than ITIL in terms of managing IT operation's high-level risks and
controls; as such, enterprises that wish to put their IT service management program in the broader
context of a controls and governance framework should use COBIT. For example, in COBIT, the
business triggers the IT services demand scenario, with business goals determining the IT goals,
and the IT goals then determining the process goals and, subsequently, the activity goals.
These links can serve as audit trails to justify the IT activities and processes and services, and can
help build business cases around each of them at the different levels of detail as required. The
control strengths of COBIT are visible in the measurability of goals and process/service
performance (e.g., key goal indicators [KGIs]), defined as lagging indicators to measure whether
goals have been achieved, and the lead indicators of key performance indicators (KPIs) for planning
and setting targets on processes and services. Each COBIT process has links to business goals to
justify what it focuses on, how it plans to achieve the targets and how it can be measured (metrics).
An additional consideration is that service improvement programs that seek to leverage ITIL all too
frequently set themselves up as bottom-up, tactical, process engineering exercises, lacking a
strategic or business context. While ITIL encourages and provides guidance for a more strategic
approach, COBIT can help in achieving that, particularly by drawing business stakeholders into the
organizational change.
Business Impact: With a history as an auditing tool, COBIT is primarily concerned with reducing
risk. It affects all areas of managing the IT organization, including aspects of IT operations.
Management should review how the use of controls can help better manage risks and result in
improved performance. COBIT's usefulness has moved a long way beyond a simple audit tool,
particularly with the addition of the maturity models and the responsible, accountable, consulted
and informed (RACI) charts.
Benefit Rating: Moderate
Market Penetration: 20% to 50% of target audience
Maturity: Adolescent
Recommended Reading: "Leveraging COBIT for Infrastructure and Operations"
"Understanding IT Controls and COBIT"

Sliding Into the Trough


IT Service View CMDB
Analysis By: Ronni J. Colville; Patricia Adams
Definition: An IT service view configuration management database (CMDB) is a repository that has
four functional characteristics: service modeling and mapping, integration/federation, reconciliation,
and synchronization. A CMDB maintains the dependencies among various IT infrastructure
components, and visualizes the hierarchical and peer-to-peer relationships that comprise an IT
service that's delivered to a business or to IT customers. Thus, a prerequisite for a CMDB is the
identification of business-oriented IT services and their associated components and
interrelationships.
A CMDB provides a consolidated configuration view of various sources of discovered data (as well
as manual data and documentation), which are integrated and reconciled into a single IT service
view to assist with the impact analysis of pending changes. There are two approaches to defining
the services and their component relationships:

Manually input or imported from a static document or source that is manually maintained. This
is tedious and often fraught with errors.

Using an IT service dependency mapping tool to populate CMDBs with discovered relationship
data, which is then reconciled with other data sources to illustrate a broader view of the IT
services.

A CMDB also has synchronization capabilities that provide checks and balances to compare
configuration data in various IT domain tools with one or more designated "trusted sources," which
are used to populate the CMDB. In addition, the CMDB provides federated links in context with
other sources such as problem, incident and asset management, as well as documentation to
enable a deeper level of analysis regarding proposed change impacts. Once the IT services are
modeled and mapped, other IT management tools will be able to "leverage" these views to enhance
their capabilities (for example, business service management, data center consolidation projects
and disaster recovery).
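For illustration, the toy Python sketch below models the three ideas just described: configuration items (CIs), dependency relationships that form an IT service view, and reconciliation of CMDB attributes against a designated trusted source. All CIs, attributes and discovered values are invented.

    # Toy CMDB sketch: CIs, a dependency model and trusted-source reconciliation.

    cis = {
        "svc-online-banking": {"type": "service"},
        "app-banking-web":    {"type": "application", "version": "2.3"},
        "srv-web-01":         {"type": "server", "os": "RHEL 6.1"},
    }

    relationships = [  # (parent, relation, child)
        ("svc-online-banking", "depends_on", "app-banking-web"),
        ("app-banking-web", "depends_on", "srv-web-01"),
    ]

    discovered = {"srv-web-01": {"os": "RHEL 6.2"}}  # from the trusted discovery source

    def reconcile(cmdb, trusted):
        """Overwrite CMDB attributes with trusted discovered values; report drift."""
        for ci, attrs in trusted.items():
            for key, value in attrs.items():
                if cmdb[ci].get(key) != value:
                    print(f"drift on {ci}.{key}: {cmdb[ci].get(key)} -> {value}")
                    cmdb[ci][key] = value

    def impacted_by(ci):
        """Walk relationships upward to find CIs affected by a change to ci."""
        parents = [p for p, _, c in relationships if c == ci]
        return parents + [g for p in parents for g in impacted_by(p)]

    reconcile(cis, discovered)
    print("a change to srv-web-01 impacts:", impacted_by("srv-web-01"))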
ITIL v.3 introduced a new concept called configuration management system (CMS). This can be a
CMDB or a completely federated repository. The concept of CMS offers a varied approach to
consolidating a view of all the relative information pertaining to an IT service. CMS tooling is actually
the same as a CMDB; because federation is still a nonstandard and immature functionality, the
reality of a CMS is still not technically viable.
Position and Adoption Speed Justification: Most IT organizations implement a CMDB as they
progress along their ITIL journey, which usually begins with a focus on problem, incident and
change management, and then evolves into a CMDB. Subsequently, there is often a focus on
configuration or IT asset management. Foundational to a successful CMDB are mature change and
configuration management processes. IT organizations that do not have a focus on governing
change and tracking and maintaining accurate configurations will not be successful with a CMDB
implementation.
During the past 12 months, Gartner has seen an increase in successful implementations
(approximately 20% to 40%, based on continued polling) that have achieved business value by
modeling and mapping at least three IT services. This has improved the ability to assess the impact
of a change on one component and others in the IT services. CMDB implementations can take from
three to five years to establish, but have no "end date" because they are ongoing projects. CMDB
tools have been available since late 2005 and 2006. The combination of maturing tools and
maturing IT organization processes, as well as the realization that this type of project does not have
a quick ROI, has drawn out the planning, architecting and tool selection time frame. However, even
with longer time frames, IT organizations can achieve incremental benefits (for example, data center
visibility) while undergoing the implementation. Disaster recovery, business continuity management
planning and data center consolidation projects continue to be prevalent business drivers (along
with the main driver of improving change impact analysis) for justifying CMDB projects.
CMDB technology is foundational, and will provide input for other IT operational tools to provide a
trusted view of IT systems (for example, for asset planning activities). Implementing a CMDB
requires a significant amount of planning, which includes configuration process assessments and
organizational alignment. IT organizations must involve a broad set of stakeholders to ensure
cross-collaboration in defining the necessary configuration items that comprise a business service
view. This effort could take one to three years to complete. Because enterprise infrastructures will
include both internal and external resources (hybrid clouds), IT organizations will be challenged to
maintain a "living" source of the IT services their users consume, and to maintain change activity to
reduce and minimize disruption. A CMDB will play a key role in achieving this. As CMS and CMDB
technologies mature, they will become a critical part of an enterprise's trusted repository for
synchronizing the configuration service model with the real-time service model.
A CMDB is foundational and must be a prerequisite for a CMS. One significant inhibitor for CMS
"realization" is federation capability. Today, most successful implementations have fewer than three
sources of discovered data for integration and federation, because federation is still not robust
enough for true multivendor environments. Without standards of any significance being adopted by
CMDBs, CMS and the management data repository vendors that are suppliers of federated
information, IT organizations should continue to focus on CMDB implementations.
User Advice: Enterprises must have a clear understanding of the goals and objectives of their IT
service view CMDB projects, and have several key milestones that must be accomplished for a
successful implementation. Enterprises should guard against scope creep from the outset,
because it can result in significant delays. Enterprises lacking change and configuration management processes are likely
to establish inventory data stores that don't represent real-time or near-real-time data records.
Trusted source data and reconciliation are essential components that require comprehensive
processes and organizational changes. IT organizations must know what trusted data they have,
and what data will be needed to populate the CMDB and IT service models that will achieve their
goals. Only data that has ownership and a direct effect on a goal should be in the CMDB IT service
configuration models; everything else should be federated (e.g., financial and contractual data
should remain in the IT asset management repository, and incident tickets should remain with the IT
service desk).
Enterprises should develop an incremental approach to implementing an IT service view CMDB by
focusing on one or two IT services at a time, rather than trying to define them all at once. In many
scenarios, an IT service dependency mapping tool is a good place to start establishing a baseline of
relationships across applications, servers, databases and middleware, and networking components
and storage, and to start gaining insights into the current configuration. An IT service dependency
mapping tool also is an effective vehicle for automating the data population of the CMDB.

Business Impact: A CMDB affects nearly all areas of IT operations. It will benefit providers (of data)
and subscribers (of IT service views). It's a foundation for lowering the total cost of operations and
improving the quality of service. An IT service view CMDB implementation improves risk
assessment of proposed changes and can assist with root cause analyses. It also facilitates a near-real-time business IT service view.
Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: BMC Software; CA Technologies; HP; IBM Tivoli
Recommended Reading: "Top Five CMDB and Configuration Management System Market
Trends"
"Cloud Environments Need a CMDB and a CMS"
"Four Pitfalls of the Configuration Management Database and Configuration Management System"

Real-Time Infrastructure
Analysis By: Donna Scott
Definition: Real-time infrastructure (RTI) represents a shared IT infrastructure (across customers,
business units or applications) in which business policies and SLAs drive dynamic and automatic
allocation and optimization of IT resources (that is, services are elastic), so that service levels are
predictable and consistent, despite the unpredictable demand for IT services. Where resources are
constrained, business policies determine how resources are allocated to meet business goals. RTI
may be implemented in private and public cloud architectures, as well as hybrid environments
where data center and service policies would drive optimum placement of services and workloads.
Moreover, in all these configurations, RTI provides the elasticity functionality, as well as dynamic
optimization and tuning, of the runtime environment based on policies and priorities.
Position and Adoption Speed Justification: This technology is immature from the standpoint of
architecting and automating an entire data center and its IT services for RTI. However, point
solutions have emerged that optimize specific applications or specific environments, such as
dynamically optimizing virtual servers (through the use of performance management metrics and
virtual server live-migration technologies) and dynamically optimizing Java Platform, Enterprise
Edition (Java EE)-based shared application environments. It is also emerging in cloud solutions,
initially for optimizing placement of workloads or services upon startup based on pre-established
policies. Moreover, enterprises are implementing shared disaster recovery data centers, whereby
they dynamically reconfigure test/development environments to look like the production
environment for disaster recovery testing and for when disaster strikes. This type of architecture can typically
achieve recovery time objectives in the range of one to four hours after a disaster is declared.
Because of the advancement in server virtualization, RTI solutions are making some degree of
progress in the market, especially for targeted use cases where enterprises write specific
automation, such as to scale a website up/down and in/out. However, there is low market
penetration, primarily because of lack of modeling the service (inclusive of runtime policies and
triggers for elasticity), lack of standards and lack of strong service governors/policy engines in the
market. This leaves customers that desire dynamic optimization to integrate multiple technologies
together and orchestrate analytics with actions.
User Advice: Surveys of Gartner clients indicate that the majority of IT organizations view RTI
architectures as desirable for gaining agility, reducing costs and attaining higher IT service quality,
and that about 20% of organizations have implemented RTI for some portion of their portfolios.
Overall progress is slow for internal deployments of RTI architectures because of many
impediments, especially the lack of IT management process and technology maturity levels, but
also because of organizational and cultural issues.
It is also slow for public cloud services, where applications may have to be written to a specific and
proprietary set of technologies to get dynamic elasticity. We see technology as a significant barrier
to RTI, specifically in the areas of root cause analysis (required to determine what optimization
actions to take), service governors (the runtime execution engine behind RTI analysis and actions),
and integrated IT process/tool architectures and standards. However, RTI has taken a step forward
in particular focused areas, such as:

Dynamic provisioning of development/testing/staging and production environments

Server virtualization and dynamic workload movement

Reconfiguring capacity during failure or disaster events

Service-oriented architecture (SOA) and Java EE environments for dynamic scaling of
application instances

Specific and customized automation written for specific use cases, such as scaling up/down or
out/in a website that has variable demand
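As a minimal sketch of the last use case, assuming invented thresholds and a simulated demand curve, a service governor's scale-out/scale-in decision might look like the following; production policies would be far richer and driven by live monitoring data.

    # Sketch of policy-driven elasticity: compare a utilization metric to
    # thresholds and decide a target instance count. All values are assumed.

    POLICY = {"scale_out_above": 0.80, "scale_in_below": 0.30,
              "min_instances": 2, "max_instances": 10}

    def decide(utilization, instances, policy=POLICY):
        """Return the target instance count for the observed utilization."""
        if utilization > policy["scale_out_above"] and instances < policy["max_instances"]:
            return instances + 1
        if utilization < policy["scale_in_below"] and instances > policy["min_instances"]:
            return instances - 1
        return instances

    instances = 2
    for utilization in [0.55, 0.85, 0.90, 0.70, 0.25, 0.20]:  # simulated load
        instances = decide(utilization, instances)
        print(f"load {utilization:.2f} -> {instances} instances")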

Many IT organizations that have been maturing their IT management processes and using IT
process automation tools (aka run book automation) to integrate processes (and tools) together to
enable complex, automated actions are moving closer to RTI through these actions. IT
organizations desiring RTI should focus on maturing their management processes using ITIL and
maturity models (such as Gartner's ITScore for I&O Maturity Model), and their technology
architectures (such as through standardization, consolidation and virtualization). They should also
build a culture conducive to sharing the infrastructure, and should provide incentives such as
through reduced costs for shared infrastructures. Organizations should investigate and consider
implementing early RTI solutions in the public or private cloud or across data centers in a hybrid
implementation, which can add business value and solve a particular pain point, but should not
embark on data-center-wide RTI initiatives.
Business Impact: RTI has three value propositions expressed as business goals:

Reduced costs achieved by better, more-efficient resource use, and by reduced IT operations
management (labor) costs

Improved service levels achieved by dynamic tuning of IT services

Increased agility achieved by rapid provisioning of new services or resources, and scaling
capacity (up and down) of established services across both internally and externally sourced
data centers

Benefit Rating: Transformational


Market Penetration: 5% to 20% of target audience
Maturity: Emerging
Sample Vendors: Adaptive Computing; CA Technologies; IBM Tivoli; NetIQ; Univa; VMware
Recommended Reading: "Provisioning and Configuration Management for Private Cloud
Computing and Real-Time Infrastructure"
"Private Cloud Computing Ramps Up in 2011"
"Survey Shows High Interest in RTI, Private Cloud"
"Building Private Clouds With Real-Time Infrastructure Architectures"
"Cloud Services Elasticity Is About Capacity, Not Just Load"

IT Service Dependency Mapping


Analysis By: Ronni J. Colville; Patricia Adams
Definition: IT service dependency mapping tools enable IT organizations to discover, document
and track relationships by mapping dependencies among the infrastructure components, such as
servers, networks, storage and applications, that form an IT service. These tools are used primarily
for applications, servers and databases; however, a few discover network devices (such as
switches and routers), mainframe-unique attributes and virtual infrastructures, thereby presenting a
complete service map. Some tools offer the capability to track configuration change activity for
compliance. Customers are increasingly focused on tracking virtual servers and their relationships
to physical and other virtual systems, and this is becoming a differentiator among the tools. Some
tools can detect only basic relationships (e.g., parent to child and host to virtual machine), but
others can detect the application relationships across virtual and physical infrastructures.
Most tools are agentless, though some are agent-based, and will build either a topographical map
or tabular view of the interrelationships of the various components. Vendors in this category provide
sets of blueprints or templates for the discovery of various packaged applications (e.g., WebSphere
and Apache) and infrastructure components. The tools provide various methods for IT organizations
to develop blueprints or templates of internally developed or custom applications, which can then
be discovered with the IT service dependency mapping tools.
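For illustration only, the sketch below derives a rudimentary service map from observed network connections, which is the general spirit of agentless dependency discovery; the connection tuples and the port-to-role "blueprints" are invented assumptions, not any vendor's method.

    # Sketch: derive host-to-host dependencies from observed connections.

    from collections import defaultdict

    observed = [  # (source host, destination host, destination port) - sample data
        ("web-01", "app-01", 8080),
        ("web-02", "app-01", 8080),
        ("app-01", "db-01", 5432),
    ]

    PORT_BLUEPRINTS = {8080: "application server", 5432: "database"}  # assumed templates

    def build_map(connections):
        """Group downstream dependencies under each source host."""
        service_map = defaultdict(list)
        for src, dst, port in connections:
            role = PORT_BLUEPRINTS.get(port, "unknown component")
            service_map[src].append((dst, role))
        return service_map

    for host, deps in build_map(observed).items():
        for dst, role in deps:
            print(f"{host} -> {dst} ({role})")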
Position and Adoption Speed Justification: Enterprises struggle to maintain an accurate and
up-to-date view of the dependencies across IT infrastructure components, usually relying on data
manually entered into Visio diagrams and spreadsheets, which may not reflect a timely view of the
environment, if any view at all. Historically, there was no (near) real-time view of the infrastructure
components that made up an IT service, or of how these components interrelated. Traditional
discovery tools provide insight into individual components and basic peer-to-peer information, but
they do not provide the parent/child hierarchical relationship information about how an IT service
is configured that is needed to enable better impact analysis.
The last of the stand-alone, dependency mapping vendors was acquired in 2009 (BMC Software
acquired Tideway); the rest were acquired from 2004 through 2006 by vendors with IT service view
CMDB tools with a plan to jump-start the data population of the CMDB with the IT service or
application service models. However, for many enterprises, these tools still fall short in the area of
homegrown or custom applications. Although the tools provide functionality to develop the
blueprints that depict the desired state or a known logical representation of an application or IT
service, this task remains labor-intensive, which will slow enterprisewide adoption of the tools
beyond its primary use of discovery.
Over the last 18 months, two vendors have emerged with IT service dependency discovery
capability. One is an IT service management vendor, and one is a new BSM vendor. These new
tools have some capability for discovering relationships, but fall short in the depth and breadth of
blueprints and types of relationships (e.g., for mainframes and virtualized infrastructures). While they
don't compare competitively to the more mature tools, organizations with less complexity might find
them to be sufficient. Because no independent solutions have been introduced, or ones that are
(potentially) easier to implement, these new vendors' tools may be an accelerant to adoption.
To meet the demand of today's data center, IT service dependency mapping tools require
expanded functionality for breadth and depth of discovery, such as a broad range of storage
devices, virtual machines, mainframes and applications that crosses into the public cloud. While the
tools are expensive, they can be easily justified based on their ability to provide a discovered view
of logical and physical relationships for applications and IT services. They offer dramatic
improvements, compared with the prior manual methods. The adoption of these tools has increased
in the last 12 months, because new stakeholders (e.g., disaster recovery planners) and business
drivers with growing use cases (e.g., enterprise architecture planning, data center consolidation and
migration projects) have emerged. Therefore, market awareness and sales traction have improved.
User Advice: Evaluate IT service dependency mapping tools to address requirements for
configuration discovery of IT infrastructure components and software, especially where there is a
requirement for hierarchical and relationship discovery. The tools should also be considered as
precursors to IT service view CMDB initiatives. If the primary focus is for an IT service view, then be
aware that if you select one tool, the vendor is likely to try to thrust its companion IT service view
CMDB technology on you, especially if the CMDB is part of the underlying architecture of the
discovery tool. If the IT service dependency mapping tool you select is different from the CMDB,
then ensure that the IT service dependency mapping vendor has an adapter to integrate and
federate to the desired or purchased CMDB.
These tools can also be used to augment or supplement other initiatives, such as business service
management and application management, and other tasks that benefit from a near-real-time view
of the relationships across a data center infrastructure (e.g., configuration management, business
continuity). Although most of these tools aren't capable of action-oriented configuration
modification, the discovery of the relationships can be used for a variety of high-profile projects in
which a near-real-time view of the relationships in a data center is required, including compliance,
audit, disaster recovery and data center moves (consolidation and migration), and even in planning
activities by integrating with enterprise architecture tools for gap analysis. IT service dependency
mapping tools can document what is installed and where, and can provide an audit trail of
configuration changes to a server and application.
Large enterprises with change impact analysis as a business driver build IT service view CMDBs
(whereas SMBs prefer to use the tools for asset visibility), so the adoption rate of IT service DM
tools parallels that of IT service view CMDBs. Where the business driver is near-real-time data
center visibility, we have begun to see greater penetration in midsize and large organizations
without an accompanying CMDB project.
Business Impact: These tools will have an effect on high-profile initiatives, such as IT service view
CMDB, by establishing a baseline configuration and helping populate the CMDB. IT service
dependency mapping tools will also have a less-glamorous, but significant, effect on the day-to-day
requirements to improve configuration change control by enabling near-real-time change impact
analysis, and by providing missing relationship data critical to disaster recovery initiatives.
The overall value of IT service dependency mapping tools will be to improve quality of service by
providing a mechanism for understanding and analyzing the effect of change to one component and
its related components within a service. These tools provide a mechanism that enables a near-real-time view of relationships that previously would have been maintained manually with extensive time
delays for updates. The value is in the real-time view of the infrastructure so that the effect of a
change can be easily understood prior to release. This level of proactive change impact analysis
can create a more stable IT environment, thereby reducing unplanned downtime for critical IT
services, which will save money and ensure that support staff are allocated efficiently, rather than
fixing preventable problems. Using dependency mapping tools in conjunction with tools that can do
configuration-level changes, companies have experienced labor efficiencies that have enabled them
to manage their environments more effectively and to improve the stability of their IT services.
Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: BMC Software-Tideway; CA Technologies; HP; IBM; Neebula; Service-Now;
VMware-EMC
Recommended Reading: "Selection Criteria for IT Service Dependency Mapping Vendors"

Mobile Device Management


Analysis By: Leif-Olof Wallin; Terrence Cosgrove; Ronni J. Colville

Definition: Mobile device management (MDM) includes software that provides the following
functions: software distribution, policy management, inventory management, security management
and service management for smartphones and media tablets. MDM functionality is similar to that of
PC configuration life cycle management (PCCLM) tools; however, mobile-platform-specific
requirements are often part of MDM suites.
Position and Adoption Speed Justification: Many organizations use MDM tools that are specific
to a device platform or that manage a certain part of the life cycle (e.g., device lock or wipe),
resulting in the adoption of fragmented toolsets. We are now beginning to see more focus on and
adoption of MDM tools triggered by the attention and adoption of the iPad. While IT organizations
vary in their approaches to implementing and owning the tools that manage mobile devices (e.g.,
the messaging group, some other mobile group, the desktop group, etc.), there are still very few
that are managing the full life cycle across multiple device platforms. Organizations are realizing that
users are broadening their use of personal devices for business applications. In addition, many
organizations are using different ways to deploy MDM to support different management styles.
These factors will drive the adoption of tools to manage the full life cycle of mobile devices.
Gartner believes that mobile devices will increasingly be supported in the client computing support
group in most organizations, and become peers with notebooks and desktops from a support
standpoint. Indeed, some organizations are already replacing PCs with tablets for niche user
groups. An increasing number of organizations are looking for MDM functionality from PCCLM
tools.
Gartner has moved the position of MDM back (to the left) this year because there are new dynamics
that are affecting both the technology and user adoption. While MDM is not new, and some of the
technology used to manage mobile devices is not new, what has changed is that, now, IT
organizations will be looking to manage more types of mobile devices from a single management
framework (or tool). Some IT organizations will be able to extend this capability into their PCCLM
tools, as many of the functions will be similar; while, for others, MDM will be in a separate tool due
to organizational alignment challenges and the success or failure of their existing PCCLM tool.
User Advice: Organizations already manage notebooks similarly to office PCs; however, the needs
of smartphone and media tablet users must be assessed. If your MDM requirements are similar to
PCCLM tool capabilities, PCCLM tools should be leveraged wherever possible. Many PCCLM tools
do not have strong mobile device support, so third-party tools may be required for at least the next
two years to automate those functions and other mobile-device-specific functions, such as device
wipe.
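As a purely hypothetical sketch of the policy side of MDM, the following evaluates reported device attributes against a security policy and proposes an action; the attribute names, policy values and response choices are all invented for illustration.

    # Hypothetical MDM policy check: decide allow/lock/wipe per device.

    POLICY = {"min_os_version": (4, 3), "require_encryption": True}  # assumed policy

    devices = [  # invented device inventory records
        {"id": "tablet-017", "os_version": (4, 3), "encrypted": True},
        {"id": "phone-042",  "os_version": (4, 0), "encrypted": False},
    ]

    def evaluate(device, policy=POLICY):
        """Return 'allow', 'lock' or 'wipe' for a device under the policy."""
        if policy["require_encryption"] and not device["encrypted"]:
            return "wipe"  # strictest response; real policies would be configurable
        if device["os_version"] < policy["min_os_version"]:
            return "lock"
        return "allow"

    for d in devices:
        print(d["id"], "->", evaluate(d))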
Business Impact: As more users rely on mobile computing in their jobs, the number of handheld
devices and media tablets used for business purposes is growing, especially with the introduction
of the iPad. Therefore, MDM capabilities are likely to become increasingly important. Mobile devices
are being used more frequently to support business-critical applications, thus requiring more-stringent manageability to ensure secure user access and system availability. In this regard, MDM
tools can have material benefits to improve user productivity and device data backup/recovery.
Initially, the benefits will be visible mostly in sales force and workforce management deployments,
where improved device management can increase availability and productivity, as well as decrease
support costs. In the short term, MDM tools may add significant per-user and per-device costs to
the IT budget. Companies will be at odds to allocate funds and effort to put increasing numbers of
devices under management that seem far less expensive than notebooks and may be owned by the
user. The needs for security, privacy and compliance must be understood as factors beyond user
choice, and must be recognized as a cost of doing business in a "bring your own device" scenario.
Benefit Rating: Moderate
Market Penetration: 5% to 20% of target audience
Maturity: Early mainstream
Sample Vendors: AirWatch; BoxTone; Capricode; Excitor; FancyFon Software; Fiberlink
Communications; Fixmo; Fromdistance; Good Technology; Ibelem; McAfee; Mobile Active Defense;
MobileIron; Motorola Solutions; Odyssey Software; Smith Micro Software; SOTI; Sybase; Symantec;
Tangoe; The Institution; Zenprise
Recommended Reading: "Magic Quadrant for Mobile Device Management Software"
"Mobile Device Management 2010: A Crowd of Vendors Pursue Consumer Devicesin the
Enterprise"
"Use Managed Diversity to Support Endpoint Devices"
"The Five Phases of the Mobile Device Management Life Cycle"
"Microsoft's Mobile Device Management Solution Could Attain Long-Needed Focus"
"Mobile System Management Vendors Consolidate Across Configuration Markets"
"Toolkit Best Practices: Plan for Convergence of Mobile Security and Device Management"
"Toolkit: Are You Ready for the Convergence of Mobile and Client Computing?"
"Toolkit Decision Framework: Mobile Device Management in the Context of PC Management"
"Toolkit Decision Framework: Selecting Mobile Device Management Vendors"
"PC Life Cycle and Mobile Device Management Will Converge by 2012"

Business Service Management Tools


Analysis By: Debra Curtis; Will Cappelli; Jonah Kowall
Definition: Business service management (BSM) is a category of IT operations management
software that dynamically links the availability and performance events from underlying IT
infrastructure and application components to the business-oriented IT services that enable business
processes. To qualify for the BSM category, a product must support the definition, storage and
visualization of IT service topology via an object model that documents and maintains parent-child
relationships and other associations among the supporting IT infrastructure components. BSM
products must gather real-time operational status data from underlying applications and IT
infrastructure components via their services or through established monitoring tools, such as
distributed system- and mainframe-based event correlation and analysis (ECA), job scheduling and,
in some cases, application performance monitoring. BSM products then process the status data
against the object model, using potentially complex service health calculations and weightings,
rather than straightforward inheritance, to communicate real-time IT service status. Results are
displayed in graphical business service views, sometimes referred to as dashboards.
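To illustrate the difference between weighted health calculations and straightforward inheritance, the sketch below computes a service score as a weighted average of component scores; the components, weights and thresholds are invented assumptions.

    # Sketch of a weighted service-health calculation for a BSM-style view.

    components = {          # component -> (weight, status score: 1.0 ok .. 0.0 down)
        "order-entry-app": (0.5, 1.0),
        "payment-gateway": (0.3, 0.4),   # degraded component
        "reporting-db":    (0.2, 1.0),
    }

    def service_health(parts):
        """Weighted average of component scores; weights need not be equal."""
        total_weight = sum(w for w, _ in parts.values())
        return sum(w * score for w, score in parts.values()) / total_weight

    health = service_health(components)
    status = "green" if health > 0.9 else "yellow" if health > 0.6 else "red"
    print(f"order-to-cash service health: {health:.2f} ({status})")

With these invented weights, a degraded payment gateway pulls the service score to 0.82 (yellow) rather than forcing the whole service red, which is what simple status inheritance would produce.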
Position and Adoption Speed Justification: Every company wants to assess the impact of the IT
infrastructure and applications on its business processes, to match IT to business needs. However,
only 10% of large companies have developed their IT operational processes to the point where
they're ready to successfully deploy a BSM tool to achieve this. Adoption speed will continue to be
slow, but steady, as IT organizations improve their IT management process maturity. BSM is
starting to slide toward the Trough of Disillusionment. IT organizations are discovering that BSM
tools aren't easy to deploy, because a manual effort is required to identify the IT service
relationships and dependencies, or implementation requires that a configuration management
database (CMDB) be in place, which is not the case in most companies.
User Advice: Clients should choose BSM tools when they need to present a real-time, business-oriented dashboard display of service status, but only if they already have a mature, service-oriented IT organization. BSM requires that users understand the logical links between IT
components and the IT services they enable, as well as have good instrumentation for and
monitoring of these components.
Clients should not implement BSM to monitor individual IT infrastructure components or technology
domains. At its core, BSM provides the capability to manage technology as a business service,
rather than as individual IT silos. Thus, BSM should be used when IT organizations try to become
more business aligned in their IT service quality monitoring and reporting.
Business Impact: BSM tools help the IT organization present its business-unit customers with a
business-oriented display of how well IT services are performing in support of critical processes.
BSM tools identify the IT services affected by IT component problems, helping to prioritize
operational tasks and support efforts relative to business impact. By following the visual
representation of the dependencies, from IT services to business applications and IT infrastructure
components (including servers, storage, networks, middleware and databases), BSM tools can help
the IT department determine the root causes of service problems, thus shortening mean time to
repair, especially for critical business processes.
Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: ASG; BMC Software; CA Technologies; Compuware; eMite; HP; IBM Tivoli;
Interlink Software; Neebula; NetIQ; Quest Software; Tango/04; USU

Recommended Reading: "Business Service Management Versus Application Performance


Monitoring: Conflict and Convergence"
"Aligning ECA and BSM to the IT Infrastructure and Operations Maturity Model"
"Toolkit: How to Begin Business Service Management Implementation"

Configuration Auditing
Analysis By: Ronni J. Colville; Mark Nicolett
Definition: Configuration auditing tools provide change detection, configuration assessment
(comparing configuration settings with operational or security policies) and the reconciliation of
detected changes against approved requests for changes (RFCs) and mitigation. Discovered
changes can be automatically matched to the approved and documented RFCs that are governed
in the IT change management system, or to manually logged changes. Configuration settings are
assessed against company-specific policies (for example, the "golden image") or against industry-recognized security configuration assessment templates, which are used for auditing and security
hardening (such as those of the U.S. National Institute of Standards and Technology and Center for
Internet Security).
These tools focus on requirements that are specific to servers or PCs, but some can also address
network components, applications, databases and virtual infrastructures, including virtual machines
(VMs). Some of these tools provide change detection in the form of file integrity monitoring (FIM),
which can be used for Payment Card Industry (PCI) compliance, as well as support for other policy
templates (such as the U.S. Federal Information Security Management Act [FISMA] or the United
States Government Configuration Baseline [USGCB]). Exception reports can be generated, and
some tools can automatically return the settings to their desired values, or can block changes
based on approvals or specific change windows.
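A minimal file-integrity-monitoring sketch, in the spirit of the change detection described above, appears below; a temporary file stands in for a real configuration file, and a full product would reconcile detected drift against approved RFCs rather than simply printing it.

    # Sketch of FIM-style change detection: hash, baseline, detect drift.

    import hashlib
    import pathlib
    import tempfile

    def digest(path: pathlib.Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def audit(path: pathlib.Path, baseline: str) -> None:
        """Compare the current hash against the approved 'golden' baseline."""
        if digest(path) != baseline:
            print(f"unauthorized change detected: {path}")  # reconcile vs. RFCs here
        else:
            print(f"{path} matches the approved configuration")

    with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
        f.write("PermitRootLogin no\n")          # stand-in for a real config file
    config = pathlib.Path(f.name)

    baseline = digest(config)                    # capture the approved state
    audit(config, baseline)                      # no drift yet
    config.write_text("PermitRootLogin yes\n")   # simulate an unauthorized change
    audit(config, baseline)                      # drift detected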
Position and Adoption Speed Justification: Broad configuration change detection capabilities are
needed to guarantee system integrity by ensuring that all unauthorized changes are discovered and
potentially remediated. Configuration auditing also has a major external driver (regulatory
compliance) and an internal driver (improved availability). Implementation of the technology is gated
by the process maturity of the organization. Prerequisites include the ability to define and implement
configuration standards. Although a robust, formalized and broadly adopted change management
process is desirable, these tools offer significant benefits for tracking configuration change activity
without automating change reconciliation. Without the reconciliation requirement, other tools (e.g.,
operational configuration tools or security configuration assessment tools) can be considered for
configuration auditing and can broaden the vendor landscape.
Configuration auditing continues to be one of the top three drivers for adopting server configuration
automation for three reasons. First, there are more tools available with a focus on varying levels and
capabilities for configuration auditing (operational-based and security-based). Second, there is a
heightened awareness of security vulnerabilities, and, third, there continues to be an increase in the
number of changes and types of changes being made across an IT infrastructure. Compounding
these is the growing number of audits that IT organizations need to be prepared for across a variety
of industries. These conditions are compelling IT organizations to implement mechanisms to track
changes (to ensure there's no negative impact on availability), and to audit for missing patches or
other vulnerabilities.
Configuration auditing tools are most often bought by those in operational system administration
roles (e.g., system administrators and system engineers). In some cases, these tools are bought by
those responsible for auditing. Security administrators often implement subset functions of
configuration auditing (security configuration assessment and/or file integrity monitoring), which are
capabilities provided by a variety of security products.
The adoption of configuration auditing tools will continue to accelerate, but point-solution tools will
continue to be purchased to address individual auditing and assessment needs. The breadth of
platform coverage (e.g., servers, PCs and network devices) and policy support varies greatly among
the tools, especially depending on whether they are security-oriented or operations-oriented.
Therefore, several tools may end up being purchased throughout an enterprise, depending on the
buying center and the specific functional requirements.
User Advice: Develop sound configuration and change management practices in your organization
before introducing configuration auditing technology. Greater benefits can be achieved if robust
change management processes are also implemented, with the primary goal of becoming proactive
(before the change occurs) versus reactive (tracking changes that violate policy, introduce risk or
cause system outages). Process development and technology deployment should focus on the
systems that are material to the compliance issue being solved; however, broader functional
requirements should also be evaluated, because many organizations can benefit from more than
one area of focus, and often need to add new functions within 12 months.
Define the specific audit controls that are required before configuration auditing technology is
selected, because each configuration auditing tool has a different focus and breadth (for example,
security regulation, system hardening, application consistency and operating system consistency).
IT system administrators, network administrators or system engineers should evaluate
configuration auditing tools to maintain operational configuration standards and provide a reporting
mechanism for change activity. Security officers should evaluate the security configuration
assessment capabilities of incumbent security technologies to conduct a broad assessment of
system hardening and security configuration compliance that is independent of operational
configuration auditing tools.
Business Impact: Not all regulations provide a clear definition of what constitutes compliance for IT
operations and production support, so businesses must select reasonable and appropriate controls,
based on reasonably anticipated risks, and build a case that their controls are correct for their
situations. Reducing unauthorized change is part of a good control environment. Define the
necessary audit controls before selecting a product, but look broadly across your infrastructure to
ensure that the appropriate tool is selected. Although configuration auditing has been tasked
individually in each IT domain, as enterprises begin to develop an IT service view, configuration
reporting and remediation (as well as broader configuration management capabilities) will ensure
reliable and predictable configuration changes and offer policy-based compliance with audit
reporting.

Benefit Rating: High


Market Penetration: 20% to 50% of target audience
Maturity: Mature mainstream
Sample Vendors: BMC Software (BladeLogic); Tripwire; VMware
Recommended Reading: "Server Configuration Baselining and Auditing: Vendor Landscape"
"Market Trends and Dynamics for Server Provisioning and Configuration Management Tools"
"Security Configuration Management Capabilities in Security and Operations Tools"

Advanced Server Energy Monitoring Tools


Analysis By: Rakesh Kumar; John R. Phelps; Jay E. Pultz
Definition: Energy consumption in individual data centers is increasing rapidly, by 8% to 12% per
year. The energy is used for powering IT systems (for example, servers, storage and networking
equipment) and the facility's components (for example, air-conditioning systems, power distribution
units and uninterruptible power supply systems). The increase in energy consumption is driven by
users installing more equipment, and by the increasing power requirements of high-density server
architectures.
While data center infrastructure management (DCIM) tools monitor and model energy use across
the data center, server-based energy management software tools are specifically designed to
measure the energy use within server units. They are normally an enhancement to existing server
management tools, such as HP Systems Insight Manager (HP SIM) or IBM Systems Director. These
software tools are critical to gaining accurate and real-time measurements of the amount of energy
a particular server is using. This information can then be fed into a reporting tool or into a broader
DCIM toolset. The information will also be an important trigger for the real-time changes that will
drive real-time infrastructure. Hence, for example, a change in energy consumption may drive a
process to move an application from one server to another.
Position and Adoption Speed Justification: Server vendors have developed sophisticated internal
energy management tools during the past three years. However, the tools are vendor-specific, and
are often seen by the vendors as a source of competitive advantage over rival hardware suppliers.
In reality, they provide pretty much the same information, and it's the use of that information in
broader system management or DCIM tools that generates enhanced user value. For example,
using the energy data to provide the metrics for energy-based chargeback is beginning to resonate
with users, but requires not just server-based energy management tools, but also the use of
chargeback tools.
User Advice: In general, users should start deploying appropriate tools to measure energy
consumption in data centers at a granular level. This includes information at the server, rack and
overall site levels. Use this information to manage data center capacity, including floor space layout
of new hardware, and for managing costs through virtualization and consolidation programs. Users
should acquire energy management tools that report power and energy consumption efficiency
according to power usage effectiveness (PUE) metrics as a measure of data center efficiency.
Specifically for servers, users need to ensure that all new systems have sophisticated energy
management software tools built into the management console. Users should ensure that the
functionality and maturity of these tools are part of the selection process. We also advise users to
give more credit to tools that provide output in a standard fashion that is easily used by the DCIM
products.
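As a simple illustration of the PUE metric mentioned above (total facility energy divided by IT equipment energy), consider the following sketch; the monthly figures are hypothetical:

    def pue(total_facility_kwh, it_equipment_kwh):
        """Power usage effectiveness: total facility energy / IT equipment energy."""
        return total_facility_kwh / it_equipment_kwh

    # Hypothetical monthly figures for one site.
    total_kwh = 180_000   # IT load plus cooling, power distribution and UPS losses
    it_kwh = 100_000      # servers, storage and network equipment

    print(f"PUE = {pue(total_kwh, it_kwh):.2f}")   # 1.80; closer to 1.0 is better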
Business Impact: Server-based energy management software tools will evolve in functionality to
help companies proactively manage energy costs in data centers. They will continue to become
instrumental in managing the operational costs of hardware.
Benefit Rating: Moderate
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: Dell; HP; IBM

Network Configuration and Change Management Tools


Analysis By: Debra Curtis; Will Cappelli; Jonah Kowall
Definition: Network configuration and change management (NCCM) tools focus on discovering and
documenting network device configurations; detecting, auditing and alerting on changes;
comparing configurations with the policy or "gold standard" for that device; and deploying
configuration updates to multivendor network devices.
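The core audit step in the definition above, comparing a device's running configuration with its "gold standard," can be sketched with nothing more than a text diff; the configuration lines are hypothetical:

    import difflib

    gold_standard = [
        "hostname BRANCH-RTR",
        "service password-encryption",
        "logging host 10.0.0.50",
    ]
    running_config = [
        "hostname BRANCH-RTR",
        "logging host 10.0.0.99",
    ]

    # An NCCM tool would alert on these deltas, or push the gold standard back.
    for line in difflib.unified_diff(gold_standard, running_config,
                                     fromfile="gold", tofile="running",
                                     lineterm=""):
        print(line)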
Position and Adoption Speed Justification: NCCM has primarily been a labor-intensive, manual
process that involves remote access (for example, telneting) to individual network devices and
typing commands into vendor-specific command-line interfaces that are fraught with possibilities
for human error, or creating homegrown scripts to ease retyping requirements. Enterprise network
managers rarely considered rigorous configuration and change management, compliance audits or
disaster recovery rollback processes when executing network configuration alterations, even though
they are the things that often cause network problems. However, corporate audit and compliance
initiatives are forcing a shift in requirements.
A new generation of NCCM vendors has created tools that operate in multivendor environments,
enable automated configuration management and bring more-rigorous adherence to the change
management process, as well as provide compliance audit capabilities. The market has progressed
to the point that many of these startups have been acquired, and new vendors have entered the
market using various angles to differentiate themselves, such as appliance-based products, cloud-based alternatives, integration with security, out-of-band device management and free entry-level
products to form a basis for upselling.


Nonetheless, NCCM tools are nearing the Trough of Disillusionment, but it is not because of the
tools themselves, which work well and can deliver strong benefits to a network management team.
The network configuration management discipline is held back by network managers who are often
reluctant to change their standard operating procedures. Network configuration management is
often practiced by router gurus who are the only ones familiar with the arcane command-line
interfaces of their various network devices, and who may view that exclusive familiarity as job
security. It takes a top-down effort from senior IT
management and a change in personnel performance review metrics to convince network managers
of the business importance of documented network device configuration policies, rigorous change
management procedures and tested disaster recovery capabilities.
User Advice: Replace manual processes with automated NCCM tools to monitor and control
network device configurations, thus improving staff efficiency, reducing risk and enabling the
enforcement of compliance policies. Prior to investing in tools, establish standard network device
configuration policies to reduce complexity and enable more-effective automated change. NCCM
tends to be a discipline unto itself; however, in the future, it must increasingly be considered part of
the configuration and change management processes for an end-to-end IT service, and viewed as
an enabler for the real-time infrastructure (RTI). This will require participation in the strategic,
companywide change management process (usually implemented through IT service desk tools)
and integration with configuration management tools for other technologies, such as servers and
storage. In addition, network managers need to gain trust in automated tools before they let any
product go off and perform a corrective action without human oversight. With cost minimization and
service quality maximization promised by new, dynamically virtualized, cloud-based RTI,
automation is becoming a prerequisite, because humans can no longer keep up with problems and
changes manually.
Business Impact: These tools provide an automated way to maintain network device
configurations, offering an opportunity to lower costs, reduce human error and improve compliance
with configuration policies.
Benefit Rating: Moderate
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: AlterPoint; BMC Software-Emprisa Networks; EMC-Voyence; HP-Opsware; IBM-Intelliden; Infoblox-Netcordia; ManageEngine; SolarWinds
Recommended Reading: "MarketScope for Network Configuration and Change Management"

Server Provisioning and Configuration Management


Analysis By: Ronni J. Colville; Donna Scott
Definition: Server provisioning and configuration management is a set of tools focused on
managing the configuration life cycle of physical and virtual server environments. Although these
tools manage the configuration changes to the virtual server software stack, managing virtual
servers adds new dynamics that have not yet been adequately addressed by larger suppliers (for
example, physical to virtual [P2V], clones, templates, security configuration and system hardening),
but for which point solutions have emerged. Some suppliers offer functionality for the entire life
cycle across physical and virtual servers, others offer specific point solutions in one
or two areas, and still others focus solely on virtual servers (with no functionality to manage the
configuration of physical servers). Moreover, this is foundational technology for public and private
cloud computing, as it enables server and application provisioning and maintenance. The main
categories for managing servers (physical and virtual) are:
Server provisioning: Provisioning has historically focused on bare-metal installation of a new
physical server, as well as on deploying new applications to a server. This is usually performed with
imaging, scripting response files for unattended installations or by leveraging the platform vendor's
utilities (for example, Solaris's JumpStart or Linux's Kickstart). With virtual servers, you start with
provisioning the hypervisor (which, in many respects, is like bare-metal provisioning of an OS). Once
the physical server is installed with a hypervisor, virtual machines (VMs) need to be provisioned.
Provisioning VMs is done in several ways, including using similar methods as for physical servers
for the software stack, but there are also new methods (because of the platform capabilities of the
hypervisor). For example, you may choose to provision a virtual server from a physical server (P2V
migration), or VMs can be created from templates, from hardware- and virtualization-independent
images, or VMs can be created "empty" and then built layer by layer through unattended
installations. Provisioning the new guest can be done either from the hypervisor management
console (for example, vCenter), or through an existing server provisioning and configuration
management tool (for example, from BMC Software or HP) through APIs to the hypervisor
management console (a minimal sketch of this step follows this list of categories).
Application provisioning and configuration management (including patch management): This
includes a broad set of multiplatform functionality to discover and provision (that is, package,
deploy and install) OSs and application software; these tools also can make ongoing updates to
OSs or applications (for example, patches, new versions and new functionality), or they can update
configuration settings. This functionality applies to both physical and virtual servers; it often requires
a preinstalled agent for continued updates. Moreover, virtual servers have an additional nuance for
patch management: specifically, the need for patching offline VMs (in addition to online VMs).
Application provisioning and configuration management could be augmented by application release
automation (to deploy application artifacts) and is represented by a separate technology profile.
Inventory/discovery, configuration modeling, audit and compliance: This enables the discovery
of software, hardware and virtual servers; some can discover dependency relationships across
servers and applications. Using models of application and OS configuration settings (that is, the
desired state, or gold standard), these tools can report on variations, and may be able to remediate
them by modifying the actual state back to what the model requires. This applies to application
settings as well as to security configuration settings based on guidelines from bodies such as the
U.S. National Institute of Standards and Technology (NIST), the Center for Internet Security (CIS)
and the U.S. National Security Agency (NSA). For virtual servers, system-hardening guidelines for the hypervisor would
also apply. Moreover, additional requirements for virtual servers include dependencies on the
creation or lineage of the VMs, as well as the relationships between the VM and the physical
machines on which it is run (which vary due to mobility).
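As a minimal sketch of the server provisioning category described above, the following illustrates requesting a new VM from a template through a hypervisor manager's API. The endpoint, payload fields and response shape are hypothetical; consult your hypervisor vendor's actual API documentation.

    import requests  # third-party HTTP library

    MANAGER = "https://hypervisor-mgr.example.com/api"   # hypothetical endpoint
    HEADERS = {"Authorization": "Bearer <session-token>"}

    payload = {
        "template": "rhel6-base-v3",    # a hardened, standardized image
        "name": "app-web-07",
        "cpu": 2,
        "memory_mb": 4096,
        "network": "prod-vlan-120",
    }

    # Request creation of a new VM from the template.
    resp = requests.post(f"{MANAGER}/vms", json=payload, headers=HEADERS)
    resp.raise_for_status()
    print("provisioning task:", resp.json()["task_id"])   # hypothetical field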


Position and Adoption Speed Justification: Server provisioning and configuration management
tools continue to mature in depth of function, as well as in integration with adjacent technologies
(such as change management, dependency mapping and run book automation). However, these
tools have been slow to add deep functionality for virtual server configuration management, and
new vendors that focus only on virtual servers have emerged and are included in this category. Although the
tools are progressing, the configuration policies, organizational structures and processes inside the
typical enterprise are causing implementations to move more slowly. There has been an uptick in
adoption of minisuites (not the entire life cycle) by both midsize and large enterprises to solve
specific problems (e.g., multiplatform provisioning, and compliance-driven audits, including improving
patch management). Therefore, adoption is improving, but to get real value from these tools
requires a level of standardization in the software stack. Many organizations don't have this level of
standardization, and build servers on a custom basis. Moreover, even if organizations begin to
standardize, for example, in the OS area, they often have different application, middleware and
database management system groups that do not employ the same level of rigor in standards.
Fortunately, some IT organizations are benefiting from observing public cloud providers and how
their level of standardization can enable rapid provisioning, and are seeking to internalize some of
these attributes. In the meantime, however, server provisioning and configuration management
tools are nearing the Trough of Disillusionment, not so much due to the tools, but due to IT
organizations' inability to standardize and use the tools broadly across the groups supporting the
entire software stack.
User Advice: With an increase in the frequency and number of changes to servers and applications,
IT organizations should emphasize the standardization of technologies and processes to improve
and increase availability, as well as to succeed in using server provisioning and configuration
management tools for physical and virtual servers. Besides providing increased quality, these tools
can reduce the overall cost to manage and support patching and rapid deployments, and VM policy
enforcement, as well as provide a mechanism to monitor compliance. Evaluation criteria should
include technologies that provide active capability (installation and deployment) and ongoing
maintenance, as well as auditing and reporting, and should include the capability to address the
unique requirements of virtual servers and VM guests. When standards have emerged, we
recommend that organizations implement these tools to automate manual tasks for repeatable,
accurate and auditable configuration change control. The tools help organizations gain efficiencies
in moving from a monolithic imaging strategy to a dynamic layered approach to incremental
changes. When evaluating products, organizations need to:

Evaluate functionality across the life cycle, and not just the particular pain point at hand.

Consider physical and virtual server provisioning and configuration management requirements
together.

Conduct rigorous testing to ensure that functionality is consistent across required platforms.

Ensure that tools address a variety of compliance requirements.

Business Impact: Server provisioning and configuration management tools help IT operations
automate many server provisioning tasks, thereby lowering the cost of IT operations, increasing
application availability and increasing the speed of modifications to software and servers. They also
provide a mechanism for enforcing security and operational policy compliance.

Benefit Rating: High


Market Penetration: 20% to 50% of target audience
Maturity: Early mainstream
Sample Vendors: BMC Software (BladeLogic); CA Technologies; HP (Opsware); IBM; ManageIQ;
Microsoft; Novell; Tripwire; VMware
Recommended Reading: "Server Provisioning Automation: Vendor Landscape"
"Provisioning and Configuration Management for Private Cloud Computing and Real-time
Infrastructure"
"Server Configuration Baselining and Auditing: Vendor Landscape"
"Market Trends and Dynamics for Server Provisioning and Configuration Management Tools"

ITIL
Analysis By: George Spafford; Simon Mingay; Tapati Bandopadhyay
Definition: ITIL is an IT service management framework, developed under the auspices of the
U.K.'s Office of Government Commerce (OGC), that provides process guidance on the full life cycle
of defining, developing, managing, delivering and improving IT services. It is structured into five
main books: Service Strategy, Service Design, Service Transition, Service Operation and Continual
Service Improvement. ITIL does not provide specific advice on how to implement or measure the
success of the implementation; rather, that is something that an organization should adapt to its
specific needs.
Position and Adoption Speed Justification: ITIL has been evolving for more than 20 years. It is
well-established as the de facto standard in service management, and is embedded in the formal
service management standard of ISO 20000. The current release, v.3, was introduced in 2007.
Based on our client discussions and conference attendee polls, uptake is slowly growing.
Organizations beginning service improvements are starting with v.3, and many groups that were
using the previous version, v.2, are in various phases of transition. As a result, Gartner has
consolidated its historical v.2 and v.3 Hype Cycle entries into a single entry for ITIL overall.
The current version of ITIL covers the entire IT service life cycle more comprehensively. The ITIL life
cycle begins with the development of strategies relating to the IT services that are needed to enable
the business. ITIL then introduces processes concerned with the design of those IT services, their
transition into production and ongoing operational support, and then continual service
improvement. In general, Service Transition and Service Operation are the most commonly used
books. ITIL v.3 has also incorporated more-proactive processes, like event management and
knowledge management, in addition to reactive ones, such as the service desk function and the
incident management process.


Despite some claims to the contrary, ITIL has a role to play in operational process design,
regardless of how it is sourced, even for cloud computing. All solutions require a blending of
people, processes and technologies. The question isn't whether processes are relevant; rather,
there must be an understanding of what is necessary to properly underpin the services that IT is
offering to the business. Based on these objectives, the relevant processes should be identified,
designed and transitioned into production, and then subject to continual improvement. ITIL will
continue to serve as a source of process design guidance for process engineers to draw from.
Overall, we continue to see a tremendous span of adoption and maturity levels. Some organizations
are just embarking on their journey, for a variety of reasons, whereas others are well on their way
and pursuing continual improvement, integrating other process improvement frameworks, such
as Six Sigma and lean manufacturing. In fact, a combination of process guidance from various
sources tends to do a better job of addressing requirements than any framework in isolation.
User Advice: ITIL provides guidance on putting IT service management into a strategic context and
provides high-level guidance on reference processes. To optimize service improvements, IT
organizations must first define objectives, and then pragmatically leverage ITIL during the design of
their own unique processes. ITIL has been widely adopted around the world, with extensive
supporting services and technologies. Finding staff who have worked with, or been formally trained
in, ITIL is now relatively easy.
An update to the ITIL guidelines will be forthcoming, either in late 2011 or early 2012. We
recommend that groups proceed with pragmatic adoption of the current version, and then leverage
the new guidance once it is released.
Business Impact: ITIL provides a framework of processes for the strategy, design, transition,
operation and continual improvement of IT services. IT organizations desiring more effective and
efficient outcomes should immediately evaluate this framework for applicability. Most IT
organizations need to start or continue the transition from their traditional technology and asset
focus to a focus on services and service outcomes. IT service management is a critical discipline in
achieving that change, and ITIL provides useful reference guidance for IT management to draw
from.
Benefit Rating: Transformational
Market Penetration: 20% to 50% of target audience
Maturity: Adolescent
Recommended Reading: "How to Leverage ITIL for Process Improvement"
"Top Six Foundational Steps for Overcoming Resistance to ITIL Process Improvement"
"Don't Just Implement CMMI and ITIL: Improve Services"
"Evolving Roles in the IT Organization: The IT Product Manage"


Hosted Virtual Desktops


Analysis By: Mark A. Margevicius; Ronni J. Colville; Terrence Cosgrove
Definition: A hosted virtual desktop (HVD) is a full, thick-client user environment, which is run as a
virtual machine (VM) on a server and accessed remotely. HVD implementations comprise server
virtualization software to host desktop software (as a server workload), brokering/session
management software to connect users to their desktop environment, and tools for managing the
provisioning and maintenance (e.g., updates and patches) of the virtual desktop software stack.
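The brokering role is what distinguishes an HVD stack from plain server virtualization. The sketch below, with hypothetical pool data and field names, shows the essential mapping a broker performs between an authenticated user and that user's desktop VM:

    # Hypothetical assignment of users to desktop VMs and their hosts.
    DESKTOP_POOL = {
        "jsmith": {"vm": "hvd-0231", "host": "esx14.example.com", "port": 3389},
        "mlopez": {"vm": "hvd-0232", "host": "esx07.example.com", "port": 3389},
    }

    def broker_session(user):
        """Return the remote-display endpoint for the user's assigned VM."""
        assignment = DESKTOP_POOL.get(user)
        if assignment is None:
            raise LookupError(f"no desktop assigned to {user}")
        # A real broker would also power on the VM, check entitlements and
        # negotiate the display protocol before handing back an endpoint.
        return assignment["host"], assignment["port"]

    print(broker_session("jsmith"))   # ('esx14.example.com', 3389)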
Position and Adoption Speed Justification: An HVD involves the use of server virtualization to
support the disaggregation of a thick-client desktop stack that can be accessed remotely by its
user. By combining server virtualization software with a brokering/session manager that connects
users to their desktop instances (that is, the operating system, applications and data), enterprises
can centralize and secure user data and applications, and manage personalized desktop instances
centrally. Because only the presentation layer is sent to the accessing device, a thin-client terminal
can be used. For most early adopters, the appeal of HVDs has been the ability to "thin" the
accessing device without significant re-engineering at the application level (as is usually required for
server-based computing).
While customers implementing HVDs cite many reasons for deployments, three important factors
have contributed to the increased focus on HVDs: the desire to implement new client
computing capabilities in conjunction with Windows 7 migrations, the desire for device choice (in
particular, iPad use), and the uptick in adoption of virtualization in data centers, where there is now
more capacity for virtualized systems and greater experience in the required skills. Additionally,
during the past few years, adoption of virtual infrastructures in enterprise data centers has
increased (up from 10% to 20% to about 40% to 70%). With this increase comes both a level of
maturity and an understanding of how to better utilize the technology. This awareness helps with
implementations of HVD where both desktop engineers and data center administrators come
together for a combined effort.
Early adoption was hindered by several factors, one main one being licensing compliance issues for
the Windows client operating system, but that has been resolved through Microsoft's Windows
Virtual Desktop Access (VDA) licensing offerings. Even with Microsoft's reduced license costs for
the Windows OS (offered in mid-2010 by adding VDA rights to Software Assurance), which enable an
HVD image to be accessed from a primary and a secondary device for a single license fee, other
technical issues have hindered mainstream adoption. Improvements in brokering software and
remote-access protocols will continue through 2011, extending the range of desktop user
scenarios that HVDs can address; yet, adoption will remain limited to a small percentage of the
overall desktop installed base.
Since late 2007, HVD deployments have grown steadily, reaching around 6 million at the end of
2010. Because of the constraints previously discussed, broad applicability of HVDs has been limited
to specific scenarios, primarily structured-task workers in call centers, kiosks, trading floors and
secure remote access; about 50 million endpoints, out of the total of 700 million desktops, is still
the current target population. Through the second half of 2011 and into 2012, we expect more-general
deployments to begin. Inhibitors to general adoption involve the cost of the data center
infrastructure that is required to host the desktop images (servers and storage, in particular) and
network constraints. Even with the increased adoption of virtual infrastructure, cost-justifying HVD
implementations remains a challenge, because of HVD cost comparisons to those of PCs.
Additionally, availability of the skills necessary to manage virtual desktops is also an ongoing
challenge. Furthermore, deploying HVDs to mobile/offline users remains a challenge, despite the
promises of offline VMs and advanced synchronization technologies.
Through 2011, broader manageability of HVD VMs will improve, as techniques to reduce HVD
storage volumes lead to new mechanisms for provisioning and managing HVD images by
segmenting them into more-isolated components (including operating systems, applications,
persistent personalization and data). These subsequent manageability improvements will extend the
viability of HVD deployments beyond the structured-task worker community, first to desk-based
knowledge workers, then to new use cases, such as improved provisioning and deprovisioning,
contractors, and offshore developers.
HVD marketing has promised to deliver diminishing marginal per-user costs, due to the high level of
standardization and automation required for successful implementation; however, this is currently
only achievable for persistent users whose images remain intact, which is a small use case of the overall
user population. As other virtualization technologies mature (e.g., brokers and persistent
personalization), this restraint will be reduced. This creates a business case for organizations that
adopt HVDs to expand their deployments, as soon as the technology permits more users to be
viably addressed. Enterprises that adopt HVDs aggressively will see later adopters achieve superior
results for lower costs, but will also need to migrate to new broker and complementary
management software as products mature and standards emerge. This phenomenon is set to
further push HVDs into the Trough of Disillusionment in late 2011.
User Advice: Unless your organization has an urgent requirement to deploy HVDs immediately for
securing your environment or centralizing data management, wait until late 2011 before initiating
deployments for broader (mainstream) desktop user scenarios. Through 2011, all organizations
should carefully assess the user types for which this technology is best-suited, with broader
deployments happening through 2012. Clients that make strategic HVD investments now will
gradually build institutional knowledge. These investments will allow them to refine technical
architecture and organizational processes, and to grow internal IT staff expertise before IT is
expected to support the technology on a larger scale through 2015. You will need to balance the
benefits of centralized management with the additional overhead of the infrastructure and resource
costs. Customers should recognize that HVDs may resolve some management issues, but they will
not become panaceas for unmanaged desktops. In most cases, promised reductions in total cost of
ownership will not be significant and will require initial capital expenditures to achieve. The best-case scenario for HVDs continues to be for securing and centralizing data management or for
structured-task users.
Organizations must optimize desktop processes, IT staff responsibilities and best practices to fit
HVDs, just as organizations did with traditional PCs. Leverage desktop management processes for
lessons learned. The range of users and applications that can be viably addressed through HVDs
will grow steadily through 2011. Although the user population is narrow, it will eventually include
mobile/offline users as well. Organizations that deploy HVDs should plan for growing viability across
their user populations, but they should be wary of rolling out deployments too quickly. Diligence
should be employed in testing to ensure a good fit of HVD capabilities with management
infrastructure and processes, and integration with newer management techniques (such as
application virtualization and software streaming). Visibility into future product road maps from
suppliers is essential.
Business Impact: HVDs provide mechanisms for centralizing a thick-client desktop PC without re-engineering each application for centralized execution. This appeals to enterprises on the basis of
manageability and data security.
Benefit Rating: High
Market Penetration: 1% to 5% of target audience
Maturity: Adolescent
Sample Vendors: Citrix Systems; NEC; Parallels; Quest Software; Red Hat; VMware

PC Application Streaming
Analysis By: Terrence Cosgrove
Definition: Application streaming is a PC application delivery technology that allows users to
execute applications as they are delivered. The tools can deliver a shortcut that contains enough of
the application's resources to get the user started. As the user requests new functionality, the
necessary files are delivered in the background and loaded into memory. Users can also access the
application at a remote location, such as a Web portal or a network share. There are several delivery
options:

No cache: The full application is streamed to the target PC each time it's launched by the
user; nothing is cached for subsequent use.

Partial cache: Only the application components called by the user are streamed to the target
PC. Unused functions and menu components are not sent. This option minimizes network use,
but only previously requested functionality is available for offline use.

Complete cache: The full application is streamed to the target PC when first requested by
the user. The code required to start the application, as well as application settings, is cached
locally and remains available the next time the application is launched. This option is optimal for
offline use.

Typically, during user-initiated events (e.g., logon or application launch), the PC will check for IT-administered application updates. If the application is cached, the user will pull down only the delta
of the application. If the application is not cached, the user will simply access the newest
version of the application. The application streaming delivery model is usually combined with
application virtualization, and most products in the market combine both capabilities.
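The three delivery options above reduce to a cache policy that decides which application blocks must cross the network at launch. A minimal sketch follows; the block names and granularity are hypothetical, since real products stream application file and registry assets on demand:

    from enum import Enum

    class CachePolicy(Enum):
        NO_CACHE = 1        # stream everything at every launch
        PARTIAL_CACHE = 2   # keep only blocks the user has already requested
        COMPLETE_CACHE = 3  # keep the full application after first launch

    def blocks_to_fetch(policy, requested, cached):
        """Return the application blocks that must be streamed down."""
        if policy is CachePolicy.NO_CACHE:
            return requested          # the cache is never consulted
        return requested - cached     # only the missing delta crosses the wire

    cached = {"core", "ui"}
    print(blocks_to_fetch(CachePolicy.PARTIAL_CACHE,
                          {"core", "ui", "print"}, cached))   # {'print'}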


Position and Adoption Speed Justification: Application streaming has historically been a niche
application delivery technology. Organizations that virtualized applications have historically pushed
them down to users as full objects, rather than stream them. The reasons for this include:

Organizations have had concerns about the impact of application streaming on network and
application performance, and application availability for offline use.

Application streaming has had challenges with scalability.

The most obvious benefit suggested by streaming is the ability to allow applications to be used
immediately, rather than requiring users to wait for the application to be fully delivered. While this is
useful, it usually doesn't offset the challenges of the technology.
However, several factors are changing this situation. First, products are improving to increase
scale, minimize network impact and improve application performance. Second, organizations are
increasingly using application streaming as a mechanism to better manage desktop application
licenses and reduce the amount of labor involved in deploying applications. Finally, hosted virtual
desktop projects are leading organizations to look at application streaming as one of the
technologies to "layer" applications on top of the base image and core applications.
User Advice: Consider streaming for any business application, especially one that must be updated
frequently and/or requires local execution on the target device.
Avoid network and performance issues by implementing some or all of the following measures: local
application caching, maintenance windows, configuring users to pull applications from local servers
and avoiding streaming applications that are typically in the base image.
Start using application streaming with applications that offer the most license savings potential.
Business Impact: Users can access the same application through multiple PCs. As long as the
application does not remain resident on any PC (that is, caches are flushed), one user can access
the same application from multiple PCs, while paying only one application license (depending on
the licensing terms of the vendor). However, application license management must be done
carefully because software vendors will have different rules regarding usage of their product.
Applications can launch and be used faster (incrementally) than with traditional software distribution
architectures, while the rest of the application functions will be delivered as needed. Application
streaming tools typically do not work independently of PC configuration tools.
Application streaming provides a method to quickly remove applications on the date of expiration
(forced license metering).
Application streaming can simplify application updates for intermittently connected users.
Benefit Rating: Moderate
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent


Sample Vendors: Citrix Systems; Microsoft; Symantec-AppStream


Recommended Reading: "Application Streaming Gains Traction in 2011"

IT Change Management Tools


Analysis By: Kris Brittain; Patricia Adams
Definition: IT change management (ITCM) tool functionality governs documentation, review,
approval, coordination, scheduling, monitoring and reporting of requests for change (RFCs). The
basic functional requirements begin in the area of case documentation, including the industrystandard assignment capabilities of classification and categorization (such as risk and priority), with
advanced functionality to manage "preapproved" or standard RFC-type lists. The tool must include
a solid workflow engine to manage embedded workflows (such as standard RFC life cycles), as well
as provide escalation and notification capabilities, which can be executed manually or automated
via business rules. RFC workflows are presented graphically, and are capable of managing
assessment and segmented approval, with the ability to adjust automatically, based on alterations
and multitask change records (see "The Functional Basics of an IT Change Management Tool").
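A workflow engine of this kind is, at its core, a guarded state machine over the RFC life cycle. The states and transitions below are illustrative, not any specific product's workflow model:

    # Hypothetical RFC life cycle: each state lists its legal successors.
    ALLOWED = {
        "logged":      {"assessed"},
        "assessed":    {"approved", "rejected"},
        "approved":    {"scheduled"},
        "scheduled":   {"implemented"},
        "implemented": {"reviewed"},   # post-implementation review
    }

    def transition(rfc, new_state):
        """Advance an RFC, rejecting transitions the workflow does not allow."""
        if new_state not in ALLOWED.get(rfc["state"], set()):
            raise ValueError(f"illegal transition {rfc['state']} -> {new_state}")
        rfc["state"] = new_state

    rfc = {"id": "RFC-1042", "risk": "medium", "state": "logged"}
    for step in ("assessed", "approved", "scheduled", "implemented", "reviewed"):
        transition(rfc, step)
    print(rfc["state"])   # reviewed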
To strengthen the ability to handle the large volume of RFC activity, ITCM tools enable multitasked
assignment, risk/impact assessment and multiviews of the change schedule, as well as the calendar
(including the ability to manage maintenance and freeze windows). Critical integrations with
configuration, release and configuration management database (CMDB) technologies are required
for change management tool success (see "Aligning Change Process and Tools to Configuration
and Release"). For example, the categorization of a configuration item in the case log in ITCM tools
becomes essential to identify near-real-time risk, impact and collision analysis capabilities through
integration with a CMDB.
Without configuration data, vendor tools will leverage survey mechanisms that can provide a risk
context (see "Improve Security Risk Assessments in Change Management With Risk
Questionnaires"). Automation and acceleration of change throughput has also added focus to
"preapproved" (aka standard) change activity. More tools offer integration among ITCM tools, server
configuration, run book automation and virtualization management tools to automate RFC
governance and deployment.
Integration with release management tools has exploded in the industry, because the quality and
efficiency of the ITCM workflow requires the coordination of RFC grouping and hand-offs to release
execution. This integration and functional coordination not only exists with emerging release
governance tools, but is also linked with the application development change to improve change
throughput. ITCM tools can be leveraged to address audit demands, such as the Sarbanes-Oxley
Act, by integrating with configuration audit tools. Most ITCM tools are a module in IT service desk
suites, offering integration with incident and problem management. ITCM tools must provide metric
analysis to deliver management reports covering SLAs, critical success factors (CSFs) and key
performance indicators.


Position and Adoption Speed Justification: Nearly 80% of enterprise-scale companies use ITCM
tools to govern ITCM process policies. Adoption barriers have predominantly been rigid
technical-silo organizational structures, tribal process knowledge and competing siloed/departmental
tool strategies. Years of IT technical-silo or department-specific processes, such as application
development and server management configuration and release processes, have become confused with
change execution procedures. Another early driver has been the adoption of industry standards
such as ITIL v.3, Capability Maturity Model Integration (CMMI) and Control Objectives for
Information and Related Technology (COBIT) 4.0, which are accelerating the adoption of ITCM
tools.
During the past year, organizations have begun to view adoption as a way to address poor service
portfolio knowledge and the inability to build business relevance. ITIL process re-engineering may
have put the change process in the first phase of process redevelopment; however, with heightened
attention on improving continuous development, new methodologies that place more emphasis on
the mission-critical role of change processes are becoming the standard in application development
(AD). This translates into change volume activity growth, compounded by more-aggressive
delivery timelines, which affects change release windows. With that said, release process adoption
is now happening as a later phase of ITIL re-engineering projects, causing IT organizations to
reshape their early-phase investments in change and configuration management processes.
Why release would influence change so much is easy to understand. Change workflows navigate
changes to a common macro work stage called "schedule and implement" change. Although the
change workflow helps technical and change advisory board (CAB) groups review changes, it does
not orchestrate as well the subtask activities required to develop the change. Release management workflows are the
critical hand-offs from these primary change macro stages, and they feed back change details for
the final post-implementation review of changes. Without this last stage in the change process, the
system lacks a formal and independent evaluation of the service releases of change. Evaluation
checks are essential to determine actual performance and the outcomes of these changes. Lacking
this knowledge, management is unable to provide the fundamental analysis of cost, value, efficiency
and quality of change.
User Advice: The ITCM process and tools should be the sole source for governing and managing
the RFC life cycle. IT organizations looking for closed-loop and "end-to-end" change management
will require that the change tool be integrated with the configuration and release management tools
used to build, test, package and push the change into the production environment, including server
provisioning and configuration management, and application life cycle management. The vendor
market has often confused clients by blending the change and release "operational" governance
workflow into a common module. Unfortunately, this may not provide the optimum tool solution,
because these blended modules tend to be underdeveloped offerings that lead clients to treat change and release,
operationally, as one giant workflow. This is the biggest problem the industry faces.
ITCM tools and the IT service desk suites lack process-modeling capabilities similar to business
process management tools. There are too many places where constraints can materialize when
individuals or process committees "invent" their next-generation processes. Change and release
processes need to be managed and refined with some separation of effort. No one should silo the
efforts without collaboration, because integration is critical. An appropriate balance is required to
break these down into digestible and optimized process "bites," then connect them to produce
end-to-end change management.
Other motivations, such as compliance demands (e.g., the Sarbanes-Oxley Act or COBIT), can be
addressed by ITCM tools through control procedural documentation and integration with
configuration auditing tools to produce a complete view of compliance and noncompliance
reporting. Tool adoption will require new responsibilities in the IT organization, such as adding IT
change manager and change coordinator roles. In addition, growing service complexity and
compliance requirements (i.e., the demand to adhere to governmental and industry regulations) will
influence ITCM tool implementation and depth of integration (such as configuration auditing to
support compliance reporting). For the tool to be successful, organizational and cultural obstacles
need to be addressed.
Business Impact: A service-oriented IT organization needs to develop a business context for
investment in ITCM in which the IT and business organizations commit to strategy goals and CSFs,
aligning ITCM to service portfolio demands. From a fundamental process perspective, ITCM
implemented across all IT departments will deliver discernible benefits in service quality, IT agility,
cost reductions and risk management.
Benefit Rating: High
Market Penetration: More than 50% of target audience
Maturity: Early mainstream
Sample Vendors: BMC Software (Remedy); CA Technologies; HP; IBM (Maximo); SAP; Service-now.com
Recommended Reading: "The Functional Basics of an IT Change Management Tool"
"Aligning Change Process and Tools to Configuration and Release"
"Improve Security Risk Assessments in Change Management With Risk Questionnaires"
"Toolkit: IT Change Management Policy Guide, 2010"

IT Asset Management Tools


Analysis By: Patricia Adams
Definition: The IT asset management (ITAM) process entails capturing and integrating inventory,
financial and contractual data to manage the IT asset throughout its life cycle. ITAM encompasses
the financial management (e.g., asset costs, depreciation, budgeting, forecasting); contract terms
and conditions; life cycles; vendor service levels; asset maintenance; ownership; and entitlements
associated with IT inventory components, such as software, PCs, network devices, servers,
mainframes, storage, mobile devices and telecom assets, such as voice over IP (VoIP) phones.
ITAM depends heavily on robust processes, with tools being used to automate manual processes.


Capturing and integrating autodiscovery/inventory, financial and contractual data into a central
repository for all asset classes supports and enables the functions necessary to effectively
manage and optimize vendors and the software and hardware asset portfolio from requisition
through retirement, and to monitor asset performance throughout the day-to-day management
life cycle of the asset.
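The integrated asset record at the heart of an ITAM repository can be sketched as a simple data structure. The field names and the straight-line depreciation are illustrative simplifications:

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class ITAsset:
        asset_tag: str
        asset_class: str          # PC, server, mobile device, VoIP phone, etc.
        discovered_config: dict   # fed by autodiscovery/inventory tools
        purchase_cost: float
        purchase_date: date
        depreciation_years: int = 4
        contracts: list = field(default_factory=list)   # maintenance, leases

        def book_value(self, as_of: date) -> float:
            """Straight-line depreciation, for illustration only."""
            age = (as_of - self.purchase_date).days / 365.0
            remaining = max(0.0, 1 - age / self.depreciation_years)
            return self.purchase_cost * remaining

    laptop = ITAsset("A-0481", "PC", {"os": "Windows 7"}, 1200.0, date(2009, 7, 1))
    print(laptop.book_value(date(2011, 7, 1)))   # 600.0: half the cost after two of four years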
Position and Adoption Speed Justification: This process, when integrated with tools, is adopted
during business cycles that reflect the degree of emphasis that enterprises put on controlling costs
and managing the use of IT assets. With an increased focus on software audits, configuration
management databases (CMDBs), business service management (BSM), managing virtualized
software, developing IT service catalogs and tracking software license use in the cloud, ITAM
initiatives are gaining increased visibility, priority and acceptance in IT operations and procurement.
ITAM data is necessary to understand the costs associated with a business service. Without this
data, companies don't have accurate cost information on which to base decisions regarding service
levels that vary by cost or chargeback. We expect ITAM market penetration, currently at 45%, to
continue growing during the next five years.
User Advice: Many companies embark on ITAM initiatives in response to specific problems, such
as impending software audits (or shortly after an audit), CMDB implementations, virtual software
sprawl or OS migrations. Inventory and software usage tools, which feed into an ITAM repository,
can help ensure software license compliance and monitor the use of installed applications.
However, without ongoing visibility, companies will continue in a reactive firefighting mode, without
achieving a proactive position that diminishes the negative effect of an audit or provides the ability
to see how effectively the environment is performing.
ITAM has a strong operational focus, with tight linkages to IT service management, on creating
efficiencies and using software and hardware assets effectively. ITAM data can easily identify
opportunities, whether it is the accurate purchasing of software licenses, the efficient use of all
installed software or ensuring that standards are in place to lower support costs. To gain value from
an ITAM program, a combination of people, policies, processes and tools needs to be in place. As
process maturity occurs, ITAM will focus more on the financial and spending management related
to controlling asset investment, and will provide integration to project and portfolio management,
and enterprise architecture. In addition, ITAM processes and best practices are playing a role in
how operational assets are being managed. Companies should plan for this evolution in thinking.
Business Impact: All IT operations controls, processes and software tools are designed to achieve
at least one of three goals: lower costs for IT operations, improved quality of service and agility, and
reduced business risks. As more enterprises implement an IT service management strategy, an
understanding of costs to deliver business IT services will become essential. In addition, ensuring
that the external vendor contracts are in place to deliver on the specified service levels the business
requires is a necessity. Because ITAM financial data is a feed into a CMDB or configuration management
system, the value of ITAM will be more pronounced in organizations that are undertaking these
projects.
Benefit Rating: Moderate
Market Penetration: 20% to 50% of target audience
Maturity: Early mainstream


Sample Vendors: BMC Software; CA Technologies; HP; IBM; Provance Technologies; PS'Soft;
Staff&Line; Symantec (Altiris)
Recommended Reading: "Applying IT Asset and Configuration Management Discipline to OT"
"How to Build PC ITAM Life Cycle Processes"

PC Application Virtualization
Analysis By: Terrence Cosgrove; Ronni J. Colville
Definition: PC application virtualization is an application packaging and deployment technology
that isolates applications from each other and limits the degree to which they interact with the
underlying OS. Application virtualization provides an alternative to traditional packaging and
installation technologies.
Position and Adoption Speed Justification: PC application virtualization can reduce the time it
takes to deploy applications by reducing packaging complexity and the scope for application conflicts
typically experienced when using traditional packaging approaches (e.g., MSI). It's an established
technology that's receiving high market exposure, primarily due to enterprise focus on planning for
Windows 7 migrations and, more recently, for hosted virtual desktop (HVD) projects. PC application
virtualization tools are most often adopted as supplements to traditional PC configuration
management solutions as a means of addressing application packaging challenges, and most
mainstream PC configuration management vendors have either added this capability via acquisition
or partnership (OEM and reseller). In addition, increased interest in virtualization technologies has
focused attention on this technology and solution sets.
Much of the current interest in PC application virtualization is driven by the promise that this
technology will alleviate some of the regression-testing overhead in application deployments and
Windows migrations (although it generally cannot be relied on to remediate application compatibility
issues with Windows 7). Other benefits include enabling the efficient and rapid deployment of
applications that have not been able to be deployed previously, and improving organizations' ability
to remove administrator access by enabling them to deploy a greater percentage of the applications
needed by users through installation automation.
What continues to impede widespread adoption is that application virtualization cannot be used for
100% of applications, and may never work with many legacy applications, especially those
developed in-house. More recently, HVD projects have led to increased interest in application
virtualization. Organizations are increasingly using application virtualization to layer on role- or user-specific applications. Gartner believes that application virtualization is critical to making HVD more
flexible and suitable for broad user scenarios.
The market continues to be dominated by two main vendors: Microsoft (App-V) and VMware
(ThinApp). Citrix Systems also has this technology, which is bundled in XenApp. Symantec also has
this technology, and most often sells it with Altiris Client Management Suite, its PC configuration life
cycle management (PCCLM) tool. There are also several smaller vendors, such as InstallFree,
Endeavors Technologies and Spoon (formerly Xenocode). Because of Microsoft's go-to-market
approach (selling at low cost to organizations with software assurance on Windows via Microsoft
Desktop Optimization Pack) and VMware's visibility in virtual infrastructures, the viability of smaller
players will be at risk in the long term.
User Advice: Implement PC application virtualization to reduce packaging complexity, particularly if
you have a large number of applications that are not packaged. Analyze how this technology will
interface with established and planned PCCLM tools, to avoid driving up the cost of a new
application delivery technology and to ensure that virtualized applications are manageable. Test as
many applications as you can during the evaluation, but recognize that some applications probably
can't be virtualized.
Consider application virtualization tools for:

New application deployment needs where there's no legacy packaging

Applications that have not already been packaged, when the overhead (cost and time) of
current packaging tools is considered too high, or the number of users receiving the application
has been deemed too low to justify packaging

Applications that have not previously been successfully packaged and deployed using PCCLM
tools, because of application conflicts and the need for elevated users' permission

Pooled HVD deployments

However, enterprises must also consider the potential support implications. Not all application
vendors will support their applications running in a virtualized manner. Interoperability requirements
must also be understood; with some application virtualization products, applications that call
another application during runtime must be virtualized together or be manually linked.
Business Impact: PC application virtualization can improve manageability for corporate IT. By
isolating applications, IT organizations can gain improvements in the delivery of applications, and
reduce (perhaps significantly) testing and outages due to application conflicts.
Benefit Rating: High
Market Penetration: 20% to 50% of target audience
Maturity: Early mainstream
Sample Vendors: Citrix Systems; Dell KACE; Endeavors Technologies; InstallFree; Microsoft;
Spoon; Symantec; VMware

Service-Level Reporting Tools


Analysis By: Debra Curtis; Milind Govekar; Kris Brittain


Definition: Service-level reporting tools incorporate and aggregate multiple types of metrics from
various management disciplines. At a minimum, they must include service desk metrics along with
IT infrastructure availability and performance metrics. End-user response time metrics (including
results from application performance monitoring [APM] tools) can enhance service-level reports,
and are sometimes used as a "good enough" proxy for end-to-end IT service quality. However, a
comprehensive service-level reporting tool should provide a calendaring function to specify service
hours, planned service uptime and scheduled maintenance periods for different classes of service,
as well as compare measured results to the service-level targets agreed to between the IT
operations organization and the business units to determine success or failure. Very few tools in the
market today have this capability.
In addition to comparing measured historical results to service-level targets at the end of the
reporting period, more-advanced service-level reporting tools will keep a running, up-to-the-minute
total that displays real-time service-level results, and predicts when service levels will not be met.
This will forewarn IT operations staff of impending trouble. These tools will increasingly have to deal
with on-premises applications and infrastructure, as well as cater to off-premises cloud
infrastructures and applications.
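To make this arithmetic concrete, the following minimal sketch (illustrative only; the function names, time figures and the simple linear projection are assumptions, not any vendor's implementation) computes availability against agreed service hours, excludes planned maintenance and projects whether the current period will miss its target:

```python
# Minimal sketch of service-level compliance arithmetic: availability is
# computed against agreed service hours with scheduled maintenance excluded,
# and a simple linear projection flags a likely breach before period end.
# All names and figures here are illustrative assumptions.

def availability(service_minutes, outage_minutes, maintenance_minutes):
    """Availability over the reporting period, excluding planned maintenance."""
    eligible = service_minutes - maintenance_minutes
    return 1.0 - outage_minutes / eligible

def breach_predicted(target, elapsed_minutes, period_minutes, outage_so_far):
    """Project the outage rate so far to period end and compare to target."""
    projected_outage = outage_so_far * period_minutes / elapsed_minutes
    return 1.0 - projected_outage / period_minutes < target

# Example: a 99.5% availability target over one month of 8x5 service hours.
period = 22 * 8 * 60  # ~22 business days of agreed service minutes
print(availability(period, outage_minutes=30, maintenance_minutes=120))  # ~0.997
print(breach_predicted(0.995, elapsed_minutes=5000, period_minutes=period,
                       outage_so_far=40))                                 # True
```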
Position and Adoption Speed Justification: Just about every IT operations management software
vendor offers basic reporting tools typically described as "service-level management" modules or
software. In general, these tools do not satisfy all the requirements in Gartner's
definition of the service-level reporting category. Thus, the industry suffers from product ambiguity
and market confusion, causing this category to be positioned in the Trough of Disillusionment. Most
IT operations management tools are tactical tools for specific domains (such as IT service desk,
network management and server administration) in which production statistics are collected for
component- or process-oriented operational-level agreements, rather than true, business-oriented
SLAs.
Only the 5% to 15% of IT organizations that have attained the service-aligned level of Gartner's
ITScore maturity model for IT infrastructure and operations have the skills and expectations to
demand end-to-end IT service management capabilities from their service-level reporting tools. This
circumstance slows the adoption speed and lengthens the time to the Plateau of Productivity. Some
cloud computing vendors have developed simplistic service displays for their infrastructures and
applications, but they're not heterogeneous and do not include on-premises infrastructures and
applications.
User Advice: Clients use many different types of tools to piece together their service-level reports.
Although service-level reporting tools can be used to track just service desk metrics or IT
infrastructure component availability and performance metrics, they are most valuable when used
by clients who have defined business-oriented IT services and SLAs with penalties and incentives.
Monitoring alone will not solve service-level problems. IT organizations need to focus on changing
workplace cultures and behavior, so that employees are measured, motivated and rewarded based
on end-to-end IT service quality. Clients should choose SLA metrics wisely, so that this exercise
provides action-oriented results, rather than just becoming a reporting exercise.

Business Impact: SLAs help the IT organization demonstrate its value to the business. Once IT and
the business have agreed to IT service definitions and, thus, established a common nomenclature,
service-level reporting tools are used as the primary communication vehicles to corroborate that IT
service quality is in compliance with business customer requirements. Defining business-oriented IT
services with associated SLAs, proactively measuring service levels and reporting on compliance
can help IT organizations deliver more-consistent, predictable performance and maintain
customers' satisfaction with IT services. By tracking service levels and analyzing historical service-level trends, IT organizations can use service-level reporting tools to predict and prevent problems
before they affect business users.
Benefit Rating: Moderate
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: BMC Software; CA Technologies; Compuware; Digital Fuel; HP; IBM Tivoli;
Interlink Software; NetIQ
Recommended Reading: "The Challenges and Approaches of Establishing IT Infrastructure
Monitoring SLAs in IT Operations"

Climbing the Slope


IT Event Correlation and Analysis Tools
Analysis By: Jonah Kowall; Debra Curtis
Definition: IT event correlation and analysis (ECA) tools support the acceptance of events and
alarms from IT infrastructure components; consolidate, filter and correlate events; notify the
appropriate IT operations personnel of critical events; and automate corrective actions, when
possible.
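As a loose illustration of the consolidate, filter, correlate and notify flow in this definition, consider the following sketch; the event fields, severity scale and thresholds are assumptions, not any product's engine:

```python
import time
from collections import defaultdict

# Illustrative consolidate/filter/correlate/notify pipeline. The field
# names, severity scale and thresholds are assumptions for this sketch.
SEVERITY_FLOOR = 3          # drop informational noise below this severity
DEDUP_WINDOW_SECONDS = 300  # collapse repeats of the same event within 5 min

last_seen = defaultdict(float)  # (source, check) -> time of last accepted event

def notify_operations(event):
    print(f"ALERT {event['source']}: {event['check']} sev={event['severity']}")

def process_event(event):
    """Filter, deduplicate and route one event; return it if accepted."""
    if event["severity"] < SEVERITY_FLOOR:
        return None                                   # filtered out
    key = (event["source"], event["check"])
    now = time.time()
    if now - last_seen[key] < DEDUP_WINDOW_SECONDS:
        return None                                   # duplicate, consolidated
    last_seen[key] = now
    if event["severity"] >= 5:                        # critical: notify staff
        notify_operations(event)
    return event

process_event({"source": "db01", "check": "disk_full", "severity": 5})
```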
Position and Adoption Speed Justification: These tools are in widespread use as the general-purpose event console monitoring IT infrastructure components such as servers (physical and
virtual), networks and storage, and are becoming critical to processes such as analyzing the root
causes of problems.
User Advice: ECA tools are mature, and a great deal of activity in this market has involved
acquisitions and consolidation in recent years. In addition to the vendor consolidation, IT operations
organizations are often trying to consolidate their monitoring investments to fewer tools with better
integration to other management disciplines, such as service desk and configuration management
databases (CMDBs). Some ECA innovation (such as predictive analysis) has been taking place at
smaller, startup companies, which have then been acquired by larger vendors, enabling them to
take advantage of new market opportunities in an otherwise mature market segment.

It has become increasingly important for ECA tools to demonstrate their value in supporting the
business. This requires the products to provide reports that show how the tools reduce outage
time and help avoid outages. With this information, IT organizations can demonstrate the
investment value of ECA tools and align that value with increased IT operations efficiencies.
When dealing with large vendors, clients need to understand their strategic direction to ensure
continued product support and commitment. When dealing with small vendors, it's important to
understand their financial stability, vision, business and product investment strategies. Without this
understanding, clients risk being hurt by vendor acquisitions or corporate demise, as well as
adopting a product that won't support their IT operations initiatives.
Business Impact: ECA tools help lower the cost of IT operations by reducing the time required to
isolate a problem across a heterogeneous IT infrastructure.
Benefit Rating: Moderate
Market Penetration: 20% to 50% of target audience
Maturity: Early mainstream
Sample Vendors: Argent Software; BMC Software; CA Technologies; eG Innovations; EMC;
GroundWork Open Source; HP; IBM Tivoli; Interlink Software; Microsoft; NetIQ; Quest Software;
Tango/04; uptime software; Zenoss
Recommended Reading: "Magic Quadrant for IT Event Correlation and Analysis"
"Event Correlation and Analysis Market Definition and Architecture Description, 2010"
"Aligning ECA and BSM to the IT Infrastructure and Operations Maturity Model"

Network Performance Management Tools


Analysis By: Debra Curtis; Will Cappelli; Jonah Kowall
Definition: Network performance management tools provide performance and availability
monitoring solutions for the data communications network (including network devices and network
traffic). They collect performance data over time and include features such as baselining, threshold
evaluation, network traffic analysis, service-level reporting, trend analysis, historical reporting and,
in some cases, interfaces to billing and chargeback systems.
There are two common methods of monitoring network performance:

- Polling network devices to collect standard Simple Network Management Protocol (SNMP) Management Information Base (MIB) data for performance reporting and trend analysis
- Using specialized network instrumentation (such as probes, appliances [including virtual appliances] and NetFlow) to analyze the makeup of the network traffic for performance monitoring and troubleshooting

The goal of collecting and analyzing performance data is to enable the network manager to become
more proactive in recognizing trends, predicting capacity problems and preventing minor service
degradations from becoming major problems for users of the network.
Position and Adoption Speed Justification: These tools are widely deployed and are useful for
identifying network capacity use trends.
User Advice: NetFlow instrumentation has grown in popularity as an inexpensive data source, with
details about the distribution of protocols and the makeup of application traffic on the network.
However, NetFlow summarizes statistics and can't analyze packet contents. Broad NetFlow
coverage should be balanced with fine-grained packet capture capabilities for critical network
segments. Expect new form factors for traffic analysis, such as virtual appliances and microprobes,
that piggy-back on existing hardware and interfaces in the network fabric, providing the depth of a
probe at a much lower cost, while approaching the ubiquity of NetFlow.
Clients should look for network performance management products that not only track
performance, but also automatically establish a baseline measurement of "normal" behavior for time
of day and day of week, dynamically set warning and critical thresholds as standard deviations off
the baseline, and notify the network manager only when an exception condition occurs. A simple
static threshold based on an industry average or a "rule of thumb" will generate false alarms.
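A minimal sketch of such baselining logic follows; the bucket granularity, sample floor and deviation multipliers are illustrative assumptions rather than any vendor's algorithm:

```python
import statistics
from collections import defaultdict

# Sketch of dynamic baselining: keep samples per (weekday, hour) bucket and
# flag values more than k standard deviations above that bucket's mean.
# Bucket granularity, the sample floor and the multipliers are assumptions.
history = defaultdict(list)
K_WARNING, K_CRITICAL = 2.0, 3.0
MIN_SAMPLES = 20  # need enough history before trusting the baseline

def record(weekday, hour, utilization):
    history[(weekday, hour)].append(utilization)

def classify(weekday, hour, utilization):
    samples = history[(weekday, hour)]
    if len(samples) < MIN_SAMPLES:
        return "learning"
    mean, stdev = statistics.mean(samples), statistics.stdev(samples)
    if utilization > mean + K_CRITICAL * stdev:
        return "critical"
    if utilization > mean + K_WARNING * stdev:
        return "warning"
    return "normal"

for sample in (40, 42, 41, 39, 43, 40, 41, 42, 38, 44,
               41, 40, 42, 39, 43, 41, 40, 42, 41, 40):
    record(1, 9, sample)          # Tuesdays at 09:00
print(classify(1, 9, 75))         # well above baseline -> 'critical'
```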
Clients that are looking for the utmost efficiency should link network performance management
processes to network configuration management processes, so that bandwidth allocation and traffic
prioritization settings are automatically updated based on changing business demands and service-level agreements.
Business Impact: These tools help improve network availability and performance, confirm network
service quality and justify network investments. Ongoing capacity use analysis enables the
reallocation of network resources to higher-priority users or applications without the need for
additional capital investment, using various bandwidth allocation, traffic engineering and quality-of-service techniques. Without an understanding of previous network performance, it's impossible to
demonstrate improving service levels after changes, additions or investments have been made.
Without a baseline measurement for comparison, a network manager can't detect growth trends
and be forewarned of expansion requirements.
Benefit Rating: Moderate
Market Penetration: More than 50% of target audience
Maturity: Mature mainstream
Sample Vendors: AccelOps; AppNeta; CA Technologies; HP; IBM Tivoli; InfoVista; ManageEngine;
NetScout Systems; Network Instruments; Opnet Technologies; Riverbed Technology; SevOne;
SolarWinds; Visual Network Systems
Recommended Reading: "The Virtual Switch Will Be the Network Manager's Next Headache"
"Manage Your Videoconferencing Before It Manages You"

IT Service Desk Tools


Analysis By: David M. Coyle; Jarod Greene
Definition: IT service desk (also known as IT help desk) tools document, review, escalate, analyze,
close and report incidents and problem records. Foundational functionalities include classification,
categorization, business rules, workflow, reporting and search engines. These tools manage the life
cycles of incidents and problem records from recording to closing. IT service desk tools automate
the process of identifying the individual or group responsible for resolution, and of suggesting
possible resolution scenarios and escalation, if necessary, until the service and support request is
resolved.
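The life cycle described here can be pictured as a small state machine. The sketch below is purely illustrative; the state names and transitions are assumptions drawn from this description, not any product's workflow engine:

```python
# Purely illustrative state machine for the incident life cycle described
# above; the states and transitions are assumptions, not a product workflow.
TRANSITIONS = {
    "recorded":   {"classified"},
    "classified": {"assigned"},
    "assigned":   {"escalated", "resolved"},
    "escalated":  {"assigned"},             # reassign after escalation
    "resolved":   {"assigned", "closed"},   # reopen if the fix fails
    "closed":     set(),
}

class Incident:
    def __init__(self, summary):
        self.summary = summary
        self.state = "recorded"

    def move_to(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"cannot move from {self.state} to {new_state}")
        self.state = new_state

ticket = Incident("email outage, Stamford office")
for step in ("classified", "assigned", "resolved", "closed"):
    ticket.move_to(step)
print(ticket.state)  # closed
```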
Typical IT service desk suites will extend incident and problem management to Web self-service,
basic service-level agreements, end-user satisfaction survey functionality, basic knowledge
management and service request management. IT service desk suites often integrate with
inventory, change, configuration management database (CMDB), asset and PC life cycle
configuration management modules. In addition, the process components of the IT service desk are
considered important functions in the Information Technology Infrastructure Library (ITIL) best-practice process frameworks. The majority of IT service desk tools are still purchased under the
on-premises perpetual license model, but software as a service (SaaS) is a fast-growing delivery
trend for IT service desk solutions (see "SaaS Continues to Grow in the IT Service Desk
Market").
Position and Adoption Speed Justification: Market penetration in midsize to large companies
exceeds 98%, and most toolset acquisitions are rip-and-replace endeavors occurring roughly every
five years. IT service desks are increasingly being sold as part of a larger IT service management
(ITSM) suite purchase, which will make switching just IT service desk vendors more difficult in the
future. As companies move from fragmented, department-based implementations to more-controlled and centralized IT service support models, the consolidation of tools within companies
will continue.
User Advice: The market is saturated with vendors' tools offering similar sets of features. It is very
important to build a solid business case and selection criteria prior to tool selection
(see "Managing Your IT Service Desk Tool Acquisition: Key Questions That Must Be Addressed
During the Acquisition Process"). Evaluation criteria include functional features, prices, integration
points with various ITSM modules (such as change management), ease of implementation, available
out-of-the-box best practices, reporting and ease of configuration.
Business Impact: IT service desk tools, processes and metrics help improve the quality of IT
service and support delivered to end users. They increase business satisfaction, lower the cost of
end-user support and increase end-user productivity.
Benefit Rating: High
Market Penetration: More than 50% of target audience
Maturity: Mature mainstream

Sample Vendors: Axios Systems; BMC Software; CA Technologies; FrontRange Solutions; Hornbill;
HP; IBM; LANDesk; Numara Software; ServiceNow; VMware
Recommended Reading: "SaaS Continues to Grow in the IT Service Desk Market"
"Magic Quadrant for the IT Service Desk"
"The 2010 IT Service Desk Market Landscape"
"Managing Your IT Service Desk Tool Acquisition: Key Questions That Must Be Addressed During
the Acquisition Process"

PC Configuration Life Cycle Management


Analysis By: Terrence Cosgrove; Ronni J. Colville
Definition: PC configuration life cycle management (PCCLM) tools manage the configurations of
client systems. Specific functionality includes OS deployment, inventory, software distribution,
patch management, software usage monitoring and remote control. Desktop support organizations
use PCCLM tools to automate system administration and support functions that would otherwise be
done manually. The tools are used primarily to manage PCs, but many organizations use them to
manage their Windows servers, smartphones, tablets and non-Windows client platforms (e.g., Mac
and Linux). Application virtualization is a major functional capability that many organizations are
using or evaluating; they are looking to acquire this capability in PCCLM tools, or looking for
products to manage virtualized packages from third-party tools (e.g., Microsoft, Citrix Systems and
VMware).
Position and Adoption Speed Justification: PCCLM tools are widely adopted, particularly in large
enterprises (i.e., more than 5,000 users). The market has started to commoditize, with few
differences among products in some of the core functionality, such as inventory, software
distribution and OS deployment. Differences among products are increasingly found in the following
areas: the ability to manage security configurations, non-Windows client management, scalability,
usability and integration with adjacent products (e.g., service desk, IT asset management and
endpoint protection products). Recently, vendors have begun to compete by offering appliance-based or software-as-a-service (SaaS)-based options that better address the needs of small or
midsize organizations, and those with highly mobile or distributed users.
User Advice: Users will benefit most from PCCLM tools when standardization and policies are in
place before automation is introduced. Although these tools can significantly offset staffing
resource costs, they require dedicated resources to define resource groups, package applications,
test deployments and maintain policies for updating.
Many factors could make certain vendors more appropriate for your environment than others. For
example, evaluate:

- Ease of deployment and usability
- Alignment of endpoint security and PCCLM
- Alignment of service desk and PCCLM
- Geographic focus
- Capabilities that meet a specific regulatory requirement

Business Impact: Among IT operations management tools, PCCLM tools have one of the most
obvious ROIs: they manage the client environment in an automated, one-to-many fashion, rather
than on a manual, one-to-one basis.
Benefit Rating: Moderate
Market Penetration: More than 50% of target audience
Maturity: Mature mainstream
Sample Vendors: BMC Software; CA Technologies; Dell KACE; FrontRange Solutions; HP; IBM (BigFix); LANDesk; Matrix42; Microsoft; Novell; Symantec
Recommended Reading: "Magic Quadrant for PC Configuration Life Cycle Management Tools"
"Emerging PC Life Cycle Configuration Management Vendors"

Entering the Plateau


Network Fault-Monitoring Tools
Analysis By: Debra Curtis; Will Cappelli; Jonah Kowall
Definition: Network fault-monitoring tools indicate the up/down status of network components. In
some cases, the tools also discover and visualize the topology map of physical relationships and
dependencies among network components as a way to display the up/down status in a context that
can be easily understood.
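A toy version of the up/down check at the heart of these tools might look as follows; note that this sketch substitutes a TCP connect test for the ICMP ping and SNMP polling real products use, and the device list is invented:

```python
import socket

# Toy up/down check; real fault monitors use ICMP ping and SNMP polling
# rather than the TCP connect test used in this sketch, and the device
# list here is invented for illustration.
DEVICES = {"core-switch": ("10.0.0.1", 22), "edge-router": ("10.0.0.2", 22)}

def is_up(host, port, timeout=2.0):
    """Treat a device as up if a TCP connection to it succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, (host, port) in DEVICES.items():
    print(f"{name}: {'UP' if is_up(host, port) else 'DOWN'}")
```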
Position and Adoption Speed Justification: These tools have been widely deployed, primarily to
address the reactive nature of network monitoring in IT operations.
User Advice: Users should leverage network fault-monitoring tools to assess the status of network
components, but work toward improving problem resolution capabilities and aligning network
management tools with IT service and business goals. Network fault-monitoring tools are frequently
used for "blame avoidance," rather than problem resolution, with the goal of proving that the current
problem is not the network's fault. Resolving problems, rather than just avoiding blame, should be
the goal.
Business Impact: These tools help an IT organization view its network events through a single
"pane of glass." This helps improve the availability of the network infrastructure and shorten the
response time for noticing and repairing network issues. Network fault-monitoring tools support
day-to-day network administration and provide useful features, but they don't tightly align the
network with business goals.
Benefit Rating: Moderate
Market Penetration: More than 50% of target audience
Maturity: Mature mainstream
Sample Vendors: CA Technologies; EMC; Entuity; HP; IBM Tivoli; Ipswitch; ManageEngine; Nagios;
SolarWinds

Job-Scheduling Tools
Analysis By: Milind Govekar
Definition: Job-scheduling tools are used to schedule online or offline production jobs, such as
customer bill calculations, and the transfer of data between heterogeneous systems on the basis of
events and batch processes within packaged applications. A scheduled job usually has a date, time
and frequency, as well as other dependencies, inputs and outputs associated with it.
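A toy rendering of such a job definition, with dependencies resolved into a run order, might look as follows; the job names and fields are invented for illustration:

```python
# Toy job definitions with schedule and dependency attributes, plus a
# topological sort into a run order. Job names and fields are invented;
# commercial schedulers add calendars, event triggers and richer policies.
JOBS = {
    "extract_orders":  {"schedule": "daily 02:00", "depends_on": []},
    "calc_billing":    {"schedule": "daily 02:30", "depends_on": ["extract_orders"]},
    "transfer_to_erp": {"schedule": "on completion", "depends_on": ["calc_billing"]},
}

def run_order(jobs):
    """Resolve depends_on relations into an execution order."""
    ordered, seen = [], set()
    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in jobs[name]["depends_on"]:
            visit(dep)
        ordered.append(name)
    for name in jobs:
        visit(name)
    return ordered

print(run_order(JOBS))  # ['extract_orders', 'calc_billing', 'transfer_to_erp']
```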
Position and Adoption Speed Justification: Job-scheduling tools are widely used, mature
technologies. They support Java and .NET application server platforms, in addition to integration
technology. Job-scheduling tools help enterprises in their automation requirements across
heterogeneous computing environments. They automate critical batch business processes, such as
billing or IT operational processes, including backups, and they provide event-driven automation
and batch application integration (for example, integrating CRM processes with ERP processes).
These tools are evolving toward handling dynamic, policy-driven workloads; thus, they are moving
toward IT workload automation broker tools (also found on this Hype Cycle).
User Advice: Enterprises should plan to use a single job-scheduling tool that is able to schedule
jobs in a heterogeneous infrastructure and application environment, to improve the quality of
automation and service, and to lower the total cost of ownership of the environment. Furthermore,
enterprises should evaluate the tool's event-based capabilities, in addition to traditional date- and
time-scheduling capabilities.
Enterprises looking for policy-driven, dynamic workload management capabilities should consider
IT workload automation broker tools.
Business Impact: These tools can automate a batch process to improve the availability and
reliability of a business process that depends on it.
Benefit Rating: Moderate
Market Penetration: More than 50% of target audience
Maturity: Mature mainstream

Sample Vendors: Advanced Systems Concepts; Argent Software; ASG Software Solutions; BMC
Software; CA Technologies; Cisco (Tidal); Flux; IBM Tivoli; MVP Systems Software; Open Systems
Management; Orsyp; Redwood Software; Software and Management Associates; SOS-Berlin;
Terracotta; UC4 Software; Vinzant Software
Recommended Reading: "Magic Quadrant for Job Scheduling"
"How to Modernize Your Job Scheduling Environment"
"IT Workload Automation Broker: Job Scheduler 2.0"
"Toolkit: Best Practices for Job Scheduling"

Appendixes

Figure 3. Hype Cycle for IT Operations Management, 2010

[Figure omitted: the 2010 Hype Cycle chart, plotting expectations against time. It positions the 2010 set of IT operations management technologies (Run Book Automation, Application Management, CMDB, Real-Time Infrastructure, IT Service Desk Tools, PC Configuration Life Cycle Management, Job-Scheduling Tools and others) across the Technology Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment and Plateau of Productivity phases, with markers for years to mainstream adoption: less than 2 years, 2 to 5 years, 5 to 10 years, more than 10 years, and obsolete before plateau. As of July 2010.]

Source: Gartner (July 2010)

Hype Cycle Phases, Benefit Ratings and Maturity Levels


Table 1. Hype Cycle Phases

Technology Trigger: A breakthrough, public demonstration, product launch or other event generates significant press and industry interest.

Peak of Inflated Expectations: During this phase of overenthusiasm and unrealistic projections, a flurry of well-publicized activity by technology leaders results in some successes, but more failures, as the technology is pushed to its limits. The only enterprises making money are conference organizers and magazine publishers.

Trough of Disillusionment: Because the technology does not live up to its overinflated expectations, it rapidly becomes unfashionable. Media interest wanes, except for a few cautionary tales.

Slope of Enlightenment: Focused experimentation and solid hard work by an increasingly diverse range of organizations lead to a true understanding of the technology's applicability, risks and benefits. Commercial off-the-shelf methodologies and tools ease the development process.

Plateau of Productivity: The real-world benefits of the technology are demonstrated and accepted. Tools and methodologies are increasingly stable as they enter their second and third generations. Growing numbers of organizations feel comfortable with the reduced level of risk; the rapid growth phase of adoption begins. Approximately 20% of the technology's target audience has adopted or is adopting the technology as it enters this phase.

Years to Mainstream Adoption: The time required for the technology to reach the Plateau of Productivity.

Source: Gartner (July 2011)

Table 2. Benefit Ratings

Transformational: Enables new ways of doing business across industries that will result in major shifts in industry dynamics.

High: Enables new ways of performing horizontal or vertical processes that will result in significantly increased revenue or cost savings for an enterprise.

Moderate: Provides incremental improvements to established processes that will result in increased revenue or cost savings for an enterprise.

Low: Slightly improves processes (for example, improved user experience) that will be difficult to translate into increased revenue or cost savings.

Source: Gartner (July 2011)

Table 3. Maturity Levels

Embryonic. Status: in labs. Products/vendors: none.

Emerging. Status: commercialization by vendors; pilots and deployments by industry leaders. Products/vendors: first generation; high price; much customization.

Adolescent. Status: maturing technology capabilities and process understanding; uptake beyond early adopters. Products/vendors: second generation; less customization.

Early mainstream. Status: proven technology; vendors, technology and adoption rapidly evolving. Products/vendors: third generation; more out of box; methodologies.

Mature mainstream. Status: robust technology; not much evolution in vendors or technology. Products/vendors: several dominant vendors.

Legacy. Status: not appropriate for new developments; cost of migration constrains replacement. Products/vendors: maintenance revenue focus.

Obsolete. Status: rarely used. Products/vendors: used/resale market only.

Source: Gartner (July 2011)

Recommended Reading
Some documents may not be available as part of your current Gartner subscription.
"Understanding Gartner's Hype Cycles, 2011"
This research is part of a set of related research pieces. See Gartner's Hype Cycle Special Report
for 2011 for an overview.

GARTNER HEADQUARTERS
Corporate Headquarters
56 Top Gallant Road
Stamford, CT 06902-7700
USA
+1 203 964 0096
Regional Headquarters
AUSTRALIA
BRAZIL
JAPAN
UNITED KINGDOM

For a complete list of worldwide locations, visit http://www.gartner.com/technology/about.jsp

© 2011 Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered trademark of Gartner, Inc. or its affiliates. This
publication may not be reproduced or distributed in any form without Gartner's prior written permission. If you are authorized to access
this publication, your use of it is subject to the Usage Guidelines for Gartner Services posted on gartner.com. The information contained
in this publication has been obtained from sources believed to be reliable. Gartner disclaims all warranties as to the accuracy,
completeness or adequacy of such information and shall have no liability for errors, omissions or inadequacies in such information. This
publication consists of the opinions of Gartner's research organization and should not be construed as statements of fact. The opinions
expressed herein are subject to change without notice. Although Gartner research may include a discussion of related legal issues,
Gartner does not provide legal advice or services and its research should not be construed or used as such. Gartner is a public company,
and its shareholders may include firms and funds that have financial interests in entities covered in Gartner research. Gartner's Board of
Directors may include senior managers of these firms or funds. Gartner research is produced independently by its research organization
without input or influence from these firms, funds or their managers. For further information on the independence and integrity of Gartner
research, see "Guiding Principles on Independence and Objectivity."
