Volume 1, Issue 2, November 2010



                CA Technology Exchange
                                  Insights from CA Technologies




                                     Virtualization




Inside this issue:
• Virtualization: What is it and what can it do for you?
• Virtualization: Enabling the Self-Service Enterprise
• Data Virtualization
plus
Columns by CA Technologies thought leaders
CA Technology Exchange
Table of Contents

 1 Welcome from the Editor in Chief
     Marv Waschke, Principal Software Architect, Office of the CTO,
     CA Technologies, and Editor in Chief, CA Technology Exchange

 3   Virtualization: What is it and what can it do for you?
     Anders Magnusson, Senior Engineering Services Architect,
     CA Technologies

25   Leading Edge Knowledge Creation
     Dr. Gabriel Silberman, Senior Vice President and Director, CA Labs,
     CA Technologies

27   Virtualization: Enabling the Self-Service Enterprise
     Efraim Moscovich, Principal Software Architect, CA Technologies

40   Service Assurance & ITIL: Medicine for Business Service Management
     Brian Johnson, Principal Architect, CA Technologies

42   Data Virtualization
     Sudhakar Anivella, Senior Architect, Service Desk, CA Technologies

51   Lessons Learned from the Mainframe Virtualization Experience
     John Kane, Technical Fellow Emeritus

53   Glossary of Virtualization Terms
CATX: Virtualization
by Marv Waschke, Principal Software Architect, CA Technologies,
Editor in Chief, CA Technology Exchange

CA Technology Exchange Editorial Committee

Marv Waschke, Editor in Chief
Principal Software Architect, Office of the CTO, CA Technologies

Janine Alexander
Technical Writer, CA Support, CA Technologies

Marie Daniels
Program Director, CA Support, CA Technologies

Michael Diaz
Quality Assurance Architect, Workload Automation, CA Technologies

Robert P. Kennedy
Senior Director, Technical Information, CA Technologies

Laurie McKenna
Director, Technical Information, CA Technologies

David W. Martin
Senior Principal Software Engineer, Virtualization and Service Automation, CA Technologies

Cheryl Morris
Principal, Innovation and University Programs, CA Technologies

Richard Philyaw
Principal Software Architect, Office of the CTO, CA Technologies

David Tootill
Principal Software Architect, Service Management, CA Technologies

Our first issue of CATX was published in April 2010. The theme was cloud computing. This issue addresses virtualization, a subject closely related to cloud computing. The two concepts are often juxtaposed in discussion, and I occasionally hear people speaking as if they were identical.

Virtualization and Cloud
Virtualization and cloud are related, yes, but identical, clearly not. In computing, we often advance by focusing on the outcome of an activity and delegating performance of the activity itself to some other entity. For instance, when we use SQL to query a relational database, we delegate moving read heads and scanning disk sectors to the RDBMS and concentrate on the tables of relations that are the result. This allows database application builders to concentrate on the data, not the mechanics of moving bits and bytes.

Both cloud computing and virtualization are examples of delegation, but what is delegated is different and the delegation occurs for different reasons.

Cloud computing is delegation on a grand scale. A cloud consumer engages with a network interface that permits the consumer to delegate the maintenance and management of equipment and software to a cloud provider. The consumer concentrates on the results, and the provider keeps the lights on and the equipment running in the datacenter.

Virtualization separates the execution of software from physical hardware by delegating computing to emulators that emulate physical hardware with software. The user can focus on the software rather than configuring the underlying hardware. Emulators can often be configured more quickly and with greater flexibility than physical systems, and configured systems can be stored as files and reproduced easily. Without the convenience and flexibility of virtualized systems, cloud implementations can be slow and difficult, which is why almost all mention of cloud includes virtualization.

Articles in This Issue
Practice is never as simple as theory. Two of our three articles in this issue discuss virtual systems in practice.

Although virtualization can deliver great rewards, deploying an IT service or group of services to run virtually is a complicated project that requires planning and systematic execution. Anders Magnusson from CA Services is an experienced implementer of virtualization projects. His article provides an insider's view of the challenges in justifying, planning, and executing virtualization projects.

Efraim Moscovich is an architect of virtualization management tools. He has taken time to consider the potential of virtual systems for self-service in IT.

Finally, Sudhakar Anivella, a senior architect in service management development, discusses another dimension of virtualization. We tend to think of virtualization as synonymous with virtual servers, but in fact the concept is used in many ways in computing: virtual memory and virtual routing are common examples. Data virtualization, as Sudhakar points out, has become very important in IT systems.

The glossary of virtualization terms was a joint project of the editors and the authors. Terms come and go and change meaning all the time as virtualization evolves. In this glossary, we have attempted to define the terms as they are understood today.

Columns
In addition to full-length articles, we have columns from CA Labs senior executive Gabby Silberman and ITIL expert Brian Johnson. Virtualization has long been a staple of mainframe computing, and ideas that are new to distributed computing have been used for a long time on the mainframe. Recently retired CA Technical Fellow Emeritus John Kane has written a column that touches on some of the ways that virtual distributed computing is recapitulating the experience of the mainframe.

All articles in CATX are reviewed by panels of experts from CA Technologies. Ar-
ticles that pass the internal review go on to external review panels made up of
individuals from universities, industry experts, and experts among CA Technolo-
gies customers. These reviewers remain anonymous to preserve the integrity of
the review process, but the editorial committee would like to thank them for
their efforts. They are valued contributors to the success of CATX and we are
grateful to them. If any readers would like to participate in a review panel,
please let us know of your interest and expertise in an email to CATX@ca.com.

The editorial committee hopes you find value in, and are challenged by, this issue on virtualization. Please consider contributing to our next issue, which will center on REST (Representational State Transfer), the "architecture of the World Wide Web." Although REST will be the main theme of our next issue, we will also include additional articles on virtualization, the cloud, and other topics of interest to the IT technical community.

Our April 2011 issue promises to offer a varied range of thought-provoking articles. CATX is open to contributions from everyone, not only CA Technologies employees but all IT technologists. Please address questions and queries to CATX@ca.com.




Virtualization: What is it and what can it do for you?
by Anders Magnusson, Senior Engineering Services Architect, CA Technologies


About the author:
Anders Magnusson is a Senior Engineering Services Architect at CA Technologies and a member of the CA Technologies Council for Technical Excellence.

Since joining CA Technologies in 1997 he has held a number of roles and responsibilities across the organization, but during the most recent several years he has focused on developing standard procedures and best practices for utilizing virtualization and deploying multi-product solutions.

Anders is responsible for providing sizing best practices and tools for several CA Technologies solutions, as well as for virtualization related material on the Implementation Best Practices site, which can be found at https://support.ca.com/phpdocs/0/common/impcd/r11/StartHere.htm

The Promise of Virtualization
Although the concept of virtualization began in the mainframe environment in the late 1960s and early 1970s, its use in the distributed environment did not become commonplace until very recently. Even though the underlying technology and related best practices continue to evolve rapidly, for most application types virtualization has proven mature enough to support business critical systems in production environments.

When done right, virtualization provides significant business value by helping organizations manage cost, improve service, and simplify the process of aligning business with IT. We can see a rapid acceleration in the number of traditional datacenters that are pursuing this value by shifting to a virtualization based model, and some are even taking it one step further by implementing private clouds. How fast this transformation will happen, and how much of the "old" datacenter will instead move out to public clouds, is uncertain. To help with these estimates we can look at what Gartner Inc. and Forrester Research are predicting:

• "Virtualization continues as the highest-impact issue challenging infrastructure and operations through 2015. It changes how you manage, how and what you buy, how you deploy, how you plan and how you charge. It also shakes up licensing, pricing and component management. Infrastructure is on an inevitable shift from components that are physically integrated by vendors (for example, monolithic servers) or manually integrated by users to logically composed "fabrics" of computing, input/output (I/O) and storage components, and is key to cloud architectures. This research explores many facets of virtualization." (Gartner, Inc., "Virtualization Reality", by Philip Dawson, July 30, 2010.)

• "By 2012, more than 40% of x86 architecture server workloads in enterprises will be running in virtual machines." (Gartner, Inc., "IT Virtual Machines and Market Share Through 2012", by Thomas J. Bittman, October 7, 2009.)

• "Despite the hesitancy about cloud computing, virtualization remains a top
  priority for hardware technology decision-makers, driven by their objectives of
  improving IT infrastructure manageability, total cost of ownership, business
  continuity, and, to a lesser extent, their increased focus on energy efficiency."
  (Forrester Research Inc. – Press Release: Cambridge, Mass., December 2, 2009,
  “Security Concerns Hinder Cloud Computing Adoption”. Press release quoted
  Tim Harmon, Principal Analyst for Forrester.)

Despite the awareness of the huge potential provided by virtualization – or even
because of it – there are many virtualization projects that fail in the sense that
they aren’t as successful as expected. This article is written in two parts. Part
one defines virtualization and why organizations choose to use it, while part two
focuses on planning a successful virtualization project.
What is Virtualization?
The first step in understanding what the virtualization effort will achieve is to agree on what we mean by "virtualization". At a very high level, virtualization can be defined as a method of presenting "system users" (such as guest systems and applications) with the big picture (that is, an abstract emulated computing platform) without the need to get into all the little details – namely the physical characteristics of the actual computing platform that is being used.

Virtualization has long been a topic of academic discussion, and in 1966 it was first successfully implemented in a commercial environment when the IBM System/360 mainframe supported virtual storage. Another breakthrough came in 1972, when the first hypervisors were introduced with the VM/370 operating system. The introduction of the hypervisor is important because it enabled hardware virtualization by allowing multiple guest systems to run in parallel on a single host system. Since that time virtualization has developed on many fronts and can include:

Platform or Server Virtualization: In this form of virtualization a single server
hosts one or more "virtual guest machines". Subcategories include: Hardware
Virtualization, Paravirtualization, and Operating System Virtualization.

Resource Virtualization: Virtualization also can be extended to encompass spe-
cific system resources, such as storage and network resources. Resource virtual-
ization can occur within a single host server or across multiple servers (using a
SAN, for example). Modern blade enclosures/servers often combine platform
and resource virtualization, sharing storage, network, and other infrastructure
across physical servers.

Desktop Virtualization: Virtual Desktop Infrastructure (VDI) provides end users
with a computer desktop that is identical or similar to their traditional desktop
computer while keeping the actual computing power in the datacenter.

When this approach is used, the end user requires only a thin client on his desk-
top. All updates or configuration changes to the application or hardware are per-
formed in the centrally located datacenter. This approach provides greater
flexibility when it comes to securing the systems and supplying computing
power on demand to the end user.

Application Virtualization: Application virtualization is a technology designed
to improve portability, manageability, and compatibility of individual applica-
tions by encapsulating the application so that it no longer communicates di-
rectly with the underlying operating system.

Application virtualization utilizes a "virtualization layer" to intercept calls from the virtualized application and translate them into calls to the resources needed from the underlying operating system.
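
To make the idea of an interception layer concrete, here is a deliberately simplified Python sketch (a toy illustration, not any vendor's implementation): a wrapper object intercepts the file paths an application asks for and silently redirects them into a private, per-application store, so the application never touches the real operating system locations.

    import os

    class VirtualFileLayer:
        """Toy 'virtualization layer': intercepts the file paths an application
        requests and redirects them into a private, per-application sandbox."""

        def __init__(self, app_name, sandbox_root="/tmp/appvirt"):
            self.sandbox = os.path.join(sandbox_root, app_name)
            os.makedirs(self.sandbox, exist_ok=True)

        def _translate(self, requested_path):
            # The application keeps using the paths it always used; the layer
            # maps them to a location that it controls.
            safe_name = requested_path.strip("/\\").replace("\\", "_").replace("/", "_")
            return os.path.join(self.sandbox, safe_name)

        def open(self, requested_path, mode="r"):
            return open(self._translate(requested_path), mode)

    # The "application" writes to what it believes is a system-wide path, but the
    # file actually lands in the sandbox, leaving the underlying OS untouched.
    layer = VirtualFileLayer("legacy_app")
    with layer.open("/etc/legacy_app/settings.ini", "w") as f:
        f.write("color=blue\n")
    print(os.listdir(layer.sandbox))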

Computer Clusters /Grid Computing: This type of virtualization connects mul-
tiple physical computers together as a single logical entity in order to provide
better performance and availability. In these environments the user connects to
the “virtual cluster” rather than to one of the actual physical machines.

The use of grid computing or clustering of computers is typically driven by the
need to support high availability, load balancing, or a need for extreme comput-
ing power.

Each one of these general categories can be divided into additional subcate-
gories. All of these potential options make it important that you are clear about
what you are referring to when you talk about virtualization.

The requirements and best practices for each of these different techniques are
very similar – often what is valid for one form is valid for many of the others. In
addition, several of these depend on each other, and by implementing more of
them, you enhance the value. For example, if you are implementing Server Vir-
tualization or a Grid Structure you should also consider various types of resource
virtualization to support the infrastructure.

For the purposes of this article, we are focusing on server virtualization unless
otherwise specified.

Why Use Virtualization?
Now that you know what virtualization is, why do organizations choose to use it? The short answer is to manage cost, improve service, and simplify the process of aligning business with IT. For example, by using virtualized environments, organizations can provide improved service by anticipating and quickly responding to growth in demand. In extreme examples ROI has been achieved in as little as 3-6 months; however, a more realistic expectation is that it will take 12-18 months.
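
As a back-of-the-envelope illustration of that payback arithmetic, the short Python calculation below divides the up-front project cost by the monthly savings from retired hardware, power, and floor space. Every figure is invented for the example; real numbers vary widely by organization.

    # Hypothetical consolidation project; all figures are illustrative only.
    project_cost = 250_000            # new hosts, shared storage, licenses, services
    retired_servers = 60

    monthly_savings = (
        retired_servers * 120         # power and cooling saved per retired server
        + retired_servers * 90        # hardware maintenance contracts not renewed
        + 4_000                       # recovered datacenter floor space
    )

    payback_months = project_cost / monthly_savings
    print(f"Monthly savings: ${monthly_savings:,.0f}")
    print(f"Payback period:  {payback_months:.1f} months")

With these made-up figures the payback period works out to roughly fifteen months, in line with the 12-18 month expectation above.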

Following are some of the common drivers that influence organizations in de-
ciding to virtualize their IT environment.

Hardware Cost Savings through Consolidation of logical servers into fewer physical servers is one of the main promises of virtualization. There are multiple ways in which savings can be realized. First, fewer physical servers may be required. In a well managed virtual environment, multiple logical servers can be hosted on the same physical server. Second, by reducing the number of physical servers required, virtualization can help manage "datacenter sprawl", a savings of both physical space and the utilities required to manage the larger space.

To consolidate successfully, you need to understand the entire picture. An organization can consolidate workload that was previously distributed across multiple smaller, and often underutilized, servers onto fewer physical servers, especially if those servers previously had a limited workload, but these new servers still must have sufficient resources at all times. See the section "New Hardware Requirements" below for more details on this.

Automation and Enhanced Resource Management is, in many ways, related
to hardware cost savings but the drivers are sometimes different:

• Optimized usage of hardware resources. In a non-virtualized environment it is common to have some servers that are barely utilized. Many datacenters are filled with servers that use only a small percent of the available resources. These centers are perfect targets for consolidation and can provide an excellent return on investment.

• Rapid deployment of new servers and applications. In a well managed
  environment with established templates for typical server installations, new
  logical servers can be deployed rapidly on host servers with available capacity.

• Flexibility, ability to provide on demand resources. Many applications require
  significant resources - but only briefly. For example end of month or end of
  year reporting or other specific events may trigger a higher than usual load. In
  a virtualized environment, more resources can be assigned dynamically to a
  logical server or, if the application is designed to support scaling out horizon-
  tally, rapid deployment can supply additional logical servers as worker nodes.

• Flexible chargeback systems. In a flexible virtualized environment an organization can provide a meaningful chargeback/showback system that efficiently encourages system owners to use only the resources they need, without risking the business by running on servers that are inadequate for their needs (a simple worked example follows this list). This is especially true in a highly mature and flexible virtual environment that includes management tools that collect all required metrics and resource virtualization techniques such as storage virtualization with thin provisioning.

• Support test and development by providing access to a large number of
  potential servers that are active and using resources only when needed. This
  need is typically the starting point and an obvious choice to virtualize for any
  environment that requires temporary short-lived servers. It is especially true
  when test and development groups require a large number of different
  operating systems, configurations, or the ability to redeploy a test environ-
  ment quickly from a pre-defined standard.
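
As referenced in the chargeback bullet above, a showback report can be as simple as multiplying measured consumption by published rates. The Python sketch below uses invented rates and usage figures purely to illustrate the mechanism; a real system would pull the metrics from the virtualization management tools mentioned above.

    # Illustrative showback report; rates and usage figures are made up.
    RATES = {"vcpu_hour": 0.03, "gb_ram_hour": 0.01, "gb_storage_month": 0.10}

    usage = [
        {"owner": "payroll",  "vcpu_hours": 1440, "gb_ram_hours": 5760, "gb_storage": 200},
        {"owner": "web_team", "vcpu_hours": 4320, "gb_ram_hours": 8640, "gb_storage": 500},
    ]

    for row in usage:
        cost = (row["vcpu_hours"] * RATES["vcpu_hour"]
                + row["gb_ram_hours"] * RATES["gb_ram_hour"]
                + row["gb_storage"] * RATES["gb_storage_month"])
        print(f'{row["owner"]:10s}  ${cost:8.2f} for the month')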

Fault Tolerance, High Availability, and Disaster Recovery on different levels can be simplified or made more efficient in a virtual environment. In highly available environments, brief interruptions of service and potential loss of transactions serviced at the time of failure are tolerated, while fault tolerant environments target the most mission-critical applications that cannot tolerate any interruption of service or data loss. Virtualization can provide a viable solution for both – including everything from simplified backup/restore of systems to complete disaster recovery or fault tolerance systems supported by the various hardware and virtualization vendors.

A few examples of this scenario are:

• Backup of complete images. A virtual server, by its very nature, is comprised of
  a set of files that can be moved easily between physical servers. A quick
  snapshot of those files can be used to start the server in this exact condition
  on another physical server.

• Simplified disaster recovery solutions. When coupled with the appropriate hardware infrastructure, virtualization strategies can be used to simplify the process of disaster recovery. For example, a typical disaster recovery solution may include distributing resources into primary and secondary datacenters. Solution providers often take advantage of features built into a virtualization infrastructure and sell out-of-the-box solutions to support high availability and disaster recovery.

• Minimize downtime for hardware and software maintenance tasks. All down-
  time due to planned hardware maintenance can be avoided or kept to a
  minimum because an organization can move the active virtual images to
  another physical server while the upgrade is performed.

 With correct planning, change control for software maintenance can also be
 significantly enhanced through judicious use of virtualization. Because the
 complete logical machine can be copied and handled as a set of files,
 organizations can easily set up separate areas such as Development, Quality
 Assurance, Library of available images, Archive of previously used images,
 Staging area for Configuration, and so on. A structure like this one encourages
 organizations to upgrade and test a new version in the “Development” and
 “QA” areas while still running the old version in “Production.” When the new
 version is approved, a small maintenance window can be scheduled to trans-
 fer the new, updated, and verified library image over to the production system.
 Depending on the application, the maintenance window can even be completely eliminated by having the old and the newly updated images running in parallel and switching the DNS entry to point to the updated instance (a minimal sketch of such a DNS cutover follows this list). This approach requires some advance planning, but it has been successfully used by service providers with tight service level agreements.

• Efficient usage of component level fault tolerance. Because all virtualized
  servers share a smaller number of physical servers, any hardware related
  problems with these physical servers will affect multiple logical servers.
  Therefore, it is important that servers take advantage of component level fault
  tolerance. The benefit of taking this approach is that all logical servers can
  take advantage of the fault tolerant hardware provided by the host system.
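
The DNS cutover mentioned in the maintenance bullet above can be scripted. The sketch below uses the open-source dnspython library and assumes a zone that accepts dynamic updates from the management host; the zone name, record name, addresses, and TTL are placeholders, and a real deployment would add authentication (TSIG) and verification steps.

    # Repoint the service's DNS record from the old image to the verified new one.
    import dns.query
    import dns.rcode
    import dns.update

    NEW_IMAGE_IP = "10.20.30.42"                    # address of the updated virtual machine

    update = dns.update.Update("example.internal")  # zone holding the service record
    update.replace("app", 60, "A", NEW_IMAGE_IP)    # short TTL keeps the cutover fast

    response = dns.query.tcp(update, "10.20.30.1")  # authoritative DNS server
    print("DNS switch result:", dns.rcode.to_text(response.rcode()))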

Energy Saving and Green IT. Another justification for using virtualization is to support sustainability efforts and lower energy costs for your datacenter. By consolidating hardware, fewer and more efficiently used servers demand less energy to perform the same tasks.

In addition, a mature and intelligent virtualized environment can power on and
off some virtual machines so that they are active only when they are in use. In
some cases, virtual machines running on underutilized host servers can be
moved onto fewer servers, and unused host servers powered down until they
are needed.

Simplify Management. One of the primary challenges in managing datacenters
is data center sprawl, the relentless increase in diverse servers that are patched
and configured in different ways. As the sprawl grows, the effort to maintain
these servers and keep them running becomes more complex and requires a
significant investment in time. It is worth noting that, unless effective lifecycle
management procedures and appropriate controls are in place, data center
sprawl is a problem that will be magnified in a virtual environment.

Using well controlled and well managed virtualization guest images, however, reduces the number of configuration variations, making it easier to manage servers and keep them up to date. Note that this approach requires that a virtualization project also include a change control process that manages virtual images in a secure way.

When a server farm is based on a small set of base images, these images can be
efficiently tested and re-used as templates for all servers. Additional modifica-
tions to these templates can be automatically applied in a final configuration
stage. When done correctly this approach minimizes the risk of serious failures
in the environment. All changes, including the final automated configuration,
should be tested before they are put in production. This secure environment
minimizes the need for expensive troubleshooting of production servers and
fosters a stable and predictable environment.

Managing Security. Security is one of the major concerns surrounding virtualization. Too often, the main security risk in any environment is the human factor: administrators who, without malicious intent, misconfigure the system. The traditional security models are effective if sufficiently rigorous procedures are followed. In a virtual environment, much of the management can be automated and raised one level so that fewer manual steps are needed to keep the environment secure.

A few examples of this are:

  Patch management. Virtualization allows testing changes in a controlled environment, using an identical image. After the updated image is verified, the new image or the specific changes can be promoted to the production system with a minimum of downtime. This approach reduces the risks of patching the system and, in most cases, if something goes wrong, reversion to a pre-patch snapshot is easy.

  Configuration management. The more dynamic environment and the potential sprawl of both physical and logical servers make it important to keep all networks and switches correctly configured. This is especially important in more established and dynamic virtual environments where virtual machines are moved between host servers based on the location of available resources.

  In a virtual environment, configuration management can be handled by policy-driven virtual switches (a software implementation of a network switch running on the host server) where the configuration follows your logical server. Depending on your solution, you can define a distributed switch where all the resources and policies are defined at the datacenter level; this approach provides a solution that is easy to manage for the complete datacenter (a small illustration of policy following the logical server appears after this list).

  Support for O/S hardening as an integral part of change control. If all servers have been configured using a few well defined and tested base images, it becomes easier to lock down the operating systems on all servers in a well controlled manner, which minimizes the risk of attacks.
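
The point that network policy is attached to the logical server rather than to whichever host it happens to be on (referenced in the configuration management item above) can be expressed without any vendor API. In the hedged Python sketch below, the policy fields and host names are invented; it demonstrates only that the same datacenter-level definition is applied wherever the virtual machine lands.

    from dataclasses import dataclass

    @dataclass
    class PortGroupPolicy:
        """Network policy defined once at the datacenter level (fields are invented)."""
        vlan_id: int
        allow_promiscuous: bool
        max_mbps: int

    # Policies are keyed by logical server, not by physical host.
    POLICIES = {"crm-db-01": PortGroupPolicy(vlan_id=210, allow_promiscuous=False, max_mbps=500)}

    def attach_to_host(vm_name: str, host: str) -> None:
        # Whenever the VM is placed on (or migrates to) a host, the same policy applies.
        policy = POLICIES[vm_name]
        print(f"{host}: port for {vm_name} -> VLAN {policy.vlan_id}, cap {policy.max_mbps} Mbps")

    attach_to_host("crm-db-01", "host-07")   # initial placement
    attach_to_host("crm-db-01", "host-12")   # after a live migration, the policy follows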

Enabling Private Cloud Infrastructure. A highly automated virtualized environment can significantly help your organization create a private cloud infrastructure. Stakeholders can request the resources they need and return them when they no longer are needed. In a highly mature environment where the stakeholder requests resources or services, these requests can be hosted in a private cloud or, if resources aren't available, in a public cloud. This level of flexibility will be difficult to accomplish in an acceptable way without basing the private cloud on a virtual environment. From the requestor's point of view, it doesn't matter if the services in the cloud are hosted on a physical machine, a virtual machine, or some type of a grid, as long as the stakeholder is getting the required resources and performance.

Next Steps
The goals driving your particular virtualization project may include any number
of those identified in this article – or you may have a completely different set of
drivers. What is critical is that you clearly identify those goals and drivers prior
to undertaking the project. Project teams need a clear understanding of what
they are expected to accomplish and what business value is expected to be de-
rived in order to identify the appropriate metrics that will demonstrate the value
of virtualization to the stakeholders. Part two of this article “Planning Your Vir-
tualization Project” examines how these drivers can be used to direct the project
and outlines a number of important areas to be aware of when planning a virtu-
alization project.

Planning Your Virtualization Project

The Importance of Planning
When you are planning a virtualization project, one of the most critical first steps is to ensure that both the project team and all stakeholders understand what the project needs to accomplish, what the supporting technology is capable of, and what the true business drivers behind the project really are. This is true for any project, but it is particularly true for virtualization endeavors because there are many common misperceptions about what virtualization can and cannot offer. For further insights on the benefits of virtualization, and common business drivers, see part one of this article, "What is Virtualization?"

Even though your team may know that a virtualization project can provide significant value, unless that value is explicitly spelled out it runs the risk of becoming "just another big project", which is an invitation to failure. The virtualization project may save the company money, it may make it easier to provision new machines, and, perhaps, it might even reduce the company's carbon footprint, but a project with goals this vague is likely to fail and be superseded by a new project because there is no way of effectively measuring its progress or success. To endure and succeed, a project must have explicit intent, understandable milestones, and clear measures of success defined up front. Without them, expectations will be unclear and there will be no way to accurately communicate the benefits.

Before undertaking any virtualization project, the following questions must be
addressed:

• Maturity levels: What is the current and expected maturity level of the virtu-
  alized environment? (see “Maturity Levels” later in this article for examples).

• Purpose: What are the business drivers for the project?

• What: What processes, functions, and applications will be virtualized?

• Support: Do stakeholders (for example, system owners and executive leaders)
  support the project goals?

• Cost: How much is the project expected to cost, and save?

• Risks: What functional and financial risks will be associated with the project?
  Are they acceptable?

• Scope: What is the timeframe and what resources will be needed to complete
  the virtualization project? (Will it be a single, focused project, or one of multi-
  ple phases and milestones?)

• Changes: Will changes need to occur in the current processes, functions, and applications to support virtualization? Will changes need to occur in the deployment environment?

• Accountability: What measurements will be incorporated that indicate that
  the project has reached its targets and is successful? Which stakeholders need
  to be informed of project progress, and how often?

This list is by no means exhaustive; however, without at least a good understanding of the answers to these questions it is likely that the project will be less successful than it could be. In a larger project where the goal is to virtualize a significant part of the environment or span multiple maturity levels, it is also important to have an open mind and, to some degree, an open project plan that permits incorporation of lessons learned during earlier phases of the project into later phases. Changes to the original stakeholder agreement must have buy-in; a minor change or delay that is communicated is rarely a problem, but ignored changes might turn an otherwise successful project into a failure.

Virtualization Maturity Levels
Analyzing the current state of virtualization, the maturity level, and comparing it to the desired future level simplifies virtualization decisions. Above the starting point, there are typically four levels of virtualization maturity: Level 1, islands of virtualization for test and development; Level 2, consolidation and managing expenses; Level 3, agility and flexibility; and Level 4, continuous adaptivity.

Level 0 – No Server Virtualization
As the starting point of the virtualization maturity "ladder", this level describes an organization which has not yet implemented virtualization.

Level 1 – Islands of Virtualization for Test and Development
This maturity level describes the state of most IT departments before they start a formal virtualization project. Virtualization is often used by individuals or limited groups within the organization without centralized management or resources. At this stage virtualization is used reactively and ad hoc to create virtual machines for testing and development, in order to address specific issues for non-business critical systems when they arise.

Level 2 – Consolidation and Managing Expenses
At this stage the primary driver is to consolidate servers and increase the utiliza-
tion of available resources. When done correctly, consolidating small or under-
utilized servers into larger servers can be very efficient and it can save
significant costs. However, the key to saving costs is identifying the right servers
for virtualization. While there can be valid reasons to virtualize larger servers as
well, it is difficult to realize savings on hardware in doing so.

Level 3 – Agility / Flexibility
The driver for the next step on the virtualization maturity ladder is the need for
enhanced flexibility, enabling you to add and remove resources on demand and
even move workload between physical hosts. This ability can be used to balance
workload or to support a high availability solution that allows virtual machines
to be restarted on a different physical server after a server failure.

Level 4 – Continuous Adaptivity
The driver behind this step is the desire to fully automate all of these functions
in order to enable software solutions, often with hardware support, to pre-
dictably and dynamically balance the load between servers, rebalance resources
between virtual machines, start up and shut down virtual servers based on
need, control power saving features in both the virtual machines and the host
system itself, etc. This automation should be service-aware and should consider
such factors as measured and expected workload, tariffs for energy, importance
and urgency of requested resources, and demand from other services, and
should use all available information to identify the best use of the available re-
sources.

The potential gains from virtualization grow significantly with each step up the maturity ladder; however, climbing the ladder too fast can risk project failure. This is especially true if you lack complete support from the stakeholders and the executive leadership, access to the correct infrastructure and tools, or the required skill set. Travelling up the maturity levels is often a journey, and it is likely that a project will lead to a mix of the different maturity levels, which is expected, but it is important that your goals be clearly defined and communicated.

Virtualization Challenges
Part one of this article “What is Virtualization?” discussed the importance of
identifying the business drivers for a project. After that is done it is equally im-
portant to be aware of problems and challenges that may arise. Awareness can
guide infrastructure design to minimize problems caused by these obstacles.

One common and challenging problem with server consolidation is that some areas of the organization may want to retain control over their existing hardware and applications. This resistance could be caused by a fear of losing control of their environment, fear of inadequate response times or systems availability, concerns about security and handling of confidential data, or general anxiety about changes to their business environment. Some of these concerns may be valid while others may only express a lack of understanding of what this new technology has to offer. The project team must identify these concerns and address them to the satisfaction of the stakeholders.

Even though it is critical to have full support for the project, it is equally important to have a good understanding of the types of problems – both technical and business impact related – that can potentially occur.

A few common challenges are:

Overutilization: One common problem with a virtualized environment is overutilization of physical servers. Although virtualization permits running multiple logical servers on one physical server, applications require more, not fewer, resources when they are virtualized. Virtualization always adds overhead. A virtualized application uses more resources than a non-virtualized installation of the same application, and it will not run faster unless it is hosted on and has access to faster hardware than the non-virtualized installation. The actual overhead depends on a number of factors, but independent tests have shown that the CPU overhead generally ranges from 6%-20% (see "VMware: The Virtualization Drag" at http://www.networkcomputing.com/virtualization/vmware-the-virtualization-drag.php). Overutilization of resources can present a serious problem in virtualized environments that do not have correctly sized host servers. See the section "New Hardware Requirements" below for more details.
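
Before consolidating, a rough capacity check along these lines can flag an undersized host. The Python arithmetic below applies the pessimistic end of the overhead range quoted above and keeps some headroom for spikes; the workload and host figures are invented for the illustration.

    # Rough consolidation check: do these workloads fit on one host once
    # virtualization overhead is added? All figures are illustrative.
    VIRT_OVERHEAD = 0.20                       # pessimistic end of the 6%-20% range
    HEADROOM = 0.30                            # keep 30% spare for spikes and host services

    workload_ghz = [2.1, 1.4, 0.8, 3.2, 1.1]   # measured peak CPU demand per candidate VM
    host_capacity_ghz = 16 * 2.4               # e.g., 16 cores at 2.4 GHz

    required = sum(workload_ghz) * (1 + VIRT_OVERHEAD)
    usable = host_capacity_ghz * (1 - HEADROOM)

    print(f"Required (with overhead): {required:.1f} GHz")
    print(f"Usable host capacity:     {usable:.1f} GHz")
    print("Fits" if required <= usable else "Overcommitted: add capacity or spread the load")

The same kind of check applies to memory, network I/O, and storage I/O, which are just as likely to become the bottleneck.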

Underutilization: Alternatively, the underutilization of servers minimizes the
value of virtualization. To provide a good balance it is important to understand
the environment and to have the necessary tools to monitor and balance the
load dynamically. Typically hypervisor vendors provide tools for this, but 3rd
party vendors can provide added flexibility and value. For example, one organi-
zation I have worked with utilizes virtualization to provide a dynamic test envi-
ronment that can scale to meet the needs of many different groups. Resource
requirements can vary dramatically depending on the type of testing being
done. The environment grew rapidly and initially experienced serious issues with overutilization. The team resolved these issues by implementing a management solution that continuously measured the load and provided early warning of potential overutilization. This allowed them to proactively balance their workloads and add resources when needed.

Single Point of Failure: In a virtualized environment where every host is running multiple logical servers, the impairment of a single physical server could have devastating consequences. Therefore, it is important to implement redundant failsafe systems and high availability solutions to avoid situations where one failing component affects multiple applications. This solution should include configuring redundancy for all critical server components, employing highly available storage solutions (RAID 5 or RAID 1+0), ensuring network connections are connected to separate switches, etc. In addition, in the event everything else fails, we recommend configuring the environment to be fault tolerant so that if one host fails, the guest systems will start on a secondary host. Implemented correctly, virtualized systems are likely to have significantly better uptime than individual systems in physical environments.

One organization that initially experienced a few hectic evenings, as the result of a single failing server bringing down multiple important applications, learned early on the value of clustered host servers with dynamic load balancing. After virtualization was fully implemented, when one host went down the workloads automatically restarted on another node in the cluster. In addition, this organization has also set up separate distributed datacenters so that if one datacenter becomes unavailable the entire organization isn't affected.

Virtualization of Everything: Attempting to virtualize every server and application in an environment can be challenging. It is true that it is possible to virtualize most workloads; however, success requires careful planning that identifies what should be virtualized, why it should be virtualized, and what supporting infrastructure is required. Just because something is possible does not mean that it is a good idea.

Some of the more challenging examples are:

Heavily utilized servers. Significant planning is required before virtualizing servers that often or always register high resource utilization. This is especially true for servers with multiple CPUs. While most hypervisors support guest systems with 4 or more vCPUs, this requires complicated scheduling and the overhead can be steep. Therefore, unless there are compelling reasons and ample available resources, virtualization should be avoided for heavily utilized systems that require multiple CPUs, especially when predictable performance is critical.

Real time requirements. Applications that require real time or near real time response from their servers typically are not suitable for virtualization. The system clock on a virtualized system may lag as much as 5-10 seconds under a heavy load. For typical loads this is not a problem, but systems that require real time or near real time response need special treatment. A satisfactory virtual implementation will require careful analysis of the hypervisor solution's support for real time requirements on guest systems.

Application support. As virtualization becomes more common, many applica-
tion vendors will begin to support their applications in virtualized environments.
Nevertheless, a significant number of applications still are not supported and
even if virtualization is supported, some application vendors may require proof
that any reported issue can be reproduced in a non-virtualized environment.

Licensing. There are still many applications and licensing agreements that aren’t
designed with dynamic virtualized environments in mind. Ensure that licensing
provisions address whether the license cost is connected to the number of phys-
ical CPUs on the host servers and whether it is licensed to only run on a dedi-
cated physical server. In these situations, the license may require payment for a
license for the host server’s 16 CPUs even though the application is assigned to
only one vCPU. Dedicated physical server licenses may prevent dynamic migra-
tion of the logical server to other host servers. Another consideration is that a
well-planned lifecycle management solution requires each image to have multiple active instances for Development, Test/QA, Production, and so on. The organization needs to determine and track whether each one of these instances requires additional licenses.

Direct access to specific hardware. Applications that require direct access to certain hardware, such as USB or serial port keys or other specialized hardware such as video capturing equipment, tape drives, and fax modems, might be complicated or impossible to virtualize in a meaningful way.

New Hardware Requirements. Hardware must be sized appropriately to take advantage of virtualization. For efficient scheduling of resources between multiple logical servers, each host server must have ample resources, including CPU, memory, network I/O and storage I/O. Because many concurrent workloads are sharing these resources, the environment must not only support high volumes, it also must support a large number of transactions. For example, one extremely fast network card can be helpful, but a single fast card is seldom adequate. Efficient virtualization requires equipment with multiple fast I/O channels between all components. Sufficient hardware can also provide added value by acting as component level fault tolerance for all logical servers.

Special focus needs to be put on the storage infrastructure. Connecting all of your servers to a SAN (fibre channel or iSCSI based) is highly recommended for a virtual environment. A fast SAN and dedicated LUNs for the virtual machines avoid many I/O bottlenecks. The more advanced features and common drivers for a virtualized environment, such as hot migration, high availability, and fault tolerance, are impossible or significantly harder to implement without a SAN.

Cooling requirements can be a concern. An older datacenter may develop so
called ‘hot-spots’ when a large number of smaller servers are replaced with
fewer but larger servers. Although new servers may require less energy and cre-
ate less heat overall, the generated heat can be concentrated in a smaller area.
There are many ways to address this situation, including adding new racks with
integrated cooling or developing more complex redesigns of the cooling system.

A lack of sufficient resources is a common obstacle for virtualization efforts. For
example, a large organization quickly became aware that they hadn’t allocated
sufficient storage when the constraints became so severe that they weren’t able
to take snapshots of their images. Consequently, they could not implement their
planned high availability strategy.

Another organization tried to implement a large number of I/O intensive net-
work applications on a host with a limited number of network cards. As a result,
the number of I/O interrupts to each card quickly became a bottleneck for this
physical server.

These examples demonstrate how crucial it is to actively monitor and manage
all types of resources; a resource bottleneck can easily cause an otherwise suc-
cessful project to lose critical planned functionality.

Security. Another common concern is properly securing virtual environments.
Reorganization and consolidation of servers and applications can be disruptive
and risky; however, these risks can be managed. For security, there are advan-
tages and disadvantages to virtualized environments and the two are often
closely related. Items that are typically seen as problem areas or risks can often
turn into advantages when the environment is well managed. For example, new
abstraction layers and storage infrastructures create opportunities for attacks
but these additions have been generally proven to be robust. Nearly all attacks
are due to misconfigurations, which are vulnerabilities that exist in both physi-
cal and virtual environments.


A few common concerns are:

  Management of inactive virtual machines cannot rely on traditional patch management systems. In many virtualized environments, the templates or guest systems that are used as definitive images for deployment may not be accessible to traditional patch management solutions. In a worst case scenario, a poorly managed definitive image may revert an existing image to an unsafe earlier patch level.

  In an environment with strict change control and automation of all configuration changes, this is not a major issue, but in some environments these situations can present major problems.

  Maintenance of virtual appliances. Virtual appliances are pre-packaged solutions (applications, OS, and required drivers) that are executed on a virtual host
 and that require minimal setup and configuration. Appliances can be secured
 further through OS lockdown and removal of any services or daemons that
 aren’t necessary for the appliance. This practice makes the appliance more ef-
 ficient and more secure because it minimizes the attack areas of which a mali-
 cious user can take advantage.

 These non-standard installations can be harder to maintain and patch be-
 cause some standard patches might not work out-of-the-box unless provided
 by the virtual appliance vendor. This problem can be mitigated by testing all
 configuration changes in a separate development and test environment before deploying in production.

 Lack of version control. Virtualized environments that allow guest systems to
 be reverted to an earlier state require that special attention is paid to locally
 stored audit events, applied patches, configuration, and security policies that
 could be lost in a reversion. Strict change control procedures help avoid this
 issue. Storing policies and audit logs in a central location also helps avoid
 problems.

  Server theft. In a non-virtualized environment, stealing a server is difficult. The thief needs physical access to the server to disconnect it and then coolly walk out of the datacenter with a heavy piece of hardware. In a virtual environment, a would-be thief only needs access to the file system where the image is stored and a large enough USB key. Surreptitious network access may be even more convenient for a thief. A successful illegal copy of the virtual image may be undetectable. This issue underscores the need for efficient access control and an audit system that tracks the actual user and not just 'root', 'administrator' or other pre-defined privileged users.

 New abstraction layer. Hypervisors introduce a new abstraction layer that can
 introduce new failures as well as security exposures. Hypervisors are designed
 to be as small and efficient as possible which can be a double-edged sword
 from a security perspective. On the upside, hypervisors have a small footprint
 with few well controlled APIs so they are relatively easy to secure. On the
 downside, lightweight and efficient can mean limited error recovery and secu-
 rity implementation. The downside can be mitigated by configuring high secu-
 rity around the hypervisor including specific security related virtual appliances
 or plug-ins to the hypervisors.

Hyperjacking is an attack on the hypervisor that enables a malicious user to
 access or disturb the function of a large number of systems. So far hyperjack-
 ing hasn’t been a significant problem, but it is critical to ensure that your vir-
 tual environment follows the lockdown procedures recommended by the
 hypervisor vendor and that you apply all recommended security patches.

  Securing dynamic environments. To fully take advantage of the potential provided by a virtual environment, that environment needs to support automated migration of guest systems between host servers. A dynamic environment, however, presents new challenges. When secured resources move from host to host, a secure environment must be maintained regardless of the current host of the guest system. These challenges may not be as problematic as they appear. With policy-based security and distributed vLANs that are managed for a complete group of host servers or the complete datacenter, policies will follow the guest system and remain correctly configured regardless of which server it is currently running on.

Immature or incomplete tools. Over the last several years the tools to manage virtual environments have been maturing rapidly, and much of the functionality for securing the system, automating patching, and managing the virtual system is enhanced frequently. Many of these functions are provided by the hypervisor vendor, while other tools with additional features are provided by third-party vendors.

This rapid development of tools and features can be expected to continue, and it will become increasingly important to have management solutions that can extend across heterogeneous environments - including virtual and non-virtual systems - and all the way out to cloud infrastructures. Precise predictions are impossible, but the industry is aware that virtual environments and cloud solutions will rapidly take over more and more workloads. Management systems of the future will work with and manage a mix of hypervisors, operating systems and workloads in these environments.

Securing and isolating confidential data. The ability to secure and isolate confidential data is a common concern that must be carefully considered when designing storage solutions. Server virtualization itself doesn't add significant risk in this area, but it is important to be aware of because SAN and more complex virtual storage solutions are often employed to maximize the value of a virtualized environment. Further discussion of this topic is beyond the scope of this article; when these solutions are employed, the steps required to secure the storage may call for special or vendor-specific knowledge.

 This is particularly important if data is governed by regulations such as HIPAA,
 GLBA, PCI, SOX, or any other federal or local regulations. When regulations
 such as these apply, data security often must be designed in consultation with
 auditors and regulators.

With a properly managed virtualization project, these risks can be minimized;
however, it is important that organizations be aware of the risks and address
them appropriately. A well-managed virtual environment can provide greater se-
curity by ensuring that all servers are based on identical, well-tested base im-
ages.


In addition, security solutions are easier to implement and administer when
based on centralized policies. For example, consider an organization that needs
to distribute management of much of the guest system to individual owners.
These individual owners might choose to revert to an earlier snapshot at will.
Central IT manages the security on these systems by regularly scanning them to
ensure they are correctly patched and do not contain any viruses or other mal-
ware. When non-critical issues are detected, the owner is notified; for critical is-
sues the system is disconnected from the network.

Appropriate Skills and Training. With the right tools and planning, manage-
ment of a virtualized environment can be simple and streamlined; but the IT
staff may need additional training to acquire new skills.

Administrators who don’t fully understand the specific tools and requirements
for virtualized environments can easily misconfigure the environment – result-
ing in environments with unpredictable performance or, worse, security
breaches. Sufficient time and resources for training are required both before and
throughout any virtualization project.

Consider the same organization noted in the previous example. They needed to distribute many management tasks to individual owners. They ran into a problem when a group using a guest system with 2 TB of data took a snapshot of the system. The local manager didn't realize that the system would now need 4 TB of storage and that it would take 5 hours to commit the snapshot. The issue was resolved by having a certified professional educate the system's owner about the impact various actions have on storage requirements and performance. They were able to remove the snapshot safely, without losing any data, but could have avoided the issue if they had taken the proper training first.
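
The arithmetic behind that surprise is easy to sketch. The short Python example below is purely illustrative (the 115 MB/s commit throughput is an assumed figure, chosen only to show why a 2 TB guest can take roughly five hours to commit):

    def snapshot_estimate(base_size_gb, commit_throughput_mb_s):
        """Worst-case snapshot growth and a rough commit time."""
        # A snapshot delta can, in the worst case, grow to the size of the base
        # disk, so plan for roughly double the storage while the snapshot exists.
        worst_case_storage_gb = base_size_gb * 2
        # Committing merges the accumulated delta back into the base disk.
        commit_hours = (base_size_gb * 1024.0 / commit_throughput_mb_s) / 3600.0
        return worst_case_storage_gb, commit_hours

    storage_gb, hours = snapshot_estimate(base_size_gb=2048, commit_throughput_mb_s=115)
    print("Plan for up to %d GB of storage; commit may take about %.1f hours"
          % (storage_gb, hours))
    # Plan for up to 4096 GB of storage; commit may take about 5.1 hours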

General Project Risks. Virtualization projects are subject to the same generic
risks as any major project. Both scope creep and unrelated parallel changes with
entangling side effects can derail virtualization projects as quickly and com-
pletely as any other project.

Design Approach
Given an understanding of the reasons for virtualization, the business drivers,
the possible effects on business and potential obstacles, the sources of failure
and their mitigation, project planning can begin in earnest. The next step for a
virtualization project is to carefully understand and analyze the environment. A
successful virtualization project is the result of more planning than anyone ex-
pects. Some specific planning steps are laid out here.

Identify Workloads Appropriate to Virtualize
The first step is to identify the scope of this project, that is, the applications and
servers to be included. The bullet item Virtualization of Everything listed in the
previous “Typical Challenges with Virtualization” section identified several types
of servers that are difficult to virtualize. Avoid virtualizing these types of servers
unless there is a valid reason and a plan for addressing the concerns. Fortu-
nately, few servers fall into these categories. Most hypervisor vendors provide
tools that can assist with this process, but to get the best possible result and
avoid being at the mercy of the vendor, you should have a good understanding
of the environment in question. The following server categories are listed in the
order of suitability for virtualization:

Rarely used servers that must be accessed quickly. Virtualizing these servers al-
lows the organization to keep a large library of servers with different operating
systems and configurations with a minimum hardware investment.

They are typically used for:




This starting point is common for many companies because the value is sig-
nificant and the risks few. Value is realized through faster provisioning of new
servers, reduction of provisioning errors, and minimized hardware investment.

Additional worker nodes to handle peak loads. This approach is especially useful
when applications can be dynamically scaled out with additional nodes sharing
a common virtual environment. If the environment is maintained and sufficient
resources are available when needed, this scenario adds great business value. A
just-in-time automated worker node provisioning system maximizes this value.

Consolidation of lightly used servers. Some examples of lightly used servers
include:

• Service Providers (xSP) with many small clients.

• Multiple mid-tier managers or file and print servers originally implemented
  on separate servers for political, organizational, or legal reasons.

In many cases isolation provided by virtualization is sufficient, especially if the
data is separated onto private disk systems; however, you should verify that vir-
tualization satisfies the organization’s isolation and separation requirements.

Servers with predictable resource consumption profiles allow planning the
distribution of work to virtualized servers. In these cases, keep in mind that:

• You should beware of applications with heavy I/O.

• Applications that require different sets of resources at the same time can
  coexist on the same physical server.

• Applications that require the same resources at different times can also
  coexist on the same physical server.

In each of these cases, value comes from reducing the number of servers, re-
sulting in both hardware maintenance and management cost savings. Unless a
project falls into one of these categories, virtualization alone seldom saves
money. There are other good reasons to consider virtualization, but you should
be aware that cost savings may not appear.


Understand and Analyze the Environment
Server consolidation is an opportunity to raise the virtualization maturity level of the environment, or to prepare to raise it by identifying aligned procedures that can be automated and enhanced.

The analysis should include performance profiles of individual servers and applications, covering all critical resources (CPU, memory, storage I/O and network
I/O) and their variation over time. Both the size and the number of transactions
are important. An understanding of when different applications need resources
and under which circumstances helps determine which applications are suitable
for virtualization and which can share resources and be co-located in the same
resource groups.

Many hypervisor vendors have tools that can assist with this process. However,
regardless of which tool you are using, it is important to monitor performance
over a period of time that also includes any expected performance peaks. Cap-
turing a baseline of this information is recommended so that it can be com-
pared against corresponding data collected from the virtualized environment. In
situations where all expected peaks can't be measured, it is important to carefully analyze and estimate the needs.
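
As one possible starting point (an illustration, not a recommendation of any specific tool), the sketch below uses the third-party Python library psutil to sample the critical resources named above at a fixed interval and write them to a CSV file that can serve as the baseline:

    import csv
    import time
    import psutil  # third-party library: pip install psutil

    INTERVAL_SECONDS = 60          # sampling interval; tune to the environment
    OUTPUT_FILE = "baseline.csv"   # hypothetical output file for the baseline

    with open(OUTPUT_FILE, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "cpu_pct", "mem_pct",
                         "disk_read_bytes", "disk_write_bytes",
                         "net_sent_bytes", "net_recv_bytes"])
        while True:
            cpu = psutil.cpu_percent(interval=1)    # percent CPU over one second
            mem = psutil.virtual_memory().percent   # percent of memory in use
            disk = psutil.disk_io_counters()        # cumulative disk I/O counters
            net = psutil.net_io_counters()          # cumulative network I/O counters
            writer.writerow([int(time.time()), cpu, mem,
                             disk.read_bytes, disk.write_bytes,
                             net.bytes_sent, net.bytes_recv])
            f.flush()
            time.sleep(INTERVAL_SECONDS)

Running a collector like this on each candidate server through at least one full business cycle, including the expected peaks, produces the baseline that later measurements from the virtualized environment can be compared against.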

This analysis also requires consideration of social engineering and the types of
events that trigger resource consumption. You especially need to gauge the risk
that the same events will trigger multiple applications to use more resources.
Awareness of these scenarios is critical to ensure acceptable response times
during peak load times for different solutions.

Consider the following examples:

A typical environment in which, at the end of every workday, a majority of the
users:

• Send an email, synchronize email folders and then logout from the mail server.

• Run reports to prepare for the following day’s activities.

• Print these reports and, perhaps, additional documents to bring home with
  them for review.

• Make backup copies of a number of important files to a folder on a file server.

An environment in which an unplanned event or fault occurs that triggers activ-
ity on multiple systems such as:

• A fault triggers the fault management systems to alarm, perform root cause
  analysis, handle event storms, and carry out certain automation tasks.

• The end users notice the problem and use knowledge tools and service desk
  functions to determine if the problem is known and, otherwise, report it.

• The operations and the help desk team receive the alert from the fault
  management and service desk system and connect to the service desk, CMDB,
  asset management, or network and system management tools to

troubleshoot and correct the issue.

If the applications share a common host system whose requirements are based
on ordinary usage, these virtual machines will slow down from the higher peak
load as a result of the activity of hundreds or thousands of users. Just when the
systems are most needed, they become overloaded.

Tracking the consumption of critical resources over time will reveal patterns of resource usage by servers. Based on that knowledge, you can determine which servers can safely be virtualized, what resources they need, and which applications can suitably share resources. This will enable you to more effectively pair virtual machines that stress different types of resources or that stress the system at different points in time.
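
The pairing decision can be reduced to simple arithmetic on those baselines. The sketch below is illustrative only (the sample data is invented); it compares the peak of the combined load with the naive sum of individual peaks, and workloads whose combined peak stays well below the host's capacity are good co-location candidates:

    def colocation_check(series_a, series_b, host_capacity_pct):
        """Compare the combined peak demand of two workloads with host capacity.

        series_a and series_b are CPU utilization samples (percent of one host)
        taken at the same points in time, for example from a baseline file.
        """
        combined_peak = max(a + b for a, b in zip(series_a, series_b))
        sum_of_peaks = max(series_a) + max(series_b)
        return combined_peak, sum_of_peaks, combined_peak <= host_capacity_pct

    # Invented hourly samples: one workload peaks during the day, the other at night.
    web_frontend = [70, 75, 80, 40, 20, 15, 10, 10]
    nightly_batch = [5, 5, 10, 15, 30, 60, 75, 80]
    peak, naive, fits = colocation_check(web_frontend, nightly_batch, host_capacity_pct=100)
    print("Combined peak %d%% (sum of peaks %d%%), fits on one host: %s" % (peak, naive, fits))

In this invented example the two workloads peak at different times, so their combined peak (90%) is far below the 160% that adding the individual peaks would suggest.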

Hypervisor and Supporting Software Selection
A hypervisor must work well in the environment and efficiently support its requirements. A few years ago there was only a limited selection of hypervisors, but today the number of solutions has increased. Independent lab tests show that each of the major hypervisor solutions has advantages and disadvantages.

A few important areas to scrutinize when selecting a hypervisor vendor are:

Organizational and Social Requirements: Requirements that arise from the knowledge and experience of the people in the environment are often as important as, if not more important than, the technical requirements. These requirements can affect the success and cost of the project.

For example:

• Does the organization have experience or knowledge about one specific
  solution?

• Do preferred partners have knowledge or experience with any potential
  solutions?

• Have solutions been tested or can they be tested easily with the planned
  hardware platform and most critical applications?

Required Functions and Protocol: With the gradual standardization of basic hy-
pervisor functions, many of the solutions from the major vendors are becoming
similar. Added value has become the primary differentiator in the form of:

• Efficient and dynamic automated migration that moves virtual servers
  between physical hosts. Load balancing and high availability solutions
  controlled by the vendor’s tools and integrated with standard enterprise
  management solutions are important here.

• Support for specific hardware combinations. For example, more advanced
  functions like hot migration commonly require the servers (especially CPUs)
  to be identical or similar. Some hypervisors also allow compatibility mode
  with mixed CPU versions, but this limits the systems to using only the
  functionality that all of the CPUs in use have in common.


• Support for existing or planned SAN solutions.

• Support for multiple storage repositories and dynamic move and rebalance
  of virtual images between repositories.

• Support for all, or at least a majority of, existing or planned software applications.

• Support for all operating systems planned for virtualization (32/64 bits
  versions of Windows, UNIX and/or Linux).

• Ability to access, utilize, and efficiently distribute all required resources.

• Management tools or support for management tools to monitor performance
  and availability and use this information to automate your environment.
  Preferably the solution will have an open API to integrate it with existing
  enterprise management systems.

• Built-in functions and APIs to manage advanced functions for security, high
  availability, fault tolerance, and energy saving.

These are just a few examples; a project should carefully list the requirements
important in the business environment. Describing the importance of each re-
quirement and the consequences of lack of support will simplify prioritization of
options.
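
One lightweight way to turn such a requirements list into a comparable ranking is a weighted scoring sheet. The following minimal Python sketch shows the mechanics; the requirements, weights, and vendor scores are invented placeholders, not an evaluation of any real product:

    # Weight: importance to the business (1 = nice to have, 5 = critical).
    requirements = {
        "hot migration": 5,
        "SAN support": 4,
        "open management API": 3,
        "in-house experience": 4,
    }

    # Score: how well each candidate meets the requirement (0-10), from testing.
    candidates = {
        "Hypervisor A": {"hot migration": 9, "SAN support": 8,
                         "open management API": 6, "in-house experience": 7},
        "Hypervisor B": {"hot migration": 7, "SAN support": 9,
                         "open management API": 8, "in-house experience": 4},
    }

    def weighted_score(scores):
        """Sum of (requirement weight x candidate score) over all requirements."""
        return sum(weight * scores.get(req, 0) for req, weight in requirements.items())

    ranked = sorted(candidates.items(), key=lambda item: weighted_score(item[1]), reverse=True)
    for name, scores in ranked:
        print("%-12s %d" % (name, weighted_score(scores)))

The numbers matter less than the exercise: writing down the weights forces the project team to agree on which requirements are genuinely critical before vendor demonstrations begin.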

Virtualization Management Tools
Management of an environment becomes even more critical when virtualization is employed. Some of the common management issues related to virtualization include the need to:

• Simplify creation of new virtual servers and migration of existing systems into
  a virtualized environment.

• Predict and track virtual environments that compete for server and storage
  resources.

• Predict and track performance utilization in real time as well as historical
  trends in individual environments, the host system as well as the SAN system,
  and preferably, in a way that allows correlation between these components.

• Provide tools that trace resource utilization and up and down time, and connect metrics
  from these tools with chargeback and showback systems. Efficient usage of
  chargeback systems, together with mature systems that spin up and down
  servers as required, allows the organization to encourage system owners
  to manage their environments efficiently and, therefore, maximize the impact
  of Green IT and minimize energy bills.

• Provide management tools supporting life cycle management processes with clear
  stages for development, quality assurance, library of available images, archive,
  configuration, and production.

• Provide efficient tools for workflow orchestration and automation to simplify and
  modularize automation by securely reusing previously created tasks.

While implementing automation, focus on "low hanging fruit": simple automations that clearly save money or add security. Complex one-off automation tasks can be expensive to maintain and are often not worth the effort.

• Provide tools that intelligently and actively manage the environment based on
  policies, measured performance, and events. This added flexibility can be one of
  the great advantages of virtualization. A few examples are:

        • Dynamically changing resources available to virtual machines

        • Moving virtual machines between different host servers as needed

        • Dynamically provisioning and configuring servers on demand or when
          triggered by policies

        • Dynamically shutting down virtual machines and host servers when
          they aren’t being used

 If these capabilities aren’t managed appropriately, these otherwise great
 features can present some very significant risks.

• Manage “VM sprawl” by implementing good change control and life cycle
  management processes that track where, why, and how virtual applications
  are running and which resources they use.

• Provide tools for backup and disaster recovery of the virtual environment.

• Provide tools and procedures to manage security, including patch manage-
  ment tools, firewall integrated with the hypervisor, and various security related
  virtual appliances for the virtual environment.

If the management tools can handle most of these issues and include basic hot migration (vMotion, Live Migration, XenMotion, or similar), the environment will support efficient load balancing between the servers. Although some management tasks can be automated, it is important to be able to predict, whenever possible, the resources that are required before they are required. This approach demands a strong understanding of the business systems and occasional human intervention.
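
To make the policy-driven part of this concrete, the sketch below uses the libvirt Python bindings to find running guests whose recent CPU consumption is negligible, which a policy engine might treat as candidates for powering down. It assumes a libvirt-managed host (such as KVM or Xen); the connection URI, polling window, and idle threshold are arbitrary values for illustration:

    import time
    import libvirt  # Python bindings for the libvirt virtualization API

    POLL_SECONDS = 60       # sampling window; arbitrary for this illustration
    IDLE_THRESHOLD_PCT = 5  # percent of one CPU; arbitrary for this illustration

    conn = libvirt.open("qemu:///system")  # adjust the URI for the host in question

    def cpu_time_ns(dom):
        """Cumulative CPU time consumed by the guest, in nanoseconds."""
        return dom.info()[4]

    running = [conn.lookupByID(dom_id) for dom_id in conn.listDomainsID()]
    before = {dom.name(): cpu_time_ns(dom) for dom in running}
    time.sleep(POLL_SECONDS)

    for dom in running:
        used_ns = cpu_time_ns(dom) - before[dom.name()]
        pct = 100.0 * used_ns / (POLL_SECONDS * 1e9)
        if pct < IDLE_THRESHOLD_PCT:
            # A real policy engine would check ownership, schedules, and approvals
            # before calling dom.shutdown(); here the candidate is only reported.
            print("Idle candidate: %s (%.1f%% CPU over the last minute)" % (dom.name(), pct))

A report like this is only one input; as noted above, predicting demand ahead of time and involving the system owners remains essential before any guest is actually shut down.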

The importance of a holistic view of datacenter management solutions cannot be overemphasized. Datacenter management solutions must support the complete environment, virtual and non-virtual systems, both on-premise and off-premise in cloud infrastructures.

The solution should focus on business services and the role of IT in the busi-
ness, and, when needed, seamlessly drill into other aspects of management and
the business ecosystem. To accomplish this holistic approach, virtualization
tools must cooperate and integrate with the existing enterprise management
software.

Executive Buy-in
Having examined what virtualization can and cannot do and the considerations
for deploying virtualization in an environment, we return to a crucial step in the

project plan and one that can pose the most common obstacle to success:
stakeholder support.

Without executive support and backing from all important stakeholders, any
project is likely to fail or achieve only partial success and profitability.

The following steps will help garner support:

Identify the Importance: Articulate the importance of the virtualization project
to both the company as a whole and to the stakeholder’s organization and fa-
vored projects.

The business drivers listed earlier in this article are a starting point. Spell out the savings the project will generate and how it will support new business models that will create new revenue streams. Will it make the organization more efficient, minimize the lead time to provision new services, and so on?

Always communicate the importance of the project in a way that makes sense
and is relevant to the target audience.

Communicate Risks and Risks of Inaction: Honestly and sincerely point out
the risks. The stakeholders must buy into the true picture. Hidden facts seldom
stay hidden forever. A strong supporter who feels misinformed by the project
group can easily turn into an even bigger obstacle, resulting in severe damage
to the project.

Explain Migration without Risk of Interrupting the Business: A main concern
for the business owners is interruption in the business. A detailed migration
plan that addresses interruption of business is essential. Point out that a mature
and flexible virtualized environment will minimize downtime for planned out-
ages.

Listen: It is important to listen to concerns, investigate whether those concerns
are valid and, if so, identify how they can be addressed. Again, the key to a suc-
cessful project is to have strong support from the stakeholders.

Proof Points
Proof points are measurements that indicate the degree of the project’s success.
Without identifying these points, the value of the virtualization will be obscure.
These metrics will also help obtain executive buy-in and support for the project
by associating it with measurable gains in productivity – or reductions in costs
or response time. This is especially important if the stakeholders have previ-
ously raised concerns.

Proof points can be derived from business drivers and their baseline metrics. For
example, if the intent is to reduce the time it takes to deploy new hardware or
software, first identify current deployment times. If the intent is to save money
through hardware consolidation, identify the costs for maintaining the current
hardware, including cooling costs for the data center. Then, follow up with those
same measurements after the project or a significant phase in the project has
completed.
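
As a purely illustrative example (all figures invented), a before-and-after comparison of such proof points might be computed as follows:

    # Hypothetical baselines captured before the project and re-measured afterwards.
    baseline = {"server provisioning (days)": 14.0,
                "datacenter power (kW)": 120.0,
                "cost per application ($/month)": 900.0}
    after = {"server provisioning (days)": 2.0,
             "datacenter power (kW)": 85.0,
             "cost per application ($/month)": 610.0}

    for metric, before_value in baseline.items():
        improvement_pct = 100.0 * (before_value - after[metric]) / before_value
        print("%-32s %8.1f -> %8.1f  (%.0f%% improvement)"
              % (metric, before_value, after[metric], improvement_pct))

Presenting the same metrics, measured the same way, before and after each project phase keeps the discussion with stakeholders grounded in the business drivers rather than in anecdotes.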



Summary – Five Key Points
The five points to remember for a successful virtualization
project:

Understand Why and What: Clearly understand the reason
for the project, the business drivers, and the applications
and functions to be virtualized. The scope of the project
must be clearly defined including phases for a staged ap-
proach, milestones, and the appropriate metrics to meas-
ure progress and expected outcome.

Identify the Expected Risks: Risks, both functional and fi-
nancial, are expected and acceptable. Virtualization can
provide great value, but like any project, there are risks.
Risks can usually be managed, but the key is awareness
and planning by the project team and the stakeholders.

Virtualize Appropriate Workloads and Avoid Overutilization (and Underuti-
lization): A common reason for virtualization failure is unreliable performance
after applications have been virtualized. Avoid this situation by ensuring that
too many applications do not share limited resources, and avoid host systems
with inadequate bandwidth or inadequate support for large numbers of I/O
transactions. Conversely, overestimating the amount of resources required can
result in too many idle resources and can reduce the overall ROI.

When virtualizing an environment, the key is to choose the appropriate workloads to virtualize, provide modern, high-end, server-class host servers, and carefully manage and rebalance the workload so that all applications have sufficient resources during their peak times.

Get Support of Stakeholders: Get support from executive management as well
as from the business owners before starting the project. Listen to concerns and
address them. Have buy-in before the project starts.

Establish Success Criteria: Each project or subproject must have defined suc-
cess criteria. These should include a comparison with the baselines from before
virtualization. These criteria should be tied directly to the project's business driv-
ers, such as cost per application, energy consumption in the datacenter, speed
to provision a server, or avoided alternative cost for building a new datacenter.

Virtualization offers efficiency and agility, but there are many pitfalls and obsta-
cles to success. By following these five key points and the principles explained
in this article, risks are reduced and chances for success are maximized.

Additional insights on implementing virtualization can be found in the Virtual-
ization Best Practices section of the Implementation Best Practices pages,
which you can access through the following URL:
https://support.ca.com/phpdocs/0/common/impcd/r11/virtualization/virt_Frame.htm

Anders thanks Terry Pisauro, Engineering Services Architect at CA Technologies,
for providing valuable editing contributions.


Leading Edge Knowledge Creation
by Dr. Gabriel Silberman, Senior Vice President and Director, CA Labs, CA
Technologies

About the author: Gabriel (Gabby) Silberman is Senior Vice President and Director of CA Labs, responsible for building CA Technologies research and innovation capacity across the business. In collaboration with Development, Technical Services, and Support, and working with leading universities around the world, CA Labs supports relevant academic research to further establish innovation in the company's key growth areas.

Gabby joined CA in 2005, bringing with him more than 25 years of academic and industrial research experience. He joined CA from IBM, where he was program director for the company's Centers for Advanced Studies (CAS) worldwide. Previously, Gabby was a manager and researcher at IBM's T.J. Watson Research Center where he led exploratory and development efforts, including work in the Deep Blue chess project.

Gabby earned bachelor of science and master of science degrees in computer science from the Technion – Israel Institute of Technology, and a Ph.D. in computer science from the State University of New York at Buffalo.

Ever since businesses began looking for efficiencies by outsourcing or leveraging specialized services or favorable cost structures, one of the challenges has been to use this approach for acquiring leading edge knowledge. It may be argued that mergers and acquisition activities fulfill this role, as does recruiting of new personnel, either new university graduates or those who have accumulated professional experience. But these methods tend to be sporadic and do not represent a continuous process for bringing knowledge into a large and diverse organization.

At CA Technologies we have taken a different approach to tap into external resources. We aim to carry out a broad agenda geared towards continuous, in-context knowledge creation, to complement other more sporadic efforts. In contrast to the "pull" model used by some companies to attract ideas and proposals, CA Labs, the research arm of CA Technologies, relies on a "push" paradigm. This enables us to reach out to the research community to seek insights into technical challenges, the evolution of existing products, point solutions, or research to assist in new product development.

Using a popular context, the … as a Service (aaS) framework, think of CA Labs as an internal service provider. Its offerings include access to an extensive network of leading academics, and the mechanisms (legal, financial, etc.) to establish a framework for collaboration. This would be the equivalent of an Infrastructure as a Service (IaaS) offering. On top of this, foundational research projects may be structured to undertake long-term technical initiatives. These are based on needs identified by the Office of the CTO and others responsible for charting and executing the strategic direction for CA Technologies' products and services. These initiatives will explore technological advancements prior to potential implementation as CA offerings, and constitute a Platform as a Service (PaaS) type of offering.

To complete the analogy with a Software as a Service (SaaS) offering, CA Labs provides the capability to create "research sprints." These are short term efforts, based on the relationships established through our long-term trusted relationships with academic partners and their deep knowledge of interests, products and services relevant to CA Technologies.

Consider the example of Reacto, a tool for testing the scalability of reactive systems developed as a foundational research project (think PaaS) in collaboration with researchers from the Swinburne University of Technology in Australia and CA's development lab in Melbourne.

In a sophisticated enterprise application, a single user action may trigger a number of coordinated activities across a variety of systems. Before deploying such an application, it needs to be thoroughly tested against realistic operation scenarios for quality assurance purposes. However, replicating such a large-scale testing environment is challenging and even cost prohibitive, due to resource
and complexity constraints. The Reacto project developed a general emulation
framework, using lightweight models to emulate the endpoints with which the
system under test interacts. This enables large-scale realistic emulation of a va-
riety of enterprise production environments using only a small number of physi-
cal machines.

Reacto has been used to demonstrate the scalability of several CA components
and products, including the Java Connector Server (a component of CA Identity
Manager).

Now let us look at an example of a foundational research (PaaS) effort which be-
came the basis for a research sprint (SaaS). The case in point is the Data Mining
Roles and Identities project done in collaboration with researchers from the Uni-
versity of Melbourne in Australia.

Role mining tools automate the implementation of role based access control
(RBAC) by data mining existing access rights, as found in logs, to reveal existing
roles in an enterprise. Along with individual roles, a role hierarchy can be built
and roles may be assigned to individual users. Additionally, data mining may be
used to identify associations among users, accounts and groups, and whether
these associations are necessary.

As a result of CA’s acquisition of Eurekify and its Enterprise Role Manager, the re-
searchers were asked to move their focus to leverage the role visualization tool
developed as part of the project. This request gave birth to a research sprint to
develop a tool to visualize access control data. Using the tool it is possible to vi-
sualize the “health” of a customer's RBAC implementation, before and after the
deployment of CA's Role and Compliance Manager. Furthermore, the tool may
be used periodically to detect and investigate outliers within an enterprise’s role
hierarchy, as part of governance best practices.

The success of the research model practiced by CA Labs has been sustained by
these and other examples of innovative and practical implementation of knowl-
edge transfer.




Virtualization: Enabling the Self-Service Enterprise
by Efraim Moscovich, Principal Software Architect, CA Technologies


About the author: Efraim Moscovich is a Principal Software Architect in the CA Architecture Team, specializing in Virtualization and Automation.

He has over 25 years of experience in IT and Software Development in various capacities including IT Production Control, programmer, as a development manager, and architect. Efraim has been involved in the development of many products including Unicenter NSM, and Spectrum Automation Manager.

He has expertise in various domains including Event Management, Notification Services, automated testing, web services, virtualization, cloud computing, internationalization & localization, Windows internals, clustering and high-availability, scripting languages, and diagnostics techniques.

He is an active participant in the DMTF Cloud Management Work Group.

Prior to joining CA Technologies, Efraim worked on large scale performance management and capacity planning projects at various IT departments.

Efraim has a M.Sc. in Computer Science from New Jersey Institute of Technology.

“To provision a complete multi-system SAP CRM application, press or say ‘1’.”

Virtualization is not a new concept; it has been around since the early 1960s. Self-service systems such as travel reservations and online shopping are an integral part of today's dynamic economy. The marriage of virtualization technologies and self-service concepts has the potential to transform the traditional datacenter to a Self-service App Store.

This article examines virtualization technologies and the role they play in enabling the self-service enterprise. It also discusses key concepts such as service, service catalog, security, policy, management, and management standards, such as Open Virtualization Format, in the context of self-service systems.

1.0 IT Services and the Services Gap
In today's enterprise, the IT department has the primary responsibility of delivering, running, and maintaining the business critical services (line of business), also known as production. This includes the infrastructure (such as servers, network, cabling, and cooling), software, and management functions to ensure high availability, good performance, and tight security. Downtime or degraded functionality may cause significant negative financial impact to the bottom line.

The critical business services include, among others, email, customer relationship management or practice management, supply chain management, manufacturing, and enterprise resource planning.

In addition to the production services, the IT department has to provide infrastructure and support for a wide variety of other services, which range from assisting the sales force with setting up demo systems for clients to helping the engineering department with their testing labs. The typical IT department has a long backlog of projects, requests, and commitments that it cannot fulfill in a timely manner. Many of the backlog items are requests for evaluation and purchase of new hardware or software, to set up and configure systems for end users, create custom applications for the enterprise, and provide short-term loaners for product demos and ad-hoc projects. For example, to convert a large collection of images from one format to another, the project team required hundreds of computers to run the conversion but only for a few days or weeks.

The gap between the 'must do' services and the 'should do' services is typically called the IT service gap.

The struggle to close this gap and provide high quality services to all IT users on time at a low cost has been raging for years. Some of the solutions used to improve the speed and quality include:
• Automating procedures (including scripting and job scheduling systems)

• Adopting standardized policies and procedures (such as ITIL1)

• Distributing functions to local facilities

• Sub-contracting and using consulting services

• Outsourcing the whole data center or some services to third parties

• Enabling end users to fulfill their own needs using automated tools
  (self-service)

2.0 Self-Service
The self-service concept dates back to 1917 when Clarence Saunders2, who
owned a grocery store, was awarded the patent for a self-serving store. Rather
than having the customers ask the store employees for the groceries they
wanted, Saunders invited them to go through the store, look at the selections
and price of goods, collect the goods they wanted to buy, and pay a cashier on
their way out of the store.

Some well-known self-service examples include:

• Gas stations, where the customers pump their own gas rather than have an
  attendant do it

• Automatic Teller Machines (ATMs) that enable consumers to have better
  control of their money

• The human-free, and sometimes annoying, phone support systems in many
  companies (“for directions, press 1”)

• The ubiquitous shopping web sites (such as Amazon) that almost transformed
  the self-service concept into an art form

The main reasons for the proliferation of the self-service paradigm are the po-
tential cost savings for the service providers and the assumed better service ex-
perience for the consumers.

In order for a service to be a candidate for automated self-service, some or all of
the following conditions must be met:

• There are considerable cost savings or revenue opportunities for the provider
  in operating the service.

• There is a service gap between what the provider can offer and what the
  consumer demands.

• The service can be automated (that is, the service has a discrete and repeatable
  list of steps to be carried out, and no human intervention is required from the
  provider).

• The implemented self-service is convenient and easy to use by the consumers,
  and is faster than the non-automated version.

• The service offering fits nicely within the consumers’ mode of operations and
  does not require expert knowledge outside their domain.

The IT department adopted the self-service paradigm for many of its functions even before virtualization was prevalent. Examples include the Help Desk and other issue tracking systems, and reservation systems for enterprise resources. However, the implementation of more complex and resource intensive self-service systems was not possible, at an acceptable cost, until the arrival of virtualization technologies.

3.0 Virtualization
According to the Merriam-Webster dictionary, the word ”virtual” comes from
Medieval Latin ”virtualis”, from Latin ”virtus” strength, virtue, and it means ”effi-
cacious” or ”potential”3.

In our context, virtualization is a form of abstraction – abstracting one layer of
computing resources (real or physical) and presenting them in a different form
(virtual, with more virtues) that is more efficacious and has more potential. Usu-
ally the resources appear larger in size, more flexible, more readily usable, and
faster than they really are in their raw form.

There are many forms of virtualization, from hardware or server virtualization
(that can create what is commonly known as Virtual Machines or VMs), to Stor-
age (implemented via SAN or NAS), to Network, and Application virtualization.

Emerging forms of virtualization that are entering the mainstream are network
and memory virtualization (a shared resource pool of high-speed memory banks
as opposed to virtual memory), and I/O virtualization.

Server Virtualization is achieved by inserting a layer between the real resources
and the services or applications that use them. This layer is called a Virtual Ma-
chine Monitor, a Hypervisor, or Control Program.




Figure 1: Virtualization (VMware)

These virtualization technologies can be abstracted further to provide database
and data virtualization, and more application-level constructs such as a mes-
sage queuing appliance, a relational database appliance, and a web server ap-
pliance. For additional virtualization terms and definitions, please refer to the
Glossary.

Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you
Virtualization: What is it and what can it do for you

Weitere ähnliche Inhalte

Was ist angesagt?

All Clouds are Not Created Equal: A Logical Approach to Cloud Adoption in Y...
All Clouds are Not Created Equal:  A Logical Approach to Cloud Adoption in  Y...All Clouds are Not Created Equal:  A Logical Approach to Cloud Adoption in  Y...
All Clouds are Not Created Equal: A Logical Approach to Cloud Adoption in Y...IBM India Smarter Computing
 
"SEEDING CLOUDS ON POWER SYSTEMS WITH IBM SMARTCLOUD™ ENTRY"
"SEEDING CLOUDS ON POWER SYSTEMS WITH IBM SMARTCLOUD™ ENTRY""SEEDING CLOUDS ON POWER SYSTEMS WITH IBM SMARTCLOUD™ ENTRY"
"SEEDING CLOUDS ON POWER SYSTEMS WITH IBM SMARTCLOUD™ ENTRY"IBM India Smarter Computing
 
VDI Performance Assessment
VDI Performance AssessmentVDI Performance Assessment
VDI Performance AssessmenteG Innovations
 
The Datacenter Of The Future
The Datacenter Of The FutureThe Datacenter Of The Future
The Datacenter Of The FutureCTRLS
 
Workflow and Collaboration: Working Faster, Smarter, Cheaper
Workflow and Collaboration: Working Faster, Smarter, CheaperWorkflow and Collaboration: Working Faster, Smarter, Cheaper
Workflow and Collaboration: Working Faster, Smarter, CheaperOnFrame Ltd
 
DaaS/IaaS Forum Moscow - Najat Messaoud
DaaS/IaaS Forum Moscow - Najat MessaoudDaaS/IaaS Forum Moscow - Najat Messaoud
DaaS/IaaS Forum Moscow - Najat MessaoudDenis Gundarev
 
Optimizing Cloud Computing Through Cross- Domain Provisioning
Optimizing Cloud Computing Through Cross- Domain ProvisioningOptimizing Cloud Computing Through Cross- Domain Provisioning
Optimizing Cloud Computing Through Cross- Domain ProvisioningGaletech
 
X.DAYS Service Provider Pitch Interlaken Swiss
X.DAYS Service Provider Pitch Interlaken SwissX.DAYS Service Provider Pitch Interlaken Swiss
X.DAYS Service Provider Pitch Interlaken SwissRuud van Zutphen ⛅️
 
Application Profile Knowledgeware
Application Profile KnowledgewareApplication Profile Knowledgeware
Application Profile KnowledgewareGlenWhite
 
Achieving Cloud Enterprise Agility
Achieving Cloud Enterprise AgilityAchieving Cloud Enterprise Agility
Achieving Cloud Enterprise AgilitySteven_Jackson
 
Emg821511050D3 data center_whitepaper
Emg821511050D3 data center_whitepaperEmg821511050D3 data center_whitepaper
Emg821511050D3 data center_whitepaperhoanv
 
From Valleys to Clouds
From Valleys to CloudsFrom Valleys to Clouds
From Valleys to CloudsPeter Coffee
 
Dr. Michael Valivullah, NASS/USDA - Cloud Computing
Dr. Michael Valivullah, NASS/USDA - Cloud ComputingDr. Michael Valivullah, NASS/USDA - Cloud Computing
Dr. Michael Valivullah, NASS/USDA - Cloud Computingikanow
 
Whitepaer VDI and DaaS -- June 2015
Whitepaer VDI and DaaS -- June 2015Whitepaer VDI and DaaS -- June 2015
Whitepaer VDI and DaaS -- June 2015Greg Spence
 
CA Infrastructure Management 2.0 vs. Solarwinds Orion: Speed and ease of mana...
CA Infrastructure Management 2.0 vs. Solarwinds Orion: Speed and ease of mana...CA Infrastructure Management 2.0 vs. Solarwinds Orion: Speed and ease of mana...
CA Infrastructure Management 2.0 vs. Solarwinds Orion: Speed and ease of mana...Principled Technologies
 
Planning, deploying and managing a microsoft vdi infrastructure (slides tra...
Planning,  deploying and managing a microsoft vdi infrastructure  (slides tra...Planning,  deploying and managing a microsoft vdi infrastructure  (slides tra...
Planning, deploying and managing a microsoft vdi infrastructure (slides tra...Fabrizio Volpe
 
AMD Putting Server Virtualization to Work
AMD Putting Server Virtualization to WorkAMD Putting Server Virtualization to Work
AMD Putting Server Virtualization to WorkJames Price
 
DaaS/IaaS Forum Moscow - Ivo Murris
DaaS/IaaS Forum Moscow - Ivo MurrisDaaS/IaaS Forum Moscow - Ivo Murris
DaaS/IaaS Forum Moscow - Ivo MurrisDenis Gundarev
 

Was ist angesagt? (20)

All Clouds are Not Created Equal: A Logical Approach to Cloud Adoption in Y...
All Clouds are Not Created Equal:  A Logical Approach to Cloud Adoption in  Y...All Clouds are Not Created Equal:  A Logical Approach to Cloud Adoption in  Y...
All Clouds are Not Created Equal: A Logical Approach to Cloud Adoption in Y...
 
"SEEDING CLOUDS ON POWER SYSTEMS WITH IBM SMARTCLOUD™ ENTRY"
"SEEDING CLOUDS ON POWER SYSTEMS WITH IBM SMARTCLOUD™ ENTRY""SEEDING CLOUDS ON POWER SYSTEMS WITH IBM SMARTCLOUD™ ENTRY"
"SEEDING CLOUDS ON POWER SYSTEMS WITH IBM SMARTCLOUD™ ENTRY"
 
VDI Performance Assessment
VDI Performance AssessmentVDI Performance Assessment
VDI Performance Assessment
 
The Datacenter Of The Future
The Datacenter Of The FutureThe Datacenter Of The Future
The Datacenter Of The Future
 
VDI Cost benefit analysis
VDI Cost benefit analysisVDI Cost benefit analysis
VDI Cost benefit analysis
 
Workflow and Collaboration: Working Faster, Smarter, Cheaper
Workflow and Collaboration: Working Faster, Smarter, CheaperWorkflow and Collaboration: Working Faster, Smarter, Cheaper
Workflow and Collaboration: Working Faster, Smarter, Cheaper
 
DaaS/IaaS Forum Moscow - Najat Messaoud
DaaS/IaaS Forum Moscow - Najat MessaoudDaaS/IaaS Forum Moscow - Najat Messaoud
DaaS/IaaS Forum Moscow - Najat Messaoud
 
Optimizing Cloud Computing Through Cross- Domain Provisioning
Optimizing Cloud Computing Through Cross- Domain ProvisioningOptimizing Cloud Computing Through Cross- Domain Provisioning
Optimizing Cloud Computing Through Cross- Domain Provisioning
 
X.DAYS Service Provider Pitch Interlaken Swiss
X.DAYS Service Provider Pitch Interlaken SwissX.DAYS Service Provider Pitch Interlaken Swiss
X.DAYS Service Provider Pitch Interlaken Swiss
 
Application Profile Knowledgeware
Application Profile KnowledgewareApplication Profile Knowledgeware
Application Profile Knowledgeware
 
Achieving Cloud Enterprise Agility
Achieving Cloud Enterprise AgilityAchieving Cloud Enterprise Agility
Achieving Cloud Enterprise Agility
 
Emg821511050D3 data center_whitepaper
Emg821511050D3 data center_whitepaperEmg821511050D3 data center_whitepaper
Emg821511050D3 data center_whitepaper
 
From Valleys to Clouds
From Valleys to CloudsFrom Valleys to Clouds
From Valleys to Clouds
 
Intel Cloud
Intel CloudIntel Cloud
Intel Cloud
 
Dr. Michael Valivullah, NASS/USDA - Cloud Computing
Dr. Michael Valivullah, NASS/USDA - Cloud ComputingDr. Michael Valivullah, NASS/USDA - Cloud Computing
Dr. Michael Valivullah, NASS/USDA - Cloud Computing
 
Whitepaer VDI and DaaS -- June 2015
Whitepaer VDI and DaaS -- June 2015Whitepaer VDI and DaaS -- June 2015
Whitepaer VDI and DaaS -- June 2015
 
CA Infrastructure Management 2.0 vs. Solarwinds Orion: Speed and ease of mana...
CA Infrastructure Management 2.0 vs. Solarwinds Orion: Speed and ease of mana...CA Infrastructure Management 2.0 vs. Solarwinds Orion: Speed and ease of mana...
CA Infrastructure Management 2.0 vs. Solarwinds Orion: Speed and ease of mana...
 
Planning, deploying and managing a microsoft vdi infrastructure (slides tra...
Planning,  deploying and managing a microsoft vdi infrastructure  (slides tra...Planning,  deploying and managing a microsoft vdi infrastructure  (slides tra...
Planning, deploying and managing a microsoft vdi infrastructure (slides tra...
 
AMD Putting Server Virtualization to Work
AMD Putting Server Virtualization to WorkAMD Putting Server Virtualization to Work
AMD Putting Server Virtualization to Work
 
DaaS/IaaS Forum Moscow - Ivo Murris
DaaS/IaaS Forum Moscow - Ivo MurrisDaaS/IaaS Forum Moscow - Ivo Murris
DaaS/IaaS Forum Moscow - Ivo Murris
 

Andere mochten auch

киновпроекторы
киновпроекторыкиновпроекторы
киновпроекторыguest7b6b59
 
киновпроекторы
киновпроекторыкиновпроекторы
киновпроекторыguest7b6b59
 
Post Merger Integration Manager
Post Merger Integration ManagerPost Merger Integration Manager
Post Merger Integration Managerguestf64ce34
 
киновпроекторы
киновпроекторыкиновпроекторы
киновпроекторыguest7b6b59
 
New Divide
New DivideNew Divide
New Divideinza85
 
Study: The Future of VR, AR and Self-Driving Cars
Study: The Future of VR, AR and Self-Driving CarsStudy: The Future of VR, AR and Self-Driving Cars
Study: The Future of VR, AR and Self-Driving CarsLinkedIn
 

Andere mochten auch (8)

киновпроекторы
киновпроекторыкиновпроекторы
киновпроекторы
 
киновпроекторы
киновпроекторыкиновпроекторы
киновпроекторы
 
Post Merger Integration Manager
Post Merger Integration ManagerPost Merger Integration Manager
Post Merger Integration Manager
 
Swift Trade
Swift TradeSwift Trade
Swift Trade
 
киновпроекторы
киновпроекторыкиновпроекторы
киновпроекторы
 
New Divide
New DivideNew Divide
New Divide
 
3 Reasons
3 Reasons3 Reasons
3 Reasons
 
Study: The Future of VR, AR and Self-Driving Cars
Study: The Future of VR, AR and Self-Driving CarsStudy: The Future of VR, AR and Self-Driving Cars
Study: The Future of VR, AR and Self-Driving Cars
 

Ähnlich wie Virtualization: What is it and what can it do for you

TechEvent 2019: Chaos Engineering - here we go; Lothar Wieske - Trivadis
TechEvent 2019: Chaos Engineering - here we go; Lothar Wieske - TrivadisTechEvent 2019: Chaos Engineering - here we go; Lothar Wieske - Trivadis
TechEvent 2019: Chaos Engineering - here we go; Lothar Wieske - TrivadisTrivadis
 
Getting Over 'the Hump': How to Expand Your Stalled Virtualization Deployment
Getting Over 'the Hump': How to Expand Your Stalled Virtualization DeploymentGetting Over 'the Hump': How to Expand Your Stalled Virtualization Deployment
Getting Over 'the Hump': How to Expand Your Stalled Virtualization DeploymentDavid Resnic
 
Microsoft cloud whitepaper
Microsoft cloud whitepaperMicrosoft cloud whitepaper
Microsoft cloud whitepaperPradeep Bhatia
 
First step to the cloud white paper
Virtualization: What is it and what can it do for you?
by Anders Magnusson, Senior Engineering Services Architect, CA Technologies

About the author: Anders Magnusson is a Senior Engineering Services Architect at CA Technologies and a member of the CA Technologies Council for Technical Excellence. Since joining CA Technologies in 1997 he has held a number of roles and responsibilities across the organization, but during the most recent several years he has focused on developing standard procedures and best practices for utilizing virtualization and deploying multi-product solutions. Anders is responsible for providing sizing best practices and tools for several CA Technologies solutions, as well as for virtualization-related material on the Implementation Best Practices site, which can be found at https://support.ca.com/phpdocs/0/common/impcd/r11/StartHere.htm

The Promise of Virtualization
Although the concept of virtualization began in the mainframe environment in the late 1960s and early 1970s, its use in the distributed environment did not become commonplace until very recently. Even though the underlying technology and related best practices continue to evolve rapidly, for most application types virtualization has proven mature enough to support business-critical systems in production environments.

When done right, virtualization provides significant business value by helping organizations manage cost, improve service, and simplify the process of aligning business with IT. We can see a rapid acceleration in the number of traditional datacenters that are pursuing this value by shifting to a virtualization-based model, and some are even taking it one step further by implementing private clouds. How fast this transformation will happen, and how much of the "old" datacenter will instead move out to public clouds, is uncertain. To help with these estimates we can look at what Gartner Inc. and Forrester Research are predicting:

• "Virtualization continues as the highest-impact issue challenging infrastructure and operations through 2015. It changes how you manage, how and what you buy, how you deploy, how you plan and how you charge. It also shakes up licensing, pricing and component management. Infrastructure is on an inevitable shift from components that are physically integrated by vendors (for example, monolithic servers) or manually integrated by users to logically composed "fabrics" of computing, input/output (I/O) and storage components, and is key to cloud architectures. This research explores many facets of virtualization." (Gartner, Inc., "Virtualization Reality", by Philip Dawson, July 30, 2010.)

• "By 2012, more than 40% of x86 architecture server workloads in enterprises will be running in virtual machines." (Gartner, Inc., "IT Virtual Machines and Market Share Through 2012", by Thomas J. Bittman, October 7, 2009.)

• "Despite the hesitancy about cloud computing, virtualization remains a top priority for hardware technology decision-makers, driven by their objectives of improving IT infrastructure manageability, total cost of ownership, business continuity, and, to a lesser extent, their increased focus on energy efficiency." (Forrester Research Inc. – Press Release: Cambridge, Mass., December 2, 2009, "Security Concerns Hinder Cloud Computing Adoption". The press release quoted Tim Harmon, Principal Analyst for Forrester.)

Despite the awareness of the huge potential provided by virtualization – or even because of it – many virtualization projects fail in the sense that they are not as successful as expected. This article is written in two parts. Part one defines virtualization and explains why organizations choose to use it, while part two focuses on planning a successful virtualization project.
What is Virtualization?
The first step in understanding what the virtualization effort will achieve is to agree on what we mean by "virtualization". At a very high level, virtualization can be defined as a method of presenting "system users" (such as guest systems and applications) with the big picture – an abstract, emulated computing platform – without the need to get into all the little details, namely the physical characteristics of the actual computing platform that is being used.

Virtualization has long been a topic of academic discussion, and in 1966 it was first successfully implemented in a commercial environment when the IBM System/360 mainframe supported virtual storage. Another breakthrough came in 1972, when the first hypervisors were introduced with the VM/370 operating system. The introduction of the hypervisor is important because it enabled hardware virtualization by allowing multiple guest systems to run in parallel on a single host system. Since that time virtualization has been developed on many fronts and can include:

Platform or Server Virtualization: In this form of virtualization a single server hosts one or more "virtual guest machines". Subcategories include hardware virtualization, paravirtualization, and operating system virtualization.

Resource Virtualization: Virtualization can also be extended to encompass specific system resources, such as storage and network resources. Resource virtualization can occur within a single host server or across multiple servers (using a SAN, for example). Modern blade enclosures/servers often combine platform and resource virtualization, sharing storage, network, and other infrastructure across physical servers.

Desktop Virtualization: Virtual Desktop Infrastructure (VDI) provides end users with a computer desktop that is identical or similar to their traditional desktop computer while keeping the actual computing power in the datacenter. When this approach is used, the end user requires only a thin client on his desktop. All updates or configuration changes to the application or hardware are performed in the centrally located datacenter. This approach provides greater flexibility when it comes to securing the systems and supplying computing power on demand to the end user.

Application Virtualization: Application virtualization is a technology designed to improve the portability, manageability, and compatibility of individual applications by encapsulating the application so that it no longer communicates directly with the underlying operating system. Application virtualization utilizes a "virtualization layer" to intercept calls from the virtualized application and translate them into calls to the resources provided by the underlying operating system.

Computer Clusters / Grid Computing: This type of virtualization connects multiple physical computers together as a single logical entity in order to provide better performance and availability. In these environments the user connects to the "virtual cluster" rather than to one of the actual physical machines. The use of grid computing or clustering of computers is typically driven by the need to support high availability, load balancing, or a need for extreme computing power.
Each one of these general categories can be divided into additional subcategories. All of these potential options make it important that you are clear about what you are referring to when you talk about virtualization. The requirements and best practices for these different techniques are very similar – often what is valid for one form is valid for many of the others. In addition, several of them depend on each other, and by implementing more of them, you enhance the value. For example, if you are implementing server virtualization or a grid structure, you should also consider various types of resource virtualization to support the infrastructure. For the purposes of this article, we are focusing on server virtualization unless otherwise specified.

Why Use Virtualization?
Now that you know what virtualization is, why do organizations choose to use it? The short answer is to manage cost, improve service, and simplify the process of aligning business with IT. For example, by using virtualized environments, organizations can provide improved service by anticipating and quickly responding to growth in demand. In extreme examples ROI has been achieved in as little as 3-6 months; however, a more realistic expectation is that it will take 12-18 months.

Following are some of the common drivers that influence organizations in deciding to virtualize their IT environment.

Hardware Cost Savings through consolidation of logical servers into fewer physical servers is one of the main promises of virtualization. There are multiple ways in which savings can be realized. First, fewer physical servers may be required. In a well managed virtual environment, multiple logical servers can be hosted on the same physical server. Second, by reducing the number of physical servers required, virtualization can help manage "datacenter sprawl", a savings of both physical space and the utilities required to manage the larger space.

To consolidate successfully, you need to understand the entire picture. An organization can consolidate workload that was previously distributed across multiple smaller – and often underutilized – servers onto fewer physical servers, especially if those servers previously had a limited workload, but the new servers still must have sufficient resources at all times. See the section "New Hardware Requirements" below for more details.
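To make the screening step concrete, here is a minimal Python sketch that flags consistently underutilized servers as consolidation candidates. The server names, utilization samples, and threshold values are illustrative assumptions rather than data from any real environment, and such a screen is only a first pass before the capacity planning discussed later in this article.

    # Sketch: screen an inventory of physical servers for consolidation candidates.
    # Assumes CPU/memory utilization samples (percent) have already been collected
    # by an existing monitoring tool; thresholds here are illustrative only.

    from statistics import mean

    def is_consolidation_candidate(samples_cpu, samples_mem,
                                   avg_cpu_limit=20.0, peak_cpu_limit=60.0,
                                   avg_mem_limit=40.0):
        """Flag servers that are consistently underutilized."""
        return (mean(samples_cpu) < avg_cpu_limit and
                max(samples_cpu) < peak_cpu_limit and
                mean(samples_mem) < avg_mem_limit)

    inventory = {
        # hypothetical servers with hourly utilization samples (percent)
        "file-print-01": ([5, 7, 12, 9], [22, 25, 24, 23]),
        "erp-db-01":     ([55, 70, 85, 90], [60, 68, 72, 75]),
    }

    for name, (cpu, mem) in inventory.items():
        verdict = "candidate" if is_consolidation_candidate(cpu, mem) else "review separately"
        print(f"{name}: avg CPU {mean(cpu):.0f}%, peak {max(cpu):.0f}% -> {verdict}")

A screen like this only identifies candidates; whether the target hosts can absorb the combined load is a separate question, addressed under "New Hardware Requirements" below.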
Automation and Enhanced Resource Management is, in many ways, related to hardware cost savings, but the drivers are sometimes different:

• Optimized usage of hardware resources. In a non-virtualized environment it is common to have some servers that are barely utilized. Many datacenters are filled with servers that use only a small percentage of the available resources. These centers are perfect targets for consolidation and can provide an excellent return on investment.

• Rapid deployment of new servers and applications. In a well managed environment with established templates for typical server installations, new logical servers can be deployed rapidly on host servers with available capacity.

• Flexibility and the ability to provide on-demand resources. Many applications require significant resources – but only briefly. For example, end-of-month or end-of-year reporting or other specific events may trigger a higher than usual load. In a virtualized environment, more resources can be assigned dynamically to a logical server or, if the application is designed to support scaling out horizontally, rapid deployment can supply additional logical servers as worker nodes (see the sketch following this list).

• Flexible chargeback systems. In a flexible virtualized environment an organization can provide a meaningful chargeback/showback system that efficiently encourages system owners to use only the resources they need, without risking the business by using servers that are inadequate for their needs. This works especially well in a highly mature and flexible virtual environment that includes management tools that collect all required metrics, and resource virtualization techniques such as storage virtualization with thin provisioning.

• Support for test and development by providing access to a large number of potential servers that are active and using resources only when needed. This is typically the starting point for virtualization and an obvious choice for any environment that requires temporary, short-lived servers. It is especially valuable when test and development groups require a large number of different operating systems and configurations, or the ability to redeploy a test environment quickly from a pre-defined standard.
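The rapid-deployment and on-demand bullets above can be illustrated with a small sketch that decides how many worker nodes should exist for the current backlog. The deploy_from_template and power_off functions are placeholders standing in for whatever interface your hypervisor or management tool actually provides; the template name, queue sizes, and limits are invented for illustration.

    # Sketch: scale worker nodes out and in based on pending workload.
    # deploy_from_template()/power_off() are placeholders for whatever API or CLI
    # your hypervisor management tool actually provides; sizes are illustrative.

    import math

    def deploy_from_template(template, name):
        print(f"deploying {name} from template '{template}'")    # placeholder

    def power_off(name):
        print(f"powering off idle worker {name}")                # placeholder

    def rebalance_workers(pending_jobs, active_workers, jobs_per_worker=25,
                          min_workers=2, max_workers=20, template="worker-base-v3"):
        wanted = min(max(math.ceil(pending_jobs / jobs_per_worker), min_workers), max_workers)
        current = len(active_workers)
        if wanted > current:
            for i in range(current, wanted):
                deploy_from_template(template, f"worker-{i:02d}")
        elif wanted < current:
            for name in active_workers[wanted:]:
                power_off(name)
        return wanted

    rebalance_workers(pending_jobs=180, active_workers=["worker-00", "worker-01"])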
Fault Tolerance, High Availability, and Disaster Recovery at different levels can be simplified or made more efficient in a virtual environment. In highly available environments, brief interruptions of service and potential loss of transactions serviced at the time of failure are tolerated, while fault tolerant environments target the most mission-critical applications that cannot tolerate any interruption of service or data loss. Virtualization can provide a viable solution for both – including everything from simplified backup/restore of systems to complete disaster recovery or fault tolerance systems supported by the various hardware and virtualization vendors. A few examples of this scenario are:

• Backup of complete images. A virtual server, by its very nature, is comprised of a set of files that can be moved easily between physical servers. A quick snapshot of those files can be used to start the server in this exact condition on another physical server.

• Simplified disaster recovery solutions. When coupled with the appropriate hardware infrastructure, virtualization strategies can be used to simplify the process of disaster recovery. For example, a typical disaster recovery solution may include distributing resources into primary and secondary datacenters. Solution providers often take advantage of features built into a virtualization infrastructure and sell out-of-the-box solutions to support high availability and disaster recovery.

• Minimized downtime for hardware and software maintenance tasks. All downtime due to planned hardware maintenance can be avoided or kept to a minimum because an organization can move the active virtual images to another physical server while the upgrade is performed. With correct planning, change control for software maintenance can also be significantly enhanced through judicious use of virtualization. Because the complete logical machine can be copied and handled as a set of files, organizations can easily set up separate areas such as Development, Quality Assurance, a library of available images, an archive of previously used images, a staging area for configuration, and so on. A structure like this one encourages organizations to upgrade and test a new version in the "Development" and "QA" areas while still running the old version in "Production". When the new version is approved, a small maintenance window can be scheduled to transfer the new, updated, and verified library image over to the production system. Depending on the application, the maintenance window can even be completely eliminated by having the old and the newly updated images running in parallel and switching the DNS entry to point to the updated instance (see the sketch after this list). This approach requires some advance planning, but it has been successfully used by service providers with tight service level agreements.

• Efficient usage of component-level fault tolerance. Because all virtualized servers share a smaller number of physical servers, any hardware-related problems with these physical servers will affect multiple logical servers. Therefore, it is important that host servers take advantage of component-level fault tolerance. The benefit of this approach is that all logical servers can take advantage of the fault tolerant hardware provided by the host system.
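The parallel-image upgrade described in the maintenance bullet above can be sketched as follows. The check_health and update_dns_record functions are placeholders for your own monitoring probe and DNS management interface; the host addresses and service name are made up, and a real cutover would also need to consider in-flight sessions and data synchronization.

    # Sketch: cut production traffic over to an already-verified updated image by
    # repointing a DNS name, keeping the old instance available for rollback.
    # check_health() and update_dns_record() are placeholders for your own
    # monitoring probe and DNS management interface.

    def check_health(address):
        return True       # placeholder: probe the staged instance, e.g. an HTTP check

    def update_dns_record(name, address, ttl=60):
        print(f"pointing {name} -> {address} (TTL {ttl}s)")      # placeholder

    def cut_over(service_name, old_address, new_address):
        if not check_health(new_address):
            raise RuntimeError("staged instance failed verification; keeping old image live")
        update_dns_record(service_name, new_address)
        print(f"rollback plan: repoint {service_name} to {old_address} if problems appear")

    cut_over("orders.example.com", old_address="10.0.1.15", new_address="10.0.1.42")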
Energy Saving and Green IT. Another justification for using virtualization is to support sustainability efforts and lower energy costs for your datacenter. By consolidating hardware, fewer and more efficiently used servers demand less energy to perform the same tasks. In addition, a mature and intelligent virtualized environment can power virtual machines on and off so that they are active only when they are in use. In some cases, virtual machines running on underutilized host servers can be moved onto fewer servers, and unused host servers powered down until they are needed.

Simplified Management. One of the primary challenges in managing datacenters is datacenter sprawl, the relentless increase in diverse servers that are patched and configured in different ways. As the sprawl grows, the effort to maintain these servers and keep them running becomes more complex and requires a significant investment in time. It is worth noting that, unless effective lifecycle management procedures and appropriate controls are in place, datacenter sprawl is a problem that will be magnified in a virtual environment. Using well controlled and well managed virtualization guest images, however, reduces the number of configuration variations, making it easier to manage servers and keep them up to date. Note that this approach requires that a virtualization project also include a change control process that manages virtual images in a secure way.

When a server farm is based on a small set of base images, these images can be efficiently tested and re-used as templates for all servers. Additional modifications to these templates can be automatically applied in a final configuration stage. When done correctly, this approach minimizes the risk of serious failures in the environment. All changes, including the final automated configuration, should be tested before they are put in production. This secure environment minimizes the need for expensive troubleshooting of production servers and fosters a stable and predictable environment.

Managing Security. Security is one of the major concerns surrounding virtualization. Too often, the main security risk in any environment is the human factor: administrators who, without malicious intent, misconfigure the system. The traditional security models are effective if sufficiently rigorous procedures are followed. In a virtual environment, much of the management can be automated and raised one level so that fewer manual steps are needed to keep the environment secure. A few examples of this are:

Patch management. Virtualization allows testing changes in a controlled environment, using an identical image. After the updated image is verified, the new image or the specific changes can be promoted to the production system with a minimum of downtime. This approach reduces the risks of patching the system and, in most cases, if something goes wrong, reversion to a pre-patch snapshot is easy.

Configuration management. The more dynamic environment and the potential sprawl of both physical and logical servers make it important to keep all networks and switches correctly configured. This is especially important in a more established and dynamic virtual environment where virtual machines are moved between host servers based on the location of available resources. In a virtual environment, configuration management can be handled by policy-driven virtual switches (a software implementation of a network switch running on the host server) where the configuration follows your logical server. Depending on your solution, you can define a distributed switch where all the resources and policies are defined at the datacenter level. This approach provides a solution that is easy to manage for the complete datacenter.

Support for O/S hardening as an integral part of change control. If all servers have been configured using a few well defined and tested base images, it becomes easier to lock down the operating systems on all servers in a well controlled manner, which minimizes the risk of attacks (see the sketch following these examples).
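As a rough illustration of policy-driven configuration management and O/S hardening, the sketch below compares a guest's observed settings with the hardened baseline its template was built from and reports any drift. The baseline keys, the example guest, and the way settings are gathered are all assumptions for illustration; in practice the data would come from your configuration and compliance tooling.

    # Sketch: compare a guest's observed settings against the hardened baseline its
    # template was built from, and report drift. The baseline keys and the way
    # settings are collected are illustrative; real values would come from your
    # configuration management tooling.

    HARDENED_BASELINE = {
        "ssh_root_login": "disabled",
        "unused_services": [],
        "firewall": "enabled",
        "audit_log_forwarding": "central",
    }

    def drift_report(guest_name, observed):
        drift = {key: (expected, observed.get(key))
                 for key, expected in HARDENED_BASELINE.items()
                 if observed.get(key) != expected}
        if drift:
            print(f"{guest_name}: drift detected")
            for key, (expected, actual) in drift.items():
                print(f"  {key}: expected {expected!r}, found {actual!r}")
        else:
            print(f"{guest_name}: matches baseline")

    drift_report("web-qa-03", {
        "ssh_root_login": "enabled",          # drifted
        "unused_services": ["telnet"],        # drifted
        "firewall": "enabled",
        "audit_log_forwarding": "central",
    })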
Enabling Private Cloud Infrastructure. A highly automated virtualized environment can significantly help your organization create a private cloud infrastructure. Stakeholders can request the resources they need and return them when they no longer are needed. In a highly mature environment where the stakeholder requests resources or services, these requests can be hosted in a private cloud or, if resources aren't available, in a public cloud. This level of flexibility will be difficult to accomplish in an acceptable way without basing the private cloud on a virtual environment. From the requestor's point of view, it doesn't matter whether the services in the cloud are hosted on a physical machine, a virtual machine, or some type of grid, as long as the stakeholder is getting the required resources and performance.

Next Steps
The goals driving your particular virtualization project may include any number of those identified in this article – or you may have a completely different set of drivers. What is critical is that you clearly identify those goals and drivers prior to undertaking the project. Project teams need a clear understanding of what they are expected to accomplish and what business value is expected to be derived in order to identify the appropriate metrics that will demonstrate the value of virtualization to the stakeholders. Part two of this article, "Planning Your Virtualization Project", examines how these drivers can be used to direct the project and outlines a number of important areas to be aware of when planning a virtualization project.

Planning Your Virtualization Project

The Importance of Planning
When you are planning a virtualization project, one of the most critical first steps is to ensure that both the project team and all stakeholders understand what the project needs to accomplish, what the supporting technology is capable of, and what the true business drivers behind the project really are. This is true for any project – but it is particularly true for virtualization endeavors because there are many common misperceptions about what virtualization can and cannot offer. For further insights on the benefits of virtualization – and common business drivers – see part one of this article, "What is Virtualization?"

Even though your team may know that a virtualization project can provide significant value, unless that value is explicitly spelled out, the project runs the risk of becoming "just another big project", which is an invitation to failure. The virtualization project may save the company money, it may make it easier to provision new machines, and, perhaps, it might even reduce the company's carbon footprint, but a project with goals this vague is likely to fail and be superseded by a new project because there is no way of effectively measuring its progress or success. To endure and succeed, a project must have explicit intent, understandable milestones, and clear measures of success defined up front. Without them, expectations will be unclear and there will be no way to accurately communicate the benefits.

Before undertaking any virtualization project, the following questions must be addressed:

• Maturity levels: What is the current and expected maturity level of the virtualized environment? (See "Virtualization Maturity Levels" later in this article for examples.)

• Purpose: What are the business drivers for the project?
• What: What processes, functions, and applications will be virtualized?

• Support: Do stakeholders (for example, system owners and executive leaders) support the project goals?

• Cost: How much is the project expected to cost, and how much is it expected to save?

• Risks: What functional and financial risks will be associated with the project? Are they acceptable?

• Scope: What is the timeframe and what resources will be needed to complete the virtualization project? (Will it be a single, focused project, or one with multiple phases and milestones?)

• Changes: What changes will need to occur in the current processes, functions, and applications to support virtualization? Will changes need to occur in the deployment environment?

• Accountability: What measurements will be incorporated to indicate that the project has reached its targets and is successful? Which stakeholders need to be informed of project progress, and how often?

This list is by no means exhaustive; however, without at least a good understanding of the answers to these questions, it is likely that the project will be less successful than it could be. In a larger project where the goal is to virtualize a significant part of the environment or to span multiple maturity levels, it is also important to have an open mind and, to some degree, an open project plan that permits incorporation of lessons learned during earlier phases of the project into later phases. Changes to the original stakeholder agreement must have buy-in; a minor change or delay that is communicated is rarely a problem, but ignored changes might turn an otherwise successful project into a failure.

Virtualization Maturity Levels
Analyzing the current state of virtualization – the maturity level – and comparing it to the future desired level simplifies virtualization decisions. There are typically four levels of virtualization maturity: Level 1 – Islands of Virtualization; Level 2 – Consolidation and Managing Expenses; Level 3 – Agility and Flexibility; and Level 4 – Continuous Adaptivity.

Level 0 – No Server Virtualization
As the starting point of the virtualization maturity "ladder", this level describes an organization that has not yet implemented virtualization.

Level 1 – Islands of Virtualization for Test and Development
This maturity level describes the state of most IT departments before they start a formal virtualization project. Virtualization is often used by individuals or limited groups within the organization without centralized management or resources. At this stage virtualization is used reactively and ad hoc to create virtual machines for testing and development in order to address specific issues for non-business-critical systems when they arise.
Level 2 – Consolidation and Managing Expenses
At this stage the primary driver is to consolidate servers and increase the utilization of available resources. When done correctly, consolidating small or underutilized servers onto larger servers can be very efficient and can save significant costs. However, the key to saving costs is identifying the right servers for virtualization. While there can be valid reasons to virtualize larger servers as well, it is difficult to realize savings on hardware in doing so.

Level 3 – Agility / Flexibility
The driver for the next step on the virtualization maturity ladder is the need for enhanced flexibility, enabling you to add and remove resources on demand and even move workload between physical hosts. This ability can be used to balance workload or to support a high availability solution that allows virtual machines to be restarted on a different physical server after a server failure.

Level 4 – Continuous Adaptivity
The driver behind this step is the desire to fully automate all of these functions in order to enable software solutions, often with hardware support, to predictably and dynamically balance the load between servers, rebalance resources between virtual machines, start up and shut down virtual servers based on need, control power-saving features in both the virtual machines and the host system itself, and so on. This automation should be service-aware and should consider such factors as measured and expected workload, tariffs for energy, the importance and urgency of requested resources, and demand from other services, and should use all available information to identify the best use of the available resources.

The potential gains from virtualization grow significantly with each step up the maturity ladder; however, climbing the ladder too fast can risk project failure. This is especially true if you also lack complete support from the stakeholders and the executive leadership, access to the correct infrastructure and tools, or the required skill set. Travelling up the maturity levels is often a journey, and it is likely that a project will lead to a mix of the different maturity levels. That is expected, but it is important that your goals be clearly defined and communicated.
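The kind of decision a Level 4 environment automates can be suggested with a toy placement score that weighs measured headroom, energy tariffs, and workload priority. The hosts, weights, and scoring formula are purely illustrative assumptions; a production scheduler would use far richer service-awareness than this.

    # Sketch of the kind of decision a "continuous adaptivity" (Level 4) environment
    # automates: pick a host for a workload using measured headroom, energy tariff,
    # and the workload's business priority. Weights and inputs are illustrative.

    def placement_score(host, workload_priority):
        headroom = 100 - host["cpu_util"]                  # favor lightly loaded hosts
        energy_penalty = host["energy_tariff"] * 10        # favor cheap power
        priority_bonus = workload_priority * host["tier"]  # critical work to premium tier
        return headroom - energy_penalty + priority_bonus

    hosts = [
        {"name": "dc1-host-04", "cpu_util": 35, "energy_tariff": 0.9, "tier": 2},
        {"name": "dc2-host-11", "cpu_util": 55, "energy_tariff": 0.4, "tier": 1},
    ]

    best = max(hosts, key=lambda h: placement_score(h, workload_priority=3))
    print("place workload on", best["name"])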
Virtualization Challenges
Part one of this article, "What is Virtualization?", discussed the importance of identifying the business drivers for a project. After that is done, it is equally important to be aware of problems and challenges that may arise. Awareness can guide infrastructure design to minimize problems caused by these obstacles.

One common and challenging problem with server consolidation is that some areas of the organization may want to retain control over their existing hardware and applications. This resistance could be caused by a fear of losing control of their environment, fear of inadequate response times or system availability, concerns about security and handling of confidential data, or general anxiety about changes to their business environment. Some of these concerns may be valid while others may only express a lack of understanding of what this new technology has to offer. The project team must identify these concerns and address them to the satisfaction of the stakeholders.

Even though it is critical to have full support for the project, it is equally important to have a good understanding of the types of problems – both technical and business-impact related – that can potentially occur. A few common challenges are:

Overutilization: One common problem with a virtualized environment is overutilization of physical servers. Although virtualization permits running multiple logical servers on one physical server, applications require more, not fewer, resources when they are virtualized. Virtualization always adds overhead. A virtualized application uses more resources than a non-virtualized installation of the same application, and it will not run faster unless it is hosted on, and has access to, faster hardware than the non-virtualized installation. The actual overhead depends on a number of factors, but independent tests have shown that the CPU overhead generally ranges from 6%-20% (see "VMware: The Virtualization Drag" at http://www.networkcomputing.com/virtualization/vmware-the-virtualization-drag.php). Overutilization of resources can present a serious problem in virtualized environments that do not have correctly sized host servers. See the section "New Hardware Requirements" below for more details, and the capacity check sketched after these examples.

Underutilization: Alternatively, the underutilization of servers minimizes the value of virtualization. To provide a good balance it is important to understand the environment and to have the necessary tools to monitor and balance the load dynamically. Typically hypervisor vendors provide tools for this, but 3rd-party vendors can provide added flexibility and value. For example, one organization I have worked with utilizes virtualization to provide a dynamic test environment that can scale to meet the needs of many different groups. Resource requirements can vary dramatically depending on the type of testing being done. This rapidly growing environment initially experienced serious issues with overutilization. They resolved these issues by implementing a management solution that continuously measured the load and provided early warning of potential overutilization. This allowed the team to proactively balance their workloads and add resources when needed.

Single Point of Failure: In a virtualized environment where every host is running multiple logical servers, the impairment of a single physical server could have devastating consequences. Therefore, it is important to implement redundant failsafe systems and high availability solutions to avoid situations where one failing component affects multiple applications. This solution should include configuring redundancy for all critical server components, employing highly available storage solutions (RAID 5 or RAID 1+0), ensuring network connections are connected to separate switches, and so on. In addition, in the event everything else fails, we recommend configuring the environment to be fault tolerant so that if one host fails, the guest systems will start on a secondary host. Implemented correctly, virtualized systems are likely to have significantly better uptime than individual systems in physical environments. One organization that initially experienced a few hectic evenings as the result of a single failing server bringing down multiple important applications learned early on the value of clustered host servers with dynamic load balancing. After virtualization was fully implemented, when one host went down, the workloads automatically restarted on another node in the cluster. In addition, this organization has also set up separate, distributed datacenters so that if one datacenter becomes unavailable the entire organization isn't affected.
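Using the 6%-20% overhead range cited above, a simple capacity check like the following can provide the kind of early warning the test-environment team in this example relied on. The host capacity, guest peak figures, and warning threshold are illustrative assumptions.

    # Sketch: estimate whether planned guests fit on a host once virtualization
    # overhead is included, using the 6%-20% CPU overhead range cited above.
    # Warning thresholds are illustrative.

    def host_cpu_demand(guest_peaks_mhz, overhead=0.20):
        """Sum guest peak CPU demand and add a pessimistic virtualization overhead."""
        return sum(guest_peaks_mhz) * (1 + overhead)

    def check_host(host_capacity_mhz, guest_peaks_mhz, warn_at=0.75):
        demand = host_cpu_demand(guest_peaks_mhz)
        ratio = demand / host_capacity_mhz
        if ratio >= 1.0:
            return f"overcommitted ({ratio:.0%} of capacity) - redistribute guests"
        if ratio >= warn_at:
            return f"early warning ({ratio:.0%} of capacity) - plan additional capacity"
        return f"healthy ({ratio:.0%} of capacity)"

    # Hypothetical host: 2 sockets x 6 cores x 2600 MHz
    print(check_host(host_capacity_mhz=2 * 6 * 2600,
                     guest_peaks_mhz=[2200, 1800, 3500, 2600, 4100, 3000, 2400]))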
Virtualization of Everything: Attempting to virtualize every server and application in an environment can be challenging. It is true that it is possible to virtualize most workloads; however, success requires careful planning that identifies what should be virtualized, why it should be virtualized, and what supporting infrastructure is required. Just because something is possible does not mean that it is a good idea. Some of the more challenging examples are:

Heavily utilized servers. Significant planning is required before virtualizing servers that often or always register high resource utilization. This is especially true for servers with multiple CPUs. While most hypervisors support guest systems with 4 or more vCPUs, this requires complicated scheduling and the overhead can be steep. Therefore, unless there are compelling reasons and ample available resources, virtualization should be avoided for heavily utilized systems that require multiple CPUs, especially when predictable performance is critical.

Real-time requirements. Applications that require real-time or near-real-time response from their servers typically are not suitable for virtualization. The system clock on a virtualized system may lag as much as 5-10 seconds under a heavy load. For typical loads this is not a problem, but systems that require real-time or near-real-time response need special treatment. A satisfactory virtual implementation will require careful analysis of the hypervisor solution's support for real-time requirements on guest systems.

Application support. As virtualization becomes more common, many application vendors will begin to support their applications in virtualized environments. Nevertheless, a significant number of applications still are not supported, and even if virtualization is supported, some application vendors may require proof that any reported issue can be reproduced in a non-virtualized environment.

Licensing. There are still many applications and licensing agreements that aren't designed with dynamic virtualized environments in mind. Ensure that licensing provisions address whether the license cost is connected to the number of physical CPUs on the host servers and whether the application is licensed to run only on a dedicated physical server. In these situations, the license may require payment for the host server's 16 CPUs even though the application is assigned to only one vCPU. Dedicated physical server licenses may prevent dynamic migration of the logical server to other host servers. Another consideration is that a well-planned lifecycle management solution requires each image to have multiple active instances for Development, Test/QA, Production, and so on. The organization needs to determine and track whether each one of these instances requires additional licenses.

Direct access to specific hardware. Applications that require direct access to certain hardware, such as USB or serial port keys, or other specialized hardware such as video capture equipment, tape drives, and fax modems, might be complicated or impossible to virtualize in a meaningful way.
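The challenging workload types listed above lend themselves to a simple screening pass over a server inventory. The attribute names and the example server are assumptions for illustration; real data would come from an inventory or discovery tool, and a flagged concern means "plan carefully", not "never virtualize".

    # Sketch: screen servers against the workload types discussed above that need
    # extra care before virtualizing. The attribute names are illustrative; the
    # data would normally come from an inventory or discovery tool.

    def virtualization_concerns(server):
        concerns = []
        if server["avg_cpu"] > 70 and server["cpus"] >= 4:
            concerns.append("heavily utilized multi-CPU workload")
        if server["needs_real_time"]:
            concerns.append("real-time or near-real-time response requirement")
        if not server["vendor_supports_virtual"]:
            concerns.append("application vendor support unclear")
        if server["license_tied_to_physical_host"]:
            concerns.append("licensing tied to physical hardware")
        if server["special_hardware"]:
            concerns.append("direct access to specialized hardware")
        return concerns

    server = {"name": "fax-gw-01", "avg_cpu": 15, "cpus": 2, "needs_real_time": False,
              "vendor_supports_virtual": True, "license_tied_to_physical_host": False,
              "special_hardware": ["fax modem"]}

    issues = virtualization_concerns(server)
    print(server["name"], "->", "; ".join(issues) if issues else "no obvious obstacles")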
New Hardware Requirements. Hardware must be sized appropriately to take advantage of virtualization. For efficient scheduling of resources between multiple logical servers, each host server must have ample resources, including CPU, memory, network I/O, and storage I/O. Because many concurrent consumers are sharing these resources, the environment must not only support high volumes, it also must support a large number of transactions. For example, an extremely fast network card can be helpful, but a single fast card is seldom adequate. Efficient virtualization requires equipment with multiple fast I/O channels between all components. Sufficient hardware can also provide added value by acting as component-level fault tolerance for all logical servers.

Special focus needs to be put on the storage infrastructure. Connecting all of your servers to a SAN (fibre channel or iSCSI based) is highly recommended for a virtual environment. A fast SAN and dedicated LUNs for the virtual machines avoid many I/O bottlenecks. The more advanced features and common drivers for a virtualized environment, such as hot migration, high availability, and fault tolerance, are impossible or significantly harder to implement without a SAN.

Cooling requirements can be a concern. An older datacenter may develop so-called "hot spots" when a large number of smaller servers are replaced with fewer but larger servers. Although the new servers may require less energy and create less heat overall, the generated heat can be concentrated in a smaller area. There are many ways to address this situation, including adding new racks with integrated cooling or developing more complex redesigns of the cooling system.

A lack of sufficient resources is a common obstacle for virtualization efforts. For example, a large organization quickly became aware that it hadn't allocated sufficient storage when the constraints became so severe that it wasn't able to take snapshots of its images. Consequently, it could not implement its planned high availability strategy. Another organization tried to implement a large number of I/O-intensive network applications on a host with a limited number of network cards. As a result, the number of I/O interrupts to each card quickly became a bottleneck for this physical server. These examples demonstrate how crucial it is to actively monitor and manage all types of resources; a resource bottleneck can easily cause an otherwise successful project to lose critical planned functionality.

Security. Another common concern is properly securing virtual environments. Reorganization and consolidation of servers and applications can be disruptive and risky; however, these risks can be managed. For security, there are advantages and disadvantages to virtualized environments, and the two are often closely related. Items that are typically seen as problem areas or risks can often turn into advantages when the environment is well managed. For example, new abstraction layers and storage infrastructures create opportunities for attacks, but these additions have generally proven to be robust. Nearly all attacks are due to misconfigurations, which are vulnerabilities that exist in both physical and virtual environments.
A few common concerns that should be addressed are:

• Managing inactive virtual machines
• Maintaining virtual appliances
• Version control
• Server theft
• Understanding the new abstraction layer
• Hyperjacking
• Securing a dynamic environment
• Immature or incomplete tools
• Securing and isolating confidential data

Management of inactive virtual machines cannot rely on traditional patch management systems (see the sketch at the end of this list of concerns). In many virtualized environments, the templates or guest systems that are used as definitive images for deployment may not be accessible to traditional patch management solutions. In a worst case scenario, a poorly managed definitive image may revert an existing image to an unsafe earlier patch level. In an environment with strict change control and automation of all configuration changes, this is not a major issue, but in some environments, these situations can present major problems.

Maintenance of virtual appliances. Virtual appliances are pre-packaged solutions (applications, OS, and required drivers) that are executed on a virtual host and that require minimal setup and configuration. Appliances can be secured further through OS lockdown and removal of any services or daemons that aren't necessary for the appliance. This practice makes the appliance more efficient and more secure because it minimizes the attack surface of which a malicious user can take advantage. These non-standard installations can, however, be harder to maintain and patch because some standard patches might not work out of the box unless provided by the virtual appliance vendor. This problem can be mitigated by testing all configuration changes in a separate development and test environment before deploying them in production.

Lack of version control. Virtualized environments that allow guest systems to be reverted to an earlier state require that special attention be paid to locally stored audit events, applied patches, configuration, and security policies that could be lost in a reversion. Strict change control procedures help avoid this issue. Storing policies and audit logs in a central location also helps avoid problems.

Server theft. In a non-virtualized environment, stealing a server is difficult. The thief needs physical access to the server to disconnect it and then coolly walk out of the datacenter with a heavy piece of hardware. In a virtual environment, a would-be thief only needs access to the file system where the image is stored and a large enough USB key. Surreptitious network access may be even more convenient for a thief. A successful illegal copy of the virtual image may be undetectable. This issue underscores the need for efficient access control and an audit system that tracks the actual user and not just 'root', 'administrator', or other pre-defined privileged users.

New abstraction layer. Hypervisors introduce a new abstraction layer that can introduce new failures as well as security exposures. Hypervisors are designed to be as small and efficient as possible, which can be a double-edged sword from a security perspective. On the upside, hypervisors have a small footprint with few, well controlled APIs, so they are relatively easy to secure. On the downside, lightweight and efficient can mean limited error recovery and security implementation. The downside can be mitigated by configuring high security around the hypervisor, including specific security-related virtual appliances or plug-ins to the hypervisor.
Hyperjacking is an attack on the hypervisor that enables a malicious user to access or disturb the function of a large number of systems. So far hyperjacking hasn't been a significant problem, but it is critical to ensure that your virtual environment follows the lockdown procedures recommended by the hypervisor vendor and that you apply all recommended security patches.

Securing dynamic environments. To take full advantage of the potential provided by a virtual environment, that environment needs to support automated migration of guest systems between host servers. A dynamic environment, however, presents new challenges. When secured resources move from host to host, a secure environment must be maintained regardless of the current host of the guest system. These challenges may not be as problematic as they appear. With policy-based security and distributed vLANs that are managed for a complete group of host servers or the complete datacenter, policies will follow the guest system and remain correctly configured regardless of which server it is currently running on.

Immature or incomplete tools. Over the last several years the tools to manage virtual environments have been maturing rapidly, and much of the functionality for securing the system, automating patching, and managing the virtual system is enhanced frequently. Many of these functions are provided by the hypervisor vendor, while other tools with additional features are provided by 3rd-party vendors. This rapid development of tools and features can be expected to continue, and it will become more important to have management solutions that can extend across heterogeneous environments – including virtual and non-virtual systems – and all the way out to cloud infrastructures. Precise predictions are impossible, but the industry is aware that virtual environments and cloud solutions will rapidly take over more and more workloads. Management systems of the future will work with and manage a mix of hypervisors, operating systems, and workloads in these environments.

The ability to secure and isolate confidential data is a common concern that must be carefully considered when designing storage solutions. Server virtualization itself doesn't add significant risk in this area, but it is important to be aware of because SAN and more complex virtual storage solutions are often employed to maximize the value of a virtualized environment. Further discussion of this topic is beyond the scope of this article; when these solutions are employed, the steps required to secure the storage may require special or vendor-specific knowledge. This is particularly important if data is governed by regulations such as HIPAA, GLBA, PCI, SOX, or any other federal or local regulations. When regulations such as these apply, data security often must be designed in consultation with auditors and regulators.
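For the inactive-image concern at the top of this list, a small sketch like the following can flag template images that routine patch management never reaches. The template names, dates, and the 60-day threshold are illustrative assumptions; a real implementation would read this metadata from the image library itself.

    # Sketch: flag inactive template images that traditional patch management never
    # reaches, based on how long ago each was last patched. Dates are illustrative;
    # a real implementation would read them from your image library metadata.

    from datetime import date

    TEMPLATE_LIBRARY = {
        "win2008-base-v7":  date(2010, 9, 14),
        "rhel5-web-v3":     date(2010, 3, 2),
        "win2003-legacy":   date(2009, 11, 20),
    }

    def stale_templates(library, today, max_age_days=60):
        return {name: (today - patched).days
                for name, patched in library.items()
                if (today - patched).days > max_age_days}

    for name, age in stale_templates(TEMPLATE_LIBRARY, today=date(2010, 11, 1)).items():
        print(f"{name}: last patched {age} days ago - re-patch before next deployment")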
In addition, security solutions are easier to implement and administer when based on centralized policies. For example, consider an organization that needs to distribute management of much of the guest system to individual owners. These individual owners might choose to revert to an earlier snapshot at will. Central IT manages the security on these systems by regularly scanning them to ensure they are correctly patched and do not contain any viruses or other malware. When non-critical issues are detected, the owner is notified; for critical issues the system is disconnected from the network.

Appropriate Skills and Training. With the right tools and planning, management of a virtualized environment can be simple and streamlined, but the IT staff may need additional training to acquire new skills. Administrators who don't fully understand the specific tools and requirements of virtualized environments can easily misconfigure the environment – resulting in environments with unpredictable performance or, worse, security breaches. Sufficient time and resources for training are required both before and throughout any virtualization project.

Consider the same organization noted in the previous example, which needed to distribute many management tasks to individual owners. They ran into a problem when a group using a guest system with 2 TB of data took a snapshot of the system. The local manager didn't realize that the system would now need 4 TB of storage and that it would take 5 hours to commit the snapshot. The issue was resolved by having a certified professional educate the system's owner about the impact various actions have on storage requirements and performance. They were able to remove the snapshot safely, without losing any data, but could have avoided the issue if they had taken the proper training first.

General Project Risks. Virtualization projects are subject to the same generic risks as any major project. Both scope creep and unrelated parallel changes with entangling side effects can derail virtualization projects as quickly and completely as any other project.

Design Approach
Given an understanding of the reasons for virtualization, the business drivers, the possible effects on the business, the potential obstacles, and the sources of failure and their mitigation, project planning can begin in earnest. The next step for a virtualization project is to carefully understand and analyze the environment. A successful virtualization project is the result of more planning than anyone expects. Some specific planning steps are laid out here.

Identify Workloads Appropriate to Virtualize
The first step is to identify the scope of the project, that is, the applications and servers to be included. The bullet item Virtualization of Everything listed in the previous "Typical Challenges with Virtualization" section identified several types of servers that are difficult to virtualize. Avoid virtualizing these types of servers unless there is a valid reason and a plan for addressing the concerns. Fortunately, few servers fall into these categories. Most hypervisor vendors provide tools that can assist with this process, but to get the best possible result and avoid being at the mercy of the vendor, you should have a good understanding of the environment in question. The following server categories are listed in order of suitability for virtualization:
Rarely used servers that must be accessed quickly. Virtualizing these servers allows the organization to keep a large library of servers with different operating systems and configurations with a minimum hardware investment. This starting point is a common one for many companies because the value is significant and the risks few. Value is realized through faster provisioning of new servers, reduction of provisioning errors, and minimized hardware investment.

Additional worker nodes to handle peak loads. This approach is especially useful when applications can be dynamically scaled out with additional nodes sharing a common virtual environment. If the environment is maintained and sufficient resources are available when needed, this scenario adds great business value. A just-in-time automated worker node provisioning system maximizes this value.

Consolidation of lightly used servers. Some examples of lightly used servers include:
• Service providers (xSP) with many small clients.
• Multiple mid-tier managers or file and print servers originally implemented on separate servers for political, organizational, or legal reasons.

In many cases the isolation provided by virtualization is sufficient, especially if the data is separated onto private disk systems; however, you should verify that virtualization satisfies the organization's isolation and separation requirements.

Servers with predictable resource consumption profiles allow planning the distribution of work to virtualized servers. In these cases, keep in mind that:
• You should beware of applications with heavy I/O.
• Applications that require different sets of resources at the same time can coexist on the same physical server.
• Applications that require the same resources at different times can also coexist on the same physical server.

In each of these cases, value comes from reducing the number of servers, resulting in both hardware maintenance and management cost savings. Unless a project falls into one of these categories, virtualization alone seldom saves money. There are other good reasons to consider virtualization, but you should be aware that the cost savings may not appear.
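As a rough illustration of how the "lightly used server" category can be identified in practice, the sketch below scans exported utilization samples and flags servers whose observed peaks stay low. The file name, column layout, and thresholds are placeholders (most monitoring and hypervisor assessment tools can export something similar); real candidate selection should also weigh storage and network I/O and the timing of peaks, as discussed in the next section.

```python
# Illustrative sketch only: "utilization.csv" and its columns
# (server, timestamp, cpu_pct, mem_pct) are assumed, not a real tool's format.
import csv
from collections import defaultdict

def consolidation_candidates(path="utilization.csv",
                             cpu_peak_limit=20.0, mem_peak_limit=40.0):
    """Flag servers whose observed CPU and memory peaks stay under the limits."""
    peaks = defaultdict(lambda: {"cpu": 0.0, "mem": 0.0})
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            p = peaks[row["server"]]
            p["cpu"] = max(p["cpu"], float(row["cpu_pct"]))
            p["mem"] = max(p["mem"], float(row["mem_pct"]))
    # Lightly used servers: even their peaks leave most of the machine idle.
    return [s for s, p in peaks.items()
            if p["cpu"] <= cpu_peak_limit and p["mem"] <= mem_peak_limit]

if __name__ == "__main__":
    print("Candidates for consolidation:", consolidation_candidates())
```

The thresholds are deliberately conservative; the point is to produce a short list for human review, not an automatic decision.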
Understand and Analyze the Environment
Server consolidation is an opportunity to raise the virtualization maturity level of the environment, or to prepare to raise it by identifying aligned procedures that can be automated and enhanced.

The analysis should include performance profiles of individual servers and applications, covering all critical resources (CPU, memory, storage I/O, and network I/O) and their variation over time. Both the size and the number of transactions are important. An understanding of when different applications need resources and under which circumstances helps determine which applications are suitable for virtualization and which can share resources and be co-located in the same resource groups.

Many hypervisor vendors have tools that can assist with this process. However, regardless of which tool you are using, it is important to monitor performance over a period of time that also includes any expected performance peaks. Capturing a baseline of this information is recommended so that it can be compared against corresponding data collected from the virtualized environment. In situations where all expected peaks can't be measured, it is important to carefully analyze and estimate the needs.

This analysis also requires consideration of user behavior and the types of events that trigger resource consumption. You especially need to gauge the risk that the same events will trigger multiple applications to use more resources. Awareness of these scenarios is critical to ensure acceptable response times during peak load for different solutions. Consider the following examples:

A typical environment in which, at the end of every workday, a majority of the users:
• Send an email, synchronize email folders, and then log out from the mail server.
• Run reports to prepare for the following day's activities.
• Print these reports and, perhaps, additional documents to bring home with them for review.
• Make backup copies of a number of important files to a folder on a file server.

An environment in which an unplanned event or fault occurs that triggers activity on multiple systems, such as:
• A fault triggers the fault management systems to alarm, do root cause analysis, handle event storms, and run certain automation tasks.
• The end users notice the problem and use knowledge tools and service desk functions to determine whether the problem is known and, otherwise, report it.
• The operations and help desk teams receive the alert from the fault management and service desk systems and connect to the service desk, CMDB, asset management, or network and system management tools to
troubleshoot and correct the issue.

If the applications share a common host system whose resources are sized for ordinary usage, these virtual machines will slow down under the higher peak load that results from the activity of hundreds or thousands of users. Just when the systems are most needed, they become overloaded.

Tracking the consumption of critical resources over time will reveal patterns of resource usage by servers. Based on that knowledge, you can determine which servers can safely be virtualized, what resources they need, and which applications can suitably share resources. This will enable you to more effectively pair virtual machines that stress different types of resources, or that stress the system at different points in time.

Hypervisor and Supporting Software Selection
A hypervisor must work well in the environment and efficiently support requirements. A few years ago there was only a limited selection of hypervisors, but today the number of solutions has increased. Independent lab tests show that each of the major hypervisor solutions has advantages and disadvantages. A few important areas to scrutinize when selecting a hypervisor vendor are:

Organizational and Social Requirements: Requirements that arise from the knowledge and experience of the people in the environment are often as important, if not more important, than the technical requirements. These requirements can affect the success and cost of the project. For example:
• Does the organization have experience or knowledge about one specific solution?
• Do preferred partners have knowledge or experience with any potential solutions?
• Have solutions been tested, or can they be tested easily, with the planned hardware platform and most critical applications?

Required Functions and Protocols: With the gradual standardization of basic hypervisor functions, many of the solutions from the major vendors are becoming similar. Added value has become the primary differentiator, in the form of:
• Efficient and dynamic automated migration that moves virtual servers between physical hosts. Load balancing and high availability solutions controlled by the vendor's tools and integrated with standard enterprise management solutions are important here.
• Support for specific hardware combinations. For example, more advanced functions like hot migration commonly require the servers (especially CPUs) to be identical or similar. Some hypervisors also allow a compatibility mode with mixed CPU versions, but this forces the systems to take advantage only of functionality that all of the CPUs in use have in common.
• Support for existing or planned SAN solutions.
• Support for multiple storage repositories and dynamic move and rebalance of virtual images between repositories.
• Support for all, or at least a majority of, existing or planned software applications.
• Support for all operating systems planned for virtualization (32/64-bit versions of Windows, UNIX, and/or Linux).
• Ability to access, utilize, and efficiently distribute all required resources.
• Management tools, or support for management tools, to monitor performance and availability and use this information to automate your environment. Preferably the solution will have an open API to integrate it with existing enterprise management systems.
• Built-in functions and APIs to manage advanced functions for security, high availability, fault tolerance, and energy saving.

These are just a few examples; a project should carefully list the requirements important in the business environment. Describing the importance of each requirement and the consequences of lack of support will simplify prioritization of options.

Virtualization Management Tools
Management of an environment becomes even more critical when virtualization is employed. Some of the common management issues related to virtualization include the need to:
• Simplify creation of new virtual servers and migration of existing systems into a virtualized environment.
• Predict and track virtual environments that compete for server and storage resources.
• Predict and track performance utilization in real time as well as historical trends in individual environments, the host system, and the SAN system, preferably in a way that allows correlation between these components.
• Trace resource utilization and up and down time, and connect metrics from these tools with chargeback and showback systems. Efficient usage of chargeback systems, together with mature systems that spin servers up and down as required, allows the organization to encourage system owners to manage their environments efficiently and, therefore, maximize the impact of Green IT and minimize energy bills.
• Provide management tools that support life cycle management processes with clear stages for development, quality assurance, a library of available images, archive, configuration, and production.
• Provide efficient tools for workflow orchestration and automation that simplify and modularize automation by securely reusing previously created tasks.
While implementing automation, focus on "low-hanging fruit": simple automations that clearly save money or add security. Complex one-off automation tasks can be expensive to maintain and are often not worth the effort.
• Provide tools that intelligently and actively manage the environment based on policies, measured performance, and events. This added flexibility can be one of the great advantages of virtualization. A few examples are:
  • Dynamically changing the resources available to virtual machines
  • Moving virtual machines between different host servers as needed
  • Dynamically provisioning and configuring servers on demand or when triggered by policies
  • Dynamically shutting down virtual machines and host servers when they aren't being used
If these capabilities aren't managed appropriately, these otherwise great features can present some very significant risks.
• Manage "VM sprawl" by implementing good change control and life cycle management processes that track where, why, and how virtual applications are running and which resources they use.
• Provide tools for backup and disaster recovery of the virtual environment.
• Provide tools and procedures to manage security, including patch management tools, a firewall integrated with the hypervisor, and various security-related virtual appliances for the virtual environment.

If the management tools can handle most of these issues and include basic hot migration (vMotion, Live Migration, XenMotion, or similar), the environment will support efficient load balancing between the servers. Although some management tasks can be automated, it is important to be able to predict, whenever possible, the resources that are required before they are required. This approach demands a strong understanding of the business systems and occasional human intervention.

The importance of a holistic view of datacenter management solutions cannot be overemphasized. Datacenter management solutions must support the complete environment, virtual and non-virtual systems, both on-premise and off-premise in cloud infrastructures. The solution should focus on business services and the role of IT in the business and, when needed, seamlessly drill into other aspects of management and the business ecosystem. To accomplish this holistic approach, virtualization tools must cooperate and integrate with the existing enterprise management software.

Executive Buy-in
Having examined what virtualization can and cannot do and the considerations for deploying virtualization in an environment, we return to a crucial step in the
project plan, and one that can pose the most common obstacle to success: stakeholder support. Without executive support and backing from all important stakeholders, any project is likely to fail or achieve only partial success and profitability. The following steps will help garner support:

Identify the Importance: Articulate the importance of the virtualization project both to the company as a whole and to the stakeholder's organization and favored projects. The business drivers listed earlier in this article are a starting point. Spell out the savings the project will generate and how it will support new business models that will create new revenue streams. Will it make the organization more efficient, minimize the lead time to provision new services, and so on? Always communicate the importance of the project in a way that makes sense and is relevant to the target audience.

Communicate Risks and Risks of Inaction: Honestly and sincerely point out the risks. The stakeholders must buy into the true picture. Hidden facts seldom stay hidden forever. A strong supporter who feels misinformed by the project group can easily turn into an even bigger obstacle, resulting in severe damage to the project.

Explain Migration without Risk of Interrupting the Business: A main concern for the business owners is interruption of the business. A detailed migration plan that addresses interruption of business is essential. Point out that a mature and flexible virtualized environment will minimize downtime for planned outages.

Listen: It is important to listen to concerns, investigate whether those concerns are valid and, if so, identify how they can be addressed. Again, the key to a successful project is to have strong support from the stakeholders.

Proof Points
Proof points are measurements that indicate the degree of the project's success. Without identifying these points, the value of the virtualization will be obscure. These metrics will also help obtain executive buy-in and support for the project by associating it with measurable gains in productivity - or reductions in costs or response time. This is especially important if the stakeholders have previously raised concerns.

Proof points can be derived from business drivers and their baseline metrics. For example, if the intent is to reduce the time it takes to deploy new hardware or software, first identify current deployment times. If the intent is to save money through hardware consolidation, identify the costs of maintaining the current hardware, including cooling costs for the data center. Then follow up with those same measurements after the project, or a significant phase of the project, has completed.
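To make the idea concrete, the following sketch shows one way to record a baseline and compare it with the same measurements taken after a project phase completes. The metric names and values are invented for illustration; the proof points an organization actually tracks should come from its own business drivers.

```python
# Illustrative only: metric names and numbers are placeholders, not real data.
# The point is to capture a baseline before the project and re-measure the
# same proof points afterwards, so the comparison is like-for-like.
baseline = {
    "days_to_provision_server": 21,
    "physical_servers": 400,
    "datacenter_power_kwh_per_month": 95_000,
}
after_phase_1 = {
    "days_to_provision_server": 2,
    "physical_servers": 180,
    "datacenter_power_kwh_per_month": 52_000,
}

for metric, before in baseline.items():
    after = after_phase_1[metric]
    change_pct = (after - before) / before * 100
    print(f"{metric}: {before} -> {after} ({change_pct:+.1f}%)")
```

However simple the mechanism, the discipline of measuring the same things before and after is what turns a virtualization project's value from an assertion into evidence.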
Summary – Five Key Points
The five points to remember for a successful virtualization project:

Understand Why and What: Clearly understand the reason for the project, the business drivers, and the applications and functions to be virtualized. The scope of the project must be clearly defined, including phases for a staged approach, milestones, and the appropriate metrics to measure progress and expected outcomes.

Identify the Expected Risks: Risks, both functional and financial, are expected and acceptable. Virtualization can provide great value, but like any project, there are risks. Risks can usually be managed, but the key is awareness and planning by the project team and the stakeholders.

Virtualize Appropriate Workloads and Avoid Overutilization (and Underutilization): A common reason for virtualization failure is unreliable performance after applications have been virtualized. Avoid this situation by ensuring that too many applications do not share limited resources, and avoid host systems with inadequate bandwidth or inadequate support for large numbers of I/O transactions. Conversely, overestimating the amount of resources required can result in too many idle resources and can reduce the overall ROI. When virtualizing an environment, the key is to choose the appropriate workloads to virtualize, provide modern high-end server-class host servers, and carefully manage and rebalance the workload so that all applications have sufficient resources during their peak times.

Get Support of Stakeholders: Get support from executive management as well as from the business owners before starting the project. Listen to concerns and address them. Have buy-in before the project starts.

Establish Success Criteria: Each project or subproject must have defined success criteria. These should include a comparison with the baselines from before virtualization. These criteria should be tied directly to the project's business drivers, such as cost per application, energy consumption in the datacenter, speed to provision a server, or the avoided alternative cost of building a new datacenter.

Virtualization offers efficiency and agility, but there are many pitfalls and obstacles to success. By following these five key points and the principles explained in this article, risks are reduced and the chances for success are maximized.

Additional insights on implementing virtualization can be found in the Virtualization Best Practices section of the Implementation Best Practices pages, which you can access through the following URL:
https://support.ca.com/phpdocs/0/common/impcd/r11/virtualization/virt_Frame.htm

Anders thanks Terry Pisauro, Engineering Services Architect at CA Technologies, for providing valuable editing contributions.
Leading Edge Knowledge Creation
by Dr. Gabriel Silberman, Senior Vice President and Director, CA Labs, CA Technologies

About the author: Gabriel (Gabby) Silberman is Senior Vice President and Director of CA Labs, responsible for building CA Technologies research and innovation capacity across the business. In collaboration with Development, Technical Services, and Support, and working with leading universities around the world, CA Labs supports relevant academic research to further establish innovation in the company's key growth areas. Gabby joined CA in 2005, bringing with him more than 25 years of academic and industrial research experience. He joined CA from IBM, where he was program director for the company's Centers for Advanced Studies (CAS) worldwide. Previously, Gabby was a manager and researcher at IBM's T.J. Watson Research Center, where he led exploratory and development efforts, including work in the Deep Blue chess project. Gabby earned bachelor of science and master of science degrees in computer science from the Technion – Israel Institute of Technology, and a Ph.D. in computer science from the State University of New York at Buffalo.

Ever since businesses began looking for efficiencies by outsourcing or leveraging specialized services or favorable cost structures, one of the challenges has been to use this approach for acquiring leading edge knowledge. It may be argued that merger and acquisition activities fulfill this role, as does recruiting of new personnel, either new university graduates or those who have accumulated professional experience. But these methods tend to be sporadic and do not represent a continuous process for bringing knowledge into a large and diverse organization.

At CA Technologies we have taken a different approach to tap into external resources. We aim to carry out a broad agenda geared towards continuous, in-context knowledge creation, to complement other more sporadic efforts. In contrast to the "pull" model used by some companies to attract ideas and proposals, CA Labs, the research arm of CA Technologies, relies on a "push" paradigm. This enables us to reach out to the research community to seek insights into technical challenges, the evolution of existing products, point solutions, or research to assist in new product development.

Using a popular context, the ...as a Service (aaS) framework, think of CA Labs as an internal service provider. Its offerings include access to an extensive network of leading academics, and the mechanisms (legal, financial, etc.) to establish a framework for collaboration. This would be the equivalent of an Infrastructure as a Service (IaaS) offering. On top of this, foundational research projects may be structured to undertake long-term technical initiatives. These are based on needs identified by the Office of the CTO and others responsible for charting and executing the strategic direction for CA Technologies' products and services. These initiatives explore technological advancements prior to potential implementation as CA offerings, and constitute a Platform as a Service (PaaS) type of offering.

To complete the analogy with a Software as a Service (SaaS) offering, CA Labs provides the capability to create "research sprints." These are short-term efforts based on the long-term trusted relationships we have established with academic partners and their deep knowledge of the interests, products, and services relevant to CA Technologies.

Consider the example of Reacto, a tool for testing the scalability of reactive systems developed as a foundational research project (think PaaS) in collaboration with researchers from the Swinburne University of Technology in Australia and CA's development lab in Melbourne.

In a sophisticated enterprise application, a single user action may trigger a number of coordinated activities across a variety of systems. Before deploying such an application, it needs to be thoroughly tested against realistic operation scenarios for quality assurance purposes. However, replicating such a large-scale testing environment is challenging and even cost prohibitive, due to resource
and complexity constraints. The Reacto project developed a general emulation framework, using lightweight models to emulate the endpoints with which the system under test interacts. This enables large-scale realistic emulation of a variety of enterprise production environments using only a small number of physical machines. Reacto has been used to demonstrate the scalability of several CA components and products, including the Java Connector Server (a component of CA Identity Manager).

Now let us look at an example of a foundational research (PaaS) effort which became the basis for a research sprint (SaaS). The case in point is the Data Mining Roles and Identities project, done in collaboration with researchers from the University of Melbourne in Australia.

Role mining tools automate the implementation of role-based access control (RBAC) by data mining existing access rights, as found in logs, to reveal existing roles in an enterprise. Along with individual roles, a role hierarchy can be built and roles may be assigned to individual users. Additionally, data mining may be used to identify associations among users, accounts, and groups, and whether these associations are necessary.

As a result of CA's acquisition of Eurekify and its Enterprise Role Manager, the researchers were asked to move their focus to leverage the role visualization tool developed as part of the project. This request gave birth to a research sprint to develop a tool to visualize access control data. Using the tool it is possible to visualize the "health" of a customer's RBAC implementation, before and after the deployment of CA's Role and Compliance Manager. Furthermore, the tool may be used periodically to detect and investigate outliers within an enterprise's role hierarchy, as part of governance best practices.

The success of the research model practiced by CA Labs has been sustained by these and other examples of innovative and practical implementation of knowledge transfer.
Virtualization: Enabling the Self-Service Enterprise
by Efraim Moscovich, Principal Software Architect, CA Technologies

About the author: Efraim Moscovich is a Principal Software Architect in the CA Architecture Team, specializing in Virtualization and Automation. He has over 25 years of experience in IT and software development in various capacities, including IT production control, programmer, development manager, and architect. Efraim has been involved in the development of many products, including Unicenter NSM and Spectrum Automation Manager. He has expertise in various domains including event management, notification services, automated testing, web services, virtualization, cloud computing, internationalization and localization, Windows internals, clustering and high availability, scripting languages, and diagnostics techniques. He is an active participant in the DMTF Cloud Management Work Group. Prior to joining CA Technologies, Efraim worked on large-scale performance management and capacity planning projects at various IT departments. Efraim has an M.Sc. in Computer Science from the New Jersey Institute of Technology.

"To provision a complete multi-system SAP CRM application, press or say '1'."

Virtualization is not a new concept; it has been around since the early 1960s. Self-service systems such as travel reservations and online shopping are an integral part of today's dynamic economy. The marriage of virtualization technologies and self-service concepts has the potential to transform the traditional datacenter into a self-service App Store.

This article examines virtualization technologies and the role they play in enabling the self-service enterprise. It also discusses key concepts such as service, service catalog, security, policy, management, and management standards, such as the Open Virtualization Format, in the context of self-service systems.

1.0 IT Services and the Services Gap
In today's enterprise, the IT department has the primary responsibility of delivering, running, and maintaining the business-critical services (line of business), also known as production. This includes the infrastructure (such as servers, network, cabling, and cooling), software, and management functions to ensure high availability, good performance, and tight security. Downtime or degraded functionality may cause significant negative financial impact to the bottom line. The critical business services include, among others, email, customer relationship management or practice management, supply chain management, manufacturing, and enterprise resource planning.

In addition to the production services, the IT department has to provide infrastructure and support for a wide variety of other services, which range from assisting the sales force with setting up demo systems for clients to helping the engineering department with their testing labs. The typical IT department has a long backlog of projects, requests, and commitments that it cannot fulfill in a timely manner. Many of the backlog items are requests to evaluate and purchase new hardware or software, to set up and configure systems for end users, to create custom applications for the enterprise, and to provide short-term loaners for product demos and ad-hoc projects. For example, to convert a large collection of images from one format to another, one project team required hundreds of computers to run the conversion, but only for a few days or weeks.

The gap between the "must do" services and the "should do" services is typically called the IT service gap.

The struggle to close this gap and provide high quality services to all IT users on time and at a low cost has been raging for years. Some of the solutions used to improve speed and quality include:

• Automating procedures (including scripting and job scheduling systems)
• Adopting standardized policies and procedures (such as ITIL1)
• Distributing functions to local facilities
• Sub-contracting and using consulting services
• Outsourcing the whole data center or some services to third parties
• Enabling end users to fulfill their own needs using automated tools (self-service)

2.0 Self-Service
The self-service concept dates back to 1917, when Clarence Saunders2, who owned a grocery store, was awarded the patent for a self-serving store. Rather than having the customers ask the store employees for the groceries they wanted, Saunders invited them to go through the store, look at the selection and prices of goods, collect the goods they wanted to buy, and pay a cashier on their way out of the store.

Some well-known self-service examples include:
• Gas stations, where the customers pump their own gas rather than have an attendant do it
• Automatic Teller Machines (ATMs) that enable consumers to have better control of their money
• The human-free, and sometimes annoying, phone support systems in many companies ("for directions, press 1")
• The ubiquitous shopping web sites (such as Amazon) that have almost transformed the self-service concept into an art form

The main reasons for the proliferation of the self-service paradigm are the potential cost savings for the service providers and the assumed better service experience for the consumers.

In order for a service to be a candidate for automated self-service, some or all of the following conditions must be met:
• There are considerable cost savings or revenue opportunities for the provider in operating the service.
• There is a service gap between what the provider can offer and what the consumer demands.
• The service can be automated (that is, the service has a discrete and repeatable list of steps to be carried out, and no human intervention is required from the provider).
• The implemented self-service is convenient and easy to use by the consumers, and is faster than the non-automated version.
• The service offering fits nicely within the consumers' mode of operations and does not require expert knowledge outside their domain.
The IT department adopted the self-service paradigm for many of its functions even before virtualization was prevalent. Examples include the Help Desk and other issue tracking systems, and reservation systems for enterprise resources. However, the implementation of more complex and resource-intensive self-service systems was not possible, at an acceptable cost, until the arrival of virtualization technologies.

3.0 Virtualization
According to the Merriam-Webster dictionary, the word "virtual" comes from the Medieval Latin "virtualis", from the Latin "virtus" (strength, virtue), and it means "efficacious" or "potential"3.

In our context, virtualization is a form of abstraction - abstracting one layer of computing resources (real or physical) and presenting them in a different form (virtual, with more virtues) that is more efficacious and has more potential. Usually the resources appear larger in size, more flexible, more readily usable, and faster than they really are in their raw form.

There are many forms of virtualization, from hardware or server virtualization (which can create what is commonly known as Virtual Machines, or VMs), to storage (implemented via SAN or NAS), to network and application virtualization. Emerging forms of virtualization that are entering the mainstream are network and memory virtualization (a shared resource pool of high-speed memory banks, as opposed to virtual memory) and I/O virtualization.

Server virtualization is achieved by inserting a layer between the real resources and the services or applications that use them. This layer is called a Virtual Machine Monitor, a Hypervisor, or a Control Program.

Figure 1: Virtualization (VMware)

These virtualization technologies can be abstracted further to provide database and data virtualization, and more application-level constructs such as a message queuing appliance, a relational database appliance, and a web server appliance.

For additional virtualization terms and definitions, please refer to the Glossary.