1. Introduction to Virtualization
• History of Virtualization
– Mainframe origins
– Computers in the 1990s & 2000s
– Resulting IT challenges
• What is Virtualization?
– Key technology for today
– Physical Server vs. Virtual Server
– Virtualization layer
– Virtual Machines
2. Virtualization History
• Born from Mainframe Technology:
– Originally part of mainframe technology, virtualization is not a new concept.
– Mainframes started in the 1960s as very large computers used to process computing tasks.
3. Virtualization on a Mainframe
• Mainframe Virtualization:
– The concept was to split the computer into multiple virtual machines so that different "tasks" could run separately and independently on the same mainframe.
– If one virtual machine or "task" has a problem, the other virtual machines are unaffected.
[Mainframe sample diagram: one mainframe partitioned into seven virtual machines, VM #1 through VM #7, each running its own independent task, A through G]
4. Computers in 1990s
• Fast Forward to the 1990s
– Intel/AMD servers now very popular (known as "x86" servers)
– Each server runs an operating system such as Microsoft Windows, Linux, or Novell NetWare
– Companies put ONE operating system & ONE application on each server
– 2 servers would grow to 6 servers, and eventually to 50 or more servers!
– Electricity and space (footprint) became a problem…
[Diagram: rows of individual file, web, app, DNS, and domain servers, each server running one application]
5. Computers in 2000s
• Fast Forward to the 2000s
– Manufacturers “to the rescue”!
– Focus on making servers small
– “Rack” form factors (6-20 servers per cabinet)
– “Blade” form factors (30-60 servers per cabinet)
– The space/footprint problem was eased…somewhat
– Electricity and heat still a problem
[Images: example Dell "rack" servers and HP "blade" servers]
• As Servers Got Faster…
– Server utilization became even lower
– Average server utilization ranges between 4% and 10%
– STILL one application per server
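A back-of-the-envelope calculation makes these utilization figures concrete. The 7% average below is the midpoint of the 4–10% range above; the 50-server count echoes the earlier example, and the 60% target for a virtualized host is an illustrative assumption, not a figure from the slides:

```python
import math

# Rough consolidation estimate: how many well-utilized virtualized
# hosts would it take to carry the real work done by many
# underutilized one-app-per-server machines?
servers = 50                # one-app-per-server machines (illustrative)
avg_utilization = 0.07      # midpoint of the 4-10% range
target_utilization = 0.60   # assumed target for a virtualized host

total_load = servers * avg_utilization   # "server-equivalents" of real work
hosts_needed = math.ceil(total_load / target_utilization)
print(hosts_needed)  # 6
```

Under these assumptions, roughly 6 virtualized hosts could carry the workload of 50 lightly loaded physical servers, which is the economic argument behind the consolidation story that follows.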
6. Today’s IT Challenges
Continued Server Sprawl
– Power, space, and cooling costs represent one of the largest IT budget line items
– The one-application-per-server approach leads to complexity and high costs of equipment and administration
Low Server Utilization Rates
– Result in excessive acquisition and maintenance costs
7. Virtualization is the Key
Apply Mainframe Virtualization Concepts to Intel / AMD Servers:
– Use virtualization software to partition an Intel / AMD server to run several operating system and application "instances"
[Diagram: one physical server running several "virtual machines" via virtualization software: Oracle, SQL, application servers, email, file, print, DNS, and domain]
8. Traditional Physical Server
Traditional x86 Server Architecture
– Single operating system per machine
– Single application per machine
– Hardware components connected directly to the operating system:
• CPU
• Memory
• Disk
• Network Card
[Diagram: x86 architecture with one application on one operating system, bound directly to the CPU, memory, disk, and network. 1 physical server, 1 application]
9. New Architecture: Virtual Server
Virtualization Layer
– Addition of a virtualization layer called a "hypervisor"
– Several servers can be deployed as virtual machines (VMs) on each physical box
– Each VM has its own operating system and application
– Can run multiple, different operating systems on the same machine
– If one VM fails, the other VMs are unaffected
[Diagram: x86 architecture with three VMs on one hypervisor; two VMs run a Microsoft OS and one runs Linux, each with its own application, CPU(s), memory, vDisk, and vLAN, sharing the physical CPU, memory, disk, and network. 3 virtual servers on 1 physical server]
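The hypervisor's partitioning role can be sketched as a toy resource allocator. The class, VM names, and resource figures below are illustrative; real hypervisors schedule workloads dynamically and can overcommit CPU and memory rather than statically splitting them:

```python
# Toy model of a hypervisor partitioning one physical server's
# resources among virtual machines. Illustrative only: real
# hypervisors schedule dynamically and can overcommit.

class Hypervisor:
    def __init__(self, cpus, memory_gb):
        self.free_cpus = cpus            # unallocated physical CPUs
        self.free_memory_gb = memory_gb  # unallocated physical memory
        self.vms = []

    def create_vm(self, name, os_name, cpus, memory_gb):
        # This simple model refuses to hand out more than the
        # physical box actually has.
        if cpus > self.free_cpus or memory_gb > self.free_memory_gb:
            raise RuntimeError(f"not enough resources for {name}")
        self.free_cpus -= cpus
        self.free_memory_gb -= memory_gb
        self.vms.append({"name": name, "os": os_name,
                         "cpus": cpus, "memory_gb": memory_gb})

# One physical server hosting the three VMs from the diagram above.
host = Hypervisor(cpus=8, memory_gb=32)
host.create_vm("vm1", "Microsoft", cpus=2, memory_gb=8)
host.create_vm("vm2", "Microsoft", cpus=2, memory_gb=8)
host.create_vm("vm3", "Linux", cpus=4, memory_gb=16)
print(len(host.vms), host.free_cpus, host.free_memory_gb)  # 3 0 0
```

The key property the sketch captures is that each VM is handed its own slice of the shared hardware, so guests with different operating systems can coexist on one box without knowing about each other.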
10. Virtualization Layer Explored
Virtualization Layer - Compatibility
– A virtual machine is compatible with standard x86 operating systems such as Windows and Linux
– A virtual machine has a motherboard, CPU, memory, disk, and network, just like a physical server
– Applications developed for the standard OSes will work on a virtual machine
– No adjustments are needed to run applications on virtual servers
Virtualization Layer - Isolation
– Virtual machines on the same physical machine run independently
– They are protected from each other
11. Virtual Machines Explored
Virtual Machines
– A virtual machine is a collection of software that has been translated into files
– These files are collected and organized in "containers"
– These containers can be moved in seconds from one physical machine to another in case of physical server failure or for performance needs
– Virtual machines have all the same hardware resources available, such as CPU, memory, disk, and network
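The "collection of files" idea can be sketched with ordinary file operations: in this toy model, moving a VM between hosts is simply moving its container directory. The directory layout and file names are illustrative, not any product's actual on-disk format:

```python
# Sketch: a VM as a "container" directory of ordinary files that can
# be moved between hosts. Layout and file names are illustrative.
import os
import shutil
import tempfile

root = tempfile.mkdtemp()
host_a = os.path.join(root, "host_a")   # source physical machine
host_b = os.path.join(root, "host_b")   # destination physical machine
vm_dir = os.path.join(host_a, "vm01")   # the VM's container directory
os.makedirs(vm_dir)
os.makedirs(host_b)

# Typical contents: a config file describing the virtual hardware,
# and a virtual disk holding the guest OS and application.
for name in ("vm01.cfg", "vm01.vdisk"):
    with open(os.path.join(vm_dir, name), "w") as f:
        f.write("placeholder\n")

# "Moving" the VM to another machine is, in this toy model, just
# moving its container directory.
shutil.move(vm_dir, host_b)
contents = sorted(os.listdir(os.path.join(host_b, "vm01")))
print(contents)  # ['vm01.cfg', 'vm01.vdisk']
shutil.rmtree(root)
```

Because the VM's "hardware" is described in files rather than soldered to a board, relocating it is a data-transfer problem, which is what makes recovery from physical server failure so fast.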
12. Server Virtualization In the Enterprise
"Any app, any resource, any time"
Business Value
– Reduced CapEx, increased utilization
– Reduced cost of HA and DR
– Reduced operational costs
– Operational efficiency
Virtualization Use
• Server Consolidation
– Improve resource utilization; get more out of today's fast industry-standard hardware
• Rapid Provisioning
– Quickly and cheaply set up development, test, and production environments
• High Availability & Disaster Recovery
– Recover from failures quickly, reliably, and cost-efficiently
• Capacity Management
– Match workloads with available capacity to optimize efficiency and manage SLAs
• Policy-based Automation
– Automate to reduce manual intervention, human errors, time, and labor costs
13. Virtual Technologies
• Virtual Technologies designs and implements virtualization solutions for business, education, and government entities
• Offers world-class virtualization software products from partners such as Virtual Iron, VMware, and XenSource, and hardware products from HP, Dell & Compellent
• Provides a total package: assessment, product selection, implementation, and support
• Working with regional utility companies to offer rebates for customers who "virtualize"
Speaker Notes
We see customers deploying virtualization to solve the following problems:
– Most customers start with consolidation to improve average server utilization and get more out of today's fast industry-standard hardware.
– Users then use virtual servers to quickly and cheaply set up development, test, and production environments.
– Availability features allow users to recover from failures quickly, reliably, and cost-efficiently (using N+1 hardware versus 2N) and simplify reconfigurations for DR scenarios.
– Match capacity to workload (optimize your equipment for efficiency).
– Reduce the human labor needed through policy-based automation (reduce human error and the number of human resources needed).