2. Virtualization
Separation of administrative zones
Separation of software failure
Consolidation of hardware resources
Full utilization of hardware
Easier hardware provisioning -- Want a
server? You’ve got a server.
Excellent test environments
3. What virtualization isn’t
Not an HA solution by itself
Naïve implementations concentrate many services on one physical host -- a single point of failure
Not suitable for some secure applications
Timing attacks against private keys
Unknown risks -- lots of new code
Host OS adds a new point of entry
May actually increase complexity
Adds Host OSes to manage
Adds to total number of points of management
Encourages “guerrilla” server projects
4. Full Virtualization
Hardware Virtual Machines
VMware, Xen HVM, KVM, Microsoft VM, Parallels
Runs unmodified guests
Generally the worst performance, but often acceptable
Simulates a BIOS; communicates with VMs through
ACPI emulation, BIOS emulation, and sometimes
custom drivers
Can sometimes virtualize across architectures,
although this is out of fashion.
5. Para-virtualization
Hypervisor runs on the bare metal. Handles CPU
scheduling and memory compartmentalization.
Dom0, a modified Linux Kernel, handles networking
and block storage for all guests.
Dom0 is also privileged to manage the VMs on the system.
DomU, the guest OS, sends some requests
straight to the hypervisor, and others to the Dom0.
Because the kernel knows it's virtualized, features can
be built into it: hot connection/disconnection of
resources, friendly shutdown, serial console.
Other paravirtualization schemes: Sun Logical
Domains, VMware (sometimes)
6. Elements of a Xen VM
Virtual Block Device
Image file
Real block device (either LVM or physical)
Network Bridges
Routed, terminates at the Dom0
Bridged, terminates at the network interface
Virtual Framebuffer
VNC Server
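The elements above typically come together in a DomU configuration file. A minimal sketch, where the name guest1, the LVM path /dev/vg0/guest1, and the bridge xenbr0 are all illustrative assumptions:

```
# /etc/xen/guest1.cfg -- sketch of a paravirtualized guest (names are examples)
name   = "guest1"
memory = 512
# Virtual block device: a real block device (LVM volume) on the Dom0,
# presented to the guest as xvda
disk   = [ 'phy:/dev/vg0/guest1,xvda,w' ]
# Bridged networking: the guest's virtual interface joins bridge xenbr0
vif    = [ 'bridge=xenbr0' ]
# Virtual framebuffer, exported through the built-in VNC server
vfb    = [ 'type=vnc,vncunused=1' ]
```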
8. xm -- Xen Manager
Command-line tool on the Dom0 for managing VMs.
Quick overview of options:
console -- attach to a DomU's console
create -- boot a DomU from a config file
destroy -- immediately stop a DomU
list -- List running DomUs
migrate -- Migrate a DomU to another Dom0
pause/unpause -- akin to suspend. TCP connections will time out
shutdown -- Tell a DomU to shut down.
network-attach/network-detach
block-attach/block-detach
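A typical session with these commands might look like the following sketch; the domain name guest1 and the config path are hypothetical, and everything runs as root on the Dom0:

```
xm create /etc/xen/guest1.cfg   # boot a DomU from its config file
xm list                         # confirm the DomU is running
xm console guest1               # attach to its console
xm shutdown guest1              # ask the DomU to shut down cleanly
xm destroy guest1               # immediate stop -- a last resort only
```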
10. Xen Live Migration
Migrate machines off a host during upgrades, or to
balance load
Configure xend (xend-config.sxp) to allow migration
from other Xen Dom0s.
Machine must reside on shared storage.
Must be on the same layer-2 network
xm migrate -l Machine dest.ip.addr.ess
11. Shared Storage Options
NFS
Simple hardware failover
well-understood configuration
Spotty reliability history
Block-level storage (iSCSI or FC)
More complex configuration
Multipathing
Commercial solutions are expensive
We're seeing traction for open iSCSI implementations lately.
12. What to Look for In Storage
Redundant host connections
Snapshotting
Replication
Sensible Volume Management
Thin Provisioning
IP-based failover, esp. if x86-based
13. Storage Systems
OpenFiler
Nice frontend
Replication with DRBD
iSCSI with the Linux iscsi-target
OpenSolaris/ZFS
Thin provisioning
Too many ZFS features to list
StorageTek AVS -- Replication in many forms
Complex configuration
NexentaStor
ZFS/AVS in Debian.
Rapidly Evolving
SAN/IQ
Failover, storage virtualization, n-way redundancy
Expensive and wickedly strict licensing
Too many proprietary hardware systems to list
14. Network Segmentation
802.1q VLAN tagging
All VLANs operate on the same physical network, but
packets carry an extra tag that indicates which
network they belong in.
Create an interface and a bridge for each VLAN.
Connect Xen DomUs to their appropriate VLAN.
Configure the hosts' switch ports as VLAN trunk ports.
Configure a router somewhere; a layer-3 switch is
useful here.
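On a Linux Dom0, the per-VLAN interface and bridge setup can be sketched as follows. VLAN ID 10 and the names eth0.10/xenbr10 are assumptions, and the commands require root:

```
modprobe 8021q              # 802.1q tagging support in the Dom0 kernel
vconfig add eth0 10         # creates tagged interface eth0.10 for VLAN 10
brctl addbr xenbr10         # one bridge per VLAN
brctl addif xenbr10 eth0.10
ip link set eth0.10 up
ip link set xenbr10 up
# In each guest's config, attach it to the right VLAN:
#   vif = [ 'bridge=xenbr10' ]
```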
15. Commercial Xens
Citrix XenServer
Oracle VM
VirtualIron
Typical Features:
Resource QoS
Performance trending
Physical Machine Failure detection
Pretty GUI!
API for server provisioning
16. Recovery strategies
Mount the virtual block device on the Dom0:
losetup /dev/loop0 XenVBlockImage.img (attach the image to a loop device)
losetup -a (verify the attachment)
kpartx -a /dev/loop0 (map the image's partitions)
pvscan (if using LVM inside the VM)
vgchange -a y VolGroup00 (activate the guest's volume group)
mount /dev/mapper/VolGroup00-LogVol00 /mnt/xen
chroot /mnt/xen (or whatever recovery steps you take next)
17. Xen Recovery -- cont
Boot from recovery CD as HVM
disk = [ 'tap:aio:/home/xen/domains/damsel.img,ioemu:hda,w',
'file:/home/jack/knoppix.iso,ioemu:hdc:cdrom,r' ]
builder="hvm"
extid=0
device_model="/usr/lib/xen/bin/qemu-dm"
kernel="/usr/lib/xen/boot/hvmloader"
boot="d"
vnc=1
vncunused=1
apic=0
acpi=1
Create a custom Xen kernel OS image for rescues
18. Pitfalls
Failure to segregate network
802.1q and iptables firewalls everywhere
Creating Single Points of Failure
Make sure that VMs are clustered
If they can't be clustered, have them auto-started on
another machine
Assess reliability of shared storage
Storage Bottlenecks
Not planning for extra points of management
cfengine, puppet, centralized authentication
Less predictable performance modeling
19. Maintaining HA
Hardware will fail
Individual VMs will crash
Cluster Multiple VMs for each application
Load Balancers can be VMs too.
20. HA -- Continued
Failure Detection, make VM restart on different
machines if a machine fails
Make VMs migrate off a host when you shut it
down
Build your testing system into the VM scheme.
At least one testing system per type of host.
Diligently make all changes there before rolling out.
Have at least one development VM per VM cluster.
Make sure that networking equipment and
storage is redundant too
If running web servers, keep a physical web
server on hand to serve a “We're sorry, come
back later” page. For mail servers, keep an
independent backup MX.
21. What is a File System?
• A file system is a hierarchical structure (file
tree) of files and directories.
• This file tree uses directories to organize
data and programs into groups, allowing the
management of several directories and files
at one time.
• Some tasks are performed more efficiently
on a file system than on each directory
within the file system.
22. What is Network File System?
• NFS was developed by Sun Microsystems for use
on its UNIX-based workstations.
• A distributed file system
• Allows users to access files and directories
located on remote computers as if they were local
• The data itself may be stored on another
machine.
• NFS builds on the Open Network Computing
Remote Procedure Call (ONC RPC) system
23. What is Network File System? -- cont
Mechanism for storing files on a network.
Allows users to ‘Share’ a directory.
NFS most commonly used with UNIX systems.
Other software platforms:
-Mac OS, Microsoft Windows, Novell NetWare, etc.
Major Goals:
-simple crash recovery
-reasonable performance: roughly 80% of a local drive
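Sharing a directory is driven by a server-side exports file plus a client-side mount. A minimal sketch, in which the path /srv/share, the hostname server, and the 192.168.1.0/24 network are illustrative:

```
# /etc/exports on the NFS server (host range and options are examples)
/srv/share  192.168.1.0/24(rw,sync,no_subtree_check)

# Then, on the server:
#   exportfs -ra
# And on a client:
#   mount -t nfs server:/srv/share /mnt/share
```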
24. Versions and Variations
Version 1 and Version 2
V1 was used by Sun only for in-house
experimental purposes
It was never released to the public
V2 of the protocol originally operated
entirely over UDP and was meant to keep
the protocol stateless, with locking (for
example) implemented outside of the core
protocol.
Both suffered from performance problems
Both suffered from security problems
security dependent upon IP address
25. Version 3
NFS v3 can operate over TCP as well as UDP
Support for asynchronous writes on the
server
Obtains multiple file names, handles, and
attributes in a single request
Support for 64-bit file sizes and offsets
Handles files larger than 4 gigabytes (GB)
Improved performance and allowed it to
work more reliably across the Internet
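The 4 GB ceiling in older versions falls directly out of 32-bit offsets; a quick check in Python:

```python
# NFSv2 carries file sizes and offsets as 32-bit values;
# NFSv3 widens them to 64 bits.
v2_limit = 2**32           # largest size addressable with 32-bit offsets
v3_limit = 2**64           # with 64-bit offsets
print(v2_limit // 2**30)   # 4 -- the familiar 4 GB limit
print(v3_limit // 2**30)   # 17179869184 -- effectively unbounded
```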
26. Version 4
Currently, version 2 and version 3
protocols are in use, with version 4 under
consideration as a standard
Includes further performance
improvements
Mandates strong security
Introduces a stateful protocol
Developed with the IETF (Internet
Engineering Task Force)
28. RPC request -- Action
GETATTR -- Get file attributes
SETATTR -- Set file attributes
LOOKUP -- Look up a file name
ACCESS -- Check access permissions
READ -- Read from a file
WRITE -- Write to a file
CREATE -- Create a file
REMOVE -- Remove a file
RENAME -- Rename a file
29. Stateless server and client -- Advantages
The server can be rebooted and a user on the
client might be unaware of the reboot
The client/server distinction occurs at the
application/user level, not the system
level
Highly flexible, so we need to be
disciplined in our
administration/configuration
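The reboot transparency above can be illustrated with a toy sketch (not real NFS, just the stateless idea): every request carries its full context -- file handle, offset, count -- so the server holds no per-client session state, and a restart between requests is invisible to the client.

```python
# Toy model of a stateless file server (illustrative, not the NFS protocol).
class StatelessServer:
    def __init__(self, files):
        # handle -> contents; the only state is the data itself
        self.files = files

    def read(self, handle, offset, count):
        # Each request is self-describing: no cursor is kept between calls
        return self.files[handle][offset:offset + count]

    def reboot(self):
        # Nothing session-related to lose or rebuild
        pass

srv = StatelessServer({"fh1": b"hello world"})
first = srv.read("fh1", 0, 5)    # b'hello'
srv.reboot()                     # simulated crash and restart
second = srv.read("fh1", 6, 5)   # b'world' -- the client never notices
print(first, second)
```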
30. Disadvantages
uses RPC authentication
easily spoofed
filesystem data is transmitted in
cleartext
Data could be copied
Network slower than local disk
Complexity, Security issues.
31. Conclusion
New technologies open up new
possibilities for network file systems
The cost of increased traffic over Ethernet
may cause problems for xFS and cooperative
caching.