4. What is XCP?
• XCP = Xen Cloud Platform
• The open-source version of Citrix’s XenServer
• Datacenter- and cloud-ready API
• Complete virtualization stack
5. What is XAPI?
• XAPI = XenAPI server
• Written in OCaml
• XML-RPC style API
• Extensible via Python plugins
• Shared with XenServer
• http://github.com/xen-org/xen-api
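For readers unfamiliar with the XML-RPC style mentioned above, the wire format of a XenAPI call can be sketched with Python's standard library alone. `session.login_with_password` is the real XenAPI login call, but the credentials are illustrative and no server is contacted here; we only marshal the request:

```python
import xmlrpc.client

# Marshal a XenAPI-style login call into its XML-RPC request body.
# No network I/O happens here; we only inspect the wire format.
request = xmlrpc.client.dumps(
    ("root", "secret"),                      # positional parameters
    methodname="session.login_with_password",
)
print(request)
```

The printed payload is the `<methodCall>` XML document a XenAPI client would POST to the server, which is why bindings exist for practically every language.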
6. Features
• VM lifecycle
• Resource pools
• Event tracking
• Real-time performance monitoring
• Works with Windows and Linux guests
• Paravirtualized drivers optimized for Windows VMs
• OpenFlow support with Open vSwitch
11. XCP ISO
• Installs like XenServer
• Same kernel and drivers as XenServer
• Mostly the same code as XenServer
• Based on CentOS 5.x
• Hard to build it yourself
• http://www.xen.org/download/xcp/index.html
12. XCP-XAPI
• Makes the XAPI toolstack independent of CentOS 5.x
• Xen, XAPI and everything in between via your Linux distro
– “apt-get install xcp-xapi” or “yum install xcp-xapi”
• Debian 7.0 “Wheezy”
• Ubuntu 12.04 LTS
• Next: Fedora & CentOS
13. Compare XCP Packages
ISO:
• Black-box style appliance
• Based on CentOS 5.x
• Managed using XenAPI
• Supports most XenServer features
• Supports most SR types
• Hard to build it from source
xcp-xapi:
• Standard Linux packages
• Most components provided by distro
• Managed using XenAPI
• Limited set of shared SR types
• Currently only in Debian/Ubuntu
• Plans forming for Fedora
15. Past Releases
• XCP 0.5
– July 2010, based on XenServer 5.6
• XCP 1.0
– February 2011, based on XenServer 5.6 SP1
• XCP 1.1
– October 2011, based on XenServer 5.6 FP2
16. XCP 1.5 - beta released Feb 2012
• Internals: Xen 4.1, GPT, smaller Dom0
• Networking: Open vSwitch backend, NIC Bonding
• Performance and Scalability:
– 1 TB mem/host
– 16 VCPUs/VM, 128 GB RAM/VM
• New OS templates: Ubuntu 10.04, Debian Squeeze, Oracle Enterprise Linux 6.0, SLES 10 SP4
• GPU pass-through: for VMs serving high-end graphics
17. XCP 1.6 - due Oct 2012
• Internals: Xen 4.1.2, CentOS 5.7, Linux 2.6.32.43, OVS 1.4.1
• Networking: Better VLAN scalability, LACP bonding, IPv6
• New OS templates: Ubuntu Precise 12.04, RHEL/CentOS, Oracle Enterprise Linux 6.1 & 6.2, Windows 8
• New Windows drivers: installable by Windows Update Service
• Storage XenMotion: move VDIs during live-migration
18. XCP-XAPI
• Current Release:
– Ubuntu 12.04 LTS
– Based on snapshot of XCP 1.6
• Next Releases:
– Debian Wheezy
– Ubuntu 12.10
• Future:
– Merge with xen-api master
– Fedora
24. Domain 0 Disaggregation
• Split Control Domain into Driver, Stub and Service Domains
– Each domain contains a specific management server
• Unique benefit of the Xen architecture
– Security: Minimum privilege; Narrow interfaces
– Robustness: ability to safely restart parts of the system
– Scalability: more distributed system
• Currently used by Qubes OS and Citrix XenClient XT
• Hopefully coming to XCP 2.0 in 2013
26. Getting involved with XCP
• Download it and use it
• http://lists.xen.org/xen-api
• https://github.com/xen-org
• https://launchpad.net/xcp
• How do you want to get involved?
27. Make XCP more open?
• Open Roadmap planning
• Open Bug tracker
• Open Build system
• Release independently of XenServer
• More code open sourced
• What do you want to see?
28. Questions?
• Get involved:
– #xen-api on Freenode
– xen-api@lists.xen.org
• Get more info:
– http://wiki.xen.org
– Tutorial: http://xen.org/community/xenday11
Editor's Notes
Xenoservers = public infrastructure for wide-area distributed computing
Note: not exactly 1:1 with xe.
Comparisons to other APIs in the virtualization space (source: Steven Maresca): generally speaking, XAPI is well-designed and well-executed. XAPI makes it pleasantly easy to achieve quick productivity. XAPI is set up to work with frameworks such as CloudStack and OpenStack. Some SOAPy lovers of big XML envelopes and WSDLs scoff at XML-RPC, but it certainly gets the job done with few complaints.
Example code:
http://bazaar.launchpad.net/~nova-core/nova/github/files/head:/plugins/xenserver/xenapi/etc/xapi.d/plugins/
https://github.com/xen-org/xen-api/blob/master/scripts/examples/python/XenAPIPlugin.py
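In the spirit of the example code linked above, a XAPI plugin is just a Python script on the host; a minimal sketch (the `main` entry point mirrors the XenAPIPlugin convention, but the greeting logic and `name` argument are made up for illustration) might look like this:

```python
# Minimal XAPI plugin sketch. On a real XCP host this file would live in
# /etc/xapi.d/plugins/ and be invoked via host.call_plugin; here the
# dispatch call is commented out so the sketch runs standalone.

def main(session, args):
    # args arrives as a dict of string -> string from the caller
    return "hello " + args.get("name", "world")

if __name__ == "__main__":
    # On a host: import XenAPIPlugin; XenAPIPlugin.dispatch({"main": main})
    print(main(None, {"name": "XCP"}))
```

The plugin's return value is passed back to the XenAPI caller as a string, which is what makes plugins a convenient extension point for frameworks like OpenStack.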
VM lifecycle (start, stop, resume) ... automation is the key point.
Live snapshots: takes a snapshot of a live VM (e.g. for disaster recovery or migration).
Resource pools (multiple physical machines): XS & XCP only.
Live migration: the VM is backed up while running, onto shared storage (e.g. NFS) in a pool, and when completed restarted elsewhere in that pool.
Disaster recovery: you can find lots of information on how this works at http://support.citrix.com/servlet/KbServlet/download/17141-102-19301/XenServer_Pool_Replication_-_Disaster_Recovery.pdf (the key point is that I can back up the metadata for the entire VM).
Flexible storage: XAPI does hide details for storage and networking, i.e. I apply generic commands (NFS, NETAPP, iSCSI ... once it’s created they all appear the same) from XAPI. I only need to know the storage type when I create storage and network objects (OOL).
Upgrading a host to a later version of XCP (all my configs and VMs stay the same) ... and patching (broken now - bug: can apply security patches to XCP/XS or Dom0 but not DomU).
Concepts used by the API and CLI (xe):
VMs: identified by uuid; you can get an opaque ref, pass the ref to other calls, and get the record listing all the params.
Network: VMs have VIFs; each VIF is part of a network.
Storage: SRs contain VDIs, attached to VMs using VBDs; a PBD attaches the SR to a host.
Hosts: the physical server; has PBDs and PIFs; a VM is resident on a particular host.
Pools: XenAPI pools group hosts; master/slave system; shared storage/networking; live-migration.
Consider a snapshot: it’s a VM, it has a parent VM, and the VDIs are related too.
http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/api/
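The object graph in these notes can be sketched as a tiny in-memory model. The class and field names below are illustrative only, not the actual XenAPI classes, and the VBD/VIF link objects are collapsed for brevity:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative model of the relationships described above:
# VM --VBD--> VDI (lives in an SR); VM --VIF--> Network;
# a Host would additionally hold PBDs (to SRs) and PIFs (to Networks).

@dataclass
class SR:
    name: str

@dataclass
class VDI:
    name: str
    sr: SR

@dataclass
class VM:
    uuid: str
    vbds: List[VDI] = field(default_factory=list)  # simplified: VBD collapsed to its VDI
    vifs: List[str] = field(default_factory=list)  # simplified: VIF named by its network

sr = SR("nfs-share")
vm = VM(uuid="1234-abcd")
vm.vbds.append(VDI("root-disk", sr))
vm.vifs.append("xenbr0")
print(vm.vbds[0].sr.name)  # the SR is reachable VM -> VBD -> VDI -> SR
```

Walking refs through this graph (VM to VBD to VDI to SR) is exactly how real XenAPI clients navigate, except each hop is an opaque ref resolved by a server call.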
http://wiki.xen.org/wiki/XCP_Release_History
* Host Architectural Improvements: XCP 1.5 now runs on the Xen 4.1 hypervisor, provides GPT (new partition table type) support and a smaller, more scalable Dom0.
* GPU Pass-Through: enables a physical GPU to be assigned to a VM providing high-end graphics.
* Increased Performance and Scale: supported limits have been increased to 1 TB memory for XCP hosts, and up to 16 virtual processors and 128 GB virtual memory for VMs. Improved XCP Tools with smaller footprint.
* Networking Improvements: Open vSwitch is now the default networking stack in XCP 1.5 and now provides formal support for Active-Backup NIC bonding.
* Enhanced Guest OS Support: support for Ubuntu 10.04 (32/64-bit). Updated support for Debian Squeeze 6.0 64-bit, Oracle Enterprise Linux 6.0 (32/64-bit) and SLES 10 SP4 (32/64-bit). Experimental VM templates for CentOS 6.0 (32/64-bit), Ubuntu 10.10 (32/64-bit) and Solaris 10.
* Virtual Appliance Support (vApp): ability to create multi-VM and boot-sequenced virtual appliances (vApps) that integrate with Integrated Site Recovery and High Availability. vApps can be easily imported and exported using the Open Virtualization Format (OVF) standard.
NBD used to sync the live disk
Example: XOAR.
Self-destructing VMs (destroyed after initialization): PCIBack = virtualize access to PCI bus config.
Restartable VMs (periodic restarts): NetBack (physical network driver exposed to guest) = restarted on timer.
Builder (instantiates other VMs) = restarted on each request.