OSDC 2019 | KubeVirt: Converge IT infrastructure into one single Kubernetes platform by Kedar Bidarkar

We will dive into KubeVirt and see how we can create and manage VMs in Kubernetes. In this session we will talk about what KubeVirt is and how it works on a Kubernetes platform. KubeVirt allows users to create and manage virtual machines within a Kubernetes cluster.
This session will cover the following topics:

KubeVirt Installation
Basic KubeVirt objects and components
How to deploy and manage virtual machines
KubeVirt Storage
KubeVirt Networking

Benefits:
Kubernetes is a well-established container platform, but migrating applications and services to containers is not always easy. In such situations, KubeVirt allows virtual-machine-based workloads to be migrated to the same platform where the containers are already running, thus helping converge IT infrastructure into one single platform: Kubernetes.

  1. KubeVirt: Converge IT Infra into one single k8s platform
     Kedar Bidarkar, @kbidarka, Senior Quality Engineer @ Red Hat
  2. Agenda
     ● Why KubeVirt?
     ● What is KubeVirt?
     ● Basic KubeVirt objects and components
     ● Deployment and management of Virtual Machines
     ● KubeVirt Storage
     ● KubeVirt Networking
     ● Q & A
  3. Currently
     ● We have on-premises solutions like OpenStack and oVirt.
     ● We have public clouds: AWS, GCP, Azure.
     ● So why KubeVirt, and why build VM management again?
  4. Infrastructure Convergence, the old way: multiple workloads, multiple stacks
     [Diagram] Two parallel stacks side by side: a VM workload on a VM platform and a container workload on Kubernetes, each on its own operating system and bare metal, each with its own scheduling, storage, network, logging, metrics and monitoring, requiring 2x the knowledge.
  5. Infrastructure Convergence, the KubeVirt way: multiple workloads, single stack
     [Diagram] Container and VM workloads run side by side on one Kubernetes stack (operating system, bare metal) with shared scheduling, storage, network, logging, metrics and monitoring, requiring 1x the knowledge.
  6. Infrastructure Convergence
     ● Environments will coexist over time:
       – Many new workloads will move to containers.
       – But virtualization will still remain for the foreseeable future.
     ● Business reasons (cost, time to market, apps approaching EOL)
     ● Technical reasons (custom kernels, hard-to-containerize apps)
     ● A unified infrastructure should be easier to maintain and operate, and should reduce costs.
     ● Migration path: workloads can be migrated from VMs to containers on the same infrastructure.
     ● VMs can benefit from Kubernetes concepts (load balancing, rolling deployments, etc.).
  7. What is KubeVirt?
     ● KubeVirt is a Kubernetes add-on that enables scheduling of traditional VM workloads side by side with container workloads on Kubernetes (https://kubevirt.io/).
     ● It makes use of Custom Resource Definitions (CRDs) and a set of controllers.
       – A custom resource is an extension of the Kubernetes API, not available by default.
     ● It extends existing Kubernetes clusters by providing a set of virtualization APIs.
     ● It works by running libvirt (KVM) in a container.
  8. KubeVirt Installation
     ● Prerequisites:
       – kubectl
       – Minikube
     ● https://github.com/kubevirt/demo
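     As a rough illustration of the install flow behind the demo repository, the steps typically look like the sketch below; the version value and manifest URLs are assumptions here, so check https://github.com/kubevirt/demo for the exact current instructions.

       # Pick a released KubeVirt version (placeholder value; check the releases page)
       export KUBEVIRT_VERSION=v0.17.0

       # Deploy the KubeVirt operator, then the KubeVirt custom resource it watches
       kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator.yaml
       kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-cr.yaml

       # Wait until all KubeVirt pods are up
       kubectl get pods -n kubevirt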
  9. Benefits with KubeVirt
     ● Drops directly into existing Kubernetes clusters:
       – No additional host setup required
       – Manage VMs like pods
     ● Enables a transition path where VMs can make use of the Kubernetes infrastructure, tools and management.
     ● Hard-to-containerize apps can be deployed in Kubernetes as VMs.
     ● Lowers the entry barrier for migration: no need to containerize an app before migrating it.
     ● Provides both infrastructure convergence and workflow convergence.
  10. KubeVirt architecture
  11. Components of KubeVirt
     ● virt-operator: handles install, removal and upgrade of the KubeVirt application.
     ● virt-api: the API server (validation and defaulting of VMs, entry point for all virtualization flows).
     ● virt-controller: the controller manager (where all the controllers and logic live).
     ● virt-handler: the node daemon, analogous to the kubelet (manages VMIs, which run inside pods that are themselves managed by the kubelet).
     ● virt-launcher: provides cgroups and namespaces; one pod is created for every VMI object and uses a local libvirt instance.
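     Assuming KubeVirt was installed into the kubevirt namespace as sketched earlier, these components show up as ordinary pods and can be listed like this:

       kubectl get pods -n kubevirt
       # Typically lists virt-operator, virt-api, virt-controller and one virt-handler per node;
       # virt-launcher pods appear in the VM's own namespace once a VMI is running.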
  12. KubeVirt Objects
  13. Virtual Machine Instance (VMI)
     ● A VMI is a running VM.
     ● VirtualMachineInstance objects have their own kind.
     ● VMIs are scheduled as pods and live inside those pods.
     ● Applications within a VMI are exposed using a Service, for example:
       – virtctl expose vmi vmi-fedora-cdisk --name vmiservice --port 27017 --target-port 22
       – ssh cirros@172.30.3.149 -p 27017
     Example:
       apiVersion: kubevirt.io/v1alpha3
       kind: VirtualMachineInstance
       metadata:
         labels:
           special: vmi-fedora-cdisk
         name: vmi-fedora-cdisk
       spec:
         domain:
           devices:
             disks:
             - disk: {}
               name: containerdisk
           machine:
             type: "q35"
           resources:
             requests:
               memory: 1Gi
         volumes:
         - name: containerdisk
           containerDisk:
             image: kubevirt/fedora-cloud-container-disk-demo
  14. Create a new VMI
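     The slide itself shows a live demo; a minimal sketch of the commands involved, assuming the VMI manifest above was saved as vmi-fedora-cdisk.yaml (the file name is an assumption):

       # Create the VMI from the manifest and watch it come up
       kubectl apply -f vmi-fedora-cdisk.yaml
       kubectl get vmis
       kubectl get pods          # shows the corresponding virt-launcher pod

       # Connect to the serial console once the VMI is running
       virtctl console vmi-fedora-cdisk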
  15. Where do I find the domxml files?
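     The libvirt domain XML lives inside the virt-launcher pod backing the VMI. One way this is commonly inspected; the pod name below is a placeholder, and the container name (compute) and domain naming scheme are how recent KubeVirt versions behave, so treat them as assumptions:

       # Find the virt-launcher pod that belongs to the VMI
       kubectl get pods -l kubevirt.io=virt-launcher

       # List domains and dump the XML via the libvirt instance inside that pod
       kubectl exec -it <virt-launcher-pod> -c compute -- virsh list --all
       kubectl exec -it <virt-launcher-pod> -c compute -- virsh dumpxml default_vmi-fedora-cdisk
       # the domain name is typically <namespace>_<vmi-name>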
  16. Virtual Machine
     A VirtualMachine provides additional management capabilities on top of a VirtualMachineInstance inside the cluster:
       – Start/Stop/Restart
       – Offline configuration changes
     Example:
       apiVersion: kubevirt.io/v1alpha3
       kind: VirtualMachine
       metadata:
         labels:
           kubevirt-vm: vm-fedora-cdisk
         name: vm-fedora-cdisk
       spec:
         running: false
         template:
           metadata:
             labels:
               kubevirt-vm: vm-fedora-cdisk
           spec:                        # <VMI spec here>
             domain:
               devices:
                 disks:
                 - disk: {}
                   name: containerdisk
               resources:
                 requests:
                   memory: 1Gi
             volumes:
             - containerDisk:
                 image: kubevirt/fedora-cloud-container-disk-demo
               name: containerdisk
  17. Create a new Virtual Machine
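     Again the slide shows a live demo; a hedged sketch of the equivalent CLI steps, assuming the VirtualMachine manifest above was saved as vm-fedora-cdisk.yaml (file name assumed):

       # Create the VM object; with running: false nothing is started yet
       kubectl apply -f vm-fedora-cdisk.yaml
       kubectl get vms

       # Start it, which creates the corresponding VMI, then check both objects
       virtctl start vm-fedora-cdisk
       kubectl get vmis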
  18. VM management with virtctl
     ● kubectl is still used for basic VMI operations; the virtctl binary is required for advanced features such as:
       – Serial and graphical console access
       – Start, stop and restart of VMs
     ● virtctl is deployed and used from the client side. Typical virtctl commands:
       – virtctl stop testvm
       – virtctl restart testvm
       – virtctl console testvm
       – virtctl vnc testvm
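     virtctl is shipped as a release binary alongside KubeVirt; fetching it usually looks roughly like the sketch below, though the version and asset naming are assumptions and the releases page should be checked:

       export KUBEVIRT_VERSION=v0.17.0   # placeholder version
       curl -L -o virtctl \
         https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/virtctl-${KUBEVIRT_VERSION}-linux-amd64
       chmod +x virtctl && sudo mv virtctl /usr/local/bin/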
  19. KubeVirt Storage
  20. containerDisk
     ● Disks are pulled from a container registry and reside on the local node hosting the VM.
     ● They are ephemeral storage devices.
     ● Push VM disks to a container registry using the KubeVirt base container image kubevirt/container-disk-v1alpha:
       cat << END > Dockerfile
       FROM kubevirt/container-disk-v1alpha
       ADD fedora25.qcow2 /disk
       END
       docker build -t vmidisks/fedora25:latest .
       docker push vmidisks/fedora25:latest
     Example:
       apiVersion: kubevirt.io/v1alpha3
       kind: VirtualMachineInstance
       metadata:
         name: testvmi-containerdisk
       spec:
         domain:
           resources:
             requests:
               memory: 64M
           devices:
             disks:
             - name: containerdisk
               disk: {}
         volumes:
         - name: containerdisk
           containerDisk:
             image: vmidisks/fedora25:latest
  21. Containerized Data Importer (CDI)
     ● A persistent storage management add-on for Kubernetes.
     ● Its primary goal is to build VM disks on PVCs for KubeVirt VMs.
     ● Use cases:
       – Import a disk image from a URL (HTTP/S3) into a PVC
       – Upload a local disk image to a PVC
       – Clone an existing PVC
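     For the "upload a local disk image" use case, virtctl offers an image-upload subcommand; the exact flags have changed between releases, so this is only a sketch, and the PVC name, size, image path and upload proxy address are placeholder assumptions:

       virtctl image-upload \
         --pvc-name=fedora-upload-pvc \
         --pvc-size=6Gi \
         --image-path=/tmp/Fedora29-1.1.x86_64.qcow2 \
         --uploadproxy-url=https://<cdi-uploadproxy-address>   # may be auto-discovered in some setups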
  22. persistentVolumeClaim
     ● Used when the VMI disk needs to persist after the VM terminates, i.e. whenever persistent storage is required.
     ● A PV can be in Filesystem or Block mode:
       – Filesystem: the disk must be named disk.img and placed under the root path of the volume.
       – Block: for consuming raw block devices (requires the BlockVolume feature gate).
     Example:
       apiVersion: kubevirt.io/v1alpha3
       kind: VirtualMachineInstance
       metadata:
         name: testvmi-pvc
       spec:
         domain:
           resources:
             requests:
               memory: 64M
           devices:
             disks:
             - name: mypvcdisk
               disk: {}
         volumes:
         - name: mypvcdisk
           persistentVolumeClaim:
             claimName: fedora-standard-6g
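     The claimName above refers to an ordinary Kubernetes PVC that must already exist; a minimal sketch of such a claim, where the size and access mode are assumptions matching the name used on the slide:

       apiVersion: v1
       kind: PersistentVolumeClaim
       metadata:
         name: fedora-standard-6g
       spec:
         accessModes:
         - ReadWriteOnce
         resources:
           requests:
             storage: 6Gi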
  23. DataVolume
     ● DataVolume is a custom resource provided by the Containerized Data Importer (CDI) project.
     ● DataVolumes provide integration between KubeVirt and CDI: they automate both PVC creation and the import of the VM disk onto that PVC during the VM launch flow.
     ● The VM is NOT scheduled until the DataVolume has reached its success state.
  24. DataVolume Example
     dataVolumeTemplates (part of a VirtualMachine spec):
       dataVolumeTemplates:
       - metadata:
           name: fedora-datavolume
         spec:
           pvc:
             accessModes:
             - ReadWriteOnce
             resources:
               requests:
                 storage: 6Gi
           source:
             http:
               url: https://download.example.com/Fedora29-1.1.x86_64.qcow2
     Example:
       apiVersion: kubevirt.io/v1alpha3
       kind: VirtualMachineInstance
       metadata:
         labels:
           special: vmi-fedora-datavolume
         name: vmi-fedora-datavolume
       spec:
         domain:
           devices:
             disks:
             - disk: {}
               name: datavolumedisk1
           machine:
             type: "q35"
           resources:
             requests:
               memory: 2048M
         volumes:
         - name: datavolumedisk1
           dataVolume:
             name: fedora-datavolume
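     Since the VM is not scheduled until the import finishes, it is common to watch the DataVolume's phase; a hedged sketch (the dv short name and the Succeeded phase reflect current CDI versions and may differ by release):

       kubectl get datavolumes            # or: kubectl get dv
       kubectl get datavolume fedora-datavolume -o jsonpath='{.status.phase}'
       # The VMI is scheduled once the phase reports Succeeded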
  25. KubeVirt Networking
  26. KubeVirt Networking
     ● Connecting a VM to networks consists of two parts:
       – An interface defines a virtual network interface of a VM (the frontend).
       – A network specifies the backend of an interface.
     ● Each interface must have a corresponding network with the same name.
     Example:
       kind: VirtualMachineInstance
       spec:
         domain:
           devices:
             interfaces:
             - name: default
               bridge: {}
         networks:
         - name: default
           pod: {}   # Stock pod network
  27. KubeVirt Networking
     ● Virtual machines are connected to the regular pod network.
     ● From the outside there is no difference between a VM and a pod.
     ● KubeVirt does not bring additional network plugins, but it allows existing plugins to be utilized.
  28. Network Interfaces (frontend)
     ● Describe the properties of virtual interfaces as seen inside the VM instance.
     ● Each interface should declare its type:
       – bridge (default)
       – masquerade
       – sriov
       – slirp (non-production)
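     As an illustration of a non-default frontend type, a masquerade interface on the pod network typically looks like the sketch below; the forwarded port is an assumption, and masquerade NATs the VM behind the pod IP so that only the listed ports are exposed:

       kind: VirtualMachineInstance
       spec:
         domain:
           devices:
             interfaces:
             - name: default
               masquerade: {}
               ports:
               - port: 80        # only this port is forwarded to the VM
         networks:
         - name: default
           pod: {}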
  29. Network Types (backend)
     ● Each network should declare its type:
       – pod: default Kubernetes network
       – multus: secondary network
       – genie: secondary network
     ● The networkName needs to match the NetworkAttachmentDefinition object name.
     Example:
       kind: VirtualMachineInstance
       spec:
         domain:
           devices:
             interfaces:
             - name: default
               bridge: {}
             - name: ovs-net
               bridge: {}
         networks:
         - name: default
           pod: {}              # Stock pod network
         - name: ovs-net
           multus:              # Secondary multus network
             networkName: ovs-vlan-100
     Example:
       apiVersion: "k8s.cni.cncf.io/v1"
       kind: NetworkAttachmentDefinition
       metadata:
         name: ovs-vlan-100
       spec:
         config: '{
           "cniVersion": "0.3.1",
           "type": "ovs",
           "bridge": "br1",
           "vlan": 100
         }'
  30. Other KubeVirt Features
     ● Live Migration: migration of running VMs to other compute nodes.
     ● KubeVirt web UI: an extension of the OpenShift Console for a virtualization view (https://github.com/kubevirt/web-ui-operator).
     ● Foreman KubeVirt plugin: KubeVirt as a compute resource for Foreman (https://github.com/theforeman/foreman_kubevirt).
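     A live migration is triggered by creating a migration object that references the VMI; a minimal sketch under the API version used throughout these slides, with placeholder object and VMI names:

       apiVersion: kubevirt.io/v1alpha3
       kind: VirtualMachineInstanceMigration
       metadata:
         name: migration-job
       spec:
         vmiName: vmi-fedora-cdisk

     Applying it with kubectl create -f and watching its status shows the VMI move to another node, provided the cluster and the VMI's storage and network setup support live migration.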
  31. Collaboration
     ● Website: https://kubevirt.io/
     ● GitHub: https://github.com/kubevirt/
     ● Mailing list: https://groups.google.com/forum/#!forum/kubevirt-dev
     ● Slack: https://kubernetes.slack.com/messages/virtualization
     ● IRC: #kubevirt on irc.freenode.net
  32. Q & A
  33. Thank You
