The document discusses Cisco's Container Platform and provides the following key points:
1. Cisco's Container Platform provides a turnkey solution for production-grade Kubernetes container environments that is easy to acquire, deploy and manage on hybrid cloud infrastructures.
2. It features native Kubernetes integration that is 100% upstream compatible, integrated networking, management and security capabilities, and support for AI/ML workloads.
3. The platform architecture includes hardware from Cisco (UCS servers, Nexus switches), virtualization software (VMware, HyperFlex), and container-specific software like Kubernetes, Istio and Prometheus for orchestration, networking and monitoring of container workloads.
Currently, Cisco starts with the requirement of an infrastructure platform, for example Cisco HyperFlex running VMware.
CCP does not require internet access for deployment, which makes air-gapped deployments possible.
From there we deploy virtual machine instances running Ubuntu, fully backed by Canonical support that is included in the subscription.
The Kubernetes releases are likewise fully backed by Google support, also included in the subscription.
We offer an easy UI-driven wizard to install the Kubernetes cluster that runs the CCP Control Plane.
From the Control Plane, you can then easily deploy Kubernetes clusters (we call them tenant clusters) via the UI wizard or via API calls, either on-premises or on native AWS EKS, with other cloud providers to follow.
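As a rough illustration of what an API-driven tenant-cluster deployment might look like, the sketch below builds a request payload in Python. The endpoint path, field names, and values are assumptions for illustration only, not the documented CCP API schema.

```python
import json

# Hypothetical payload for creating a tenant cluster through the CCP
# Control Plane API. All field names and values below are illustrative
# assumptions, not the actual CCP request schema.
payload = {
    "name": "tenant-cluster-01",
    "provider": "vsphere",           # or "eks" for a native AWS EKS cluster
    "kubernetes_version": "1.14.8",  # example version string
    "worker_node_pool": {"count": 3, "vcpus": 4, "memory_mb": 16384},
    "cni": "calico",                 # choice of CNI: Cisco ACI, Contiv, or Calico
}

# Serialize the payload as it would be sent in an HTTP POST body,
# e.g. to a hypothetical endpoint such as /v3/clusters.
body = json.dumps(payload)
print(body)
```

The same payload could be submitted through any HTTP client; the point is that everything selectable in the UI wizard would map to fields in a request body like this.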
With each on-premise tenant cluster, you get:
A choice of container networking via CNI plugin: Cisco ACI, Contiv, or Calico
Persistent volume storage using the VMware volume plugin; when running on HyperFlex, its storage plugin is immediately available.
Every cluster comes with logging from a deployed EFK stack (Elasticsearch, Fluentd, and Kibana) and monitoring with Prometheus and Grafana.
Other selectable turnkey options, such as a Harbor container registry, Istio, and AWS IAM authentication, are also available.
Finally, we deploy a new release each month and give the customer the opportunity to live migrate and upgrade clusters.
In CCP 4.0, we now have the ability to attach multiple GPUs in the server, through the VMware passthrough option, to a Kubernetes node and run AI/ML workloads on a GPU-annotated node pool.
GPU passthrough gives near-native GPU performance compared to vGPU, which is good for machine-learning workloads, and it also provides better security.
A VM with GPU passthrough cannot be live migrated.
CCP installs nvidia-container-runtime=1.0.0+docker17.03.2-1 and the cuda-drivers package.
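To show how a workload would land on such a GPU node pool, here is a minimal sketch of a Kubernetes pod manifest built as a Python dict. The node-pool label key and value are assumptions for illustration; the `nvidia.com/gpu` resource name is the standard NVIDIA device-plugin resource used to request a GPU.

```python
# Minimal pod spec that targets a GPU node pool and requests one GPU.
# The nodeSelector label below is an assumed example, not a documented
# CCP label; "nvidia.com/gpu" is the standard NVIDIA device-plugin
# resource name for GPU scheduling.
gpu_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "cuda-test"},
    "spec": {
        "nodeSelector": {"gpu-pool": "true"},  # assumed node-pool label
        "containers": [
            {
                "name": "cuda",
                "image": "nvidia/cuda:10.0-base",
                # Limits (not just requests) are how GPUs are claimed;
                # one passthrough GPU maps to one schedulable unit.
                "resources": {"limits": {"nvidia.com/gpu": 1}},
            }
        ],
    },
}
print(gpu_pod["spec"]["containers"][0]["resources"]["limits"])
```

Applied to a tenant cluster (for example with `kubectl apply`), a manifest like this would be scheduled only onto nodes in the GPU-annotated pool, where the installed nvidia-container-runtime and CUDA drivers make the passthrough GPU visible to the container.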