7. Containers vs. VMs
[Diagram: on the VM side, a Type 2 hypervisor sits on the host OS and server, and each VM (App A, App A’, App B) carries its own guest OS plus bins/libs. On the container side, Docker sits on the host OS and server, and each app (App A, App A’, App B, App B’) shares the host OS and, where appropriate, bins/libs.]
Containers are isolated, but share the OS and, where appropriate, bins/libraries…
…the result is significantly faster deployment, much less overhead, easier migration, and faster restart.
8. Why are Docker containers lightweight?
[Diagram contrasting VMs and containers:]
VMs: every app, every copy of an app, and every slight modification of the app requires a new virtual server (app + bins/libs + guest OS each time).
Containers:
Original app: no OS to take up space, resources, or require restart.
Copy of app: no OS; can share bins/libs.
Modified app (AppΔ): copy-on-write capabilities allow us to save only the diffs between container A and container A’.
52. Windows Server Node in Kubernetes
[Diagram: a Windows Server 2016 node runs kubelet, kube-proxy, and docker, hosting pods that each pair an infra container with an app container. The Kubernetes master components and kubectl are unchanged.]
69. Azure App Service
Quickly build, deploy, and scale web apps
Preview - now supports Windows Server containers
71. Fundamentals of Windows containers and Windows container-based web apps on Azure App Service
72. Azure Container Instances (ACI)
THR2206 - Run a serverless Kubernetes cluster by bridging AKS and ACI through the Virtual-Kubelet
73. Azure Kubernetes Service (AKS)
Fully managed Kubernetes orchestration service
Auto patching, auto scaling, auto updates
Use the full Kubernetes ecosystem (100% upstream)
Deeply integrated with Azure Dev Tools and services
74. Service Fabric
Windows and Linux Containers
Stateless and stateful microservices
Deploy on Azure, Azure Stack and on-premises
75. Service Fabric Mesh
Fully managed microservices platform, built on Service Fabric
Windows and Linux Containers
76. Optimizing for microservice development
Tracking multiple deployment pipelines while maintaining agile updates
Focusing on business logic instead of microservice platform maintenance
Dealing with the complexity of network communications
Monitoring and governance at overarching and granular levels
Achieving reliable state and data consistency without latency issues
Running highly secure applications at scale
77. Optimizing for microservice development
Challenges:
Tracking multiple deployment pipelines while maintaining agile updates
Focusing on business logic instead of microservice platform maintenance
Dealing with the complexity of interactions and network communications
Monitoring and governance at overarching and granular levels
Achieving reliable state and data consistency without latency issues
Running highly secure applications at scale
Azure Service Fabric (Build, Deploy, Operate):
Flexible infrastructure
Lifecycle management
24/7 availability & performance
Elastic scalability
Microservice and container orchestration
Security & compliance
Health & monitoring
Build and deploy containers and microservices on Windows and Linux, at any scale, on any cloud
78. Service Fabric: cloud application platform
Build, Deploy, Operate
Programming Models
Dev & Ops Tooling
Orchestration
Lifecycle Management
Health & Monitoring
Always On Availability
Auto Scaling
Runs anywhere: dev machine, on-premises infrastructure, Azure, any cloud
79. Azure Service Fabric offerings
Service Fabric Standalone: bring your own infrastructure (dev machine, on-premises, any cloud); full control
Azure Service Fabric: dedicated Azure clusters
Azure Service Fabric Mesh: serverless microservices; fully managed
80. Azure Service Fabric offerings: responsibility
Service Fabric Standalone (bring your own infrastructure: dev machine, on-premises, any cloud)
You: hardware, OS patching, runtime upgrades, cluster capacity, network and storage, app deployment
Azure Service Fabric (dedicated Azure clusters)
Azure: virtual machines, OS patching, runtime upgrades
You: cluster capacity, network and storage, app deployment
Azure Service Fabric Mesh (serverless microservices, with micro-billing)
Azure: virtual machines, OS patching, runtime upgrades, capacity planning, network and storage
You: app deployment
81. Migrate existing applications as-is to the cloud, including using containers, to reduce cost and enable DevOps deployments.
New applications conceived and built with the cloud in mind using a microservices architecture.
82. Azure Service Fabric is designed for mission-critical services
Power BI, Dynamics 365, Intune, Cortana, Skype for Business, Cosmos DB, IoT Hub, Event Hub, SQL Database, Azure Monitor, core Azure services, Archive Storage, Visual Studio Team Services, Stream Analytics, Azure Database for PostgreSQL, Azure Database for MySQL, Azure Container Registry, Event Grid
It’s important to note that the containers run on VMs in (most) public clouds, which is why node auto-scaling is important.
I would expect Azure to eventually provide node auto-scaling.
Examples of Kubernetes as a service: StackPointCloud (which is what I tried) and the new KUBE2GO (“Run Kubernetes Anywhere. Instantly. Free.”).
Firewall image licensed through Creative Commons license
While the Pod is the atomic unit of the Kubernetes resource model, you’ll almost never deal with Pods directly. Instead, we use Deployments.
A Deployment is effectively a dynamic set of pods with replication control and rollout management.
The core of a Deployment is actually a Pod spec.
As you can see here, we have defined a single container in our pod, specifying an image stored in a private container registry.
Additionally, we specified the credentials for accessing the container registry using a Kubernetes secret. This allows us to keep credentials out of source control but still keep them secure and available for our application.
Finally, a Deployment allows us to specify how many copies of a Pod should be running at a time.
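Put together, such a Deployment might look like the following sketch (the registry URL, image tag, and secret name are illustrative, not taken from the talk):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drupal
spec:
  replicas: 2                 # how many copies of the Pod to run
  selector:
    matchLabels:
      app: drupal
  template:                   # the Pod spec at the core of the Deployment
    metadata:
      labels:
        app: drupal
    spec:
      containers:
        - name: drupal
          image: myregistry.azurecr.io/drupal:8.3   # image in a private registry (illustrative)
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: registry-credentials   # Kubernetes secret holding registry credentials
```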
Having a fixed number of replicas is useful, but the real strength of an orchestration tool is in dynamic scaling.
Autoscaling really is the ultimate promise of cloud hosting: elastic databases, elastic storage, and now, elasticity for your application.
Kubernetes provides autoscaling via a Horizontal Pod Autoscaler.
Kubernetes tracks CPU utilization of the running pods and responds by scaling the replica set up or down.
In this example we have created a pod autoscaler for our Drupal deployment.
The autoscaler acts on any Kubernetes resource matching the attributes in the target reference.
So here we’re targeting any deployment resource with the name “drupal”,
and instructing Kubernetes to schedule an additional pod every time CPU utilization exceeds 50%, up to 10 copies.
Right now, Kubernetes core only supports CPU utilization as a trigger, but other measures are available as add-ons.
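The autoscaler described above could be written as the following sketch (resource names are illustrative):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: drupal
spec:
  scaleTargetRef:             # which resource the autoscaler acts on
    apiVersion: apps/v1
    kind: Deployment
    name: drupal              # target any Deployment named "drupal"
  minReplicas: 2
  maxReplicas: 10             # never schedule more than 10 copies
  targetCPUUtilizationPercentage: 50   # scale up when average CPU exceeds 50%
```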
But a word of caution. Pod autoscaling is not the same as scaling your VM set.
That functionality is available as add-ons for GCE and AWS.
At this point we have anywhere between 2 and 10 copies of our Drupal pod running on at most as many nodes.
We need a way to direct traffic to the correct nodes, and once there, to the correct container.
This is where a service comes in.
Just like the autoscaler, a service targets resources based on metadata. Here we target our Drupal pods, specifying port 80.
And finally, we instruct Kubernetes to provision an elastic load balancer from the cloud provider, configuring all the rules necessary to load balance across the set of nodes.
This gets us external access to our load-balanced Drupal cluster via a dynamically provisioned IP on the load balancer,
as well as including the service name in the Kubernetes cluster-internal DNS service.
In this case, “drupal” becomes a valid hostname for any pod in the cluster, allowing us to, say, trigger cron from another pod without knowing which nodes the Drupal pods are running on.
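A sketch of such a Service manifest, assuming the pods carry an `app: drupal` label (the label and name are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: drupal                # becomes a cluster-internal DNS hostname
spec:
  type: LoadBalancer          # provision an external load balancer from the cloud provider
  selector:
    app: drupal               # route traffic to pods carrying this label
  ports:
    - port: 80                # port exposed by the service
      targetPort: 80          # port the container listens on
```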
Another use for Kubernetes service discovery is providing DNS resolution for external services.
The most common use for a Drupal application would be accessing a Relational Database Service.
With this configuration, drupal can access the database with the hostname `mysql-service`.
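One way to provide that DNS aliasing is an ExternalName service, sketched here (the external database hostname is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-service         # cluster-internal hostname pods will use
spec:
  type: ExternalName
  externalName: mydb.mysql.database.azure.com   # illustrative external DB hostname
```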
First off, databases.
They’re a crucial part of any Drupal site, and it’s tempting to go ahead and add MySQL to your cluster.
But when it comes to persisting data, nothing is simple.
Kubernetes provides the building blocks to run a stateful set of pods,
but it comes down to individual cloud provider support whether the data disk can be reliably identified by Kubernetes and consistently attached.
In our experience, whenever possible, use the relational database service provided by your cloud host.