3. Package Manager Systems (PMS)
One of the key differences between the two concepts is the notion of dependencies.
Package manager systems (PMS) are designed to work with a tree of package
dependencies. In fact, PMS are all about reuse and simplifying the
management of this dependency tree.
4. Containers
In contrast, Docker does not define any hard dependencies between
images/containers that could prevent you from installing any version of a container
you like. Reuse of software happens through layering. Since a container is a run-time
concept that defines a standard boundary, the application inside the
container can stay agnostic of the OS resources available on the host.
E.g. it is always possible to map container-exported ports to host system
ports, so you don't have to care about them at development time.
5. Packets inside, Containers outside
That is evidence enough. Using Docker as the deployment artefact is just the
natural step in a cloud environment (I deliberately leave the word
"cloud" undefined here), because it gives you the ability to install any software in any version
on any host and wire it to the rest in a standardized way. Doing so, you don't have
to care about the shape of the cloud while designing your service.
Use package managers to build your containers. Keep containers slim (yes, we
are in the microservices age) and provision them conveniently and accurately via the
package manager of your choice.
6. Why use Docker deployment instead of RPM?
You can still use RPM to install Docker itself, but once it is installed you profit
from the following:
1. Runtime isolation: configurable resource limits
2. Runtime isolation: port remapping, even for third-party or legacy software
3. No package dependency hell. Use different versions of PHP, Perl, Ruby, npm...
whatever on the same host...
4. Integrate the deployment of third-party or legacy software into your standard
Docker deployment
5. Profit from unified container boundaries (logging, monitoring,
backup)
6. Easier participation in the cloud. As soon as you package to a standard container and
deploy to the cloud, you profit from the cloud features you have (e.g. hot migration,
automatic backup, autoscaling, and so on).
7. Deploy an entire software stack (e.g. DB, engine, web) as one Docker image.
Sometimes a good idea.
8. Easier to start everything you need on your laptop
9. Lots of predefined containers for every kind of third-party software out there
10. No distribution borders. Run anything built for the Linux kernel on any distribution.
9. Kubernetes
• Kubernetes, or k8s (k, 8 characters, s... get it?), or "kube" if you're into brevity, is
an open-source platform that automates Linux container operations.
• Everything runs in containers - even the Kubernetes services themselves
• Distributed, fault-tolerant, multi-cloud
• Focuses on the microservice architecture
10. How to install an application in Kubernetes?
• Build Docker images
• Push the Docker images to a Docker registry
• Describe the application in terms of k8s resources:
− Deployment + Pod
− Service + Ingress
• Create the resources:
kubectl apply -f resources.yaml
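As a minimal sketch of such a resources.yaml (the name, image, and ports are placeholders, not taken from the original):

```yaml
# resources.yaml - minimal Deployment + Service sketch
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels: { app: my-app }
  template:
    metadata:
      labels: { app: my-app }
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector: { app: my-app }
  ports:
  - port: 80          # port exposed inside the cluster
    targetPort: 8080  # container port it forwards to
```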
11. Is Kubernetes a package manager? Hmmm... No!
12. Helm
As the name suggests, Helm is a tool to manage applications on Kubernetes, in
the form of Charts. Helm takes care of creating the Kubernetes manifests and
versioning them, so that rollbacks can be performed across all kinds of objects,
not just deployments. A chart can contain a deployment, service, configmap, etc.
Charts are also templated, so that variables can be changed easily. Helm can be used to define
complex applications with dependencies.
13. Helm
Helm is primarily intended as a tool to deploy manifests and manage them in
a production environment. In contrast to Draft or Gitkube, Helm is not for
developing applications but for deploying them. There is a wide variety of
pre-built charts ready to be used with Helm.
16. Helm problems
1. Problem: k8s resources are written in pure YAML
2. Full copies for dev / stage / prod environments
3. The differences are small:
a. different number of replicas
b. passwords and URIs for databases and external resources
c. ...
4. Solution: templates
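As a sketch of what such a template looks like (a hypothetical chart fragment; the value names are illustrative):

```yaml
# templates/deployment.yaml - a Helm chart template fragment
apiVersion: apps/v1
kind: Deployment
spec:
  replicas: {{ .Values.replicaCount }}   # differs per environment
  template:
    spec:
      containers:
      - name: app
        env:
        - name: DATABASE_URI
          value: {{ .Values.databaseUri | quote }}
```

Each environment then keeps only a small values file (e.g. `replicaCount: 1` for dev, `replicaCount: 5` for prod), passed in via `helm install -f values-prod.yaml`.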
24. Helm - Package manager
• You cannot just "take and run" an application in k8s
• Package format
• Signatures and authentication
• Builds and installs a package and its dependencies
• Builds the package
• Parameters that you can override
• Does it work with the Docker image itself? - No
25. Helm - Development
• Problem: fast development feedback
• If you change the code:
− build the Docker image
− push it to the repository
− pull it in the k8s cluster
• Helm + Tiller
P.S. Tiller is gone (as of Helm 3), and there is only one functional component (helm).
• The difficulty of templating large deployment configurations
31. Ksonnet
Ksonnet is a framework for writing, sharing, and deploying Kubernetes
application manifests. With its CLI, you can generate a complete application
from scratch in only a few commands, or manage a complex system at scale.
32. Ksonnet: Motivations
• DRY: Kubernetes YAML is very repetitive
− e.g. port numbers, volume names, etc.
• Configurable: deploy the same app multiple times in different environments
− e.g. staging vs prod - different image versions
− want the differences to be isolated and manually reviewable
• Extensible: want to build reusable components, and specialise them for
my environment
− e.g. the prometheus-ksonnet package
33. Ksonnet: Motivations
• Extensible: want to build abstractions and helpers
− e.g. a pattern for constructing RBAC objects, or services
• Extensible: want to impose organisation-wide opinions on my k8s config
− e.g. a name label
• Accidents: want to prevent accidental application of dev config to prod, etc.
34. Ksonnet: Jsonnet
Jsonnet is a domain-specific configuration language from Google which
allows you to define data templates.
These data templates are transformed into JSON objects using the Jsonnet
library or command-line tool. As a language, Jsonnet is an extension of JSON - a
valid JSON object is always a valid Jsonnet template.
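A minimal sketch of the language (the field names and values are illustrative):

```jsonnet
// example.jsonnet - render with: jsonnet example.jsonnet
local port = 8080;  // a local variable, reused below (DRY)
{
  service: { name: "app", port: port },
  deployment: { image: "app:v1", containerPort: port },
} + {
  // '+' merges objects; only 'image' is overridden, the rest is kept
  deployment+: { image: "app:v2" },
}
```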
37. Ksonnet
The basic building blocks are called parts, which can be mixed and matched
to create prototypes. A prototype along with parameters becomes a component,
and components can be grouped together as an application. An application can
be deployed to multiple environments.
The basic workflow is to create an application directory using ks init,
auto-generate a manifest (or write your own) for a component using ks
generate, and deploy the application to a cluster/environment using ks apply
<env>. You can manage different environments using the ks env command.
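The workflow above as a command sketch (the application name, prototype, and image are placeholders, not from the original):

```shell
ks init guestbook                        # create the application directory
cd guestbook
ks generate deployed-service web \
    --image registry.example.com/web:v1  # auto-generate a component manifest
ks env add staging                       # register an environment
ks apply staging                         # deploy all components to it
```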
38. Ksonnet
In short, Ksonnet helps you define and manage applications as collections
of components using Jsonnet, and then deploy them to different Kubernetes
clusters.
Like Helm, Ksonnet does not handle source code; it is a tool for defining
applications for Kubernetes, using Jsonnet.
44. Ksonnet modules
• A directory is a "module"
• Merge all the files in a directory together into a file with the same name as the directory
• This file becomes the thing you import
• Split files up by (micro)service, maybe a separate one for config, etc.
• A module should be self-contained / stand-alone
− It should be possible to import it into an environment without extra
consideration.
− It can import other modules, of course...
• Modules should have a hidden _config field
− The underscore signifies that it is reserved; it has no semantic meaning.
• A module should be one big dict with a bunch of well-named "global" variables
45. Ksonnet modules
• Expose each object as a "global" variable, or a well-known field in a single dict
− This allows users to extend your modules by merging stuff into
them
• ... even if the object is only "used" by other objects - don't just hide it; e.g. containers
− This is because you can't merge into lists
• Cons: you can't import a module twice per environment.
46. Ksonnet modules
_config
Only put stuff in _config if:
• It is used in the module in more than one place
− e.g. port numbers, domain names
• You need the user to specify it for the module to work
− e.g. the namespace
− In that case, make it an error "required".
• It is a "meta" variable - i.e. it controls flow in some way
− e.g. when deciding whether to expose RBAC rules or not
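A sketch of the _config convention (all names are illustrative; the error makes the namespace "required"):

```jsonnet
// module.libsonnet - sketch of the hidden _config field
{
  _config:: {
    namespace: error "required: set _config.namespace",  // user must supply
    port: 8080,                                          // used in several places
  },
  service: {
    metadata: { namespace: $._config.namespace },
    spec: { ports: [{ port: $._config.port }] },
  },
}
```

A user supplies the required value by merging, e.g. `(import "module.libsonnet") { _config+:: { namespace: "prod" } }`.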
47. Ksonnet modules
Don't put everything in _config (e.g. flags, resource requirements) - otherwise this
just ends up being template substitution and makes the config unreadable.
Remember, users can merge in values to override arbitrary fields.
48. Ksonnet modules
_images
● It is very common to want to run different image versions in different environments
● Therefore, I always put images in a dict under _images
● Have containers refer to these by name
● This also makes it easier for a CD tool
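A sketch of the _images convention (registry and tag are placeholders):

```jsonnet
// sketch: images collected in one hidden dict
{
  _images:: {
    app: "registry.example.com/app:v1",  // the only line a CD tool must bump
  },
  deployment: {
    spec: { template: { spec: { containers: [
      { name: "app", image: $._images.app },  // containers refer by name
    ] } } },
  },
}
```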
49. Ksonnet Abstractions
Now we get to the fun bit!
You can build abstractions to reduce repetition:
● serviceFor - given a deployment, make a service with the same name & ports
● rbac - create a role, binding, and service account with the given permissions
● {config,host,secret,empty}VolumeMount - mount a volume at a given path
in every container in a pod
● antiAffinity - a mixin for a Deployment, to make sure there is only one pod per node
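A sketch of how a helper like serviceFor might look (an illustrative reimplementation, not the original code; it assumes every container declares at least one port):

```jsonnet
// serviceFor.libsonnet - sketch of a deployment-to-service helper
local serviceFor(deployment) = {
  apiVersion: "v1",
  kind: "Service",
  metadata: { name: deployment.metadata.name },  // same name as deployment
  spec: {
    selector: deployment.spec.template.metadata.labels,
    ports: [
      { name: c.name, port: c.ports[0].containerPort }
      for c in deployment.spec.template.spec.containers
    ],
  },
};
{ serviceFor:: serviceFor }  // expose the helper for import
```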
50. Ksonnet problems
1. Bad documentation
2. There are practically no examples
3. Problems upgrading from ksonnet 0.8 to 0.9
4. It is often necessary to add functionality via raw JSON code
51. Ksonnet problems
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
spec:
  maxReplicas: 10
  minReplicas: 3
  metrics:
  - type: Pods
    pods:
      metricName: test_redisqlen
      targetAverageValue: 3
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: test-deploy
To add the highlighted options (the custom pods metric and the scaleTargetRef
apiVersion, shown in red on the original slide), you must add the following to the
ksonnet code:
{spec+: {metrics+: [{"pods": {"metricName": "test_redisqlen", "targetAverageValue": 3}, "type": "Pods"}]}}
{spec+: {scaleTargetRef+: {"apiVersion": "extensions/v1beta1"}}}
53. Draft
• Deploys code to a k8s cluster (automates build-push-deploy)
• Deploys code in draft-pack-supported languages without writing a Dockerfile
or k8s manifests
• Needs the draft CLI, the helm CLI, Tiller on the cluster, local Docker, and a Docker registry
• Draft builds upon Kubernetes Helm and the Kubernetes Chart format,
making it easy to construct CI pipelines from Draft-enabled applications.
• Does not work well on macOS
54. Gitkube
• Deploys code to a k8s cluster (automates build-push-deploy)
• git push to deploy, with no dependencies on your local machine
• Needs a Dockerfile and k8s manifests in the git repo, and gitkube on the cluster
56. Metaparticle
• Deploys your code in Metaparticle-supported languages to k8s (automates
build-push-deploy)
• Containerizing and deploying to k8s are defined in the language itself, in an
idiomatic way, without writing a Dockerfile or k8s YAML
• Needs the Metaparticle library for your language and local Docker
57. Skaffold
• Deploys code to a k8s cluster (automates build-push-deploy)
• Watches the source code and triggers build-push-deploy when a change
happens; the pipeline is configurable
• Needs the skaffold CLI, a Dockerfile, k8s manifests, a skaffold manifest in the folder,
local Docker, and a Docker registry
59. KSync
ksync speeds up developers who build applications for Kubernetes. It
transparently updates containers running on the cluster from your local
checkout. This enables developers to use their favorite IDEs, such as Atom or
Sublime Text, to work from inside a cluster instead of from outside it. There is
no reason to wait minutes to test code changes when you can see the results
in seconds.
60. KSync
The local piece of ksync is operated via the ksync binary. It provides some
general functionality:
● Cluster setup and initialization.
● Configuration of folders to sync to the cluster.
● Operating the details of the actual folder syncing (setting up the
connection, configuring the local and remote instances of syncthing to
move the files, managing the local syncthing process).
61. Telepresence
Have you ever wanted the quick development cycle of local code while still
having your code run within a remote Kubernetes cluster? Telepresence
allows you to run your code locally while still:
• Giving your code access to Services in a remote Kubernetes cluster.
• Giving your code access to cloud resources like AWS RDS or Google
PubSub.
• Allowing Kubernetes to access your code as if it were in a normal pod
within the cluster.
62. Conclusion
Helm works on a cluster-by-cluster basis, whereas ksonnet can be used with a
multi-cluster view, where you can have a consistent source of truth for what
should be applied across clusters. I don't think that is possible right now with
Helm/Tiller.
Also, ksonnet can be used directly by those who don't need the server-side
application aspects of Tiller. This pairs well with the "GitOps" approach,
where a source-control repo is the system of record for what should
be running on a cluster.