7. Set up the published Flowable 6.4 open source
'all-in-one' container connected to a Postgres
database
If you want to use separate containers, the
steps are the same, just more of them
Demo scenario
9. Existing state
Control machine
➔ doctl (DigitalOcean CLI)
➔ kubectl
Pre-built K8S cluster
➔ 2 worker nodes
➔ Each:
◆ 2 vCPU
◆ 4 GB RAM
To save some time in the demo I've pre-provisioned a K8S cluster on
DigitalOcean (it only takes 5 minutes, but it would be a pretty boring 5
minutes).
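A sketch of that pre-provisioning step with doctl — the cluster name and region are illustrative, and the node size shown matches the 2 vCPU / 4 GB nodes above:

```shell
# create a 2-node cluster (name, region and node size are illustrative)
doctl kubernetes cluster create flowable-demo \
  --region lon1 \
  --count 2 \
  --size s-2vcpu-4gb

# doctl merges the new cluster's credentials into kubeconfig,
# so kubectl can talk to it straight away
kubectl get nodes
```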
10. Setup Tiller on cluster
# create the tiller service account
kubectl -n kube-system create serviceaccount tiller
# bind the tiller service account to the cluster-admin role
kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller
# init tiller on the cluster
helm init --service-account tiller
11. Verify Tiller is running
kubectl get pods --namespace kube-system
12. Setup an ingress controller
# an ingress controller does the port forwarding so the outside
# world can reach the services within the cluster
helm install stable/nginx-ingress --name nginx-ingress \
  --set controller.publishService.enabled=true
# check the ingress service (after a minute or so):
kubectl --namespace default get services -o wide -w nginx-ingress-controller
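The controller only handles the traffic; an Ingress resource is still needed to map a hostname to the Flowable service. A minimal sketch — the hostname and the Flowable service name/port are assumptions for illustration:

```shell
kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: flowable-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: flowable.example.com        # replace with your DNS name
    http:
      paths:
      - path: /
        backend:
          serviceName: flowable-app   # assumed name of the Flowable service
          servicePort: 8080
EOF
```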
13. Create DNS A record
This is the external IP address of the Ingress Controller
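The record can also be created from the control machine with doctl; a hypothetical example, assuming the domain example.com is managed in DigitalOcean and the ingress controller's EXTERNAL-IP from the previous step is 203.0.113.10:

```shell
# create an A record pointing at the ingress controller's external IP
doctl compute domain records create example.com \
  --record-type A \
  --record-name flowable \
  --record-data 203.0.113.10
```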
14. Setup a database (postgres)
# here we're putting the database inside the cluster; see the
# end for the pros and cons of this
helm install stable/postgresql
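In practice you'd likely want a named release with known credentials so the Flowable config can reference them. A sketch — the value names are those used by the stable/postgresql chart of this era, so check them against your chart version, and the credentials are illustrative:

```shell
# named release with explicit credentials (illustrative values)
helm install stable/postgresql --name flowable-db \
  --set postgresqlUsername=flowable \
  --set postgresqlPassword=flowable \
  --set postgresqlDatabase=flowable
```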
15. Edit and apply config map
kubectl apply -f cfg/flowable-configmap.yaml
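A sketch of what cfg/flowable-configmap.yaml might contain: the keys follow the Spring Boot environment-variable convention the all-in-one image honours, and the host, credentials and database name are all illustrative assumptions:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: flowable-config
data:
  # Spring-style datasource settings for the all-in-one image
  # (host, credentials and database name are illustrative)
  SPRING_DATASOURCE_DRIVER_CLASS_NAME: org.postgresql.Driver
  SPRING_DATASOURCE_URL: jdbc:postgresql://flowable-db-postgresql:5432/flowable
  SPRING_DATASOURCE_USERNAME: flowable
  SPRING_DATASOURCE_PASSWORD: flowable
EOF
```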
23. Summary of part 1
● Establishing an auto-scaling, resilient K8S
cluster is trivial with today's cloud hosting
● Installing a functioning Flowable instance
requires:
○ a handful of YAML files
○ a few kubectl and helm commands
25. Helm terminology
● Charts: encapsulate the YAML files and
permit many configurations via variables
● Repositories: a public URL where
snapshot and stable charts are published
● Release: each time a new chart or chart
version is deployed it is captured, allowing
reliable rollback and roll-forward
operations
● Tiller: installed on the K8S cluster to
facilitate the Helm client (going away in v3!)
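The release concept is what makes rollback reliable: every deploy is recorded as a numbered revision. A sketch, assuming a release named flowable:

```shell
# list the recorded revisions of a release
helm history flowable

# roll back to a known-good revision
helm rollback flowable 1
```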
27. Key findings
● Compared to the previous non-containerised
deployment:
○ Comparable time, though extra steps are involved
■ Still fully automated
■ The dominant time is Flowable start-up, but by including
a health check the cluster won't switch the new pod
into service until start-up completes
○ Trivial rollback = less nail-biting!
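That health-check behaviour comes from a readiness probe on the Flowable container. A minimal sketch of the deployment fragment — the endpoint path, port and timings are assumptions, not values from the demo:

```shell
# fragment of the Flowable deployment's container spec
cat <<EOF
readinessProbe:
  httpGet:
    path: /flowable-task/actuator/health   # assumed health endpoint
    port: 8080
  initialDelaySeconds: 60   # Flowable start-up dominates deploy time
  periodSeconds: 10
EOF
```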
28. Key findings
● Jury still out on:
○ Treating the database as part of the process engine (same
chart)
■ Nice to have the option for experimentation, but a
K8S-external database seems more appropriate for
production
○ Whether K8S namespaces for multi-tenancy offer greater
isolation
■ Currently still using the Flowable tenant concept