Presented by Luke Marsden at Software Circus Amsterdam
Microservices are smashing monolithic databases into lots of pieces, and CI and CD are making it more and more challenging to test them consistently. This talk will explore the problem space and dive into detailed examples, exploring the pros and cons of both ephemeral data stores and storage orchestration.
Why should I care about stateful containers?
1. Why should I care about stateful containers?
Luke Marsden
CTO, ClusterHQ
@lmarsden @clusterhq
2. Microservices are smashing up monolithic databases
Many more database instances, many more flavours
[Diagram: copies of the app in many environments: prod, staging in cloud 2, DR in cloud 3, hosted CI, dev laptop]
17. What is it? (truth table of servers and volumes)

Servers  Volumes  What is it?
Pets     Pets     just crap
Cattle   Pets     storage orchestration
Cattle   Cattle   ephemeral stateful containers
Pet ~= HA provided by infrastructure/platform
Cattle ~= HA provided by application
18. Pros & cons

Storage orchestration
• Run any database/stateful app
• Storage layer responsible for data resilience
• Ops independent of database types
• Works within a storage zone

Ephemeral stateful containers
• Requires distributed database(s)
• Database-defined data resilience
• Ops requires understanding each data service
• Can span storage zones
40. Try Flocker today
For stateful microservices
luke@clusterhq.com
clusterhq.com
github.com/clusterhq/flocker
We are hiring in Bristol (1 hour hop from Schiphol)
+ SF Bay Area!
clusterhq.com/careers
Editor's notes
hi i’m luke from clusterhq and today i’m going to try and convince you that it’s worth considering deploying databases and other stateful workloads in containers (if you’re not already!) and talk about some ways of doing that.
but first let me tell you a story
1. In the beginning there were monolithic apps.
They typically had one database and we knew where it was.
It was in that rack over there and it was powered by Oracle or MySQL or whatever.
The data was in one flavour and it was in one place.
2. But now we’re building applications as microservices using containers. Each microservice handles its own data and that data now comes in lots of different flavours.
Developers are being encouraged to “use the best tool for the job” for each microservice. You might have some MongoDB, ElasticSearch, MySQL, PostgreSQL, Cassandra, Redis.
So an app that would have been built as a monolith is now being built as 30 separate components.
So as microservices and containers spread, we’re seeing at least an order of magnitude more data services popping up in enterprises all over the planet.
3. It gets worse: there isn't just one instance of your app.
4. You’ll want an entire staging copy of your app.
5. You have microservices popping up ephemerally on developers’ laptops.
6. You want a DR plan for what happens if Amazon goes south.
7. And you want continuous integration.
8. You’ve got all these copies of your microservices, all with separate but related silos of data — it’s lots of different parts to manage.
9. So what I want you to take away from this slide is that microservices are smashing monolithic databases into a large number of different types of databases.
10. Folks doing microservices therefore have many more database instances, in many more flavours (different types of databases). they need to be able to deploy these data services alongside their applications throughout the stages of their SDLC and across different environments, and they need to be able to manage them.
so the first objection i normally hear when i talk about stateful containers, is aren’t containers meant to be stateless?
after all, aren’t we all building 12-factor apps now? we don’t put data on the filesystem of the application containers, that means we can scale our apps by spinning up many of them, and scale them down just by blowing them away.
that’s true and 12 factor is great.
but that doesn’t mean that applications don’t have data. quite the opposite, in fact.
applications are a set of microservices, microservices are sets of containers and data services.
every application has data at its core.
it’s just that 12-factor convinced us to think of data services as external to our applications.
but that was then and this is now.
but i believe the platforms we’re building to support microservices and containers should not be limited to just stateless apps. here are some reasons to embrace the stateful container…
the first problem is that if you don’t support stateful containers, but you need to run databases (and who doesn’t) then you end up with not one but two platforms.
if you manage your stateless parts in one way, say with mesos or k8s or swarm,
and your stateful components separately, perhaps using virtualization or databases on bare metal,
then you end up with two or more separate sets of systems to manage.
as a devops team, we don’t want to have to talk to more than one system with more than one API just to stand up a new production database for a microservice.
i want to just be able to include the database in my docker compose file or marathon/k8s manifest and deploy it on the prod cluster! and then as far as possible have the database look after itself.
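to make that concrete, a compose file could look something like the sketch below. everything here is illustrative: the images and volume name are made up, and the `volume_driver: flocker` wiring assumes the flocker docker volume plugin is installed on the hosts.

```yaml
# docker-compose.yml (v1 format) - illustrative sketch, not an official example
web:
  image: myapp:latest       # hypothetical application image
  links:
    - db
db:
  image: postgres:9.4
  volume_driver: flocker    # assumes the flocker docker volume plugin is installed
  volumes:
    - 'postgres-data:/var/lib/postgresql/data'
```

the point is simply that the stateful service is declared alongside the stateless ones and deployed through the same API, with the volume following the container if it gets rescheduled.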
the other reason, apart from not wanting to have multiple platforms, is that containers promise that your app will run in exactly the same environment on your laptop as it will in production.
but that’s problematic when you introduce third party — or separately-managed — data services because now it’s not the same version of, for example, mysql running there, in fact it’s not even necessarily *mysql*.
for example, you can get something that looks a bit like mysql or postgresql from AWS or GCE
but they’re not actually mysql or postgresql, they’re patched, or could even be completely different software, so they might behave differently at runtime to how the db ran on your laptop, and they can only be provisioned via different APIs, which brings me on to…
the promise of containers is portability!
the ability to run an app or a microservice locally on your laptop, and then be able to run that exact same app in production environment.
the whole point of this movement around docker, containers, mesos, k8s, swarm is that you should be able to deploy apps in the same way wherever they’re running.
that’s why we’re all trying to layer this portable infrastructure on top of different clouds: it means you can have the same service APIs accessible on top of any infrastructure, be that bare metal, vSphere, AWS or GCE!
by deploying on this it means you don’t get tied into any specific cloud APIs.
i believe in a single platform for stateful and stateless components, and so what we’re working on at clusterhq is making it easy to spin up stateful services under your choice of platform with off the shelf open source components.
ok, so who’s in? who thinks we should consider trying to run our entire apps in containers on a portable infrastructure, including the data services?
next i want to look at the historical evolution of infrastructure architecture, look at some options for *how* to run stateful containers, and try to derive how we should handle them.
so who’s heard of cattle vs. pets?
<describe idea>
pretty obvious when applied to stateless apps
ain’t got no time for individual pet app servers or VMs…
but less obvious when applied to stateful components - arguably your data “is” a pet. you certainly care about your production database! so there are two approaches to dealing with the fact that shit happens, which i will outline…
on the left we’ve got the truth table of servers & volumes
back in the 90s, and before that even, we had computers with disks in them.
they were pet servers with pet data.
we cared about the servers and their RAID arrays and we nursed them back to health if they were sick.
some storage companies came along and invented storage boxes. and then when vmware started taking off, it was because it allowed us to take these pet servers and, by virtualising them, have vmware start automatically looking after them. so when a computer broke, the vmware cluster would bring back the VMs that were on that host. this used expensive shared storage hardware, but it was worth it, because now when a disk fails or a cpu fries or a cleaner trips over a power cable, you don’t get paged in the middle of the night. vmware would just spin up the vm on another node.
then the cloud happened
abstracted away virtualisation behind a service boundary
and notice that VMs stopped being HA
at first EC2 supported only ephemeral storage, but users couldn’t hack it: there was much demand for data which persists beyond the lifetime of a VM.
so they invented EBS which looks after your storage independently from the instances
EBS tries to keep your data safe, whereas your EC2 VMs can be killed at any time.
storage and compute are fundamentally quite different types of things!
coming back into the datacenter, remember we had SANs and VMs..
<explain the diagram>
there has recently been a shift away from big expensive SANs towards what people are calling “hyperconverged” or “software defined storage” and this is starting to become mature and popular in the enterprise.
if you set up VMware vSAN or OpenStack with Ceph or ScaleIO it will look something like this
note that this could be an implementation strategy for a cloud!
it’s worth shouting out to distributed databases here
they treat both servers and volumes as ephemeral, which is a difficult thing to do.
they have two important attributes: they allow write workloads to scale out, which is hard, and they treat everything down to the bottom layer, servers and volumes alike, as cattle for purposes of resilience. so you can take cassandra, for example, and run it straight on unreliable hardware and unreliable storage. this is great if your database supports it and does a good job of it; however, you just need to google “aphyr call me maybe” to see this doesn’t always work that well.
it’s also possible to run databases, including distributed databases, on top of software defined storage or cloud storage or SANs. then, if you want to run multiple databases on a pool of servers, some of which are good at doing automatic replication and failover, and some of which are less good at that, you can run them on top of SDS. so if a computer fails in this scenario, SDS will keep the data in that db or shard safe, but at the moment there’s a mismatch: non-distributed databases won’t come back automatically; you’d have to go in manually to bring them back.
now with our product flocker, it’s starting to become possible to run databases in containers underneath container schedulers like mesos, k8s or docker swarm. and in this mode, both distributed and singleton databases can safely coexist on top of this reliable storage layer, we can still get the benefits of scale-out we get from the distributed databases, but we can also get consistent data safety from the underlying replicated block storage layer whichever databases we’re running.
so here’s the truth table for pets versus cattle for compute and storage
to define our terms, by pet we mean <explain>
by cattle we mean <explain>
so we’ve already agreed that if both servers and volumes are pets that’s crap.
if servers are cattle but volumes are pets, i’m going to call that storage orchestration (in fact adrianco coined this term)
and if servers are cattle and volumes are cattle as well, which is apparently how netflix operates, then let’s call this ephemeral stateful containers.
i’m not going to bother defending pets, but i do want to take a minute to compare and contrast storage orch with ephemeral stateful containers.
<read the slides>
note on spanning storage zones: you may want to use a combination of these approaches as you look at different types of failure modes. within a storage zone, you may want to handle a more common failure, such as a node going down, by using storage orchestration like flocker. across storage zones or even regions, say if EBS goes down in us-east-1a or you lose a whole zone, you’ll need to rely on replication performed at the database layer. and because of the latencies involved in cross-region replication, you may need to relax your consistency requirements to eventual consistency or something like that in those really large-scale deployments.
fairly straightforward.
<explain>
this worked well enough for netflix, so if you’re ok using local storage and you really trust every data service you need to operate, you can do it this way.
this is an entirely viable strategy and it will work for people who are careful about their choice of databases.
but if you’re looking for operational consistency in the way you manage your stateful containers from an ops perspective, you’ll want to look at storage orchestration, which you can do with flocker. for the rest of this talk i’m going to talk about how to do this: treating the volumes like pets while you treat the servers like cattle. we’re going to look at how flocker, the project we’re working on at clusterhq, helps you connect reliable storage to containers on unreliable compute, and some of the beneficial use cases you get from that.
so why should data be pets when servers are cattle?
well applications are fundamentally lightweight, they scale out, they can come and go.
data is heavy.
when you’re building a platform, and you want developers to be able to throw applications at it, including stateful bits like the microservices from the beginning of this talk, it’s far simpler to assume some level of resilience in the storage layer than to require that every single data service provides availability itself.
in fact a distributed scale-out block storage solution (EBS, ceph, scaleio) often does a better job than *all* the databases you want to run on a platform.
so what does flocker do?
<describe problem with docker>
<describe solution with flocker>
<describe diagram>
so keep in mind here a fundamental principle of stateless and stateful things. stateless things can scale out - great - got more load on your web tier? throw more instances at it. this is basically a solved problem.
but with stateful things, each singleton database, or each shard of a distributed database, reads and writes to a single filesystem. that filesystem should only be mounted in one place at a time, so there must only be one instance of it. if it dies, the container framework should bring it back up. so in mesos it’s a task, in k8s it’s a service with a replication controller set to 1 copy, and swarm is still working on rescheduling tasks on failed nodes.
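to make the singleton point concrete, here’s a sketch of what a kubernetes manifest for such a database could look like. this is illustrative only: the `flocker` volume source and every name in it are assumptions for the sake of the example, not shipped configuration.

```yaml
# illustrative sketch - all names here are assumptions
apiVersion: v1
kind: ReplicationController
metadata:
  name: postgres
spec:
  replicas: 1                 # exactly one instance per filesystem
  selector:
    app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:9.4
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: data
        flocker:                   # assumed flocker volume source
          datasetName: postgres-data
```

the replication controller keeps exactly one copy of the pod running, restarting it on another node if it dies, while the volume layer reattaches the same dataset wherever the pod lands.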
but that doesn’t mean you can’t scale out data services if you choose the right software. remember earlier i spoke about distributed databases - from the platform’s point of view each shard of a distributed database is a separate container, with its own independent filesystem.
with flocker, it becomes possible to run any stateful containers in your container cluster and manage them sensibly, because your containers don’t get detached from the data they’re operating on.
so now i want to talk about some use cases
so to summarize, at clusterhq we’re connecting the container universe to the storage universe.
because while we believe that ephemeral stateful containers are doable
we believe that the operational benefits of being able to treat containers and their data as atomic units, moving them around and failing them over, are valuable.
and the customers we speak to are more often than not using EBS or SAN or SDS and so why not connect your storage to your containers?
thanks and so i’d just like to encourage you to try flocker today, you can go to our website now and spin up a 3 node environment which will allow you to play with it in minutes, then you can try out our integrations with docker swarm, compose, mesos, with k8s coming soon.
also we’re hiring in bristol which is just a 1 hour hop from schiphol in sunny england so if you’re interested in coming and helping build the data layer for containers get in touch!
questions?