5. MESSAGING SYSTEMS
Azure
Event Grid
Event Hubs
Service Bus
Storage Queues
GCP
Cloud Pub/Sub
Firebase Messaging
AWS
SQS
Amazon MQ
Amazon SNS
Amazon Pinpoint
Amazon Kinesis Streams
AWS IoT Message broker
Alibaba
AlibabaMQ for Apache RocketMQ
AlibabaMQ for Apache Kafka
Message Service
Oracle
Who cares?
IBM
See “Oracle”
6. ADVANTAGES TO CLOUD MESSAGING
Super easy to manage
Generally very reliable
Takes advantage of known cloud constraints
Excellent APIs
7. NSERVICEBUS CUSTOMERS 5 YEARS AGO
On premise
Fixed hardware provisioning
In house transports like MSMQ, SQL or RabbitMQ
8. NSERVICEBUS CUSTOMERS IN 5 YEARS
Cloud hosted
Elastic provisioning
Cloud transports like SQS
9. DEPLOYMENT MODELS
Physical Machines
Fully owned, responsible for hardware and software
Virtual Machines
Hardware managed but OS patching owned
Kubernetes
Responsible for some patching
Serverless
Fully managed
12. DISADVANTAGES TO VM APPROACH
Still responsible for patching and updating
Management is largely done by logging into a machine and twiddling with it
Still difficult to scale in a granular way
15. KUBERNETES VS. SERVERLESS
Kubernetes is the lingua franca of different clouds
Scaling requires manual intervention or predefined scaling rules
Minimum unit of scale is a single container
Serverless provides deeper integration with the cloud platforms
Scaling is totally transparent
Minimum unit of scale is 0
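Those "defined rules" on the Kubernetes side usually mean an autoscaler manifest. A minimal sketch of a HorizontalPodAutoscaler for a single endpoint's container (all names here are placeholders, not from the talk):

```yaml
# Illustrative HorizontalPodAutoscaler; "orders-endpoint" is a made-up name.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orders-endpoint
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders-endpoint
  minReplicas: 1        # note: never scales to zero, unlike serverless
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Even with this in place, choosing the metric, the thresholds, and the replica bounds is exactly the "manual intervention" the slide is talking about.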
22. HOW NSERVICEBUS HELPS
Preview packages available for both Azure Functions and AWS Lambdas
Helps with serialization and deserialization
Makes handling multiple message types easy
Handles batching
Facilitates transactional processing with outbox
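To make the "serialization, dispatch, and outbox" points concrete, here is a minimal handler sketch. It assumes the NServiceBus package; the message types and handler name are invented for the example and are not from the talk:

```csharp
// Sketch only: OrderPlaced/OrderAccepted are hypothetical example messages.
using System.Threading.Tasks;
using NServiceBus;

public class OrderPlaced : IEvent { public string OrderId { get; set; } }
public class OrderAccepted : IEvent { public string OrderId { get; set; } }

// NServiceBus deserializes the incoming queue message and dispatches it to
// the matching IHandleMessages<T> implementation, whether the endpoint runs
// on a VM, in Kubernetes, or inside an Azure Function / AWS Lambda.
public class OrderPlacedHandler : IHandleMessages<OrderPlaced>
{
    public Task Handle(OrderPlaced message, IMessageHandlerContext context)
    {
        // Outgoing messages sent via the context take part in the same
        // transactional session as the incoming message when the outbox
        // feature is enabled.
        return context.Publish(new OrderAccepted { OrderId = message.OrderId });
    }
}
```

The handler code is identical across hosting models; only the endpoint bootstrapping differs, which is what the Functions/Lambda preview packages take care of.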
25. SERVERLESS GOTCHAS
May scale beyond the capabilities of your other services
May be more expensive than deploying to VMs
Some restrictions on what can be run
Time limits on processing
Runtimes are specific to the cloud on which they run
Messaging semantics can be difficult to understand
29. CATCH UP WITH ME
Weekly episodes of ASP.NET Monsters https://www.youtube.com/c/Aspnetmonsters
Occasional blogging https://aspnetmonsters.com/ https://blog.simontimms.com/
Complaining about anything and everything on Twitter @stimms
Editor's notes
When I first took Udi’s distributed systems course it was a different age. We were building largely for on-premise deployment and messaging options were pretty limited: it was basically MSMQ or bust. Over the years we’ve seen a lot of changes in that space. There are loads of newer messaging systems like RabbitMQ, and there are also a lot of cloud-based messaging systems; every major cloud player has one, probably several. While many companies are still hosting their infrastructure in house, it is becoming difficult to justify. It can be cheaper to do things in-house, but finding talent who can set up and maintain the sorts of data centers companies need these days is difficult.
We are rapidly moving away from people building their own data “centers” in a closet and towards just using the cloud. For small and medium-sized companies the cloud makes a lot of sense. For very large or very technically proficient companies it can still be cheaper to roll your own hardware in a data center; I’m specifically thinking of Stack Overflow here, who have amazing technical skills on staff and do save themselves money by standing up their own servers. If you’re anything less than that, though, clouds make sense.
The scale of these data centers is mind blowing. I think this is Ireland, but it is weirdly difficult to get pictures of data centers.
I cannot stress enough how much easier it is to build out a message queue in the cloud as compared with on premise. In the time it takes you to set up the meeting with IT to talk about provisioning servers for messaging you can have an instance of ASB up and running.
Messaging in the cloud is highly reliable. You’ll hardly ever find the service is down or that it is overloaded.
Because the service runs in a known configuration, the provider can offer guarantees, like in-order messaging, that are difficult to achieve locally.
The APIs of these services also tend to be pretty good, having had years to mature, and they support multiple languages.
This is where I suspect NServiceBus will be going over the next few years. There will certainly still be a mass of customers who remain on premise for various regulatory, cost, or comfort reasons, but a good number of customers are going to be out here in the cloud.
I already see my clients moving a lot of their services to the cloud, and I think that will continue; I hope it will.
There are some cases where you need to do something highly specialized that requires a bare-metal server. Examples I can think of include high-performance databases and things which have been configured to talk directly to non-virtualized disks.
We can of course host in the cloud using an extension of the current model of just installing the NServiceBus host on a virtual machine. This doesn’t really offer us much in terms of advantages over running on data-center hardware.
It is difficult to get to a really large scale if you’re logging into machines and twiddling with them.
If you have a single VM image with all your services installed on it then when you go to scale out you get 2 machines with all your services on them. Often you need to just scale a sliver of your application instead of all the endpoints at once
If you have a bunch of VM images with individual endpoints on them, that gives you better scalability, but you’re going to end up scaling in a coarse-grained way. Maybe the login service is loaded, but is it loaded enough to justify provisioning an entire server and distributing load to it?
There is also a pretty heavy startup cost to virtual machines; typically you see provisioning and start-up times on the order of 2-3 minutes. That’s laughable when compared to getting IT to provision you a server, but it is still far too slow to absorb a sudden spike in load.
Kubernetes supports containers, and containers are pretty cool. They significantly reduce the minimum size of the thing you deploy, compared with virtual machines. All you need to deploy now is a prepackaged image which contains your application and any supporting libraries. This makes it much easier to do deployments and remain confident that what you’re deploying will run the same in the cloud as it does on your local machine. There is, however, a pretty high bar for entry into the k8s space: while there are plenty of tutorials and certifications out there, the truth is that there are a lot of concepts to learn.
Cloud providers all have their own Kubernetes hosting solution with varying degrees of maturity. In general, though, they are pretty similar and support things like serverless Kubernetes, allowing you to scale out for spiky load almost instantly.
Endpoints which receive very few messages have a low load, so having a service which only spins up when there are messages is a real cost savings. K8s is always going to have a minimum load, as will VMs.
If the load on a service isn’t known ahead of time then serverless can help: it will scale up to take the load and scale down when there is no load. You pay by execution, so you basically pay per message.
Just like electrical grids, most messaging systems have a kind of base load. At most times of the day we expect to see n messages flowing through the system, but when people first log in in the morning, or when we process a large batch file, we might see extra load over the base load. We can handle the base load using virtual machines, because they’re cheaper, and use serverless to handle the peak load.
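The base-load/peak-load split comes down to simple arithmetic: an always-on VM is a fixed monthly cost, while serverless is a per-message cost, so there is a break-even message volume. A sketch of that calculation, where every number is a made-up placeholder rather than real cloud pricing:

```python
# Break-even between an always-on VM and per-execution serverless pricing.
# All prices below are illustrative placeholders, not real cloud rates.

VM_MONTHLY_COST = 70.00          # fixed cost of one always-on VM per month
COST_PER_INVOCATION = 0.0000002  # serverless price per request
COST_PER_GB_SECOND = 0.0000166   # serverless price per GB-second of compute
MEMORY_GB = 0.5                  # memory allocated to the function
AVG_DURATION_S = 0.1             # average processing time per message

def cost_per_message() -> float:
    """Serverless cost of handling a single message."""
    return COST_PER_INVOCATION + COST_PER_GB_SECOND * MEMORY_GB * AVG_DURATION_S

def serverless_monthly_cost(messages_per_month: int) -> float:
    """Cost of handling the given volume purely on serverless."""
    return messages_per_month * cost_per_message()

def breakeven_messages() -> int:
    """Message volume above which the always-on VM becomes cheaper."""
    return int(VM_MONTHLY_COST / cost_per_message())

# Below the break-even volume, serverless wins; above it, run the base
# load on the VM and let serverless absorb only the peaks.
print(f"10M msgs/month on serverless: ${serverless_monthly_cost(10_000_000):.2f}")
print(f"break-even: ~{breakeven_messages():,} msgs/month")
```

With these placeholder rates the break-even sits in the tens of millions of messages per month, which is why a cheap VM for the steady base load plus serverless for the spikes can beat either approach alone.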
Serverless has almost no cost of entry, so if you need to get a company up quickly, or you need to prototype a solution quickly, then serverless can be a great option.
You can go all in on serverless and put all your endpoints there. This is a great option if you don’t know how much use the system will have, or if you just don’t want to worry at all about how to scale the system up and down.
You can cap the number of instances of a function you want running at a time on AWS. On Azure the limits are a little harder to enforce.
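On AWS that cap is set with reserved concurrency. A one-line sketch using the AWS CLI (the function name is a placeholder):

```shell
# Cap a Lambda at 10 concurrent executions so it cannot scale beyond what
# downstream services can absorb; "my-endpoint" is a made-up name.
aws lambda put-function-concurrency \
    --function-name my-endpoint \
    --reserved-concurrent-executions 10
```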
Functions/lambdas, while cheap, may be more expensive than just putting together a full-on VM to handle the message processing.
For Functions, at least, the integration with the .NET runtime can cause problems with different versions of libraries and the framework.
There are limits to how long you can take to process a message
The pricing of NServiceBus used to be per server, or per core, or something like that; I honestly don’t remember. But now it is a much friendlier model for both small installations and for serverless. If you only have a handful of messages and endpoints, all of a sudden this pricing becomes remarkably good.