I’m Lorenzo; I’m Italian but live in the UK.
I’ve worked on several large-scale websites, including the BBC, Channel 5, Ladbrokes, and iPlayer.
I spent the past two years as Chief Architect at DataSift, a hot big-data startup.
I’m going to introduce DataSift to explain what we do and how we do it.
Don’t worry, this is not a sales pitch: I’m just using DataSift as an example of how to build a scalable architecture, based on lessons learnt in the past.
Some architecture porn.
Sources: Twitter, Facebook, YouTube, Flickr, boards, forums, etc.
News agencies: Thomson Reuters, Associated Press, Al-Jazeera, NYT, Chicago Tribune, etc.
Data normalisation and augmentation: make the data rich and structured.
Language detection, demographics (gender detection), trend analysis, sentiment analysis, influence ranking, topic analysis, entity extraction.
The second stage is the core filtering engine: a scalable, highly parallel, custom-built C++ virtual machine.
It can process thousands of incoming messages per second against thousands of custom filters.
Website, public API, output streams (HTTP streaming, WebSockets), buffered streams (batches of messages), and finally...
...storage. We record everything in our Hadoop cluster (historical access, analytics).
We also have watchdogs to keep track of usage limits, licences, etc.
I’m going to give you some numbers to convey the scale we’re operating at.
Between 3,000 and 9,000 messages per second, depending on the time of day.
Now, everyone here has heard about service-oriented architectures. I’m going to share some of the lessons I’ve learnt about scaling a platform: lessons that helped me design and scale DataSift, and other large enterprise sites before it.
The first characteristic of a SOA is having several loosely-coupled services.
Separate consumers from the service implementation.
Orchestration of distinct units accessible over a network.
Communication with data in a well-defined, interoperable format.
Having decoupled services means you can scale each one horizontally.
If a service is under heavy load, on fire, you can add more nodes of the same service to keep it up, without having to duplicate the entire monolithic platform.
Avoid failover (hot-swap) configurations: they don’t work well and usually involve downtime or data loss.
Cells provide a unit of parallelisation that can be adjusted to any size as the user base grows.
Cells are added incrementally as more capacity is required.
Cells isolate failures: one cell failing does not impact other cells.
Cells provide isolation, as the storage and application horsepower to process requests is independent of other cells.
Cells enable nice capabilities, like the ability to test upgrades, implement rolling upgrades, and test different versions of software.
Cells can fail, be upgraded, and be distributed across datacenters independently of other cells.
As an example, this is the current cardinality of servers we have for each service.
Each box in the diagram has between 2 and 60+ nodes.
Let’s have a look at how to practically implement load balancing and application caching.
You can buy a hardware appliance (excellent, but expensive), or use software like HAProxy.
Set the service nodes as backend servers.
HAProxy will run health checks and reroute traffic to the healthy nodes.
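A minimal HAProxy configuration in this shape (hostnames, ports, and the /ping health-check path are illustrative):

```
frontend api_in
    bind *:80
    default_backend api_nodes

backend api_nodes
    balance roundrobin
    option httpchk GET /ping       # health check; unhealthy nodes drop out
    server node1 10.0.0.11:8080 check
    server node2 10.0.0.12:8080 check
```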
Use the random director to apply weights (send more load to a more powerful machine); it uses a random number to seed the backend selection.
The client director picks a backend based on the client’s identity. You can set the VCL variable client.identity to identify the client, e.g. from the value of a session cookie.
The hash director picks a backend based on the URL hash value (req.hash).
The fallback director picks the first backend that is healthy, considering them in the order in which they are listed in its definition.
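A Varnish 3-era VCL sketch of a weighted client director (backend addresses and weights are illustrative):

```
backend node1 { .host = "10.0.0.11"; .port = "8080"; }
backend node2 { .host = "10.0.0.12"; .port = "8080"; }

# client director: sticks each client to a backend based on client.identity
director balanced client {
    { .backend = node1; .weight = 2; }   # more powerful machine, more load
    { .backend = node2; .weight = 1; }
}

sub vcl_recv {
    set client.identity = req.http.Cookie;  # e.g. a session cookie value
    set req.backend = balanced;
}
```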
It works out of the box: just set Cache-Control headers.
It supports ETags, so you can cache several versions of the same page for different customers.
Edge-Side Includes (ESI).
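The ETag mechanics can be sketched in a few lines of Python (a toy handler illustrating the protocol, not Varnish itself):

```python
import hashlib

def make_etag(body):
    # Strong ETag derived from the response body.
    return '"%s"' % hashlib.sha1(body).hexdigest()

def respond(body, if_none_match=None):
    etag = make_etag(body)
    headers = {"ETag": etag, "Cache-Control": "public, max-age=60"}
    if if_none_match == etag:
        return 304, headers, b""   # client's cached copy is still fresh
    return 200, headers, body

status1, headers, _ = respond(b"<html>profile</html>")
# A revalidation request sends the ETag back in If-None-Match:
status2, _, _ = respond(b"<html>profile</html>", headers["ETag"])
```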
We’ve seen some characteristics of service-oriented architectures: what they are and why they are useful.
There’s another incredibly important defining characteristic of SOAs: the API, i.e. the contract between any two services. It’s a software-to-software interface, not a user interface.
Keep it simple: RESTful verbs, actions on resources, simple data structures in the exchange format.
Define the action, the endpoint, the parameters, and the response.
Reserve an endpoint for a description of the service’s API.
Use the response to generate API docs.
Feed it to a test console as configuration.
I recommend a tool that really brings your API docs alive: Mashery IO Docs, an example of working documentation.
Define an API for all services (internal AND external).
Reserve an endpoint to describe the API of the service itself.
Keep it RESTful. Personal preference for a plain-text format (XML or JSON).
Reserve the root endpoint (or a /discovery or /self endpoint) for a description of the service’s API.
Bonus: if the response is in the Mashery IO Docs format, you can have a web interface to document and test the API.
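A sketch of such a self-description in Python, with field names loosely modelled on the IO Docs format (the exact schema and service name here are illustrative):

```python
import json

# Hypothetical self-description served from a reserved endpoint
# such as "/" or "/discovery".
API_DESCRIPTION = {
    "name": "user-service",
    "version": "1.0",
    "endpoints": [
        {
            "name": "Get user",
            "methods": [
                {
                    "HTTPMethod": "GET",
                    "URI": "/users/:id",
                    "parameters": [
                        {"Name": "id", "Required": "Y", "Type": "string"},
                    ],
                },
            ],
        },
    ],
}

def discovery_response():
    # The same payload can drive generated docs and a test console.
    return json.dumps(API_DESCRIPTION)

doc = json.loads(discovery_response())
```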
Instead of hard-coding the configuration of all the services everywhere, expose the configuration via a separate service.
ZooKeeper is a centralised service for maintaining configuration information, naming, providing distributed synchronisation, and providing group services.
It looks like a distributed file system: each node can have children and properties.
Each service can register itself at startup and become available to receive requests.
The consumer simply reads the properties of a node (a file-system-like path).
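The register-then-read pattern in miniature, with a toy in-memory registry standing in for the ZooKeeper tree (a real deployment would use ephemeral znodes via a ZooKeeper client, so entries vanish when a service dies; paths and node names here are illustrative):

```python
class Registry:
    """Toy stand-in for a ZooKeeper-like tree of nodes with properties."""

    def __init__(self):
        self.tree = {}

    def register(self, path, node, props):
        # A service calls this at startup to announce itself.
        self.tree.setdefault(path, {})[node] = props

    def unregister(self, path, node):
        self.tree.get(path, {}).pop(node, None)

    def children(self, path):
        # A consumer lists live nodes under a service path...
        return sorted(self.tree.get(path, {}))

    def properties(self, path, node):
        # ...then reads one node's connection properties.
        return self.tree[path][node]

reg = Registry()
reg.register("/services/filter", "node-1", {"host": "10.0.0.11", "port": 8080})
reg.register("/services/filter", "node-2", {"host": "10.0.0.12", "port": 8080})
```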
As we saw, each component should be able to scale horizontally.
There are two possible problems:
- when the processing itself is expensive
- when there’s too much data
Internally: use queues and workers to make processing asynchronous, and distribute data to parallel workers.
Curl-multi, low timeouts.
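The queue-and-worker idea in miniature, using Python’s standard library as a stand-in for a real broker such as Redis (the uppercase step is a placeholder for real processing):

```python
import queue
import threading

# A bounded queue decouples a fast producer from a slower consumer:
# when the buffer is full, put() blocks, giving natural backpressure.
buf = queue.Queue(maxsize=100)
processed = []

def consumer():
    while True:
        msg = buf.get()
        if msg is None:          # poison pill: shut down cleanly
            break
        processed.append(msg.upper())   # stand-in for real work

t = threading.Thread(target=consumer)
t.start()
for msg in ["a", "b", "c"]:
    buf.put(msg)                 # producer returns immediately
buf.put(None)
t.join()
```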
Don’t move the data to the processing nodes: I/O is very expensive. Move the processing to where the data is instead.
Second part of the talk: moving data around (communication across services).
- Asynchronous communication
- Decoupling (buffers)
- Load balancing
- Distribution
- High throughput
- In-memory, persistent, distributed
At DataSift we use different messaging systems, depending on volume, destination, and communication type.
Source/sink, producer/consumer.
- Asynchronous communication
- Decoupling (buffers)
- Load balancing
- Distribution
- High throughput
- In-memory, persistent, distributed
http://www.justincarmony.com/blog/2012/01/10/php-workers-with-redis-solo/
http://blog.meltingice.net/programming/creating-processing-queues-redis/
We’ve seen simple buffering. Let’s now look at a few more useful patterns.
The first example shows how to move from one processor to several nodes, to distribute the data and process it in parallel.
PUSH-PULL is an efficient pattern for workload distribution.
Workload distribution with workers.
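A sketch of the PUSH-PULL pipeline shape, with plain threads and queues standing in for ZeroMQ sockets: one “ventilator” pushes work, several workers pull it, and a single “sink” collects the results (squaring is a placeholder for real processing):

```python
import queue
import threading

work, sink = queue.Queue(), queue.Queue()

def worker():
    # Each worker pulls from the shared work queue: the queue itself
    # load-balances across however many workers you start.
    while True:
        item = work.get()
        if item is None:         # poison pill
            break
        sink.put(item ** 2)      # stand-in for the expensive step

workers = [threading.Thread(target=worker) for _ in range(3)]
for t in workers:
    t.start()
for n in range(6):               # ventilator distributes the load
    work.put(n)
for _ in workers:
    work.put(None)
for t in workers:
    t.join()
results = sorted(sink.get() for _ in range(6))
```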
You can also invert producers and consumers and have a multiplexer that joins messages coming from several nodes back into a single stream.
The second pattern shows how to distribute data in a non-exclusive way: each consumer gets a copy of the same data, and items are not removed from the queue when one consumer gets them.
The producer doesn’t need to know who’s listening; it doesn’t need a registry of addresses of connected consumers.
Mongrel2.
You can also broadcast to different datacenters.
Listeners can subscribe to one or more topics (different output channels).
In ZeroMQ v3, filtering is done on the publisher side.
Broadcasting.
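A toy publish-subscribe bus with ZeroMQ-style prefix matching on topics; every matching subscriber gets its own copy, and the publisher never tracks who is listening (topic names are illustrative):

```python
class PubSub:
    """Minimal topic-based fan-out: nothing is 'consumed' off a queue."""

    def __init__(self):
        self.subs = []           # (topic_prefix, inbox) pairs

    def subscribe(self, prefix):
        inbox = []
        self.subs.append((prefix, inbox))
        return inbox

    def publish(self, topic, msg):
        # ZeroMQ-style prefix matching: "" subscribes to everything.
        for prefix, inbox in self.subs:
            if topic.startswith(prefix):
                inbox.append((topic, msg))

bus = PubSub()
uk = bus.subscribe("news.uk")        # only UK news
all_news = bus.subscribe("news.")    # everything under news.
bus.publish("news.uk.politics", "budget")
bus.publish("news.us", "election")
```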
An interesting idea, if you have a highly dynamic site or service where each update affects several other users or pages, is to have an internal data bus that carries all the information, with updates labelled with topics and all the services/users subscribing to the relevant topics.
Tumblr: internal firehose. Each service subscribes to the events it’s interested in.
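The internal-firehose idea in miniature: a toy event bus where each service registers a handler for the topics it cares about (the topic name and the subscribing services are illustrative):

```python
class Firehose:
    """Internal data bus: every update is published once, labelled with
    a topic, and each interested service registers a handler for it."""

    def __init__(self):
        self.handlers = {}

    def on(self, topic, handler):
        self.handlers.setdefault(topic, []).append(handler)

    def emit(self, topic, event):
        # Fan the event out to every service subscribed to this topic.
        for handler in self.handlers.get(topic, []):
            handler(event)

bus = Firehose()
seen_by_cache, seen_by_search = [], []
bus.on("user.updated", seen_by_cache.append)   # cache-invalidation service
bus.on("user.updated", seen_by_search.append)  # search indexer
bus.emit("user.updated", {"id": 42})           # one emit, both services see it
```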
Statistics are better than logs. At certain volumes, logs are just noise (and a waste of space); make your application dynamically configurable so you can turn logging on only when strictly necessary. Statsd / Graphite.
Monitor everything. Set alerts based on deviation from the norm, not just on absolute thresholds.
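A sketch of a tiny statsd-style client; the wire format (`name:value|type` over UDP) matches statsd’s, while the class shape and host/port defaults are illustrative:

```python
import socket

class Statsd:
    """Metrics go out as fire-and-forget UDP, so instrumentation
    never blocks and never breaks the application."""

    def __init__(self, host="127.0.0.1", port=8125):
        self.addr = (host, port)
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def format(self, name, value, kind):
        # statsd wire format: "name:value|type" ("c" counter, "ms" timer).
        return f"{name}:{value}|{kind}"

    def incr(self, name, n=1):
        self._send(self.format(name, n, "c"))

    def timing(self, name, ms):
        self._send(self.format(name, ms, "ms"))

    def _send(self, payload):
        try:
            self.sock.sendto(payload.encode(), self.addr)
        except OSError:
            pass  # metrics must never take the app down

s = Statsd()
s.incr("api.requests")
s.timing("api.latency", 123)
```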
Logging at scale is useless: too much noise. Instrumentation is essential.
You need to identify bottlenecks quickly or suffer prolonged and painful outages. The question "How come we didn't catch that earlier?" addresses the incident, not the problem. The alternative question, "What in our process is flawed that allowed us to launch the service without the appropriate monitoring to catch such an issue?", addresses the people and the processes that allowed the event you just had, and every other event for which you didn't have appropriate monitoring.
Designing to be monitored is an approach wherein one builds monitoring into the application rather than around it: "How do we know when it's starting to behave poorly?" First, you need to answer the question "Is there a problem?" with user-experience and business-metric monitors (lower click-through rate, shopping-cart abandonment rate, ...). Then you need to identify where the problem is with system monitors (the weakness here is that they usually rely on threshold alerts, i.e. checking whether something is behaving outside of our expectations, rather than alerting when it's performing significantly differently than in the past). Finally, you need to identify what the problem is thanks to application monitoring.
Not all monitoring data is valuable: too much of it only creates noise while wasting time and resources. It's advisable to save only a summary of the reports over time, to keep costs down while still providing value. In the ideal world, incidents and crises are predicted and avoided by a robust monitoring solution.
Logging at scale is useless. Too much noise. Instrumentation is essential.\nYou need to identify bottlenecks quickly or suffer prolonged and painful outages. The question of "How come we didn't catch that earlier?" addresses the incident, not the problem. The alternative question "What in our process is flawed that allowed us to launch the service without the appropriate monitoring to catch such an issue?" addresses the people and the processes that allowed the event you just had and every other event for which you didn't have appropriate monitoring.\nDesigning to be monitored is an approach wherein one builds monitoring into the application rather than around it. "How do we know when it's starting to behave poorly?" First, you need to answer the question "Is there a problem?" with user experience and business metrics monitors (lower click-through rate, shopping cart abandonment rate, ...). Then you need to identify where the problem is with system monitors (the problem with this is that it's usually relying on threshold alerts - i.e. checking if something is behaving outside of our expectations - rather than alerting on when it's performing significantly differently than in the past). Finally you need to identify what is the problem thanks to application monitoring. \nNot all monitoring data is valuable, too much of it only creates noise, while wasting time and resources. It's advisable to only save a summary of the reports over time to keep costs down while still providing value. In the ideal world, incidents and crises are predicted and avoided by a robust monitoring solution.\n
Logging at scale is useless. Too much noise. Instrumentation is essential.\nYou need to identify bottlenecks quickly or suffer prolonged and painful outages. The question of "How come we didn't catch that earlier?" addresses the incident, not the problem. The alternative question "What in our process is flawed that allowed us to launch the service without the appropriate monitoring to catch such an issue?" addresses the people and the processes that allowed the event you just had and every other event for which you didn't have appropriate monitoring.\nDesigning to be monitored is an approach wherein one builds monitoring into the application rather than around it. "How do we know when it's starting to behave poorly?" First, you need to answer the question "Is there a problem?" with user experience and business metrics monitors (lower click-through rate, shopping cart abandonment rate, ...). Then you need to identify where the problem is with system monitors (the problem with this is that it's usually relying on threshold alerts - i.e. checking if something is behaving outside of our expectations - rather than alerting on when it's performing significantly differently than in the past). Finally you need to identify what is the problem thanks to application monitoring. \nNot all monitoring data is valuable, too much of it only creates noise, while wasting time and resources. It's advisable to only save a summary of the reports over time to keep costs down while still providing value. In the ideal world, incidents and crises are predicted and avoided by a robust monitoring solution.\n
Logging at scale is useless. Too much noise. Instrumentation is essential.\nYou need to identify bottlenecks quickly or suffer prolonged and painful outages. The question of "How come we didn't catch that earlier?" addresses the incident, not the problem. The alternative question "What in our process is flawed that allowed us to launch the service without the appropriate monitoring to catch such an issue?" addresses the people and the processes that allowed the event you just had and every other event for which you didn't have appropriate monitoring.\nDesigning to be monitored is an approach wherein one builds monitoring into the application rather than around it. "How do we know when it's starting to behave poorly?" First, you need to answer the question "Is there a problem?" with user experience and business metrics monitors (lower click-through rate, shopping cart abandonment rate, ...). Then you need to identify where the problem is with system monitors (the problem with this is that it's usually relying on threshold alerts - i.e. checking if something is behaving outside of our expectations - rather than alerting on when it's performing significantly differently than in the past). Finally you need to identify what is the problem thanks to application monitoring. \nNot all monitoring data is valuable, too much of it only creates noise, while wasting time and resources. It's advisable to only save a summary of the reports over time to keep costs down while still providing value. In the ideal world, incidents and crises are predicted and avoided by a robust monitoring solution.\n
Logging at scale is useless. Too much noise. Instrumentation is essential.\nYou need to identify bottlenecks quickly or suffer prolonged and painful outages. The question of "How come we didn't catch that earlier?" addresses the incident, not the problem. The alternative question "What in our process is flawed that allowed us to launch the service without the appropriate monitoring to catch such an issue?" addresses the people and the processes that allowed the event you just had and every other event for which you didn't have appropriate monitoring.\nDesigning to be monitored is an approach wherein one builds monitoring into the application rather than around it. "How do we know when it's starting to behave poorly?" First, you need to answer the question "Is there a problem?" with user experience and business metrics monitors (lower click-through rate, shopping cart abandonment rate, ...). Then you need to identify where the problem is with system monitors (the problem with this is that it's usually relying on threshold alerts - i.e. checking if something is behaving outside of our expectations - rather than alerting on when it's performing significantly differently than in the past). Finally you need to identify what is the problem thanks to application monitoring. \nNot all monitoring data is valuable, too much of it only creates noise, while wasting time and resources. It's advisable to only save a summary of the reports over time to keep costs down while still providing value. In the ideal world, incidents and crises are predicted and avoided by a robust monitoring solution.\n
Logging at scale is useless. Too much noise. Instrumentation is essential.\nYou need to identify bottlenecks quickly or suffer prolonged and painful outages. The question of "How come we didn't catch that earlier?" addresses the incident, not the problem. The alternative question "What in our process is flawed that allowed us to launch the service without the appropriate monitoring to catch such an issue?" addresses the people and the processes that allowed the event you just had and every other event for which you didn't have appropriate monitoring.\nDesigning to be monitored is an approach wherein one builds monitoring into the application rather than around it. "How do we know when it's starting to behave poorly?" First, you need to answer the question "Is there a problem?" with user experience and business metrics monitors (lower click-through rate, shopping cart abandonment rate, ...). Then you need to identify where the problem is with system monitors (the problem with this is that it's usually relying on threshold alerts - i.e. checking if something is behaving outside of our expectations - rather than alerting on when it's performing significantly differently than in the past). Finally you need to identify what is the problem thanks to application monitoring. \nNot all monitoring data is valuable, too much of it only creates noise, while wasting time and resources. It's advisable to only save a summary of the reports over time to keep costs down while still providing value. In the ideal world, incidents and crises are predicted and avoided by a robust monitoring solution.\n
We collect millions of events every second.\nThe importance of people: devops who know what to monitor and how, who can use and write the tools, and who are fully dedicated to the job. Useful: mobile-phone apps receiving alerts from Zenoss.\nWe use different technologies; it's very easy to set up a new ZeroMQ listener.\nWe use StatsD (from Etsy, inspired by earlier work at Flickr), Zenoss and Graphite.\n
Here’s a photo of our monitoring wall. We even have an emergency lighting with a siren, triggered by Zenoss alerts.\n
http://www.apievangelist.com/2011/06/23/api-ecosystem-tracking-with-statsd-and-graphite/\nhttp://mat.github.com/statsd-railscamp2011-slides/\n\n
With the Etsy library you can sample the sending rate, and the transport is UDP. We created a wrapper to buffer and aggregate stats in memory for a while, then flush them at regular intervals, which saves a LOT of bandwidth.\n
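The buffering wrapper described above could look roughly like this. The class and its flush policy are a hypothetical sketch of the idea (not DataSift's actual code); the wire format `<name>:<value>|c` for counters is the standard StatsD one.

```python
import socket
import threading
from collections import defaultdict

class BufferedStatsd:
    """Aggregate counters in memory and flush one UDP datagram per
    interval, instead of sending one packet per increment."""

    def __init__(self, host="127.0.0.1", port=8125, interval=10.0):
        self.addr = (host, port)
        self.interval = interval
        self.counters = defaultdict(int)
        self.lock = threading.Lock()
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def incr(self, metric, n=1):
        # cheap in-memory aggregation on the hot path
        with self.lock:
            self.counters[metric] += n

    def flush(self):
        # swap the buffer out under the lock, serialise outside it
        with self.lock:
            pending, self.counters = self.counters, defaultdict(int)
        if pending:
            # StatsD counter format, newline-separated in one datagram
            payload = "\n".join(f"{k}:{v}|c" for k, v in pending.items())
            self.sock.sendto(payload.encode(), self.addr)
```

A timer (or a background thread sleeping for `interval` seconds) would call `flush()` periodically; a thousand `incr()` calls then cost one packet instead of a thousand.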
Monitor at the application level, the system level and the infrastructure level. Build a heatmap of every link in the pipeline (physical and logical). Network rib-cages like this one are NOT ENOUGH: you want to contextualise the metrics you receive.\n + Cacti\n
\n
When you process real-time data in a complex pipeline made of several stages, you need a way of telling immediately IF there is a problem and WHERE it is. You don't have time to debug; you need to SEE.\nMeasure throughput and latency at every stage.\n
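One simple way to get per-stage throughput and latency is to stamp each message at ingress and record elapsed time as it passes each stage. A minimal sketch (the stage names and `_ingress_ts` field are illustrative):

```python
import time
from collections import defaultdict

class PipelineStats:
    """Stamp messages at ingress, then record throughput and
    end-to-end latency at every stage, so a dashboard can show
    WHERE the pipeline is backing up."""

    def __init__(self):
        self.counts = defaultdict(int)
        self.latency_sum = defaultdict(float)

    def ingress(self, message):
        message["_ingress_ts"] = time.time()  # wall-clock at entry
        return message

    def record(self, stage, message):
        self.counts[stage] += 1
        self.latency_sum[stage] += time.time() - message["_ingress_ts"]

    def report(self):
        return {
            stage: {
                "messages": n,
                "avg_latency_ms": 1000.0 * self.latency_sum[stage] / n,
            }
            for stage, n in self.counts.items()
        }
```

If the augmentation stage shows normal throughput but the filtering stage's latency is climbing, you know where to look without attaching a debugger to anything.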
Information density is important, but don't overdo it: keep the signal-to-noise ratio high.\nUse colours, and let the visual cortex do the cognitive work. Normalise your metrics.\nIntuition is involuntary, fast, effortless, invisible.\nAttention is voluntary, slow, difficult, visible.\n
\n
Happy to talk about any of them.\n
- N+1 design: ensure that everything you deploy has at least one additional instance available in the event of failure.\n- Design to roll back: building rollback capability into an app helps limit the scalability impact of any given release.\n- Design to disable: being able to switch features off adds the flexibility of keeping the most recent release in production while limiting the impact of an offending feature.\n- Design to be monitored: you want your system to tell you when it's performing differently from normal, in addition to telling you when it's not functioning at all.\n- Design for multiple live sites: it usually costs less than operating a hot site plus a cold disaster-recovery site.\n- Use mature technology: early adopters take on the risk of finding the bugs; availability and reliability matter.\n- Design asynchronously: asynchronous systems tend to be more fault-tolerant under extreme load.\n- Design stateless systems (if necessary, store state with the end users).\n- Buy when non-core.\n- Scale out, not up (with commodity hardware; split horizontally in terms of data, transactions and customers).\n- Design for any technology, not for a specific product or vendor.\n
- N+1 design (ensure that everything you develop has at least one additional instance of that system in the event of failure)\n- Designing the capability to roll back into an app helps limit the scalability impact of any given release.\n- Designing to disable features adds the flexibility of keeping the most recent release in production while limiting / containing the impact of offending features or functionality.\n- Design to be monitored: you want your system to identify when it’s performing differently than it normally operates in addition to telling you when it’s not functioning properly.\n- Design for multiple live sites: it usually costs less than the operation of a hot site and a cold disaster recovery site.\n- Use mature technology: early adopters risk a lot in finding the bugs; availability and reliability are important.\n- Asynchronous design: asynchronous systems tend to be more fault tolerant to extreme load.\n- Stateless Systems (if necessary, store state with the end users)\n- Buy when non-core\n- Scale out not up (with commodity hardware; horizontal split in terms of data, transactions and customers).\n- Design for any technology, not for a specific product/vendor\n
- N+1 design (ensure that everything you develop has at least one additional instance of that system in the event of failure)\n- Designing the capability to roll back into an app helps limit the scalability impact of any given release.\n- Designing to disable features adds the flexibility of keeping the most recent release in production while limiting / containing the impact of offending features or functionality.\n- Design to be monitored: you want your system to identify when it’s performing differently than it normally operates in addition to telling you when it’s not functioning properly.\n- Design for multiple live sites: it usually costs less than the operation of a hot site and a cold disaster recovery site.\n- Use mature technology: early adopters risk a lot in finding the bugs; availability and reliability are important.\n- Asynchronous design: asynchronous systems tend to be more fault tolerant to extreme load.\n- Stateless Systems (if necessary, store state with the end users)\n- Buy when non-core\n- Scale out not up (with commodity hardware; horizontal split in terms of data, transactions and customers).\n- Design for any technology, not for a specific product/vendor\n
- N+1 design (ensure that everything you develop has at least one additional instance of that system in the event of failure)\n- Designing the capability to roll back into an app helps limit the scalability impact of any given release.\n- Designing to disable features adds the flexibility of keeping the most recent release in production while limiting / containing the impact of offending features or functionality.\n- Design to be monitored: you want your system to identify when it’s performing differently than it normally operates in addition to telling you when it’s not functioning properly.\n- Design for multiple live sites: it usually costs less than the operation of a hot site and a cold disaster recovery site.\n- Use mature technology: early adopters risk a lot in finding the bugs; availability and reliability are important.\n- Asynchronous design: asynchronous systems tend to be more fault tolerant to extreme load.\n- Stateless Systems (if necessary, store state with the end users)\n- Buy when non-core\n- Scale out not up (with commodity hardware; horizontal split in terms of data, transactions and customers).\n- Design for any technology, not for a specific product/vendor\n
- N+1 design (ensure that everything you develop has at least one additional instance of that system in the event of failure)\n- Designing the capability to roll back into an app helps limit the scalability impact of any given release.\n- Designing to disable features adds the flexibility of keeping the most recent release in production while limiting / containing the impact of offending features or functionality.\n- Design to be monitored: you want your system to identify when it’s performing differently than it normally operates in addition to telling you when it’s not functioning properly.\n- Design for multiple live sites: it usually costs less than the operation of a hot site and a cold disaster recovery site.\n- Use mature technology: early adopters risk a lot in finding the bugs; availability and reliability are important.\n- Asynchronous design: asynchronous systems tend to be more fault tolerant to extreme load.\n- Stateless Systems (if necessary, store state with the end users)\n- Buy when non-core\n- Scale out not up (with commodity hardware; horizontal split in terms of data, transactions and customers).\n- Design for any technology, not for a specific product/vendor\n
- N+1 design (ensure that everything you develop has at least one additional instance of that system in the event of failure)\n- Designing the capability to roll back into an app helps limit the scalability impact of any given release.\n- Designing to disable features adds the flexibility of keeping the most recent release in production while limiting / containing the impact of offending features or functionality.\n- Design to be monitored: you want your system to identify when it’s performing differently than it normally operates in addition to telling you when it’s not functioning properly.\n- Design for multiple live sites: it usually costs less than the operation of a hot site and a cold disaster recovery site.\n- Use mature technology: early adopters risk a lot in finding the bugs; availability and reliability are important.\n- Asynchronous design: asynchronous systems tend to be more fault tolerant to extreme load.\n- Stateless Systems (if necessary, store state with the end users)\n- Buy when non-core\n- Scale out not up (with commodity hardware; horizontal split in terms of data, transactions and customers).\n- Design for any technology, not for a specific product/vendor\n
- N+1 design (ensure that everything you develop has at least one additional instance of that system in the event of failure)\n- Designing the capability to roll back into an app helps limit the scalability impact of any given release.\n- Designing to disable features adds the flexibility of keeping the most recent release in production while limiting / containing the impact of offending features or functionality.\n- Design to be monitored: you want your system to identify when it’s performing differently than it normally operates in addition to telling you when it’s not functioning properly.\n- Design for multiple live sites: it usually costs less than the operation of a hot site and a cold disaster recovery site.\n- Use mature technology: early adopters risk a lot in finding the bugs; availability and reliability are important.\n- Asynchronous design: asynchronous systems tend to be more fault tolerant to extreme load.\n- Stateless Systems (if necessary, store state with the end users)\n- Buy when non-core\n- Scale out not up (with commodity hardware; horizontal split in terms of data, transactions and customers).\n- Design for any technology, not for a specific product/vendor\n
- N+1 design (ensure that everything you develop has at least one additional instance of that system in the event of failure)\n- Designing the capability to roll back into an app helps limit the scalability impact of any given release.\n- Designing to disable features adds the flexibility of keeping the most recent release in production while limiting / containing the impact of offending features or functionality.\n- Design to be monitored: you want your system to identify when it’s performing differently than it normally operates in addition to telling you when it’s not functioning properly.\n- Design for multiple live sites: it usually costs less than the operation of a hot site and a cold disaster recovery site.\n- Use mature technology: early adopters risk a lot in finding the bugs; availability and reliability are important.\n- Asynchronous design: asynchronous systems tend to be more fault tolerant to extreme load.\n- Stateless Systems (if necessary, store state with the end users)\n- Buy when non-core\n- Scale out not up (with commodity hardware; horizontal split in terms of data, transactions and customers).\n- Design for any technology, not for a specific product/vendor\n
- N+1 design (ensure that everything you develop has at least one additional instance of that system in the event of failure)\n- Designing the capability to roll back into an app helps limit the scalability impact of any given release.\n- Designing to disable features adds the flexibility of keeping the most recent release in production while limiting / containing the impact of offending features or functionality.\n- Design to be monitored: you want your system to identify when it’s performing differently than it normally operates in addition to telling you when it’s not functioning properly.\n- Design for multiple live sites: it usually costs less than the operation of a hot site and a cold disaster recovery site.\n- Use mature technology: early adopters risk a lot in finding the bugs; availability and reliability are important.\n- Asynchronous design: asynchronous systems tend to be more fault tolerant to extreme load.\n- Stateless Systems (if necessary, store state with the end users)\n- Buy when non-core\n- Scale out not up (with commodity hardware; horizontal split in terms of data, transactions and customers).\n- Design for any technology, not for a specific product/vendor\n
- N+1 design (ensure that everything you develop has at least one additional instance of that system in the event of failure)\n- Designing the capability to roll back into an app helps limit the scalability impact of any given release.\n- Designing to disable features adds the flexibility of keeping the most recent release in production while limiting / containing the impact of offending features or functionality.\n- Design to be monitored: you want your system to identify when it’s performing differently than it normally operates in addition to telling you when it’s not functioning properly.\n- Design for multiple live sites: it usually costs less than the operation of a hot site and a cold disaster recovery site.\n- Use mature technology: early adopters risk a lot in finding the bugs; availability and reliability are important.\n- Asynchronous design: asynchronous systems tend to be more fault tolerant to extreme load.\n- Stateless Systems (if necessary, store state with the end users)\n- Buy when non-core\n- Scale out not up (with commodity hardware; horizontal split in terms of data, transactions and customers).\n- Design for any technology, not for a specific product/vendor\n
- N+1 design (ensure that everything you develop has at least one additional instance of that system in the event of failure)\n- Designing the capability to roll back into an app helps limit the scalability impact of any given release.\n- Designing to disable features adds the flexibility of keeping the most recent release in production while limiting / containing the impact of offending features or functionality.\n- Design to be monitored: you want your system to identify when it’s performing differently than it normally operates in addition to telling you when it’s not functioning properly.\n- Design for multiple live sites: it usually costs less than the operation of a hot site and a cold disaster recovery site.\n- Use mature technology: early adopters risk a lot in finding the bugs; availability and reliability are important.\n- Asynchronous design: asynchronous systems tend to be more fault tolerant to extreme load.\n- Stateless Systems (if necessary, store state with the end users)\n- Buy when non-core\n- Scale out not up (with commodity hardware; horizontal split in terms of data, transactions and customers).\n- Design for any technology, not for a specific product/vendor\n
- N+1 design (ensure that everything you develop has at least one additional instance of that system in the event of failure)\n- Designing the capability to roll back into an app helps limit the scalability impact of any given release.\n- Designing to disable features adds the flexibility of keeping the most recent release in production while limiting / containing the impact of offending features or functionality.\n- Design to be monitored: you want your system to identify when it’s performing differently than it normally operates in addition to telling you when it’s not functioning properly.\n- Design for multiple live sites: it usually costs less than the operation of a hot site and a cold disaster recovery site.\n- Use mature technology: early adopters risk a lot in finding the bugs; availability and reliability are important.\n- Asynchronous design: asynchronous systems tend to be more fault tolerant to extreme load.\n- Stateless Systems (if necessary, store state with the end users)\n- Buy when non-core\n- Scale out not up (with commodity hardware; horizontal split in terms of data, transactions and customers).\n- Design for any technology, not for a specific product/vendor\n
- N+1 design (ensure that everything you develop has at least one additional instance of that system in the event of failure)\n- Designing the capability to roll back into an app helps limit the scalability impact of any given release.\n- Designing to disable features adds the flexibility of keeping the most recent release in production while limiting / containing the impact of offending features or functionality.\n- Design to be monitored: you want your system to identify when it’s performing differently than it normally operates in addition to telling you when it’s not functioning properly.\n- Design for multiple live sites: it usually costs less than the operation of a hot site and a cold disaster recovery site.\n- Use mature technology: early adopters risk a lot in finding the bugs; availability and reliability are important.\n- Asynchronous design: asynchronous systems tend to be more fault tolerant to extreme load.\n- Stateless Systems (if necessary, store state with the end users)\n- Buy when non-core\n- Scale out not up (with commodity hardware; horizontal split in terms of data, transactions and customers).\n- Design for any technology, not for a specific product/vendor\n
- N+1 design (ensure that everything you develop has at least one additional instance of that system in the event of failure)\n- Designing the capability to roll back into an app helps limit the scalability impact of any given release.\n- Designing to disable features adds the flexibility of keeping the most recent release in production while limiting / containing the impact of offending features or functionality.\n- Design to be monitored: you want your system to identify when it’s performing differently than it normally operates in addition to telling you when it’s not functioning properly.\n- Design for multiple live sites: it usually costs less than the operation of a hot site and a cold disaster recovery site.\n- Use mature technology: early adopters risk a lot in finding the bugs; availability and reliability are important.\n- Asynchronous design: asynchronous systems tend to be more fault tolerant to extreme load.\n- Stateless Systems (if necessary, store state with the end users)\n- Buy when non-core\n- Scale out not up (with commodity hardware; horizontal split in terms of data, transactions and customers).\n- Design for any technology, not for a specific product/vendor\n
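The "design to disable" principle can be sketched as a runtime kill switch. This is a minimal illustration, not DataSift's implementation; the `FeatureFlags` class and the `sentiment_analysis` flag name are assumptions for the example.

```python
# Minimal sketch of a runtime feature kill switch.
# Flags let you keep the latest release in production while
# disabling an offending feature, instead of rolling back.

class FeatureFlags:
    def __init__(self, defaults):
        self._flags = dict(defaults)

    def is_enabled(self, name):
        # Unknown features default to off: fail safe, not open.
        return self._flags.get(name, False)

    def disable(self, name):
        self._flags[name] = False


flags = FeatureFlags({"sentiment_analysis": True})

if flags.is_enabled("sentiment_analysis"):
    pass  # run the (hypothetical) augmentation step

# Ops flips the switch at runtime; no redeploy, no rollback.
flags.disable("sentiment_analysis")
```

In practice the flag store would live in shared configuration (a database, ZooKeeper, or similar) so one switch covers every instance, but the fail-safe default shown here is the important part.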
Synchronous calls, if used excessively or incorrectly, place an undue burden on the system and prevent it from scaling.
Systems designed to interact synchronously have a higher failure rate than asynchronous ones, and their ability to scale is tied to the slowest system in the chain of communications. It's better to use callbacks, plus timeouts, so callers can recover gracefully when they don't receive responses in a timely fashion.
Synchronisation is when two or more pieces of work must happen in a specific order to accomplish a task. Asynchronous coordination between the original method and the invoked method requires a mechanism by which the original method can determine when, or if, the called method has completed executing (callbacks). Ensure callers have a chance to recover gracefully, with timeouts, should they not receive responses in a timely fashion.
A related problem is stateful versus stateless applications. An application that uses state relies on the current condition of execution to determine the next action to perform.
There are three basic approaches to the complexities of scaling an application that uses session data: 1) Avoidance: use no sessions, or sticky sessions, to avoid replication (share-nothing architecture); 2) Decentralisation: store session data in the browser's cookie, or in a database whose key is referenced by a hash in the cookie; 3) Centralisation: store the sessions in a database or in memcached.
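The timeout-and-recover pattern above can be sketched in a few lines. This is an illustrative example, assuming a hypothetical slow downstream service and a cached fallback value:

```python
import asyncio

# Sketch of the timeout-and-recover pattern: the caller never
# blocks indefinitely on a slow downstream service.

async def call_service():
    await asyncio.sleep(5)  # simulate a downstream service that is too slow
    return "fresh result"

async def handler():
    try:
        # Bound the wait: the slowest system in the chain no longer
        # dictates our response time.
        return await asyncio.wait_for(call_service(), timeout=0.1)
    except asyncio.TimeoutError:
        # Recover gracefully: degrade rather than propagate the stall.
        return "cached fallback"

result = asyncio.run(handler())
print(result)  # "cached fallback"
```

The key point is that the failure mode is chosen by the caller (serve stale data, queue for retry, return a partial page) instead of being imposed by the slowest dependency.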
You must be able to isolate and limit the effects of failures within any system by segmenting its components. Decouple, decouple, decouple! A swim lane represents both a barrier and a guide: it ensures that swimmers don't interfere with each other, and it guides each swimmer toward their objective with minimal effort. AKA sharding.
Swim lanes increase availability by limiting the impact of failures to a subset of functionality, and they make incidents easier to detect, identify, and resolve. The fewer things are shared between lanes, the more isolation a lane provides, and the more it benefits both scalability and availability. Lines of communication should not cross lane boundaries, and should always move in the direction of the transaction. When designing swim lanes, first address the transactions making the company money (e.g. Search & Browse vs Shopping Cart); then move functions causing repetitive problems into their own lanes; finally, consider the natural layout or topology of the site for further opportunities, such as customer boundaries within an app or environment. If you have a tenant who is very busy, assign it its own swim lane; tenants with low utilisation can all be put into another.
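The tenant-to-lane assignment described above can be sketched as a small router. The lane names, the dedicated/shared split, and the CRC32-based hashing are illustrative assumptions, not a prescribed implementation:

```python
import zlib

# Sketch of tenant-to-swim-lane routing: one dedicated lane for a
# heavy tenant, shared lanes for everyone else.

DEDICATED_LANES = {"big_tenant": "lane-1"}   # busy tenants get their own lane
SHARED_LANES = ["lane-2", "lane-3"]          # low-utilisation tenants share

def lane_for(tenant_id):
    if tenant_id in DEDICATED_LANES:
        return DEDICATED_LANES[tenant_id]
    # A deterministic hash keeps a tenant pinned to the same shared
    # lane across restarts and across routing nodes, so a failure in
    # one lane only affects the tenants assigned to it.
    return SHARED_LANES[zlib.crc32(tenant_id.encode()) % len(SHARED_LANES)]
```

Because the mapping is deterministic, every routing node agrees on the assignment without coordination, and a lane outage has a known, bounded blast radius.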
What is the best way to handle large volumes of traffic? One answer: "Establish the right organisation, implement the right processes, and follow the right architectural principles." Correct, but the best way is not to have to handle the traffic at all. The key to achieving this is the pervasive use of caching. The cache hit ratio tells you how effective a cache is. A cache can be updated/refreshed via a batch job or on a cache miss. When the cache is full, an eviction algorithm (LRU, MRU, ...) decides which entry to evict. When the underlying data changes, the cache can be kept consistent through a write-back or write-through policy. There are three cache types:
- Object caches: used to store objects (usually serialized) for the app to reuse. The app must be aware of them; they sit as a layer in front of the database or external services. Marshalling is the process of transforming an object into a data format suitable for transmission or storage.
- Application caches: A) proxy caches, usually run by ISPs, universities, or corporations, cache for a limited number of users across an unlimited number of sites; B) reverse proxy caches do the opposite: they cache for an unlimited number of users across a limited number of applications, with each app's configuration determining what can be cached. HTTP headers give fine-grained control over caching (Last-Modified, ETag, Cache-Control).
- Content Delivery Networks: they speed up response times, offload requests from your application's origin server, and usually lower costs. The total capacity of a CDN's strategically placed servers can yield higher capacity and availability than the network backbone. The way it works is that you make the CDN's domain name an alias for your server by using a canonical name (CNAME) in your DNS entry.
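The eviction and hit-ratio ideas above can be condensed into a minimal LRU object cache. This is a teaching sketch; a production object cache would be something like memcached, and the `loader` callback standing in for the database read is an assumption of the example:

```python
from collections import OrderedDict

# Minimal LRU object cache: refresh on cache miss, evict the least
# recently used entry when full, and track the hit ratio.

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()   # insertion order doubles as recency order
        self.hits = 0
        self.misses = 0

    def get(self, key, loader):
        if key in self._data:
            self.hits += 1
            self._data.move_to_end(key)      # mark as most recently used
            return self._data[key]
        self.misses += 1
        value = loader(key)                  # refresh on cache miss (e.g. db read)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)   # evict the least recently used entry
        return value

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```

Monitoring `hit_ratio()` is what tells you whether the cache is actually absorbing traffic; a low ratio means you are paying the cache's complexity without avoiding the load.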
\n
\n
\n
\n
shameless plug
\n
\n