4. Characteristics of cloud from NIST
• On-demand self-service. A consumer can unilaterally
provision computing capabilities, such as server time and
network storage, as needed automatically without requiring
human interaction with each service's provider.
5. Scale in the Cloud
• Many people think that you get scalability just by virtue
of being in the cloud
• This isn't true
• What the cloud gives you is the ability to quickly and
easily add resources
– It doesn't guarantee that this results in additional capacity
• Just like with security, you need to design scalability in
6. What is Scalability?
• (Problem definition) Scalability is the ability of a
system to support a growing amount of work.
– May be from additional users
– May be from additional requests from current users
– May be from operational activities.
• (Solution definition) Scalability is the ability to
increase or decrease the resources available to
your application, either by changing the number
of servers or disks or by changing the size of the
servers or disks.
7. Why scale?
• Are more users always a good thing?
– This is a cost/benefit question.
– More users have benefits – presumably more people
receive service and the organization receives more revenue.
– More users have a cost – hardware, software, and
personnel.
• Do costs scale linearly with users?
– For Netflix, the answer is yes.
– For LinkedIn, the answer is no.
8. The different aspects of scalability
• Adding users
– Large numbers of new users may require new computation
facilities
• Adding data
– Large amounts of new data require
• More computation
• Careful attention to the distribution of this data.
• Adding computation
– Computation is embedded in virtual machines
– Elasticity means adding new virtual machines
• Scaling should not impact existing activities
• May need to scale by adding computation capacity (CPU) or
by adding I/O capacity
9. Scaling Up vs Scaling Out
• Scaling up means adding more capacity to
existing hardware
– More memory
– More disk
– Faster CPU or more cores
• Scaling out means adding additional hardware
– More systems
10. Costs in scaling out
• Each virtual machine has a cost – per hour
• Licensing costs.
– Many software packages charge licenses per CPU or per (virtual)
computer.
– Every new instance that utilizes one of these packages incurs
licensing costs
• Personnel costs
– In small to medium size organizations, one sysadmin can
administer ~30 machines.
– In large, highly automated organizations, one sysadmin can
administer thousands of machines.
– A movement called "DevOps" has as one goal the reduction of
personnel costs in operations (more on this later).
11. How much lead time for growth of
number of users?
• Some things are predictable
– Seasonal variation.
• Christmas
• Tax season
– Daily variation
• Working hours or non-working hours in various time zones
• Holidays
– Promotions or special offers
– Sporting events
• Other things are not predictable
– Being "Slashdotted"
– News items
– Rapid growth in popularity of a company.
– Disaster
12. Managing growth in number of users
• Lead time allows planning
– Restructure the database
– Add or restructure software
• When no lead time is available, the elasticity of
the cloud is the main mechanism.
13. Outline
• Introduction to scalability
• CPU scaling
– Load balancers
– Rule Based Scaling
– Scaling Patterns
• I/O scaling
14. Why have a load balancer?
• Suppose there are too many users for a single instance of a service
• The cloud allows us to create another instance of that service
(elasticity)
• We would like to have half the users use one instance and half
use the other
• Two options:
1. Couple instances and users (half and half). This is accomplished by
having users access an instance of a service directly by IP address.
2. Use an intermediary (load balancer) to distribute half of the
requests to one instance and the other half to the other.
Option 2 is preferable for a variety of reasons, which we will see.
16. Load Balancer
Logically, a load balancer takes requests from
clients and distributes them to copies of an
application executing on multiple different
servers
[Diagram: clients → load balancer → servers]
22. Hierarchy of Load Balancers
• The server always sends the response back to the client.
• Load balancers use a variety of algorithms to choose the
instance for a message
– Round robin. Rotate requests evenly.
– Weighted round robin. Rotate requests according to some
weighting.
– Hashing. Hash the IP address of the source to determine the
instance. Means that a request from a particular client is always
sent to the same instance as long as it is still in service.
• Note that these algorithms do not require knowledge
of an instance's load. We will cover that situation in a
little bit.
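The round robin and source-hashing choices above can be sketched as follows. This is a minimal illustration, not any real load balancer's API; the class name and instance addresses are made up.

```python
import hashlib
import itertools

class LoadBalancer:
    """Minimal sketch of two load-independent dispatch algorithms."""

    def __init__(self, instances):
        self.instances = instances
        self._cycle = itertools.cycle(instances)  # for round robin

    def round_robin(self):
        # Rotate requests evenly across instances.
        return next(self._cycle)

    def ip_hash(self, client_ip):
        # Hash the source IP so a given client always reaches the
        # same instance, as long as the pool does not change.
        digest = hashlib.sha256(client_ip.encode()).digest()
        index = int.from_bytes(digest[:4], "big") % len(self.instances)
        return self.instances[index]

lb = LoadBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
```

Note that neither algorithm inspects the instances themselves: both decisions are made entirely from the request stream, which is exactly the property the slide points out.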
23. Outline
• Introduction to scalability
• CPU scaling
– Load balancers
– Rule based scaling
– Scaling Patterns
• I/O scaling
25. Server
• A server is a virtual machine without any software
• A virtual machine can be allocated with varying amounts of
memory, CPU, and disk
• Each variant has a different cost, typically per hour
26. Machine Image
• A machine image is a copy of the contents of the memory
of a computer.
• A machine image may be created from any contents of a
computer. Some options:
– Bare metal
– With OS
– With LAMP Stack
• Linux
• Apache HTTP Server
• MySQL
• PHP or Python
• If licensed software is contained in the machine image,
then a license fee is paid when it is loaded
27. Executable Virtual Machine
• An executable virtual machine is created by
loading a machine image into a server.
• An executable virtual machine can then be
– Booted
– Paused
– Shut down
[Diagram: machine image loaded into a server]
28. Adding/Removing Resources
• The example shows two servers with one to be removed.
• Could be N servers with one to be added or removed
• Creating a new instance takes some time
• Removing an instance also takes time – it must satisfy
existing requests and be detached from existing
connections.
29. Autoscaling group
• An autoscaling group is a collection of
instances that have been defined to be scaled
together.
• Typically these represent instances of the
same application.
30. Creating an autoscaling group
• An autoscaling group needs to know
– Machine image id
– VM type
– Scaling policy
31. Scaling Policy
• Specify minimum, maximum, and desired
number of instances
• Can specify scaling based on time of day
– E.g. scale up during 9:00-5:00 and down at other times
• Can scale based on average CPU usage
– E.g. average CPU utilization <40% means delete an
instance
– Average CPU utilization >60% means add an instance.
– Values come from the monitor.
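The CPU-based rules above can be expressed as a small decision function. The thresholds are the example values from the slide; the function name, bounds, and signature are illustrative rather than any cloud provider's API.

```python
def scaling_decision(avg_cpu, current, minimum=1, maximum=10):
    """Return the desired instance count for one evaluation period.

    avg_cpu is the monitor's average CPU utilization (0.0-1.0);
    current is the current number of instances in the group.
    """
    if avg_cpu > 0.60 and current < maximum:
        return current + 1   # overloaded: add an instance
    if avg_cpu < 0.40 and current > minimum:
        return current - 1   # underloaded: delete an instance
    return current           # within the 40-60% band: no change
```

A real scaling policy would also honor cooldown periods between actions so that a newly started instance has time to absorb load before the next evaluation.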
32. Outline
• Introduction to scalability
• CPU scaling
– Load balancers
– Rule Based Scaling
– Scaling Patterns
• I/O scaling
35. Push Pattern Description
• The client sends a request (e.g. an HTTP message) to
the app in the cloud.
• The request arrives at a load balancer
• The load balancer forwards the request to one of the
VMs in the resource pool.
• The load balancer uses a scheduling strategy to
decide which VM gets the request, e.g.
dispatch to the VM with the lowest CPU utilization.
36. How does the load balancer know?
• The load balancer knows the CPU utilization of the VMs and it
knows how many requests it (the load balancer) has received,
and possibly how long it took to service the requests. It does
not know application specifics such as how many requests a
VM can process.
• When the resource pool is overloaded, new resources are
allocated.
• The monitor decides (based on controller rules) when new
resources are needed. It must have direct insight into the VM
instances in order to do this. Hence, the monitor utilizes a
monitoring service provided by the cloud for each instance.
38. Pull architecture description
• Each request from the client is application
specific and typed.
• The queue manager keeps separate queues for each
application running on the VMs.
• A VM requests the next message of a particular
type (pull) and processes it.
• The monitor can now see how long a request
waits in a queue or the average queue length, and
this is an indication of the load on the VMs that
have applications that service requests of that
type.
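A minimal sketch of this pull side, using Python's standard queue module. The message types and function names are made up for illustration; a production system would use a message broker rather than in-process queues.

```python
import queue

# One queue per application-specific message type.
queues = {"thumbnail": queue.Queue(), "email": queue.Queue()}

def publish(msg_type, payload):
    # A client (or the front end) enqueues a typed message.
    queues[msg_type].put(payload)

def pull_next(msg_type, timeout=0.05):
    """A VM pulls the next message of the type it services.

    Returns None on timeout -- the VM polls even when its queue
    is empty, which is the overhead of the pull pattern.
    """
    try:
        return queues[msg_type].get(timeout=timeout)
    except queue.Empty:
        return None

def queue_depth(msg_type):
    # The monitor can watch queue length as a load signal.
    return queues[msg_type].qsize()
```

Because the queues are per-type, `queue_depth` gives the monitor exactly the application-specific load signal that the push pattern lacks.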
39. Differences
• Push is more responsive to requests. They are
immediately forwarded to a service. There is a
possibility that the service is overloaded.
• Pull is less responsive since it relies on servers to
de-queue messages.
• In the pull architecture, a service polls for new
messages even if there is nothing in its queue, and
this introduces overhead.
• It is easier to monitor and control workload in the
pull architecture since messages are application
specific and typed.
40. Outline
• Introduction to scalability
• CPU scaling
• I/O scaling
– Multiple sites
– Software techniques
41. I/O Scaling
• Scaling out assumes the scaling requirement is
solved with more CPUs.
• It may be that I/O is also a problem.
– You may run your application in multiple sites
– Half the clients go to one site, half to another
42. Questions when you have multiple sites
• How do clients know which site to use?
• How are the databases used by the applications
coordinated across sites? (We defer this question.)
43. Domain Name Server (DNS)
The client sends a domain name to the DNS
The DNS takes the domain name as input and returns an IP address
The client uses the IP address to send a message to the load balancer for a site
[Diagram: client resolves Website.com via DNS, receives 123.45.67.89, and contacts Site 1]
44. DNS with multiple sites
• The DNS server returns the IP addresses of both sites.
• The DNS server will vary which address is listed
first.
• The client will, typically, choose the first entry.
[Diagram: DNS returns the addresses of both Site 1 and Site 2 for Website.com]
45. Outline
• Introduction to scalability
• CPU scaling
• I/O scaling
– Multiple sites
– Software techniques
47. To Scale for I/O – Make the queue
manager more sophisticated
[Diagram: clients interact with a key-value store; a publisher takes values from the key-value store and distributes them to clients]
48. Summary
• Scalability is the ability to respond to
increasing or decreasing workload
– Add CPU capacity by utilizing features of the
cloud provider
– Add I/O capacity by
• Distributing requests to multiple sites
• Having fast message passing software
53. Cost of Downtime
• According to a recent survey, the average cost of
unplanned downtime is $7,900/minute*
• 91% of reporting companies have experienced an
unplanned outage in the last 24 months
• The average outage lasts 118 minutes
• The average frequency of outages over a 24
month period was:
– 10.16 limited outages
– 5.88 local outages
– 2.04 total outages
* Emerson Network Power, Ponemon Institute Study 2013
54. Cost of Downtime II
• As the previous numbers indicate, downtime can be expensive
• Experienced in August 2013:
– The New York Times had a 2 hour outage (its stock price declined,
Twitter exploded, and the Wall Street Journal dropped its fees to
try to capture readership)
– Google had between 1–5 minutes of downtime (~$500,000 direct
loss and a 40% reduction in overall web traffic)
– Amazon had an outage of under an hour (> $5 million)
• In addition to direct losses, indirect losses are experienced
– Loss of confidence, reputation, and good will
– Productivity losses
– Compliance penalties
– …
55. Availability: a Business Concern
• The availability of the business service impacts
the earnings and associated value of an
organization
• If the organization relies on an IT system to
deliver the business service, then the availability of
the IT system impacts the value of the
organization
• In this section we are going to look at the
availability of the system
– We want to keep in mind, however, that the objective
is the availability of the business service
56. What Is Availability?
• Availability in general refers to the degree to
which a system is in an operable state
• This is typically articulated as the percentage
of time the system is available (or we'd like to
have the system available), e.g. 99.99%
• There are many related terms, e.g.
– Availability
– Fault-Tolerance
– Reliability
57. How is Availability Measured?
Availability is typically measured as:
Availability = MTBF / (MTBF + MTTR)
MTBF = Mean Time Between Failures
MTTR = Mean Time To Repair
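As a quick sanity check, the formula can be computed directly; the failure and repair times below are hypothetical example values.

```python
def availability(mtbf, mttr):
    # MTBF and MTTR must be in the same time unit, e.g. hours.
    return mtbf / (mtbf + mttr)

# A failure every 1000 hours with a 1 hour repair gives ~99.9%:
print(f"{availability(1000, 1):.3%}")
```

Notice that shrinking MTTR has the same effect on the ratio as growing MTBF, which is why fast recovery is as valuable as rare failure.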
59. Calculating System Availability I
• Each component = 99% (3.65 days of downtime a year)
• The overall system, however, has an availability that
is the product of each component's availability
– 99% × 99% = 98.01% (≈7.26 days a year)
[Diagram: two components in series, each 99% available]
60. Calculating System Availability II
• Each component = 99% (3.65 days
a year)
• The overall system in this case,
however, is based on the
likelihood that both components
would fail at the same time
1 – ((100% - 99%) × (100% - 99%)) =
99.99% (about 53 minutes a year!!)
[Diagram: two redundant components in parallel, each 99% available]
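The two calculations above, components in series versus redundant components in parallel, generalize to any number of components. A minimal sketch:

```python
def series(*components):
    # Every component must be up, so multiply the availabilities.
    result = 1.0
    for a in components:
        result *= a
    return result

def parallel(*replicas):
    # The system is down only if every replica is down at once.
    down = 1.0
    for a in replicas:
        down *= (1.0 - a)
    return 1.0 - down

assert abs(series(0.99, 0.99) - 0.9801) < 1e-9    # ~98%
assert abs(parallel(0.99, 0.99) - 0.9999) < 1e-9  # ~99.99%
```

These formulas assume the components fail independently; correlated failures (shared power, shared network) make real parallel availability worse than the formula predicts.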
61. Availability Measures
• A couple of things to keep in mind
– These measures refer to the mean, not the minimum,
time between failures
– As the MTBF increases, the impact of MTTR decreases
– As the MTTR approaches 0, the overall availability
approaches 1
• Historically these measures were developed for
hardware components
62. Availability Requirements
• MTBF can be measured for operational systems
• How do you predict the MTBF for a system that
is yet to be built, however?
• Does it make sense to use the previously
defined availability measure as a requirement?
• If not, how should requirements be articulated?
63. Actionable Requirements
• Remember that as a business the concern is that the
services are available as needed
• In order to determine the likely availability of a
system (or design) you must
– Understand the likelihood that various kinds of faults could
occur
– Understand the impact of these faults on overall system
availability
• You must therefore translate the desired
business objective into a set of fault scenarios
64. End to End Availability
• Engineers often think about the availability of some
portion of the system, e.g.
– Availability of the database or web server
• Organizations, however, are concerned with end to
end availability
• When thinking about availability requirements you
should think about the organizational perspective
– Once you've done this you'll then need to map it to the
engineering perspective
65. Requirements Vary
• We start with the desired requirements from a business perspective
• We then look at the system context to determine what faults might
disrupt the desired behavior
– This is likely an iterative process
• One thing to keep in mind is that different business contexts imply
different requirements
• Consider the needs of Discrete Manufacturing vs. Continuous
Manufacturing
• Discrete manufacturing is when you manufacture discrete products
– e.g. an automobile assembly line
• Continuous process automation is when you manufacture things like
chemicals or concrete
• How might the systems respond differently in the event of a fault?
66. Example Scenario
If a processor in one of the servers fails during
peak load, the system shall continue to operate
without dropping any of the current tasks and
without any noticeable delay
67. Relationship to Goals
• How does this scenario relate to availability goals?
– It does not in and of itself guarantee a particular level of
availability
• This scenario, in conjunction with scenarios for other faults
that could impact a service, does improve availability, however
• In order to understand how to think about the design
we need to:
– Identify the activities that require availability
– Identify the related faults
– Identify the desired response if the fault occurs
69. Fault Characteristics
• "Fail silent" vs. "fail operational"
– Fail silent – when a component fails it no longer operates
– Fail operational – a component continues to operate (although not
correctly) when a fault is present
• Transient vs. deterministic
– Some faults will always occur in a consistent way
– Others may come and go intermittently
• Some will look similar to other faults, e.g.
– A hung process, a processor crash, and a network outage can all look
the same
70. A System Can Fail Silently …
Let's look at an example interaction
[Diagram: a client machine asks, via the network, a server and its file system "Hmm … what's the best vegetarian restaurant in Bogota?" and gets no response – "What's the matter with this $#@!#% computer …"]
71. Symptoms of Faults
• From an end user's perspective many faults
exhibit themselves similarly
• These faults could all look the same to an end
user:
– A hung process
– A crashed processor
– A network outage
– An overloaded element
72. Or Fail Operational …
[Diagram: the same client asks "Hmm … what's the best vegetarian restaurant in Bogota?" and the server returns a wrong answer – "Carnes de Res is the best vegetarian restaurant???"]
73. Fault Manifestation
• These types of faults could occur in any of the
elements of the system
• Depending on where they occur, different mitigation
strategies might be appropriate
• As a result you need to
– Analyze your system and determine what faults might
occur
– Identify the desired response if they do occur
• This is called a fault model
74. Fault Model
• A fault model describes the system faults that
could disrupt the critical functionality
• The fault model is going to depend on both the
critical functionality and the specific architecture of
the system
• Once the fault model is identified, you'll need to
describe the desired response if each fault occurs
75. Cost of Availability
• We've established that downtime can be
expensive
• It's also the case that "uptime" can be expensive
– Implementing a mechanism to be resilient to faults
can be expensive
• We want to understand the cost and benefit of
proposed strategies and select the set that makes
sense from a business perspective
• This means the initial requirements might change
…
76. Example
• We want "appropriate" availability
• A study has been done for mobile carrier
customers
– This study has determined that customers will
tolerate 2 dropped calls per 100 calls made
– As soon as the system drops 3 calls per 100 they
will start to change providers
• What does this say about the "appropriate"
availability of the system?
78. Elements of Availability
• Fault detection
– The system recognizes that a fault has occurred
• Masking faults
– The system is able to continue to operate despite the fault
• Recovering from the fault
– The system is able to repair the faulty element of the
system
79. Fault Detection
• There are standard "tactics" that we can use for
fault detection
• They don't all detect the same types of faults,
however
• They also have different "costs"
– This cost can be in terms of effort or overhead of one
kind or another
• We need to understand something about the
kinds of faults we are trying to detect before we
can select the appropriate tactic
80. Detecting Silent Faults
• It's much easier to detect elements that fail
silently
• Essentially we monitor the "liveness" of the
element where the fault could exist
• Example tactics are:
– Exceptions
– Heartbeat
– Ping/echo
81. Exceptions
• When an anomalous or exceptional event occurs,
it can be detected by exception handlers
• When the exception is "caught", an alternate path
of execution is triggered
• The exception handling code can notify other
portions of the system of the issue
• Doesn't impose significant overhead on the
system
82. Heartbeat
• A component emits a regular "heartbeat"
• Another element listens for this
• If the heartbeat is not detected, it is assumed
that the component is no longer operational
• Does add overhead to the system
• Only an indication of the "liveness" of the
component
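A heartbeat monitor can be sketched as follows; the class and method names are illustrative, and a real monitor would run the liveness check on its own schedule rather than on demand.

```python
import time

class HeartbeatMonitor:
    """Components call beat() periodically; is_alive() presumes a
    component dead after several missed beats."""

    def __init__(self, interval_s, missed_beats=3):
        self.interval_s = interval_s      # expected beat period
        self.missed_beats = missed_beats  # tolerated missed beats
        self._last = {}

    def beat(self, component):
        # Record the time we last heard from this component.
        self._last[component] = time.monotonic()

    def is_alive(self, component):
        last = self._last.get(component)
        if last is None:
            return False  # never heard from it
        return time.monotonic() - last < self.interval_s * self.missed_beats
```

Tolerating a few missed beats before declaring failure is a trade-off: a tighter threshold detects failures faster but turns ordinary delays into false alarms.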
83. Ping/Echo
• Similar to heartbeat, except a "watchdog" sends a
ping and listens for a response
• If no response is heard, it is assumed the component
is not operational
• Requires more coupling than heartbeat
• Increases network traffic
• Again, it's only an indication of the liveness of the
component
84. Failing Operational
• If an element or system fails operational, it's
more difficult to detect
• You don't just monitor whether the system responds
but also need to determine if the results are
"correct"
• Example tactics include:
– Exceptions
– Voting
– Checksum
85. Voting
• You compare the responses of multiple elements
performing the same operation
• If the result of one of the elements doesn't
match the others, you assume it's faulty
• Can detect erroneous output
• Adds overhead (must wait for multiple responses
and compare)
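A majority vote over replica responses might look like this; the replica names and values are made up.

```python
from collections import Counter

def vote(responses):
    """responses maps replica name -> its result for one operation.

    Returns (majority_value, suspect_replicas): the value most
    replicas agree on, and the replicas that disagree with it.
    """
    counts = Counter(responses.values())
    winner, _ = counts.most_common(1)[0]
    suspects = sorted(r for r, v in responses.items() if v != winner)
    return winner, suspects

# replica-c disagrees with the majority and is assumed faulty:
value, faulty = vote({"replica-a": 42, "replica-b": 42, "replica-c": 41})
```

Note the overhead the slide mentions: the caller must wait for all responses before the comparison can run.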
86. Checksum
• A mathematical calculation that's applied to a
piece of data to determine if it's been altered
• Does add some processing overhead to the
system
• Can detect data corruption
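For example, a CRC32 checksum (from Python's standard zlib module) detects accidental alteration of a payload; the payload contents here are invented for illustration.

```python
import zlib

def attach_checksum(data: bytes):
    # Store or transmit the checksum alongside the data.
    return data, zlib.crc32(data)

def verify(data: bytes, checksum: int) -> bool:
    # Recompute the checksum and compare; a mismatch indicates
    # the data was altered after the checksum was taken.
    return zlib.crc32(data) == checksum

payload, crc = attach_checksum(b"order:1234,total:50.00")
assert verify(payload, crc)
assert not verify(b"order:1234,total:90.00", crc)  # altered byte detected
```

CRC32 guards against accidental corruption only; detecting deliberate tampering requires a cryptographic hash or MAC instead.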
87. Tolerating Faults
• In many cases you realize that faults will occur
– Particularly in large distributed systems
• You can't tolerate outages every time one of the
nodes experiences a fault
• You therefore need to hide the fact that the
system has a faulty component
• This is called "fault masking"
• Again, the strategies associated with masking the
fault are going to be dependent on the kind of
fault being masked
88. Strategies For Fault Masking
• Modular redundancy
• Rollback
– Restoring the system to a previously identified "safe state"
• Roll forward
– "Skipping" an operation that is causing a problem
• Retrying an operation
• Shedding load
• …
89. Modular Redundancy
• Redundant systems have multiple replicated elements
(copies)
– Not to be confused with load balancing approaches
– The thing to realize is that the state is replicated across the copies
• There are multiple strategies for software replication
– Cold standby
– Warm standby
– Hot standby
90. Redundancy: Cold Standby
• There are non-operational copies available
• State is stored (e.g. in logs) but is not loaded on the
copies until they are needed
• When a failure occurs, the state is reconstructed and
the replica is introduced
• Reduces the operational overhead associated with
maintaining copies
• Increases MTTR
92. Redundancy: Warm Standby
• In this configuration you have a primary replica that is actively
processing requests
• You have passive replicas that are not actively processing
requests, although they are online
• State is periodically loaded into the backup replicas
• As with cold standbys, the processing overhead is reduced
• The MTTR is dependent on the state checkpoints (typically
less than with cold standbys)
94. Redundancy: Hot Standby
• All copies are processing requests
• All of the duplicate responses will be suppressed
• The copies need to be synchronized continuously
– Thus the processing overhead increases as the number of replicas
increases
• The MTTR is reduced to virtually zero, however, in the event
that one of the replicas fails
96. Considerations
• State management
– If there is state that is managed in the replicated elements, you need
to worry about synchronizing state
• State can be pushed to other elements …
– This impacts other concerns such as performance or security, however
– Caching commonly accessed data is a typical strategy for dealing with
performance concerns
• Kinds of replicas
• Frequency of checkpointing
98. Roll Back
• Roll back is when you undo a transaction
• You need to manage state appropriately
– You need to define an atomic set of actions
• This could mean taking a complete snapshot of
system state or just rolling back a transaction
99. Roll Forward
• Roll forward essentially skips a failing task and then
applies the changes involved in subsequent
transactions
• The system will then be in the state consistent
with the desired change
100. Retrying an Operation
• This is as simple as it sounds
• When a given operation fails, you retry it
• It can be used in conjunction with a detection
mechanism like exceptions
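A minimal retry wrapper, using exceptions as the detection mechanism. The function name and default values are illustrative; real systems usually add exponential backoff and retry only errors known to be transient.

```python
import time

def retry(operation, attempts=3, delay_s=0.01):
    """Call operation(); on an exception, wait and try again.

    Re-raises the exception after the final failed attempt.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the fault
            time.sleep(delay_s)
```

Retrying masks transient faults well, but it is only safe if the operation is idempotent; otherwise a retry can apply the same change twice.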
101. Shedding Load
• Sometimes issues occur due to an overload
situation
• This can lead to:
– Timing errors
– Buffer overflows
– Memory consumption issues
• Shedding less critical load can help alleviate the
problem
102. Strategies For Fault Recovery
• Reboot
– This could be a partial (e.g. restarting an
application or process) or total system reboot
• Removal of the faulty component
• Restore the component to a previously
identified safe state
• …
103. Reboot
• Rebooting the system can often correct the
issue
• This can also be done as a preventative
measure
• It can be a complete or partial reboot
• There is such a thing as a "micro reboot" that
takes milliseconds
104. Component Removal
• If you have a faulty component, you can
remove it from service
• You might try other remedies, such as
restarting, first
105. Checkpointing State
• You can periodically take a snapshot of the
system
• If at some point you have an issue, you can
restore the system to the previously saved
state
• The more frequently you take a snapshot of the
state, the smaller the loss but the greater the
overhead
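An in-memory sketch of checkpoint and restore; the class name and state contents are made up, and a real system would persist its snapshots to durable storage rather than keep them in the same process.

```python
import copy

class CheckpointedState:
    """Holds live state plus the most recent snapshot of it."""

    def __init__(self, state):
        self.state = state
        self._snapshot = copy.deepcopy(state)

    def checkpoint(self):
        # Taking a snapshot costs a full copy -- the overhead
        # side of the trade-off described above.
        self._snapshot = copy.deepcopy(self.state)

    def restore(self):
        # Everything since the last checkpoint is lost -- the
        # data-loss side of the trade-off.
        self.state = copy.deepcopy(self._snapshot)
```

Usage: checkpoint after each known-good step, then call `restore()` when a fault corrupts the live state.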
106. Availability in the Cloud
• At a high level, achieving availability in the
cloud is the same process as elsewhere
– It needs to be designed in
• That means you need to understand the faults
that could occur
• You then need to make the appropriate
design decisions to achieve the desired result
107. Fault Model
• We will give specific faults that occur later in the
course
– This requires first a better understanding of the
architecture of the cloud
• At this point it's useful to understand that the cloud is
made up of faulty components
– Failures happen on a regular basis
• There are mechanisms built in to handle this, but
– They aren't always successful
– They don't deal with application specific concerns
– Some things that might be a fault for your application aren't
considered faults by the infrastructure
108. Summary
• Availability measures are not adequate for design
• You need to be able to translate availability goals into
a set of actionable requirements that identify the
possible faults and desired responses
• The approaches chosen should support the desired
responses in the event that a fault occurs