1. Design Patterns for High Availability of Azure Applications
2. Practical Demo on infrastructure-level points to take care of for High Availability (the points discussed in the last seminar)
3. Different Patterns for High Availability
3.1 Health Endpoint Monitoring Pattern
3.2 Queue-Based Load Leveling Pattern
3.3 Throttling Pattern
3.4 Retry Pattern
3.5 Multiple Datacenter Deployment Guidance
4. Architecture for High Availability of Azure Applications
5. Best Practices for Developing Highly Available Azure Applications
1. Design Patterns and Plan for
Developing Highly Available Azure
Applications
Himanshu Sahu
Mindfire Solutions
himanshus@mindfiresolutions.com
2. Availability
Availability defines the proportion of time that the system is
functional and working. It is affected by system errors,
infrastructure problems, malicious attacks, and system load, and is
usually measured as a percentage of uptime. Cloud applications
typically provide users with a service level agreement (SLA), which
means that applications must be designed and implemented in a
way that maximizes availability.
3. Why Design Patterns
Design patterns help to solve specific, commonly occurring
problems that may be encountered when building applications
that run in the cloud.
Problem Areas in the Cloud
Availability, Data Management, Design and Implementation,
Messaging, Management and Monitoring, Performance and
Scalability, Resiliency, Security
4. Design Patterns and Plan for Developing
Highly Available Azure Applications
Different Patterns for High Availability
3.1 Health Endpoint Monitoring Pattern
3.2 Queue-Based Load Leveling Pattern
3.3 Throttling Pattern
3.4 Retry Pattern
3.5 Multiple Datacenter Deployment Guidance
5. Health Endpoint Monitoring Pattern
Implement functional checks within an application that external
tools can access through exposed endpoints at regular intervals.
This pattern helps to verify that applications and services are
performing correctly.
Many factors affect cloud-hosted applications, such as network
latency, the performance and availability of the underlying
compute and storage systems, and the network bandwidth between
them. The service may fail entirely or partially due to any of these
factors. Therefore, you must verify at regular intervals that the
service is performing correctly to ensure the required level of
availability, which might be part of your Service Level Agreement
(SLA).
6. Health Endpoint Monitoring Pattern
Solutions
Implement health monitoring by sending requests to an endpoint
on the application.
A health monitoring check typically combines two factors: the
checks (if any) performed by the application or service in response
to the request to the health verification endpoint, and analysis of
the result by the tool or framework that is performing the health
verification check.
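As a minimal sketch of this pattern (the endpoint path, check names, and port below are hypothetical, not part of any Azure API), the snippet exposes a /health endpoint using Python's standard library that runs a functional check and returns 200 when healthy or 503 when not:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def storage_reachable():
    # Placeholder for a real dependency check, e.g. a quick
    # read against the application's storage account.
    return True

def run_health_checks():
    """Run each functional check and report an overall HTTP status."""
    checks = {"storage": storage_reachable()}
    healthy = all(checks.values())
    return (200 if healthy else 503), checks

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/health":
            self.send_error(404)
            return
        status, checks = run_health_checks()
        body = json.dumps(checks).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# An external monitoring tool (for example Traffic Manager) would then
# probe the endpoint served by:
#   HTTPServer(("", 8080), HealthHandler).serve_forever()
```

The monitoring tool treats any non-200 status (or a timeout) as a failed health check, which is the second half of the two factors described above.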
8. Health Endpoint Monitoring Pattern
When to Use this Pattern
• Monitoring websites and web applications to verify availability.
• Monitoring websites and web applications to check for correct operation.
• Monitoring middle-tier or shared services to detect and isolate a failure that could disrupt other applications.
• To complement existing instrumentation within the application, such as performance counters and error handlers. Health verification checking does not replace the requirement for logging and auditing in the application. Instrumentation can provide valuable information for an existing framework that monitors counters and error logs to detect failures or other issues. However, it cannot provide information if the application is unavailable.
9. Health Endpoint Monitoring Pattern
How to do it in Azure
• Use the built-in features of Microsoft Azure, such as the Management Services or Traffic Manager.
• Use a third-party service (e.g. New Relic or StatusCake).
• Create a custom utility or service that runs on your own or on a hosted server.
10. Queue-Based Load Leveling Pattern
Use a queue that acts as a buffer between a task and a service that
it invokes in order to smooth intermittent heavy loads that may
otherwise cause the service to fail or the task to time out. This
pattern can help to minimize the impact of peaks in demand on
availability and responsiveness for both the task and the service.
Temporal decoupling
Load leveling
Load balancing
Loose coupling
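The pattern can be sketched with an in-memory queue standing in for an Azure Storage Queue or Service Bus queue (the task names and worker below are illustrative, not an Azure API): producers enqueue work instead of calling the service directly, and the service drains the queue at its own sustainable pace.

```python
import queue
import threading

# A bounded in-memory queue stands in for a durable Azure queue.
task_queue = queue.Queue(maxsize=100)
processed = []

def service_worker(stop_event):
    """The service consumes at its own pace, regardless of how
    bursty the producers are (load leveling)."""
    while not stop_event.is_set() or not task_queue.empty():
        try:
            task = task_queue.get(timeout=0.1)
        except queue.Empty:
            continue
        processed.append(task)   # do the real work here
        task_queue.task_done()

def submit(task):
    """Producers enqueue work instead of invoking the service
    directly, so peaks are absorbed by the queue (temporal
    decoupling and loose coupling)."""
    task_queue.put(task)

stop = threading.Event()
worker = threading.Thread(target=service_worker, args=(stop,))
worker.start()
for i in range(10):            # a burst of requests arrives at once
    submit(f"task-{i}")
task_queue.join()              # wait until the backlog is drained
stop.set()
worker.join()
print(len(processed))          # 10
```

With a durable queue such as Azure Storage Queues, the producer and consumer can additionally run on separate machines and survive restarts, which the in-memory version does not show.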
12. Queue-Based Load Leveling Pattern
When to Use this Pattern
• This pattern is ideally suited to any type of application that uses services that may be subject to overloading.
• This pattern might not be suitable if the application expects a response from the service with minimal latency.
15. Queue-Based Load Leveling Pattern
How to do it in Azure
• Use Windows Azure Storage Queues or Windows Azure Service Bus.
16. Throttling Pattern
Control the consumption of resources used by an instance of an
application, an individual tenant, or an entire service.
This pattern can allow the system to continue to function and meet
service level agreements, even when an increase in demand places
an extreme load on resources.
17. Throttling Pattern
Solutions
Auto Scaling
Queue-Based Load Leveling Pattern
Priority Queue Pattern
Rejecting requests from an individual user who has already
accessed system APIs more than n times over a given period.
Disabling or degrading the functionality of selected nonessential
services so that essential services can run unimpeded with
sufficient resources.
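One common way to implement the "reject requests above n per period" strategy listed above is a token-bucket rate limiter. The sketch below (class name and parameters are illustrative) allows short bursts up to a capacity and rejects requests that exceed the sustained rate:

```python
import time

class TokenBucket:
    """A per-tenant token-bucket rate limiter: tokens refill at a
    steady rate, each request consumes one, and requests that find
    the bucket empty are rejected (throttled)."""
    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to the elapsed time.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False    # caller should reject or defer the request

bucket = TokenBucket(rate=5, capacity=3)       # one bucket per tenant
results = [bucket.allow() for _ in range(5)]   # an instantaneous burst
print(results)   # first 3 allowed; the rest of the burst is rejected
```

Keeping one bucket per tenant is what prevents a single tenant from monopolizing shared resources, one of the main motivations for this pattern.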
19. Throttling Pattern
When to Use this Pattern
To ensure that a system continues to meet service level
agreements.
To prevent a single tenant from monopolizing the resources
provided by an application.
To handle bursts in activity.
To help cost-optimize a system by limiting the maximum resource
levels needed to keep it functioning.
20. Retry Pattern
Enable an application to handle anticipated, temporary failures
when it attempts to connect to a service or network resource by
transparently retrying an operation that has previously failed in the
expectation that the cause of the failure is transient.
An application that communicates with elements running in the
cloud must be sensitive to the transient faults that can occur in
this environment, that is, self-correcting faults such as a network
failure, temporary service unavailability, or a time-out error due to
a busy server.
21. Retry Pattern
Solutions
Implement retry logic when trying to connect to any service in the
cloud, for example:
SQL Azure
Azure Service Bus
Azure Storage
Azure Caching Service
22. Retry Pattern
Solutions
The Transient Fault Handling Application Block
The Transient Fault Handling Application Block includes the
following retry strategies
FixedInterval
Incremental
ExponentialBackoff
The FixedInterval retry strategy retries an operation a fixed
number of times at fixed intervals.
23. Retry Pattern
Solutions
The Incremental retry strategy retries an operation a fixed number
of times at intervals that increase by the same amount each time,
for example at two-second, four-second, six-second, and
eight-second intervals.
The ExponentialBackoff retry strategy retries an operation a fixed
number of times at intervals that increase by a greater amount each
time, for example at two-second, four-second, eight-second, and
sixteen-second intervals. This retry strategy also introduces a small
amount of random variation into the intervals.
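The ExponentialBackoff strategy can be sketched in a few lines (this is an illustration of the idea, not the Transient Fault Handling Application Block itself; the exception class and helper names are hypothetical): wait 2s, 4s, 8s, ... plus a little random jitter between attempts, and give up after a fixed number of tries.

```python
import random
import time

class TransientError(Exception):
    """Stands in for a self-correcting fault such as a busy server."""

def retry(operation, max_attempts=4, base_delay=2.0, jitter=0.5):
    """Retry `operation` with exponentially increasing delays
    (base_delay * 2**attempt) plus random jitter between attempts."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts - 1:
                raise                  # out of retries; surface the fault
            delay = base_delay * (2 ** attempt) + random.uniform(0, jitter)
            time.sleep(delay)

# Usage: an operation that fails transiently twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError
    return "ok"

result = retry(flaky, base_delay=0.01, jitter=0.01)  # short delays for demo
print(result)   # "ok" after two transient failures
```

The FixedInterval and Incremental strategies differ only in the `delay` line: a constant for FixedInterval, and `base_delay * (attempt + 1)` for Incremental.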
24. Retry Pattern
Solutions
Connection Resiliency / Retry Logic (EF6 onwards)
Connection Resiliency refers to the ability of EF to automatically
retry any commands that fail due to connection throttling, or to
instability in the network causing intermittent timeouts and other
transient errors.
25. Multiple Datacenter Deployment
Guidance
Growing capacity over time.
Providing global reach with minimum latency for users.
Maintaining performance and availability.
Providing additional instances for resiliency.
Providing a facility for disaster recovery.