2. What is Cloud Computing?
Cloud computing is a general term used to describe a new class of
network-based computing that takes place over the Internet:
Basically a step on from utility computing
A collection of integrated and networked hardware, software and Internet
infrastructure (called a platform)
Uses the Internet for communication and transport to provide hardware,
software and networking services to clients
These platforms hide the complexity and details of the underlying
infrastructure from users and applications by providing a very simple
graphical interface or API (Application Programming Interface).
3. What is Cloud Computing?
In addition, the platform provides on-demand services that are always
on, anywhere, anytime and any place:
Pay for use, as needed
Elastic: scales up and down in capacity and functionality
The hardware and software services are available to the general public,
enterprises, corporations and business markets.
4. What is Cloud Computing?
Cloud computing is an umbrella term used to refer to
Internet based development and services
A number of characteristics define cloud data, applications
services and infrastructure:
Remotely hosted: Services or data are hosted on remote
infrastructure.
Ubiquitous: Services or data are available from anywhere.
Commoditized: The result is a utility computing model similar to
that of traditional utilities, like gas and electricity - you
pay for what you use!
5. What is Cloud Computing?
Many companies are delivering services from the cloud. Some notable
examples include the following:
Google — Has a private cloud that it uses for delivering Google Docs and many
other services to its users, including email access, document applications, text
translations, maps, web analytics, and much more.
Microsoft — Has the Microsoft Office 365 online service that allows for content and
business intelligence tools to be moved into the cloud, and Microsoft currently
makes its Office applications available in a cloud.
Salesforce.com — Runs its application set for its customers in a cloud, and its
Force.com and Vmforce.com products provide developers with platforms to build
customized cloud services.
6. Basic Concepts
There are certain services and models working behind the scenes that
make cloud computing feasible and accessible to end users.
The working models for cloud computing are:
1. Deployment Models
2. Service Models
7. Deployment Models
Deployment models define the type of access to the cloud, i.e., how the
cloud is located.
A cloud can have any of four types of access:
Public, Private, Hybrid and Community.
8. Deployment Models
PUBLIC CLOUD : The Public Cloud allows systems and services to be easily
accessible to the general public. Public cloud may be less secure because of
its openness, e.g., e-mail.
PRIVATE CLOUD : The Private Cloud allows systems and services to be
accessible within an organization. It offers increased security because of its
private nature.
COMMUNITY CLOUD : The Community Cloud allows systems and services to be
accessible to a group of organizations.
HYBRID CLOUD : The Hybrid Cloud is a mixture of public and private clouds:
critical activities are performed using the private cloud, while
non-critical activities are performed using the public cloud.
9. Service Models
Service models are the reference models on which cloud computing is
based. These can be categorized into three basic service models as
listed below:
1. Infrastructure as a Service (IaaS)
2. Platform as a Service (PaaS)
3. Software as a Service (SaaS)
11. Service Models - IaaS
Infrastructure as a Service (IaaS) is the delivery of technology
infrastructure as an on-demand, scalable service.
IaaS provides access to fundamental resources such as physical machines,
virtual machines, virtual storage, etc.
Usually billed based on usage
Usually a multi-tenant virtualized environment
Can be coupled with Managed Services for OS and application support
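As a hedged illustration of IaaS-style on-demand access, the sketch below requests a new virtual machine over HTTP. The endpoint, token, and JSON fields (image, cpus, memoryGb) are hypothetical, invented for illustration; every real provider's API differs.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ProvisionVm {
    public static void main(String[] args) throws Exception {
        // Hypothetical IaaS endpoint and JSON body: real providers differ.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://iaas.example.com/v1/instances"))
                .header("Authorization", "Bearer <api-token>")
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"image\":\"ubuntu-22.04\",\"cpus\":2,\"memoryGb\":4}"))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // Usage-based billing would start once the instance is running.
        System.out.println(response.statusCode() + " " + response.body());
    }
}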
13. Service Models - PaaS
Platform as a Service (PaaS) provides the runtime environment for
applications, along with development and deployment
tools, etc. PaaS provides all of the facilities required to
support the complete life cycle of building and delivering
web applications and services entirely from the Internet.
Typically, applications must be developed with a particular
platform in mind.
•Multi-tenant environments
•Highly scalable multi-tier architecture
15. Service Models - SaaS
The Software as a Service (SaaS) model allows end users to use
software applications as a service. SaaS is a
software delivery methodology that provides licensed
multi-tenant access to software and its functions remotely
as a Web-based service.
Usually billed based on usage
Usually a multi-tenant environment
Highly scalable architecture
17. Virtualization
Virtual workspaces:
An abstraction of an execution environment that can be made dynamically
available to authorized clients by using well-defined protocols,
Resource quota (e.g. CPU, memory share),
Software configuration (e.g. O/S, provided services).
Implemented on Virtual Machines (VMs):
Abstraction of a physical host machine,
The hypervisor intercepts and emulates instructions from VMs, and allows
management of VMs; examples include VMware, Xen, etc.
Provide infrastructure API:
Plug-ins to hardware/support structures
18. Virtualization
Virtualization in General
Advantages of virtual machines:
Run operating systems where the physical hardware is unavailable,
Easier to create new machines, backup machines, etc.,
Software testing using "clean" installs of operating systems and software,
Emulate more machines than are physically available,
Timeshare lightly loaded systems on one host,
Debug problems (suspend and resume the problem machine),
Easy migration of virtual machines (shutdown needed or not).
Run legacy systems!
19. What is the Purpose and Benefits ?
Cloud computing enables companies and applications that depend on
system infrastructure to become infrastructure-less.
By using cloud infrastructure on a "pay as used and on demand" basis, all
of us can save on capital and operational investment!
Clients can:
Put their data on the platform instead of on their own desktop PCs
and/or on their own servers.
They can put their applications on the cloud and use the servers
within the cloud to do processing and data manipulations etc.
20. Cloud - Sourcing
Why is it becoming a Big Deal:
Using high-scale/low-cost providers,
Any time/place access via web browser,
Rapid scalability; incremental cost and load sharing,
Can forget the need to focus on local IT.
Concerns:
Performance, Reliability, and SLAs,
Control of data, and service parameters,
Application features and choices,
Interaction between Cloud providers,
No standard API - mix of SOAP and REST!
Privacy, security, compliance, trust
21. The use of the cloud provides a number of opportunities:
It enables services to be used without any understanding of their
infrastructure.
Cloud computing works using economies of scale:
It potentially lowers the outlay expense for start-up companies, as they
would no longer need to buy their own software or servers.
Costs would follow on-demand pricing.
Vendors and service providers recover costs by establishing an ongoing
revenue stream.
Data and services are stored remotely but accessible from
"anywhere".
22. In parallel there has been backlash against cloud computing:
Use of cloud computing means dependence on others and that
could possibly limit flexibility and innovation:
These others are likely to be the bigger Internet companies,
like Google and IBM, who may monopolize the market.
Some argue that this use of supercomputers is a return to the
time of mainframe computing that the PC was a reaction
against.
Security could prove to be a big issue:
It is still unclear how safe outsourced data is, and when using
these services the ownership of data is not always clear.
23. There are also issues relating to policy and access:
If your data is stored abroad whose policy do you adhere to?
What happens if the remote server goes down?
How will you then access files?
There have been cases of users being locked out of accounts
and losing access to data.
24. Cost Savings - Companies can reduce their capital expenditures and use
operational expenditures for increasing their computing capabilities. This is a
lower barrier to entry and also requires fewer in-house IT resources to provide
system support.
Scalability/Flexibility — Companies can start with a small deployment and
grow to a large deployment fairly rapidly, and then scale back if necessary.
Also, the flexibility of cloud computing allows companies to use extra resources
at peak times, enabling them to satisfy consumer demands.
Reliability — Services using multiple redundant sites can support business
continuity and disaster recovery.
Maintenance — Cloud service providers do the system maintenance, and
access is through APIs that do not require application installations onto PCs,
thus further reducing maintenance requirements.
Mobile Accessible — Mobile workers have increased productivity due to
systems accessible via an infrastructure available from anywhere.
25. Requires a constant Internet connection:
Cloud computing is impossible if you cannot connect to the
Internet.
Since you use the Internet to connect to both your
applications and documents, if you do not have an Internet
connection you cannot access anything, even your own
documents.
A dead Internet connection means no work, and in areas
where Internet connections are few or inherently unreliable,
this could be a deal-breaker.
26. Stored data might not be secure:
With cloud computing, all your data is stored on the cloud.
The question is: how secure is the cloud?
Can unauthorized users gain access to your confidential
data?
Stored data can be lost:
Theoretically, data stored in the cloud is safe, replicated
across multiple machines.
But on the off chance that your data goes missing, you
have no physical or local backup.
Put simply, relying on the cloud puts you at risk if the
cloud lets you down.
27. Many of the activities loosely grouped together under cloud
computing have already been happening, and centralized
computing activity is not a new phenomenon.
Grid Computing was the last research-led centralized
approach.
However, there are concerns that the mainstream adoption
of cloud computing could cause many problems for users.
Many new open-source systems are appearing that you can
install and run on your local cluster;
you should be able to run a variety of applications on these
systems.
28. Definition Of Cloud
The term cloud has been used historically as a
metaphor for the Internet. This usage was originally
derived from its common depiction in network
diagrams as an outline of a cloud, used to represent
the transport of data across carrier backbones
(which owned the cloud) to an endpoint location on
the other side of the cloud.
29. The Emergence of Cloud Computing
Utility computing can be defined as the provision of
computational and storage resources as a metered service, similar
to those provided by a traditional public utility company. This, of
course, is not a new idea. This form of computing is growing in
popularity, however, as companies have begun to extend the
model to a cloud computing paradigm providing virtual servers
that IT departments and users can access on demand.
30. The Global Nature of the Cloud
The cloud sees no borders and thus has made the world a much
smaller place. The Internet is global in scope but respects only
established communication paths. People from everywhere now
have access to other people from anywhere else. Globalization of
computing assets may be the biggest contribution the cloud has
made to date. For this reason, the cloud is the subject of many
complex geopolitical issues.
31. Grid Computing
or
Cloud Computing?
Grid computing is often confused with cloud computing. Grid
computing is a form of distributed computing that implements
a virtual supercomputer made up of a cluster of networked or
Internetworked computers acting in unison to perform very
large tasks. Many cloud computing deployments today are
powered by grid computing implementations and are billed
like utilities, but cloud computing can and should be seen as an
evolved next step away from the grid utility model.
32. Is the Cloud Model Reliable?
The majority of today’s cloud computing infrastructure consists of time-tested and
highly reliable services built on servers with varying levels of virtualized technologies,
which are delivered via large data centers operating under service-level agreements that
require 99.99% or better uptime. Commercial offerings have evolved to meet the
quality-of-service requirements of customers and typically offer such service-level
agreements to their customers.
33. What About Legal Issues When Using
Cloud Models?
1. Notify individuals about the purposes for which information is
collected and used.
2. Give individuals the choice of whether their information can be disclosed to a third
party.
3. Ensure that if it transfers personal information to a third party,
that third party also provides the same level of privacy protection.
4. Allow individuals access to their personal information.
5. Take reasonable security precautions to protect collected data
from loss, misuse, or disclosure.
6. Take reasonable steps to ensure the integrity of the data collected.
7. Have in place an adequate enforcement mechanism.
34. What Are the Key Characteristics of
Cloud Computing?
Centralization of infrastructure and lower costs
Increased peak-load capacity
Efficiency improvements for systems that are often underutilized
Dynamic allocation of CPU, storage, and network bandwidth
Consistent performance that is monitored by the provider of the service
35. The Evolution of Cloud
Computing
It is important to understand the evolution of computing in order
to get an appreciation of how we got into the cloud environment.
Looking at the evolution of the computing hardware itself, from
the first generation to the current (fourth) generation of
computers, shows how we got from there to here. The hardware,
however, was only part of the evolutionary process. As hardware
evolved, so did software. As networking evolved, so did the rules
for how computers communicate. The development of such rules,
or protocols, also helped drive the evolution of Internet software.
36. Hardware Evolution –
First-Generation Computers
The Harvard Mark I computer.
(Image from www.columbia.edu/acis/history/mark1.html, retrieved 9 Jan 2009.)
40. Hardware Evolution –
Fourth-Generation Computers
The fourth-generation computers that were being developed at this time utilized a
microprocessor that put the computer’s processing capabilities on a single
integrated circuit chip. By incorporating random access memory (RAM), developed
by Intel, fourth-generation computers were faster than ever before and had much
smaller footprints.
42. Internet Software Evolution
The SAGE system. (Image from USAF Archives, retrieved from http://
history.sandiego.edu/GEN/recording/images5/PDRM0380.jpg.)
43. Internet Software Evolution
An Interface Message Processor. (Image from luni.net/wp-content/
uploads/2007/02/bbn-imp.jpg, retrieved 9 Jan 2009.)
45. Internet Software Evolution - Establishing a Common
Protocol for the Internet
Since the lower-level protocol layers were provided by the IMP host interface, the
NCP essentially provided a transport layer consisting of the ARPANET Host-to-
Host Protocol (AHHP) and the Initial Connection Protocol (ICP). The AHHP
specified how to transmit a unidirectional, flow-controlled data stream between two
hosts.
46. Internet Software Evolution - Evolution of IPv6
The amazing growth of the Internet throughout the 1990s caused a vast reduction
in the number of free IP addresses available under IPv4. IPv4 was never designed to
scale to global levels. To increase available address space, it had to process data
packets that were larger (i.e., that contained more bits of data). This resulted in a
longer IP address and that caused problems for existing hardware and software.
47. Internet Software Evolution - Building a Common
Interface to the Internet
While Marc Andreessen and the NCSA team were working on
their browsers, Robert Cailliau at CERN independently
proposed a project to develop a hypertext system. He joined
forces with Berners-Lee to get the web initiative into high
gear. Cailliau rewrote his original proposal and lobbied CERN
management for funding for programmers. He and Berners-Lee
worked on papers and presentations in collaboration, and
Cailliau helped run the very first WWW conference.
48. Internet Software Evolution - Building a Common
Interface to the Internet
The first web browser, created by Tim Berners-Lee. (Image from
www.tranquileye.com/cyber/index.html, retrieved 9 Jan 2009.)
49. Internet Software Evolution - Building a Common
Interface to the Internet
The original NCSA Mosaic browser. (Image from http://www.nsf.gov/od/lpa/news/03/images/mosaic.6beta.jpg.)
50. Server Virtualization
Virtualization is a method of running multiple independent virtual operating
systems on a single physical computer. This approach maximizes the return on
investment for the computer. The term was coined in the 1960s in reference to a
virtual machine (sometimes called a pseudo-machine). The creation and
management of virtual machines has often been called platform virtualization.
52. Underlying Principles of Parallel
and
Distributed Computing
The terms parallel computing and distributed computing are often used
interchangeably, even though they mean slightly different things. The term parallel
implies a tightly coupled system, whereas distributed refers to a wider class of
systems, including those that are tightly coupled.
54. Underlying Principles of Parallel
and
Distributed Computing
More precisely, the term parallel computing refers to a model in which the
computation is divided among several processors sharing the same memory. The
architecture of a parallel computing system is often characterized by the
homogeneity of components: each processor is of the same type and it has the same
capability as the others. The shared memory has a single address space, which is
accessible to all the processors. Parallel programs are then broken down into several
units of execution that can be allocated to different processors and can
communicate with each other by means of the shared memory.
55. Elements of parallel computing
The first steps in this direction led to the development of parallel computing, which
encompasses techniques, architectures, and systems for performing multiple
activities in parallel. As we already discussed, the term parallel computing has
blurred its edges with the term distributed computing
56. What is parallel processing?
Processing of multiple tasks simultaneously on multiple processors is called parallel
processing. A parallel program consists of multiple active processes (tasks)
simultaneously solving a given problem. A given task is divided into multiple
subtasks using a divide-and-conquer technique, and each subtask is processed on a
different central processing unit (CPU). Programming on a multiprocessor system
using the divide-and-conquer technique is called parallel programming.
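A minimal sketch of this divide-and-conquer style in Java: the task below recursively splits an array-summing problem into subtasks until they are small enough to compute directly, and the fork/join pool runs the subtasks on the available CPUs.

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 10_000;
    private final long[] data;
    private final int from, to;

    SumTask(long[] data, int from, int to) {
        this.data = data; this.from = from; this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {           // small enough: solve directly
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) / 2;              // divide into two subtasks
        SumTask left = new SumTask(data, from, mid);
        SumTask right = new SumTask(data, mid, to);
        left.fork();                            // run the left half on another CPU
        return right.compute() + left.join();   // conquer: combine partial sums
    }
}

public class ParallelSum {
    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        java.util.Arrays.fill(data, 1L);
        long sum = ForkJoinPool.commonPool()
                .invoke(new SumTask(data, 0, data.length));
        System.out.println(sum); // 1000000
    }
}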
57. What is parallel processing?
The development of parallel processing is being influenced by many factors. The
prominent among them include the following:
Computational requirements are ever increasing in the areas of both scientific and
business computing.
Sequential architectures are reaching physical limitations as they are constrained by the
speed of light and the laws of thermodynamics.
Hardware improvements in pipelining, superscalar, and the like are nonscalable and
require sophisticated compiler technology.
Vector processing works well for certain kinds of problems. It is suitable mostly for
scientific problems (involving lots of matrix operations) and graphical processing.
The technology of parallel processing is mature and can be exploited commercially; there
is already significant R&D work on development tools and environments.
Significant development in networking technology is paving the way
for heterogeneous computing.
58. Hardware architectures for parallel
processing
The core elements of parallel processing are CPUs. Based on
the number of instruction and data streams that can be
processed simultaneously, computing systems are classified
into the following four categories:
• Single-instruction, single-data (SISD) systems
• Single-instruction, multiple-data (SIMD) systems
• Multiple-instruction, single-data (MISD) systems
• Multiple-instruction, multiple-data (MIMD) systems
59. Single-instruction, single-data (SISD)
systems
An SISD computing system is a uniprocessor machine capable
of executing a single instruction, which operates on a single
data stream. In SISD, machine instructions are processed
sequentially; hence computers adopting this model are
popularly called sequential computers. Most conventional
computers are built using the SISD model.
Single-instruction, single-data (SISD) architecture.
60. Single-instruction, multiple-data (SIMD)
systems
An SIMD computing system is a multiprocessor machine capable of
executing the same instruction on all the CPUs but operating on different
data streams.
Single-instruction, multiple-data (SIMD) architecture.
61. Multiple-instruction, single-data (MISD)
systems
An MISD computing system is a multiprocessor machine capable of
executing different instructions on different processing elements (PEs),
but with all of them operating on the same data set.
Multiple-instruction, Single-data (MISD) architecture.
62. Multiple-instruction, multiple-data (MIMD)
systems
An MIMD computing system is a multiprocessor machine capable of
executing multiple instructions on multiple data sets. Each PE in the MIMD
model has separate instruction and data streams; hence machines built
using this model are well suited to any kind of application. Unlike SIMD
and MISD machines, PEs in MIMD machines work asynchronously.
Multiple-instruction, Multiple-data (MIMD) architecture.
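A toy MIMD-flavored sketch in Java, assuming threads stand in for PEs: two threads execute different instruction streams on different data, asynchronously, as the MIMD model describes.

public class MimdSketch {
    public static void main(String[] args) throws InterruptedException {
        // Two "PEs" with separate instruction and data streams, running
        // asynchronously: one sums an array, the other computes a square root.
        int[] a = {1, 2, 3, 4};
        Thread t1 = new Thread(() -> {
            long sum = 0;
            for (int x : a) sum += x;
            System.out.println("sum = " + sum);
        });
        Thread t2 = new Thread(() -> {
            System.out.println("sqrt = " + Math.sqrt(12345.0));
        });
        t1.start(); t2.start();
        t1.join(); t2.join();
    }
}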
63. Shared memory MIMD machines
In the shared memory MIMD model, all the PEs are connected to a single
global memory and they all have access to it. Systems based on this model
are also called tightly coupled multiprocessor systems.
Shared (left) and distributed (right) memory MIMD architecture.
64. Approaches to parallel programming
A sequential program is one that runs on a single processor and has a single line
of control. To make many processors collectively work on a single program, the
program must be divided into smaller independent chunks so that each
processor can work on separate chunks of the problem.
A wide variety of parallel programming approaches are available. The most
prominent among them are the following:
• Data parallelism
• Process parallelism
• Farmer-and-worker model
These three models are all suitable for task-level parallelism. In the case of data
parallelism, the divide-and-conquer technique is used to split data into multiple
sets, and each data set is processed on different PEs using the same instruction.
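In Java, for example, data parallelism can be sketched with a parallel stream: the runtime splits the range into chunks and applies the same reduction to each chunk on a different worker thread, matching the "same instruction on different data sets" pattern above.

import java.util.stream.LongStream;

public class DataParallelSum {
    public static void main(String[] args) {
        // The range is partitioned into chunks; each chunk is summed by the
        // same operation on a different processing element, then combined.
        long sum = LongStream.rangeClosed(1, 1_000_000).parallel().sum();
        System.out.println(sum); // 500000500000
    }
}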
65. Levels of parallelism
Levels of parallelism are decided based on the lumps of code (grain size)
that are potential candidates for parallelism. These approaches have a
common goal: to boost processor efficiency by hiding latency.
66. Elements of Distributed computing
We extend these concepts and explore how multiple activities can be performed by
leveraging systems composed of multiple heterogeneous machines and systems.
67. General concepts and definitions
of Distributed Computing
A distributed system is a collection of independent computers that appears to its
users as a single coherent system.
A distributed system is one in which components located at networked computers
communicate and coordinate their actions only by passing messages.
As specified in this definition, the components of a distributed system communicate
with some sort of message passing. This is a term that encompasses several
communication models.
68. Components of a distributed system
A distributed system is the result of the interaction of several components that
traverse the entire computing stack from hardware to software. It emerges from the
collaboration of several elements that—by working together—give users the
illusion of a single coherent system
69. Components of a distributed system
A layered view of a distributed system.
70. Components of a distributed system
A cloud computing distributed system.
71. Architectural styles for distributed
computing
Architectural styles are mainly used to determine the vocabulary of components and
connectors that are used as instances of the style together with a set of constraints
on how they can be combined.
We organize the architectural styles into two major classes:
• Software architectural styles
• System architectural styles
73. Cloud Characteristics
Five essential characteristics of Cloud Computing
1. On demand self-service
2. Broad network access
3. Resource pooling
4. Rapid Elasticity
5. Measured service
74. On demand self-service
Computing services such as email, applications,
networks, or servers can be provided without
requiring human interaction with each service provider.
Self-service means that the consumer performs all
the actions needed to acquire the service himself,
instead of going through an IT department. For
example, the consumer's request is
automatically processed by the cloud
infrastructure, without human intervention on the
provider's side.
75. Broad Network Access
Cloud capabilities are available over the network
and accessed through standard mechanisms that
promote use by heterogeneous clients such as
mobile phones and laptops.
76. Resource pooling
– The provider's computing resources are pooled together to serve
multiple customers, with different physical and virtual resources
dynamically assigned and reassigned according to customer
demand.
– There is a sense of location independence in that the customer
generally has no control or knowledge over the exact location of
the provided resources, but may be able to specify location at a
higher level of abstraction (e.g. country, state, or datacenter).
– Examples of resources include storage, processing, memory, and
network bandwidth.
77. Rapid elasticity
– Capabilities can be elastically provisioned and released, in some
cases automatically, to scale rapidly outward and inward
commensurate with demand.
– To the consumer, the capabilities available for provisioning often
appear to be unlimited and can be appropriated in any quantity at
any time.
78. Measured service
– Cloud systems automatically control and optimize resource use
by leveraging a metering capability at some level of abstraction
appropriate to the type of service (e.g. storage, processing,
bandwidth, and active user accounts).
– Resource usage can be monitored, controlled, and reported,
providing transparency for both the provider and consumer of the
utilized service.
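A rough sketch of the metering idea (the UsageMeter class and its names are illustrative, not any provider's API): the platform records units consumed per tenant so usage can be monitored, reported, and billed.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Minimal usage meter: counts units consumed (e.g. GB-hours) per tenant.
public class UsageMeter {
    private final Map<String, LongAdder> usage = new ConcurrentHashMap<>();

    public void record(String tenant, long units) {
        usage.computeIfAbsent(tenant, t -> new LongAdder()).add(units);
    }

    public long report(String tenant) {
        LongAdder a = usage.get(tenant);
        return a == null ? 0 : a.sum();
    }

    public static void main(String[] args) {
        UsageMeter meter = new UsageMeter();
        meter.record("tenant-a", 12);   // e.g. 12 GB-hours of storage
        meter.record("tenant-a", 5);
        System.out.println(meter.report("tenant-a")); // 17
    }
}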
79. Multi-tenancy
In a private cloud, the customers, also called tenants, can be
different business divisions inside the same company. In a public
cloud, the customers are often entirely different organizations.
Most public cloud providers use the multi-tenancy model.
Multi-tenancy allows many customers to share one server instance,
which is less expensive and makes it easier to deploy updates to a
large number of customers.
80. Elasticity in Cloud
Elastic computing is the concept in cloud computing whereby
computing resources can be scaled up and down easily by
the cloud service provider. The cloud service provider
provisions flexible computing power when and wherever
required. The elasticity of these resources depends upon
factors such as processing power, storage, bandwidth,
etc.
82. Types of Elastic Cloud
Computing
While scalability can rely on elasticity, it can also be
achieved with over-provisioning.
There are two types of scalability:
83. Types of Elastic Cloud
Computing
The first option is Scale Vertically or Scale-Up –
this type of scalability can work with any application
to a limited degree. In an elastic environment,
scaling up would be accomplished by moving the
application to a bigger virtual machine or by resizing
the VM.
84. Types of Elastic Cloud
Computing
The second option is Scale Horizontally or Scale-out,
by provisioning more instances of the
application tiers on additional virtual machines and
then dividing the load between them.
85. Types of Elastic Cloud
Computing
Horizontal scaling is similar to elasticity; it allows
the re-division of resources between applications by
provisioning, or by claiming back virtual machines.
Horizontal scaling uses the infrastructure elasticity,
but the application needs to be able to scale by
adding more nodes and by distributing the load.
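A schematic autoscaling loop for horizontal scaling, assuming a hypothetical provider SDK (the CloudApi interface and the thresholds below are invented for illustration): instances are provisioned when average load is high and claimed back when it is low.

// Hypothetical control loop for scale-out/scale-in; CloudApi is a stand-in
// for whatever SDK the provider actually offers.
interface CloudApi {
    int instanceCount();
    double avgCpuUtilization();   // 0.0 .. 1.0 across all instances
    void addInstance();
    void removeInstance();
}

public class AutoScaler {
    private static final double HIGH = 0.80, LOW = 0.30;
    private static final int MIN = 2, MAX = 10;

    // Called periodically, e.g. once per minute.
    static void reconcile(CloudApi cloud) {
        double cpu = cloud.avgCpuUtilization();
        int n = cloud.instanceCount();
        if (cpu > HIGH && n < MAX) {
            cloud.addInstance();      // scale out: provision one more VM
        } else if (cpu < LOW && n > MIN) {
            cloud.removeInstance();   // scale in: claim back a VM
        }                             // otherwise leave capacity as-is
    }
}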
86. Is there any difference
between Scalability and
Elasticity?
The answer is yes. Scalability refers to the ability of a
system to accommodate larger loads just by adding
resources, either making the hardware stronger (scale up)
or adding additional nodes (scale out).
87. Benefits/Pros of Elastic Cloud
Computing
Elastic Cloud Computing has numerous advantages.
Some of them are as follows:
1. Cost Efficiency: The cloud is available at much cheaper rates
than traditional approaches and can significantly lower
overall IT expenses. By using cloud solutions, companies can
save on licensing fees as well as eliminate overhead charges
such as the cost of data storage, software updates,
management, etc.
88. Benefits/Pros of Elastic Cloud
Computing
2. Convenience and continuous availability: The cloud makes
it easier to access shared documents and files, with the choice
to view or modify them. Public clouds also offer services that are
available wherever the end user might be located. Moreover,
it guarantees continuous availability of resources; in
case of system failure, alternative instances are automatically
spawned on other machines.
89. Benefits/Pros of Elastic Cloud
Computing
3. Backup and Recovery: The process of backing up and
recovering data is simplified, as information resides in the
cloud and not on a physical device. The various cloud
providers offer reliable and flexible backup/recovery
solutions.
90. Benefits/Pros of Elastic Cloud
Computing
4. Cloud is environmentally friendly: The cloud is more
efficient than the typical IT infrastructure and it takes fewer
resources to compute, thus saving energy.
91. Benefits/Pros of Elastic Cloud
Computing
5. Scalability and Performance: Scalability is a built-in
feature of cloud deployments. Cloud instances are deployed
automatically only when needed, and as a result enhance
performance with excellent computation speed.
92. Benefits/Pros of Elastic Cloud
Computing
6. Increased Storage Capacity: The cloud can accommodate
and store much more data compared to a personal computer,
and in a way offers almost unlimited storage capacity.
93. Elasticity in Cloud Computing
Cloud computing, or the cloud, is defined as the use of various
services, such as software development platforms, servers, and
storage, over the Internet.
Elastic computing is the concept in cloud
computing whereby computing resources can be scaled up
and down easily by the cloud service provider. The cloud service
provider provisions flexible computing power
when and wherever required. The elasticity of these
resources depends upon factors such as
processing power, storage, bandwidth, etc.
94. Elasticity in Cloud Computing
Schematic example of an (unrealistically) ideal elastic system with immediate and
fully compensating elasticity:
95. Elasticity in Cloud Computing
However, in reality, resources are actually measured
and provisioned in larger discrete units (i.e. one processor
core, processor time slices, one page of main memory, etc.),
so a continuous idealistic scaling/elasticity cannot be
achieved. On an elastic cloud platform, the performance
metric (here: response time) will rise as workload intensity
increases until a certain threshold is reached at which the
cloud platform will provide additional resources.
97. Elasticity in Cloud Computing
Definition
•Changes in resource demands or explicit scaling requests trigger run time
adaptations of the amount of resources that an execution platform provides to
applications.
•The magnitude of these changes depends on the current and previous state of
the execution platform, and also on the current and previous behavior of the
applications running on that platform.
•Consequently, elasticity is a multi-valued metric that depends on several run
time factors. This is reflected by the following definitions, which are illustrated
by Fig 13.
•Elasticity of execution platforms consists of the temporal and quantitative
properties of runtime resource provisioning and un-provisioning, performed
by the execution platform; execution platform elasticity depends on the state of
the platform and on the state of the platform-hosted applications.
98. Elasticity in Cloud Computing
Reconfiguration point is a time point at which a platform adaptation (resource
provisioning or un-provisioning) is processed by the system.
Elasticity Metrics
There are several characteristics of resource elasticity, which are parameterised by
the platform state/history, application state/history and workload state/history:
Effect of reconfiguration is quantified by the amount of added/removed resources
and thus expresses the granularity of possible reconfigurations/adaptations.
Temporal distribution of reconfiguration points describes the density of
reconfiguration points over a possible interval of a resource’s usage amounts or
over a time interval in relation to the density of changes in workload intensity.
Provisioning time or reaction time is the time interval between the instant when a
reconfiguration has been triggered/requested until the adaptation has been
completed. An example for provisioning time would be the time between the
request for an additional thread and the instant of actually holding it.
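Sticking with that thread example, reaction time can be approximated from the client side as sketched below (a rough sketch only; as noted later, precise provisioning times require platform-side logs): measure the delay between submitting work and the instant a worker thread actually starts it.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ProvisioningTime {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newCachedThreadPool();
        CountDownLatch started = new CountDownLatch(1);

        long t0 = System.nanoTime();        // trigger: request an additional thread
        pool.submit(started::countDown);    // the task marks the instant it runs
        started.await();                    // block until a worker holds the task
        long micros = (System.nanoTime() - t0) / 1_000;

        System.out.println("approximate provisioning time: " + micros + " us");
        pool.shutdown();
    }
}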
100. Elasticity in Cloud Computing
Direct and Indirect Measuring of Elasticity Metrics
Directly on the execution platform
In general, the effects of scalability are visible to the user/client
via changing response times or throughput values at a certain
scaling level of the system. On the other hand, the elasticity,
namely the resource resizing actions, may not be directly visible
to the client due to their brevity or due to the client's limited access
to an execution platform's state and configuration.
101. Elasticity in Cloud Computing
Direct and Indirect Measuring of Elasticity Metrics
Directly on the execution platform
Independent workload element
For elasticity measurements on any elastic system, it is necessary to fill
the system with a variable intensity of workloads. The workload itself
consists of small independent workload elements that are supposed to
run concurrently and designed to stress mainly one specific resource
type (like Fibonacci calculation for CPU or an array sort for memory).
102. Elasticity in Cloud Computing
Direct and Indirect Measuring of Elasticity Metrics
Directly on the execution platform
Independent workload element
“Independent workload element” means in this case that there is no
interdependency between the workload elements that would require
communication or synchronisation and therefore induce overheads. It is
necessary to stress mainly the “resource under test”, to avoid bottlenecks
elsewhere in the system.
103. Elasticity in Cloud Computing
Direct and Indirect Measuring of Elasticity Metrics
Directly on the execution platform
Independent workload element
The concepts of resource elasticity are validated in the following
example using Java thread pools as virtual resources provided by a Java
Virtual Machine. Java thread pools are designed to grow and shrink
dynamically in size, while still trying to reuse idle resources.
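A minimal version of that setup: a ThreadPoolExecutor that grows from a small core up to a maximum under load and un-provisions idle threads afterwards, stressed with the CPU-bound "independent workload element" (a Fibonacci calculation) described earlier. Pool sizes and timings are arbitrary example values.

import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ElasticThreadPool {
    // CPU-bound independent workload element: no communication, no shared state.
    static long fib(int n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }

    public static void main(String[] args) throws Exception {
        // Grows from 2 core threads up to 8 under load; idle threads above
        // the core size are un-provisioned after 2 seconds.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 8, 2, TimeUnit.SECONDS, new SynchronousQueue<>());

        for (int i = 0; i < 8; i++) pool.submit(() -> fib(35));
        System.out.println("pool size under load: " + pool.getPoolSize());

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}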
104. Elasticity in Cloud Computing
Direct and Indirect Measuring of Elasticity Metrics
Directly on the execution platform
Independent workload element
In general, one can differentiate between the following orders of values,
which can be applied to the Java thread pool example.
105. Elasticity in Cloud Computing
Two different Approaches for Measuring Provisioning Times
106. Elasticity in Cloud Computing
If we measure the elasticity of a virtual resource that shares
physical resources with other tasks, no precise view of the
correlation between cause and effect can be obtained
when trying to interpret the characteristics of
waiting and processing times. In this case, observing
response times does not allow direct and exact
extraction of elasticity metrics.
107. Elasticity in Cloud Computing
The provisioning times of an elastic system, which were
defined as the delay between a trigger event and the
visibility of a resource reallocation, cannot be measured
directly without having access to the system's internal log
files and setting them in relation to measured
resource amounts.
108. Elasticity in Cloud Computing
If a new workload task is added and cannot be served
directly by the system, we assume that a trigger event
for resizing is created within the execution platform.
Not every trigger event results in a resource
reallocation. In addition, the measurement log files on the
execution platform side must be enabled to keep track of
any changes in resource amounts.
109. On-demand Provisioning.
It simply requires these two things to be true:
•The service must be always available (or some
reasonable approximation of always)
•The service received must be modifiable by the client
organization without contacting the hosting provider.
It is the second that is typically the most difficult to
meet.
110. On-demand Provisioning.
While the public providers like Amazon, Google, and
Microsoft have this facet, smaller niche providers
typically do not. This is more likely when the provider
is also supplying services or managed hosting for the
application itself, especially in an Infrastructure as
a Service (IaaS)-type scenario. In enterprise
application hosting scenarios, there are also potential
contractual issues to consider when decreasing (or
even possibly increasing) capacity without interacting
with the vendor.
111. On-demand Provisioning.
It is important to determine these issues before making
changes to your own environment. If your
service/uptime SLAs require a certain level of hardware
support, remember to ensure that you do not
compromise them by unduly changing the capacity
available to your applications.
113. Objectives
• Show the benefits of the separation of resource
provisioning from job execution management for HPC, cluster
and grid computing
• Introduce OpenNEbula as the Engine for on-demand resource
provisioning
• Present Cloud Computing as a paradigm for the on-demand
provision of virtualized resources as a service
• Describe Grid as the interoperability technology for the
federation of clouds
114. Contents
1. Local On-demand Resource Provisioning
1.1. The Engine for the Virtual Infrastructure
1.2. Virtualization of Cluster and HPC Systems
1.3. Benefits
1.4. Related Work
2. Remote On-demand Resource Provisioning
2.1. Access to Cloud Systems
2.2. Federation of Cloud Systems
3. Conclusions
115. 1. Local On-demand Resource Provisioning
1.1. The Engine for the Virtual Infrastructure
• OpenNEbula creates a distributed virtualization layer
• Extend the benefits of VM Monitors from one to multiple resources
• Decouple the VM (service) from the physical location
• Transform a distributed physical infrastructure into a flexible and
elastic virtual infrastructure, which adapts to the changing demands
of the VM (service) workloads
• Any service, not only cluster working nodes
116. 1. Local on-Demand Resource
Provisioning
1.2. Virtualization of Cluster and HPC Systems
Separation of Resource Provisioning from Job Management
• New virtualization layer between the service and the infrastructure layers
• Seamless integration with the existing middleware stacks.
• Completely transparent to the computing service and thus to end users
122. 1. Local on-Demand Resource
Provisioning
1.3. Benefits
Benefits for Existing Grid Infrastructures (EGEE, TeraGrid…)
• The virtualization of the local infrastructure supports a virtualized
alternative to contribute resources to a Grid infrastructure
• Simpler deployment and operation of new middleware distributions
• Lower operational costs
• Easy provision of resources to more than one infrastructure or VO
• Easy support for VO-specific worker nodes
• Performance partitioning between local and grid clusters
123. 1. Local on-Demand Resource
Provisioning
1.4. Related Works
Integration of Job Execution Managers with Virtualization
• VMs to Provide pre-Created Software Environments for Jobs
• Extensions of job execution managers to create VMs on a per-job basis so
as to provide a pre-defined environment for job execution
• Those approaches still manage jobs
• The VMs are bound to a given physical machine (PM) and only exist during job
execution
• Condor, SGE, MOAB, Globus GridWay…
• Job Execution Managers for the Management of VMs
• Job execution managers enhanced to allow submission of VMs
• Those approaches manage VMs as jobs
• Condor, “pilot” backend in Globus VWS…
124. 1. Local on-Demand Resource
Provisioning
1.4. Related Works
Differences between Job and VM Management
• Differences between VMs and Jobs as basic Management Entities
• VM structure: Images with fixed and variable parts for migration…
• VM life-cycle: Fixed and transient states for contextualization, live
migration…
• VM duration: Long time periods (“forever”)
• VM groups (services): Deploy ordering, affinity, rollback
management…
• VM elasticity: Changing of capacity requirements and number of
VMs
• Different Metrics in the Allocation of Physical Resources
• Capacity provisioning: Probability of SLA violation for a given cost
of provisioning including support for server consolidation,
partitioning…
• HPC scheduling: Turnaround time, wait time, throughput…
125. 1. Local on-Demand Resource
Provisioning
1.4. Related Works
Other Tools for VM Management
• VMware DRS, Platform Orchestrator, IBM Director, Novell ZENworks,
Enomalism, Xenoserver…
• Advantages:
• Open-source (Apache license v2.0)
• Open and flexible architecture to integrate new virtualization
technologies
• Support for the definition of any scheduling policy (consolidation,
workload balance, affinity, SLA…)
• LRM-like CLI and API for the integration of third-party tools
126. 2. Remote on-Demand Resource
Provisioning
2.1. Access to Cloud Systems
What is Cloud Computing?
• Provision of virtualized resources as a service
VM Management Interfaces
• Submission
• Control
• Monitoring
INFRASTRUCTURE CLOUD COMPUTING SOLUTIONS
• Commercial Cloud: Amazon EC2
• Scientific Cloud: Nimbus (University of Chicago)
• Open-source Technologies
• Globus VWS (Globus interfaces)
• Eucalyptus (Interfaces compatible with Amazon EC2)
• OpenNEbula (Engine for the Virtual Infrastructure)
127. 2. Remote on-Demand Resource
Provisioning
2.1. Access to Cloud Systems
On-demand Access to Cloud Resources
• Supplement local resources with cloud resources to satisfy peak or
fluctuating demands
128. 2. Remote on-Demand Resource
Provisioning
2.2. Federation of Cloud Systems
Grid and Cloud are Complementary
• Grid interfaces and protocols enable the interoperability between the clouds
or infrastructure providers
• Grid as technology for federation of administrative domains (not as
infrastructure for job computing)
• Grid infrastructures for computing are one of the service use cases that
could run on top of the cloud
129. 3. Conclusions
• Show the benefits of the separation of resource provisioning from
job execution management for HPC, cluster and grid computing
• Introduce OpenNEbula as the Engine for the local Virtual
Infrastructure
• Present Cloud Computing as a paradigm for the on-demand
provision of virtualized resources as a service
• Describe Grid as the interoperability technology for the federation of
clouds
B. RAVIKUMAR AP/CSE – VELAMMAL ENGINEERING COLLEGE