9 expert points of view on the cloud. A selection of articles from the monitoring and curation carried out by Loic Simon for the members of the Club Cloud des Partenaires
Here is some reading for the summer: 9 articles drawn from the cloud monitoring and curation that I carry out almost daily for the members of the Club Cloud des Partenaires.
On the Club Cloud des Partenaires blog you will also find two other ebooks that present my own point of view on the cloud ecosystem and on the value of blogs for your marketing.
Contents
● Devops and the Cloud
● Devops is a Verb
● I’ve virtualized my systems…isn’t that a cloud?
● End-to-End Cloud Offerings for Large Enterprises
● NoOps Is As Legitimate As DevOps
● IBM Research Shows How the Cloud is Driving Business Model
Innovation
● Cloud Computing Goes Far Beyond Virtualization
● Top 5 Things The Cloud Is Not
● CIOs Don't Need to Be Business Leaders
DevOps and the Cloud
June 18, 2012 2:05 pm by Edwin Schouten
One of the hot topics in IT these days is DevOps. But what is it exactly, why would I want it, and if so, how do I get it? This blog post discusses these three questions; it's up to you to decide when to start using it.
What is it?
Are you, as developers, fed up with the
operations group that doesn’t understand your
coding, while taking all the time in the world to
make a mess of implementing your application?
Or are you from operations, frustrated that
you’ve been given a half-developed, half-tested
application that needs to be implemented without
proper documentation? Or are you from the business side, watching the struggle between development and operations while wondering why it takes so long to get this not-too-difficult request implemented? For all of you: DevOps is here!
The name DevOps is derived from a combination of the two words development and operations. DevOps is more than a new development methodology like agile software development; it's about communication and collaboration among all three stakeholders mentioned (development, operations, and business) within an organization. It is mainly targeted at product delivery, quality testing, feature development, and maintenance releases, in order to improve reliability and security and to shorten development and deployment cycles.
To support DevOps, collaborative tools are needed that enable the agile service delivery approach, accelerating application deployment from weeks to minutes.
DevOps has received increased attention over the last year or so, which makes perfect sense for two reasons: application landscapes are becoming increasingly complex, and the time-to-market of new functionality needs to decrease. Organizations need to reduce cost while maintaining a satisfactory level of quality. Is DevOps a solution to this problem?
Why would I want it?
An IBM CIO study of hundreds of companies revealed that a number of organizations
are struggling to just get their software into production consistently. In fact, 50 percent of
deployed applications must be rolled back, with rework accounting for more than 30 percent
of project costs. Ultimately, the driver is to reduce the costs of managing applications while staying agile enough to respond quickly to market demand.
As Werner Vogels, the CTO of Amazon, explains in his 2011 HackFwd presentation video (minutes 3:00 to 5:00), Amazon struggled with exactly the same problem: an immense, unmanageable application landscape. The solution, as he explains it, is that each service (a set of functionalities) is developed and operated by a small team no larger than can be fed with two pizzas. Put even more briefly: "you build it, you run it."
Now I'm not saying that every organization should do exactly the same, but the underlying thought is DevOps: make functionality and maintainability a shared developer/operations responsibility, supported by a focus on inter-team collaboration and communication.
As Ovum, an organization that provides clients with independent and objective analysis,
describes it in its article: “The solution is to provide both teams with a shared objective
that is described in business outcomes. This comes from a governance layer that must
mandate the behaviours. The roadmaps and demos shown at Pulse indicate that IBM
clearly “gets” this and is working to bridge the gap between development and operations at
all levels”.
How do I start using it?
Back in February 2011 an excellent white paper Collaborative DevOps with Rational and
Tivoli was made available. It described the challenges that exist between development
and operations. It also described how integrations between products from IBM Rational
and IBM Tivoli support effective collaboration to achieve improved accuracy, efficiency,
agility, and security in the deployment and monitoring of software systems. The scope of
the paper spans the areas of strategic planning, deployment planning, automation, and the
identification and remediation of production problems.
Sure, you can implement DevOps tools yourself in your own data center, but wouldn't it be great to get this "as a service" from the cloud? No hardware, installation, or licenses to worry about. No long-term investments needed. Just switch it on and start using it. This is now possible using IBM SmartCloud Application Services. The platform as a service web page includes a short video (4:08) that gives a quick but thorough understanding of how DevOps can work for your business as offered from the IBM SmartCloud.
And what's even better: you can register now for the pilot program for IBM SmartCloud Application Services to be able to use DevOps from the cloud! Just navigate to IBM SmartCloud and follow the instructions; it's that simple.
Still hungry for more? Go to the IBM SmartCloud Continuous Delivery web page that holds a
wealth of information about DevOps and the various implementation scenarios.
About Edwin Schouten
Edwin is the Cloud Services Leader for IBM Global Technology Services in the Benelux region (Belgium, Netherlands and Luxembourg) and an IT architect at heart. Edwin has almost 15 years' experience in IT, the last 8 of them in IT architecture, backed up by a Master of Science degree in IT architecture. He is an optimist by nature, analytical but realistic, and has a can-do mentality. He has an ever-growing drive to add business value using IT, which is also where his biggest strength lies: the ability to communicate with both business and IT.
Devops is a Verb
posted on Wednesday, June 20, 2012 4:28 AM
#devops Devops is not something you build, it’s something you do
Operations is increasingly responsible for deploying and managing applications within this
architecture, requiring traditionally developer-oriented skills like integration, programming
and testing as well as greater collaboration to meet business and operational goals for
performance, security, and availability. To maintain the economy of scale necessary to keep
up with the volatility of modern data center environments, operations is adopting modern
development methodologies and practices.
Cloud computing and virtualization have elevated the API as the next-generation
management paradigm across IT, driven by the proliferation of virtualization and pressure
on IT to become more efficient. In response, infrastructure is becoming more programmable,
allowing IT to automate, integrate and manage continuous delivery of applications within the
context of an overarching operational
framework.
The role of infrastructure vendors in devops
is to enable the automation, integration, and
lifecycle management of applications and
infrastructure services through APIs,
programmable interfaces and reusable
services. By embracing the toolsets, APIs,
and methodologies of devops, infrastructure
vendors can enable IT to create repeatable
processes with faster feedback mechanisms
that support the continuous and dynamic delivery cycle required to achieve efficiency and
stability within operations.
DEVOPS IS MORE THAN ORCHESTRATING VM PROVISIONING
Most of the attention paid to devops today is focused on automating the virtual machine
provisioning process. Do you use scripts? Cloned images? Boot scripts or APIs? Open
Source tools?
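Any of those mechanisms can do the job. As a concrete illustration of the "boot script" flavor mentioned above, here is a small sketch that turns instance parameters into a first-boot script; the template and field names are invented for illustration, not drawn from any real provisioning tool.

```python
# Sketch of boot-script-style VM provisioning: instance parameters
# are rendered into the script a hypervisor would run on first boot.
# Template contents and names are invented for illustration.

BOOT_TEMPLATE = """#!/bin/sh
hostnamectl set-hostname {hostname}
# install the packages this server role needs
apt-get install -y {packages}
"""

def render_boot_script(hostname, packages):
    """Produce a deterministic first-boot script from parameters,
    so every instance of a role is provisioned identically."""
    return BOOT_TEMPLATE.format(hostname=hostname,
                                packages=" ".join(packages))

script = render_boot_script("web-01", ["nginx", "monitoring-agent"])
print(script)
```

Because the script is generated rather than hand-written, provisioning becomes a repeatable step instead of a one-off task.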
But devops is more than that, and it's not about which tools you use. You don't suddenly get to claim you're "doing devops" because you use a framework instead of custom scripts, or vice versa. Devops is a broader, iterative agile methodology that enables refinement and
eventually optimization of operational processes. Devops is lifecycle management with the
goal of continuous delivery of applications achieved through the discovery, refinement and
optimization of repeatable processes. Those processes must necessarily extend beyond the
virtual machine. The bulk of time required to deploy an application to the end-user lies not
in provisioning it, but in provisioning it in the context of the entire application delivery chain.
Security, access, web application security, load balancing, acceleration, optimization. These
are the services that comprise an application delivery network, through which the application
is secured, optimized and accelerated. These services must be defined and provisioned as
well. Through the iterative development of the appropriate (read: most optimal) policies to deliver specific applications, devops is able to refine the policies and the process until they are repeatable.
Like enterprise architects, devops practitioners will see patterns emerge from the repetition
that clearly indicate an ability to reuse operational processes and make them repeatable.
Codifying these patterns in some way shortens the overall process. Iterations refine until
the process is optimized and applications can be completely deployed in as short a time
as possible. And like enterprise architects, devops practitioners know that these processes
span the silos that exist in data centers today. From development to security to the network;
the process of deploying an application to the end-user requires components from each
of these concerns and thus devops must figure out how to build bridges between the ivory
towers of the data center. Devops must discern how best to integrate processes from each
concern into a holistic, application-focused operational deployment process.
To achieve this, infrastructure must be programmable; it must present the means by which it can be included in the processes. We know, for example, that there are over 1200 network
attributes spanning multiple concerns that must be configured in the application delivery
network to successfully deploy Microsoft Exchange to ensure it is secure, fast and available.
Codifying that piece of the deployment equation as a repeatable, automated process goes
a long way toward reducing the average time to end-user from 3 months down to something
more acceptable.
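Codifying such a deployment might look like the sketch below: a tiny declarative profile standing in for those hundreds of delivery-network attributes, flattened into an ordered, replayable plan. The attribute names here are invented; a real Exchange deployment involves far more settings.

```python
# Sketch of codifying a (tiny) subset of application delivery network
# attributes as a declarative, repeatable profile. Names are invented.

EXCHANGE_PROFILE = {
    "load_balancing": {"algorithm": "least_connections", "pool_size": 4},
    "security": {"waf_enabled": True, "tls_min_version": "1.2"},
    "acceleration": {"compression": True, "caching": True},
}

def plan(profile):
    """Flatten the profile into an ordered list of configuration
    steps, so the same plan can be replayed for every deployment."""
    steps = []
    for service, attrs in sorted(profile.items()):
        for key, value in sorted(attrs.items()):
            steps.append(f"set {service}.{key} = {value}")
    return steps

for step in plan(EXCHANGE_PROFILE):
    print(step)
```

Once the attributes live in a profile like this, the deployment is a replay of the same plan rather than a three-month manual effort.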
Infrastructure vendors must seek to aid those on their devops journey by not only providing
the APIs and programmable interfaces, but actively building an ecosystem of devops-
focused solutions that can be delivered to devops practitioners. It is not enough to say “here
is an API; go forth and integrate." Devops practitioners are not developers, and while an API in some cases may be exactly what is required, more often than not organizations are
adopting platforms and frameworks through which devops will be executed. Infrastructure
vendors must recognize this reality and cooperatively develop the integrations and the
means to codify repeatable patterns. The collaboration across silos in the data center
is difficult, but necessary. Infrastructure vendors who cross market lines, as it were, to
cooperatively develop integrations that address the technological concerns of collaboration
will make the people and process collaboration responsibility of devops a much less difficult
task.
Devops is not something you build, it’s something you do.
Lori MacVittie is responsible for education and evangelism of application services available
across F5’s entire product suite. Her role includes authorship of technical materials and
participation in a number of community-based forums and industry standards organizations,
among other efforts. MacVittie has extensive programming experience as an application
architect, as well as network and systems development and administration expertise. Prior to
joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing
Magazine, where she conducted product research and evaluation focused on integration
with application and network architectures, and authored articles on a variety of topics
aimed at IT professionals. Her most recent area of focus included SOA-related products and
architectures. She holds a B.S. in Information and Computing Science from the University
of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern
University.
I’ve virtualized my systems…isn’t that
a cloud?
April 17, 2012 2:12 pm by Joe Bohn
When many people think of cloud computing they immediately think of virtualization and
virtual machines in particular. This is completely natural and not at all surprising. After all,
one of the core underlying technologies used in cloud computing is virtualization. However,
it is important not to confuse one element of cloud computing with the entire thing – and this
can sometimes happen. Actually, I don’t really think people literally confuse virtualization
with cloud computing – but I have heard people refer to their collection of virtual images
as their “private cloud.” They are too easily satisfied and view their collection of virtual
machines as being “good enough.” They don’t see how moving to a real cloud – private,
public, or hybrid – could transform their data center. They are greatly mistaken.
First, let’s consider what is typically meant by cloud computing. I think we need an impartial
definition here so let’s look at what has been produced by the National Institute of Standards
and Technology (NIST). It’s their job to define standards and guidelines, including minimum
requirements, for use in federal agencies and nongovernmental organizations. NIST
published this definition of cloud computing in September of 2011. The definition is very
complete and yet also concise (just two real pages of definition). It defines the cloud model's five essential characteristics, three service models, and four deployment models.
Let’s start by looking at the essential characteristics:
● On-demand self-service
● Broad network access
● Resource pooling
● Rapid elasticity
● Measured service
So, for a solution to be called a cloud, it should meet these essential characteristics. From this list, it is clear that merely managing a collection of virtual machines is certainly not a cloud.
You must be able to have resources allocated when they are needed and in a self-service
fashion. You must be able to do this from anywhere with broad network access. You must
provide resource pooling for use by multiple consumers in a multitenant model based upon
consumer demand. Indeed, virtualization certainly comes into play with resource pooling
– but only to the extent that it can be leveraged to easily manage and move workloads.
The consumers should not even be concerned with the physical location of the workloads.
Elasticity is imperative: the solution must be able to scale both outward and inward as demand dictates, optimally in an automatic fashion. This is not easily done when you are just managing virtual machines. Finally, the ability to measure usage of the services is
important and should provide transparency for both the provider and consumer of the utilized
service.
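The rapid-elasticity characteristic discussed above can be sketched as a trivial scaling controller: given the current pool size and an observed load, it decides whether to scale outward or inward. The thresholds are arbitrary illustrative values, not from NIST or any product.

```python
# Sketch of "rapid elasticity": a trivial controller that scales a
# resource pool outward and inward as demand dictates. The target
# and thresholds are arbitrary illustrative numbers.

def desired_instances(current, load_per_instance, target=70):
    """Scale out when average load exceeds the target, scale in
    when load is well below it, otherwise hold steady."""
    if load_per_instance > target:
        return current + 1          # scale outward
    if load_per_instance < target / 2 and current > 1:
        return current - 1          # scale inward, keep at least one
    return current

print(desired_instances(3, 90))  # heavy load -> 4
print(desired_instances(3, 20))  # light load -> 2
print(desired_instances(3, 60))  # steady     -> 3
```

A mere collection of virtual machines has no such control loop; someone has to notice the load and act, which is exactly the gap the article points at.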
The NIST also defines the three primary service models: software as a service (SaaS),
platform as a service (PaaS), and infrastructure as a service (IaaS); and also four primary
deployment models: private cloud, community cloud, public cloud, and hybrid cloud.
So I think this gives us a good working model for cloud computing and the value that it can
bring over only virtual machines. That isn’t to say that virtualization isn’t important. I think it
is very important for cloud computing – bordering on critical. It’s just that virtualization must
be thought of within this broader notion of cloud computing to gain all of the value that this
new paradigm has to offer. We’ve been aware of this for some time at IBM and you can
easily see it in the solutions that we’ve delivered for both private and public clouds. I think
our competitors are just now starting to see this too and are beginning to think beyond only
virtual machines. But there are many differences between our cloud solutions and those
of our competitors. One element is that our competitors are often focused only on their
particular hypervisor (virtualization infrastructure) technologies to facilitate cloud solutions;
IBM, however, gives you a choice.
The next question that I think you should ask is – “what role does virtualization play in the
cloud and what exactly is it that I should virtualize?” You can probably guess that it is more
than only virtual machines. At IBM, we have always been taking the broader view – looking
beyond where we are today so that we are prepared for tomorrow.
So what does it mean to take a broader view of virtualization? At its core, virtualization
is about introducing a level of abstraction between the producer and the consumer of
something. We began this journey by virtualizing the hardware – memory, CPU, storage,
networking, and others. This led to virtual machines combining these building blocks
typically with an operating system.
Let’s not stop there – we can do so much more as we move up the software stack. For
example, IBM Workload Deployer and its predecessor the WebSphere Cloudburst Appliance
have provided what we call Virtual System Patterns for three years now. The motivation
behind this is that – although virtual machines are great – there are very few business
solutions that are only dependent upon a single virtual machine running some software.
We saw a need to create an abstraction of a complete system with multiple federated
machines to support complete application solutions provisioned as a single entity. We
provide deep integration for complete systems built upon standards to support middleware provisioning, leveraging best practices and our years of customer experience.
We also provide utilities in an open, extendable structure to support customization and
integration of third-party solutions. This is all accomplished using a patterns-based
approach with a very simple drag-and-drop interface. Once more, our competitors are just
starting to play catch-up here by introducing similar concepts with graphical user interfaces
that allow you to build topologies of virtual systems. It's amusing to see how similar they look to what we've had for years. I guess imitation really is the most sincere form of flattery.
Once again, we didn’t stop there. We took the abstraction up a level higher and introduced
application virtualization last year in our IBM Workload Deployer private cloud management
solution. By application virtualization I mean providing the capabilities to abstract the
application from the underlying infrastructure such that it can be elastic, highly available, and
provide agility across a pool of application infrastructure resources.
This type of application virtualization is built into our virtual application pattern (hence the
name) – an application-centric way of defining, provisioning, and managing the complete
lifecycle of your application. Features such as elasticity of the application itself and shared
services to support non-functional requirements are delivered in policies using common
metrics such as response time service level agreements (SLA). Requirements that are
common for nearly any application, such as high availability, are “baked” right into the
solution without any definition required. Virtual application patterns support specific types of
applications in a highly integrated solution – integrated both on the front-end user interface,
and on the back-end implementation of the running systems. Management is from an application perspective, not focused on the various middleware components that are necessary to support the application. This is a true platform as a service (PaaS) solution where IBM
Workload Deployer dynamically builds the necessary platform infrastructure to support the
specific needs of the application. I don’t see anything similar in scope and user simplicity
from our competitors – they’ll be playing catch-up yet again.
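The virtual application pattern idea described above can be sketched as a declarative application description with an elasticity policy expressed as a response-time SLA rather than as infrastructure. All field names here are invented for illustration; this is not IBM Workload Deployer's actual format.

```python
# Sketch of an application-centric "pattern": the application and its
# policies are declared, and the platform decides on infrastructure.
# Field names are invented, not any product's real schema.

app_pattern = {
    "name": "orders-web",
    "artifact": "orders.war",
    "policies": {
        "response_time_sla_ms": 250,   # elasticity driven by an SLA
        "high_availability": True,     # "baked in", no extra definition
    },
}

def needs_scaling(pattern, observed_ms):
    """Decide from the SLA policy whether the platform should add
    capacity, instead of an operator watching individual servers."""
    return observed_ms > pattern["policies"]["response_time_sla_ms"]

print(needs_scaling(app_pattern, 400))  # SLA breached -> True
print(needs_scaling(app_pattern, 120))  # within SLA   -> False
```

The design point is that the operator states an outcome (a 250 ms response time) and the platform owns the how, which is what makes this application virtualization rather than machine virtualization.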
So this is what cloud computing is all about. It is about much more than simply virtualization
– it is about transforming the data center. It is about innovation and simplicity. IBM is
including intelligence into the solutions that we provide and integrating the expertise we have
gained over years of experience right into those solutions to simplify IT for our customers.
We are providing our customers with the IT for their business so that they don’t have to be in
the business of IT themselves.
But we're not done yet! You've no doubt heard a lot of buzz about our recently announced
expert integrated systems – and in particular IBM PureApplication System – a system with
built-in expertise, integrated by design, and a simplified experience. As you can see we’ve
been on this mission for a while and we’re continuing to build our expertise and knowledge
into these systems to simplify IT. IBM PureSystems is here! To find out more visit http://ibm.co/HxXzwB.
Joe Bohn is a senior software architect at IBM. He is currently serving as a technical
evangelist for WebSphere cloud computing. Before becoming a technical evangelist, he
represented IBM on OSGi Alliance specifications and open source projects including
Apache Aries and Apache Geronimo. Prior to these activities he worked in multiple product
areas within IBM as architect, designer, team lead, and developer on a variety of IBM
products including the Integrated Solutions Console, Tivoli Presentation Services, and
multiple Tivoli solutions.
End-to-End Cloud Offerings for Large
Enterprises
Cloud Computing Journal
By Srinivasan Sundara Rajan
With cloud adoption becoming a de facto option for small and medium enterprises, large enterprises have been relatively slower in their adoption of cloud. The main reason is that large enterprises have a very complex existing IT setup, and no single offering from the various cloud providers has yet satisfied all their needs.
However, the recent announcements and offerings from IBM provide a strong platform for large enterprises to onboard to the cloud and make their businesses more agile.
Blueprint of Large Enterprises on Cloud-Enabled IT
The following reference architecture provides an overview of how large enterprises would like to position their cloud-enabled IT so that they get the best of their traditional operations while gaining the benefits of cloud.
Mission-Critical Workloads
These will be the bread and butter of any large enterprise IT. For example, if the enterprise is in
the telecom domain, the workloads like provisioning, billing and network management will
be a part of this workload category. If the enterprise is from the manufacturing domain, then
product life cycle management, warranty analysis, supply chain management will be a part
of this.
Large enterprises' utmost concern is the proper functioning of these workloads. Typically
enterprises will receive the maximum benefit if these workloads are moved to private
clouds which can handle heterogeneous platforms like mainframe, Unix flavors, Linux and
Windows. Typically enterprises would like this workload to run without any changes even
after migration to cloud.
Elastic Workloads
This new term was coined to describe workloads that may not be 100% mission-critical for the enterprise but are still important, whose processing is dynamic in nature, and which benefit from the elastic nature of public clouds. Examples such as data warehousing systems, content management systems, massively parallel processing operations, and consumer-facing web sites form part of this workload category. These workloads receive the highest benefit from migrating to public clouds.
Consumable Workloads
With the great proliferation of SaaS (Software as a Service), enterprises have many avenues
to consume certain types of workloads directly from a public cloud without even hosting
them. Popular CRM software like Salesforce is a good example. Many other BPaaS players and offerings, which I have been covering separately in my 'Industry SaaS Series', also form part of this workload.
Cloud Integration
With the large enterprises running their business partly in on-premise private clouds and
partly on public clouds, robust application integration options are required to ensure that
the applications are in sync with each other and information generated out of the extended
enterprise is consistent and preserves the integrity across transactions.
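The integration requirement above boils down to keeping records consistent across the on-premise and public-cloud halves of the enterprise. A minimal sketch, with invented stores and field names (a real integration platform adds transformations, retries, and transactional guarantees), might look like:

```python
# Sketch of hybrid integration: keep a record that lives in both an
# on-premise system and a SaaS system consistent. Stores and field
# names are invented for illustration.

on_prem = {"cust-1": {"email": "a@example.com", "version": 3}}
saas    = {"cust-1": {"email": "old@example.com", "version": 2}}

def sync(source, target):
    """Idempotent one-way sync: copy each record whose version is
    newer than the target's copy; running it twice changes nothing."""
    for key, record in source.items():
        if key not in target or record["version"] > target[key]["version"]:
            target[key] = dict(record)
    return target

sync(on_prem, saas)
print(saas["cust-1"]["email"])
```

Version-based, idempotent updates are one common way to preserve integrity when the same data is touched on both sides of a hybrid environment.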
Hybrid Environment Management
With increased workload management across public clouds and the new ways of monitoring the virtualization techniques adopted by private clouds, enterprises need to move away from the traditional ways of IT environment management and move towards hybrid environment management.
Application Development Life Cycle for the Cloud Environment
No enterprise can afford to be static with respect to their business capability needs. This
means the cloud-enabled IT has to rapidly expand to the new market needs. This is only
possible if the application development platform is cloud-aware and can support both private
cloud deployment and public cloud deployment. The platform should also free enterprises
from future upgrades and maintenance activities so that enterprises can become truly agile.
PaaS (Platform as a Service) is the best option for large enterprises in this regard.
Enterprise Innovation with Cloud
While the above-mentioned components of the reference architecture exist, in a different form, in traditional enterprises even without cloud, in today's competitive environment large enterprises need options that are truly innovative and disruptive in nature. Cloud value propositions like high-performance computing (HPC) and some other research options form part of this.
IBM SmartCloud Product Mapping for the Blueprint
SmartCloud is the IBM vision for cloud computing, which accelerates business transformation with capabilities from IBM cloud offerings. The following product mapping will help to realize the blueprint for cloud enablement for large enterprises with the respective SmartCloud offerings.
Mission-Critical Workloads
As explained, large enterprises will choose private clouds for running their mission-critical workloads. IBM SmartCloud has a multitude of private cloud infrastructure and associated
software options.
● IBM SmartCloud Entry on Power Systems. It works with a client's existing Power
Systems infrastructure.
● IBM zEnterprise heterogeneous cloud solution
● IBM Starter Kit for Cloud x86 Edition
Additionally, there are IBM SmartCloud Foundation infrastructure offerings that include servers, storage, and virtualization components for building private and hybrid cloud computing environments.
Together the above options help an enterprise run their mission-critical mainframe, big-endian Unix, Linux, and Windows workloads on a private cloud.
Elastic Workloads
Public Cloud offerings form part of the management of these kinds of workloads. IBM
SmartCloud Enterprise and Enterprise+ provide the enterprise class public cloud platforms,
with the following major features.
● Greater choice and flexibility with Enterprise class operating systems and software
images
● Availability and performance
● Security and isolation
● Payment and billing
Consumable Workloads
SaaS and BPaaS offerings form part of this workload, and IBM SmartCloud has a variety of SaaS offerings; like any other community-contributed application exchange, this can grow further in the future. Current offerings are categorized into:
● Business Process Management
● Analytics
● Social Business
● Government Offerings
● Buying, Procurement and Sourcing
● Selling and Merchandising
● Marketing and Web Analytics
● Other Business Process As A Service
● Payment and billing
To facilitate a collaborative ecosystem for more SaaS/BPaaS products, 'IBM Application Development Services for Cloud' helps partners develop new SaaS applications.
Cloud-Based Integration
Continuous functioning of large enterprises on hybrid environments (i.e., public, private
clouds) is only possible if there are strong cloud integration platforms. WebSphere Cast Iron Cloud Integration is part of the SmartCloud platform and enables cloud application-to-application integration.
Hybrid Environment Management
IBM SmartCloud Monitoring is a cloud monitoring tool for cloud infrastructure as well as the virtual servers running within it. SmartCloud Monitoring is designed to monitor very large environments, with coverage for KVM, VMware, Citrix XenServer, Citrix XenDesktop, Citrix XenApp, Cisco UCS, and NetApp.
With respect to Public Cloud, IBM SmartCloud Enterprise+ provides robust management
options.
Application Development Life Cycle for Cloud Environments
IBM SmartCloud Application Services allow your organization to develop, deploy, manage
and integrate applications in the cloud. IBM SmartCloud Application Services will initially support Java, expanding later to include PHP, Ruby, C, C++, .Net and others. Through the IBM SmartCloud Application Services portal, tooling will be provided to manage the
deployment and management of applications. In addition, Rational products will be available
for application lifecycle management. The service will use the IBM DB2 Enterprise Edition
9.7 database, which is Oracle-compatible. Data can be extracted from an Oracle database
and moved into the DB2 database.
Enterprise Innovation with Cloud
The HPC cloud offerings from IBM provide the methods and means to manage HPC
infrastructure using cloud computing technology. The HPC cloud offerings from IBM
complement the IBM SmartCloud.
Summary
A lot of material is available on the IBM site about the detailed offerings that make up IBM SmartCloud. The aim of this article is to show how large enterprises can adopt this robust cloud offering and take comfort in an end-to-end offering that addresses any fear, uncertainty, or doubt among stakeholders.
NoOps Is As Legitimate As DevOps
By Krishnan Subramanian on March 14, 2012
Ever since Lucas Carlson, CEO of AppFog, brought the term “NoOps” into the focus of
discussion, there has been quite a bit of backlash against the term. The debate sometimes borders on insanity, and I thought I would add my 2 cents to this cacophony. In fact, this backlash
is nothing new. Whenever I make a statement about the role of ops fading away in a cloud
based world, I get similar brickbats on Twitter and other online fora. Let me use this post to
add clarity to the point I am advocating with respect to ops. Before the ops guys and gals pounce on me, I also want to highlight that I am not a developer and that my background is on the ops side. Having made that clear, let me add my thoughts on why NoOps is a legitimate use of the term, much like DevOps.
It’s all fine, what is NoOps, BTW?
Regular readers of my blog know that I am bullish on PaaS being the future of cloud
services. Right now, PaaS adoption is in the early-adopter stage, and in the next five years
it is expected to go mainstream in the enterprise. When that happens and enterprises
adopt hosted PaaS from providers like Heroku, Engine Yard, CloudBees, AppFog, Azure,
etc., there will be no need for enterprises to invest in operations, because these PaaS
offerings give their developers an interface they can use to build and deploy their apps
without worrying about the underlying infrastructure, including security and scaling. This
scenario is what Lucas calls NoOps, and I have also emphasized it in my posts with the
slogan “Forget DevOps, embrace the damn PaaS”.
So? What is the big deal?
When such a transformation happens in the industry, the role of operations people is going
to be diminished compared to what it is today. Many ops people and pundits (with their
hearts on the operations side) take it personally and argue that NoOps borders on FUD
and that operations are not going away anytime soon. They push back against the term
because it seems to suggest that operations are going to vanish in the coming years. Their
argument is that operations are a critical part of these technologies, including PaaS and
SaaS, and any term that diminishes their value is just a marketing term with FUD value.
You are part of the marketing FUD, Huh?
Not really. Whether we (pundits and ops people) like it or not, even public cloud services
like AWS have made ops people less visible. Gone are the days when a developer would
file a help desk ticket and wait for IT to provision a server for his/her needs. Today, the
self-service side of cloud offerings lets developers provision the instances they need with a
few clicks or an API call. Since public clouds offer a way to operate the infrastructure
through code, the DevOps movement came into the picture, calling for developers and ops
people to work together closely and cross-pollinate. At the infrastructure services level
itself, the role of ops got “reduced” a bit. To put it another way, ops went from being the
face of IT and the go-to folks for anything IT-related to a reduced role in the DevOps
culture. But let us keep in mind that ops are critical to the very success of infrastructure
cloud services. The only change from the traditional era is that they have ceded the
limelight to service interfaces and do their magic (as usual) in the background, helping
the cloud service providers run their infrastructure smoothly. Not only have they faded into
the background, but the number of ops people needed to run the infrastructure has been
drastically reduced by automation at scale. Cloud services pushed Ops from being the face
of IT to being the invisible face of IT. PaaS takes this one step further by making even
DevOps less relevant, because the PaaS providers absorb almost all of the operations
underneath and offer a simple interface for developers to deploy their apps. Hence, we are
seeing the rise of the term NoOps.
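The shift described above, from help-desk tickets to a few clicks or an API call, can be sketched in miniature. The `CloudAPI` class below is a hypothetical, in-memory stand-in for a real provider's provisioning API (it is not any vendor's SDK); it only illustrates that self-service puts provisioning one call away from the developer, with no human in the loop.

```python
import itertools

class CloudAPI:
    """Hypothetical self-service cloud API: provisioning is a single call,
    with no help-desk ticket queue in the loop."""

    def __init__(self):
        self._ids = itertools.count(1)
        self.instances = {}

    def run_instance(self, image, size="small"):
        # The ops work (placement, networking, boot) is automated away
        # behind the service interface; the caller just gets an id back.
        instance_id = f"i-{next(self._ids):04d}"
        self.instances[instance_id] = {"image": image, "size": size, "state": "running"}
        return instance_id

    def terminate(self, instance_id):
        self.instances[instance_id]["state"] = "terminated"

# A developer provisions capacity directly, no ticket required:
cloud = CloudAPI()
vm = cloud.run_instance("ubuntu-22.04", size="medium")
print(vm, cloud.instances[vm]["state"])   # i-0001 running
```

The point of the sketch is the shape of the interface, not the internals: everything behind `run_instance` is the ops work that has faded into the background.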
Makes sense. Why are Ops people whining then?
Well, the reality is not so simple, and there are many shades of grey. Yes, NoOps is a
marketing term, but it is a great term that clearly highlights what the service is offering. If
terms like converged infrastructure, cloud computing, DevOps, etc. can be valid terms to
describe their respective offerings, NoOps is a very legitimate term to describe what hosted
PaaS offers organizations. However, the ground reality is more complicated than a simple
evolution to hosted PaaS. Over the next several years we are going to see a complex
evolution, with most workloads moving to clouds while some stay inside the firewall. We are
also going to see a more federated ecosystem of infrastructure players, and adoption of
both hosted PaaS and private PaaS (yet another term, used to describe the platform layer
put on top of private cloud infrastructure). All these different choices are going to give us an
environment where the relevance of Ops will be visible in some cases and invisible in
others. In most cases, ops people will be in the background (on the service provider side)
doing their magic quietly, and the service interface is going to be the future face of IT.
DevOps will stay put as long as organizations want much deeper control over the
infrastructure, and even in the case of hosted PaaS, some developers may need to assume
operational responsibilities (albeit very rarely). Since the ground reality is a bit more
complex and the NoOps term sweeps that reality under the carpet, people are getting
upset. But it is time for them to get used to the term, and the decibel levels are going to rise
as more and more organizations start embracing PaaS. If buzzwords can be a competitive
advantage in a free market, NoOps is as legitimate as other terms like DevOps or Cloud.
#justsayin
IBM Research Shows How the Cloud is
Driving Business Model Innovation
CMS Wire
By Barb Mosher (@barbmosher) Mar 8, 2012
The decision to move to the cloud has traditionally been about operational efficiency, but
according to IBM's research, we'll soon start to see organizations take advantage of the
cloud for business initiatives and that kind of stuff is a whole lot more fun.
We had the opportunity to talk about the IBM study with one of its authors, Saul Berman,
Global Lead Partner for Strategy Consulting and Innovation and Growth for IBM Global
Business Services. It's important to point out that this study was conducted from a
business perspective and not a technology perspective, which is a refreshing approach to
understanding how the cloud can work for your business.
Editor's Note: Read the full study: The power of cloud. Driving business model innovation
(1.35 MB PDF)
The Power of the Cloud IBM Study
This study was conducted through the IBM Institute for Business Value, in conjunction with
the Economist Intelligence Unit. It included 572 business and technology executives across
the world, in organizations ranging from large (greater than US$ 20 billion) to small (less
than US$ 1 billion). The results?
That, although many organizations focus cloud initiatives on operational efficiency, we'll
see that slowly decrease over the next few years (from 55% to 31%) in favor of innovative
business plans, like new lines of business/industries, new pricing models and better partner
collaboration.
When surveyed, here are the items that topped the cloud adoption list for most organizations:
Tapping Into That Power
One issue that has the ability to slow down the use of cloud models for innovation is that
many organizations still see the cloud as an IT solution. But as its importance starts to reach
further into the business, the opportunities available are being recognized.
IBM notes six "game-changing" business enablers that will transform how organizations
leverage the cloud for business innovation, shown below:
Cloud Enablement Framework
It's not a cloud maturity model; you aren't going to move your organization through each phase:
IBM Cloud Enablement Framework
The framework looks at two things: the customer value proposition and the value chain.
Along each of these dimensions there are different types of organization models. You need
to look internally and decide where in this model your organization fits. Are you:
● An Optimizer: Optimizers are about enhancing what they have now and improving
operational efficiency. They aren't ready to take the risks and therefore won't get
the revenue and market share gains that Innovators or Disruptors will. But the
opportunities are there to deepen relationships with customers and enhance products
and services.
● An Innovator: Innovators take advantage of the cloud to greatly extend the value
proposition. This can change their role in the industry and/or lead to new markets or
industries. It's about extending what they have and transforming in ways that lead to
new revenue streams and market opportunities, thus gaining competitive advantage.
● A Disruptor: For a Disruptor, it's about radical change, creating new markets/
industries or disrupting existing ones. It's a big risk for a big reward.
Berman says you can choose to evolve over time from one type to the next, but some
organizations are going to be innovative or disruptive from day one. He points to the media/
entertainment industry as an example of an industry where you might want to focus more on
innovation and disruption than worry about an existing business model that may be under
attack.
Cloud Computing Goes Far Beyond
Virtualization
Virtualization vs. Private Cloud (Part 1)
Virtualization Journal
By Yung Chou
Virtualization vs. private cloud has confused many IT pros. Are they the same, or different?
In what way, and how? We have already virtualized most of our computing resources, so is
a private cloud still relevant to us? These are questions I am frequently asked. Before
getting to the answers, let's first go through a few concepts in this first article of the two-part
series listed below.
Part 1: Cloud Computing Goes Far Beyond Virtualization (This article)
Part 2: A Private Cloud Delivers IT as a Service
Lately, many IT shops have introduced virtualization into existing computing environments.
Consolidating servers, mimicking production environments, virtualizing test networks,
securing resources with honeypots, adding disaster recovery options, etc. are just a few
applications of virtualization. Some also run highly virtualized IT with automation provided
by system management solutions. I imagine many IT pros recognize the benefits of
virtualization, including better utilization of servers, the associated savings from reducing
the physical footprint, etc. Now that we are moving into a cloud era, the question becomes
"Is virtualization the same as a private cloud?" or "We are already running highly virtualized
computing today; do we still need a private cloud?" The answers to these questions should
always start with "What business problems are you trying to address?" Then assess
whether a private cloud solution can fundamentally solve the problem, or whether
virtualization is sufficient. This of course assumes a clear understanding of what
virtualization is and what a private cloud is. The point is that virtualization and cloud
computing are not the same. They address IT challenges in different dimensions and
operate in different scopes, with different levels of impact on a business.
Virtualization
To make a long story short, virtualization in the context of IT is to "isolate" computing
resources such that an object (i.e. an application, a task, a component) in a layer above
can operate without concern for changes made in the layers below. A lengthy discussion of
virtualization is beyond the scope of this article. Nonetheless, let me point out that the
terms "virtualization" and "isolation" are chosen for specific reasons, since there are
technical discrepancies between "virtualization" and "emulation", and between "isolation"
and "redirection." Virtualization isolates computing resources, and hence offers an
opportunity to relocate and consolidate isolated resources for better utilization and higher
efficiency.
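As a sketch of the isolation-and-relocation idea above: the `VirtualDisk` class below is a toy model (all names are invented), in which a consumer addresses logical blocks while the mapping to "physical" stores can change underneath, so blocks can be relocated and consolidated without the layer above noticing.

```python
class VirtualDisk:
    """Toy logical block device: the consumer addresses logical blocks;
    where the blocks physically live can change underneath."""

    def __init__(self):
        self._stores = [dict(), dict()]     # two "physical" stores
        self._map = {}                      # logical block -> (store index, key)

    def write(self, block, data):
        idx = block % len(self._stores)     # placement policy, hidden from caller
        self._stores[idx][block] = data
        self._map[block] = (idx, block)

    def read(self, block):
        idx, key = self._map[block]
        return self._stores[idx][key]

    def migrate(self, block, to_idx):
        """Relocate a block (e.g. for consolidation); the caller's view
        through read(block) is unchanged."""
        idx, key = self._map[block]
        self._stores[to_idx][key] = self._stores[idx].pop(key)
        self._map[block] = (to_idx, key)

disk = VirtualDisk()
disk.write(7, b"hello")
disk.migrate(7, to_idx=0)   # moved underneath...
print(disk.read(7))          # ...yet the layer above still reads b'hello'
```

This is exactly the isolation the article describes: the layer above is insulated from changes in the layers below, which is what makes relocation and consolidation possible.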
Cloud Computing
Cloud computing, on the other hand, is the ability to make resources available on demand.
There are statements on what to expect in general from cloud computing. The definition of
cloud computing published in NIST SP 800-145 outlines the essential characteristics,
delivery models, and deployment models required to be cloud-qualified. Chou further
simplifies it and offers a plain and simple way to describe cloud computing with the 5-3-2
Principle, as illustrated below.
The essence of cloud computing is rooted in the notion of a "service." In the context of
cloud computing, a service simply means the state of being available on demand.
So SaaS means software, i.e. an application, is available on demand, and the focus is
on functions available within, and not beyond, the application. PaaS provides a run-time
environment on demand, and the scope becomes the common set of capabilities available
on demand to applications deployed into that run-time environment. Since the run-time
environment is available on demand, an application deployed to it can be brought to a
running state on demand. Namely, applications deployed to a PaaS environment are
delivered, as a consequence, as SaaS. And IaaS denotes infrastructure available on
demand, which means the ability to provision infrastructure on demand. For IT
professionals, provisioning infrastructure at an operational level translates to deploying
servers. And in the context of cloud computing, all servers are virtualized and deployed in
the form of virtual machines, or VMs. So IaaS ultimately is the ability to deploy VMs on
demand.
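Chou's layering, where an app deployed to an on-demand runtime is in effect delivered as SaaS on top of IaaS-provisioned VMs, can be sketched as three thin classes. These are illustrative abstractions of the argument, not any vendor's API.

```python
class IaaS:
    """Infrastructure on demand: ultimately, the ability to deploy VMs."""
    def __init__(self):
        self.vms = []

    def deploy_vm(self):
        vm = {"id": len(self.vms) + 1, "state": "running"}
        self.vms.append(vm)
        return vm

class SaaS:
    """Software on demand: the running application itself."""
    def __init__(self, name):
        self.name = name

    def request(self):
        return f"{self.name}: on demand"

class PaaS:
    """A run-time environment on demand, built on IaaS underneath."""
    def __init__(self, iaas):
        self.iaas = iaas
        self.apps = {}

    def deploy_app(self, name):
        self.apps[name] = self.iaas.deploy_vm()  # infrastructure absorbed here
        return SaaS(name)                        # the deployed app is, in effect, SaaS

paas = PaaS(IaaS())
app = paas.deploy_app("crm")
print(app.request())        # crm: on demand
```

Note how deploying to the PaaS layer both provisions infrastructure (IaaS) and yields a running application (SaaS), which is exactly the dependency the paragraph describes.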
"On-demand" is not a term to be used casually. It is a loaded term with a strong connotation
of the five essential characteristics of cloud computing. On-demand means high
accessibility and always-on readiness, since the service must be accessible and ready per
SLA. In cloud, these are represented by the self-service model and ubiquitous access.
On-demand suggests that standardization, automation, optimization, and orchestration are
likely in place, which are presented collectively as resource pooling and elasticity.
On-demand implies the need for auditing and metering, i.e. analytics, so capacity can be
planned accordingly. And that is why a consumption-based charge-back or show-back
model is included in the essential characteristics of cloud computing.
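The metering and consumption-based charge-back tied to "on demand" above reduce to a simple loop: record every unit of consumption, then bill from the meter. A minimal sketch, with invented rates and units:

```python
from collections import defaultdict

RATE_PER_HOUR = {"small": 0.05, "medium": 0.10}   # illustrative prices, not real ones

class Meter:
    """Auditing/metering: every hour of consumption is recorded, so capacity
    can be planned and usage charged (or shown) back to the consumer."""
    def __init__(self):
        self.usage = defaultdict(float)            # (tenant, size) -> hours

    def record(self, tenant, size, hours):
        self.usage[(tenant, size)] += hours

    def chargeback(self, tenant):
        return sum(hours * RATE_PER_HOUR[size]
                   for (t, size), hours in self.usage.items() if t == tenant)

meter = Meter()
meter.record("finance", "small", 100)    # 100 h * $0.05
meter.record("finance", "medium", 10)    # 10 h * $0.10
print(f"${meter.chargeback('finance'):.2f}")   # $6.00
```

The same usage records that drive charge-back also feed capacity planning, which is why the article treats metering as an essential characteristic rather than an afterthought.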
Unequivocally Different
With what has been described above, realizing the fundamental differences between
virtualization and a private cloud becomes rather straightforward. Noticeably, virtualization
is not based on the 5-3-2 principle, whereas cloud computing is. For instance, a self-service
model is not an essential component of virtualization, while it is essential in cloud
computing. One can certainly argue that some virtualization solutions may include a
self-service component. The point is that self-service is neither a necessary nor a sufficient
condition for virtualization. In cloud computing, by contrast, self-service is a crucial concept
for delivering anytime availability to users, which is what a service is all about. Furthermore,
self-service is an effective mechanism for reducing training and support at all levels over
time. It is a crucial vehicle for accelerating the ROI of a cloud computing solution and
making it sustainable in the long run.
Virtualization is centered on virtual machines and rooted in infrastructure management,
operations, and deployment flexibility. Virtualization is about the ability to consolidate
servers, manage VMs, stream desktops, and so on: how to productively configure, deploy,
and manage a workload in a deployed VM.
At the same time, cloud is about "service", and "service" is about the readiness and
responsiveness relevant to market opportunities. Cloud is about go-to-market. Cloud
focuses on making a requested LOB application available on demand, not just on how to
deploy a VM. Cloud is interested not only in operating VMs, but also in providing insight into
a target application running in those VMs.
No, virtualization is not cloud computing. And cloud goes far beyond virtualization. So what
are the specifics about virtualization vs. a private cloud? [To be continued in part 2]
Yung Chou is currently a Sr. IT Pro Evangelist at Microsoft. Within the company, he has had
opportunities to serve customers in the areas of support account management, technical
support, technical sales, and evangelism. Prior to Microsoft, he held positions in system
programming, application development, consulting services, and IT management. His
recent technical focus has been on virtualization and cloud computing, with strong interests
in private cloud with service-based deployment and emerging enterprise computing
architecture. He is a frequent speaker at Microsoft conferences, roadshows, and TechNet
events.
Top 5 Things The Cloud Is Not
Peder Ulander
June 22, 2012 1:57 pm
The cloud looks set to be the next king, but there are five things the cloud is not. Can you
think of others?
It’s clear that the technology industry is moving from the PC era to the cloud era in several
significant ways. While cloud represents a new way for IT to deliver — and end users to
consume — IT applications and services, this transition also represents a significant change
in how applications, services and systems are defined. The move to cloud computing is the
most important technology disruption since the transition from mainframe to client-server, or
even since Al Gore invented the internet. While industry veterans like Oracle’s commander
in chief declared it a fad, this is a decade-long trend that is here to stay, and one that will
define the next generation of IT.
The movement itself has been in play for the last decade; however, there continues to be a
lot of (mis)information in the marketplace about the cloud. So much so that it is difficult for
organizations to figure out what is real and what is not as they develop a successful
cloud strategy, or simply learn about technologies that have been specifically designed and
purpose-built to meet this dramatic shift in technology. While it’s important to know what the
cloud is, it’s just as important to separate the wheat from the chaff, and for IT to understand
what cloud is not.
To this end, I encourage you not to add yet another definition of the cloud to your glossary,
but to truly understand the top 5 things the cloud is not.
1. Cloud is not a place.
People often talk about moving to the cloud as if they were moving to another city. But the
cloud is not a place. In fact, the cloud can be anywhere, in your data center or someone
else’s. Organizations that believe they are moving to a strategy that leaves legacy apps and
systems behind are in for a rude awakening. The single most important way for enterprise
organizations to prepare for the cloud is to understand that it is a radically new way of
delivering, consuming and adopting IT services in a far more agile, efficient, and
cost-effective manner, one that will spread throughout the ether as a mix of public, private,
managed and hybrid clouds. By looking at the cloud holistically, organizations can optimize
its benefits for their budgets, privacy needs, geographies and overall business needs.
2. Cloud is not server virtualization.
Despite what many believe, and what many will tell you, the cloud is not the same as next-
gen server virtualization. It doesn’t surprise me that many believe that by virtualizing their
data center they will create a private cloud. Some vendors are intentionally trying to blur that
line, aiming to convince customers that their vCenter clusters somehow deliver a private
cloud. On the contrary, that is a gross exaggeration of the term cloud.
If you take a look at the way Amazon has built its cloud architecture, it becomes very clear
that there are some fairly stark differences between a server virtualization environment and
a true cloud architecture. While Amazon starts with Xen virtualization technology, the brains
of its architecture come from a new layer of software that Amazon built to create a new
control plane: a cloud orchestration layer that can manage all the infrastructure resources
(compute, storage, networking) across all of its data centers. This is at the heart of the
cloud's technology disruption. Some analysts refer to this as the "hypervisor of
hypervisors," or a "new software category of cloud system software."
The fact of the matter is that some of the major players are doing cloud without server
virtualization. Take Google for example. They have deployed a cloud architecture that is not
using server virtualization, but rather a bare metal infrastructure. So while virtualization can
be an important ingredient of cloud, it is not always a requirement.
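The orchestration layer described above, a control plane placing workloads across all resources in all data centers, can be sketched as a single placement function. This is an illustrative scheduler, not Amazon's actual design; the data center and host names are invented.

```python
def place_workload(datacenters, cpu_needed):
    """Pick the host with the most free CPU across every data center.
    `datacenters` maps dc name -> {host name -> free cpu units}."""
    candidates = [(free, dc, host)
                  for dc, hosts in datacenters.items()
                  for host, free in hosts.items()
                  if free >= cpu_needed]
    if not candidates:
        raise RuntimeError("no capacity anywhere")
    free, dc, host = max(candidates)           # least-loaded host wins
    datacenters[dc][host] -= cpu_needed        # reserve the capacity
    return dc, host

# One control plane sees every host in every data center:
dcs = {"us-east": {"h1": 4, "h2": 16}, "eu-west": {"h3": 8}}
print(place_workload(dcs, 6))    # ('us-east', 'h2')
```

The key point is that placement logic sits above the hypervisors and treats all data centers as one pool, which is why some analysts call it a "hypervisor of hypervisors."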
3. Cloud is not an island.
Depending on what you’re reading, you’ll hear a lot about public clouds versus private
clouds, and it may feel as if enterprises must make a wholesale decision on which way to
go. But the cloud is not an island, it is not a place where you put all of your IT services, and
then lose all interconnectivity and access. The recent Amazon outages have proven this to
be an important point for any organization leveraging the cloud. The right cloud strategy will
be one that enables you to have a hybrid approach with the ability to easily connect private
and public clouds. Even the recent move by NASA to include Amazon Web Services as part
of its cloud rollout after a significant investment in the build-out of its own technology proves
that the market is moving to open, interoperable multi-cloud environments.
4. Cloud is not top-down.
The cloud has up-ended the traditional IT approach to delivering services. The lines of
business have been leading the charge in making the decision to move to cloud computing.
With specific needs to get to market quickly, functional business leaders are consuming
cloud services to avoid traditional IT processes. But we don't need surveys to clarify this
movement. The reality is that with the simple swipe of a credit card and the creation of an
account, end users can gain instant access to infinite pools of IT resources to help test out
a new idea, get their job done or even become more agile in their daily work. This is part of
why this revolution is so powerful. The consumerization of IT is driving this new movement.
Users are already there, and the C-level offices are just now trying to catch up with them.
Those that embrace this move sooner rather than later will learn how to use the cloud as
a strategic weapon before their competitors do. So the cloud is not top-down, but rather a
bottom-up phenomenon.
5. Cloud is not hype.
As I started this piece, I wrote about the (mis)information that has flooded the market
and slowed progression and adoption of the cloud for some organizations. I’ve spoken
with people in many organizations who are still skeptical of the cloud and believe that it is
something that is very far off into the future. No doubt there is a lot of noise in the market
with many claiming early victory in the hearts and minds of developers, with open source
momentum, or beta products. The reality is that the cloud is ready now, and Citrix has
more than 100 organizations running clouds in production today. Companies like
AutoDesk, Edmunds.com, Nokia, Chatham Financial and others are already reaping the
benefits.
My words of advice to companies considering a move to the cloud: learn from others who
have already built highly scalable, successful clouds that have helped them transform the
way they deliver and consume IT resources.
This is just the beginning of the discussion. There are many more topics that we will
continue to talk about in the coming weeks, months and years (such as: cloud is not only
infrastructure, and cloud is not just for service providers), all with the goal of helping
organizations and the market understand what the cloud is and what it is not.
Peder Ulander is vice president of product marketing for the Cloud Platforms Group at
Citrix, overseeing the company’s marketing strategy for its cloud infrastructure and server
virtualization products.
CIOs Don't Need to Be Business
Leaders
Given the complexity of today's applications, it's
folly to suggest that the future role of the CIO is less
technical and more businesslike, columnist Bernard
Golden writes. If anything, it's the opposite -- the
business side of the enterprise should embrace
technology.
By Bernard Golden
Fri, May 18, 2012
It seems like every week I come across an article stating that being a CIO means thinking
more like a business person and less like an engineer. Often I see articles that say that CIOs
need to talk the language of business, not technology. Occasionally I'll see one that says
that CIOs need to be business leaders and stop focusing on technology.
I have seen pieces asserting that future heads of IT will be from disciplines such as
marketing or finance, since technology really isn't that important anymore. I've even seen
analyses that say that CIOs no longer need to manage technically capable organizations
because infrastructure is being offloaded to outsourcers and on-premise applications are
being displaced by SaaS applications.
The implication of all these viewpoints is that technology qua technology is no longer
significant and that, overall, it's so standardized and commoditized that it can be treated like
any other area of the business. In fact, it can be managed by someone with no technical
background at all.
The general rap against technical IT executives is that they talk about technology too much
and fail to communicate with CEOs in so-called "business terms." The thinking is that CIOs
fail to use the language of business and thereby bore—or, worse, alienate—CEOs, with the
result that CIOs are dismissed from the inner ranks of corporations.
If only CIOs could learn to communicate in business terms, the argument goes, then they
would be accepted into the inner circle, embraced by CEOs no longer discomfited by
technical jargon.
Notion of CIO as Business Leader Just Plain Wrong
The shorthand version of this argument is the CIO needs to be a business leader, not a
technologist. The implication is clear: The CIO leaves the technical details to others and
focuses on the big picture.
There's only one thing wrong with this perspective. It's wrong. In fact, nothing could be
further from the truth.
Technical skills in IT management are more important today than ever before, and that fact
is becoming increasingly evident. In the future, CIOs will need deep technical skills. A CIO
with merely average technical skills will not only be inadequate for the job; he or she will
represent a danger to the overall health of the company.
Frankly, even on its surface, this argument of "CIO as business leader" doesn't make sense.
Marketing, for example, is undergoing radical transformation as it shifts to online and digital.
Today sophisticated analysis of click patterns, A/B testing, big data analytics and so on
are a core marketing competence. Do you think that CEOs want a head of marketing who
doesn't know the details of how these kinds of marketing tools operate? That marketing
should be run by someone who can use the language of business, even though he or she
doesn't really understand the details of what is done in the marketing programs? Of course
not.
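The A/B testing cited above as a core marketing competence reduces, in its simplest form, to comparing two conversion rates with a two-proportion z-test. The sketch below uses only the standard library; the traffic figures are invented for illustration.

```python
from math import sqrt, erf

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: returns (uplift, two-sided p-value) for B vs A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # standard normal CDF via erf, then two-sided tail probability
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, p_value

# 10,000 visitors per variant; B converts 5.8% vs A's 5.0%:
uplift, p = ab_test(500, 10_000, 580, 10_000)
print(f"uplift={uplift:.3f}, p={p:.3f}")
```

Whether the head of marketing writes this code is beside the point; the argument is that they need to understand what a result like this does and does not establish.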
IT, too, is becoming increasingly complex. Ten years ago, a company's website was
primarily a display application designed to deliver static content. Today, a website is a
transaction and collaboration application that supports far higher workloads. Websites
commonly integrate external services that deliver content or data that is mixed with a
company's own data to present a customized view to individual users. The application may
expose APIs to allow other organizations to integrate it with their applications, and those
same APIs may be used to support a mobile website. Finally, the site probably experiences
high variability of load throughout the year as seasonal events or specific business initiatives
drive large volumes of traffic.
Application performance management depends on an exquisite tuning of a multitude of
elements, any of which can affect response time and each of which must be monitored to
assess an app's ongoing health. To be sure, one can expect the application to constantly
change as new business arrangements, partnerships, or corporate events such as mergers
or acquisitions require functionality changes.
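The monitoring of "a multitude of elements" described above amounts, operationally, to comparing each component's observed response time against its budget and flagging the ones out of bounds. A minimal sketch, with made-up component names and thresholds:

```python
def health_check(latencies_ms, budgets_ms):
    """Compare each component's observed latency to its budget and
    return, sorted, the components that are over it."""
    return sorted(comp for comp, observed in latencies_ms.items()
                  if observed > budgets_ms[comp])

# Invented components and numbers; each element of the app gets a budget:
observed = {"web": 120, "api": 310, "db": 45, "cdn": 20}
budget   = {"web": 200, "api": 250, "db": 100, "cdn": 50}
print(health_check(observed, budget))    # ['api']
```

Any one of these elements can push overall response time over the line, which is why each must be monitored individually rather than only end to end.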
The complexity of these applications is an order of magnitude higher than that of a decade
ago. For a discussion of what these applications look like from an enterprise architecture
perspective, read this post by my friend and colleague James Urquhart and just try to come
away thinking that this highly complex, dynamic, constantly evolving environment can be
managed by someone without technology chops.
You Can't Discuss Tech Without Knowing Tech
Here's the thing: Complex as they are, these new applications are critical to the success of
the overall business. The website of 2000 was important, but if it wasn't operating properly,
the company could still function. If today's Web-enabled application isn't available, business
grinds to a halt. This reflects how, over time, these applications have insinuated themselves
into the core functionality of the company—and made their successful operation critical to
the operation of the business.
Now, do you think a CIO can get by without understanding the key elements of these types
of applications? Without recognizing the weak aspects of an application where failure or
performance bottlenecks can ruin successful user engagement?
The counter argument to this perspective is that the technology is too complex to master.
Believe me, there is a world of difference between someone who understands technology—
and as a result can weigh alternatives and disputes among the different groups involved in
a technology discussion—and someone who doesn't really have any technology
background and arbitrates by non-technical criteria. The difference between them is the
difference between an organization that gets things right on technology—or, when it gets
things wrong, can recognize the issue and quickly correct it—and one that makes poor
decisions that result in fragile, constrained applications.
In Today's Economy, CEOs Obligated to Know Tech
Frankly, that issue of talking to the CEO in business language with which he or she is
comfortable is a red herring. The fact is, businesses today are technology businesses.
Information technology is core to what they do. Something so critical to a company's
success imposes an obligation on a CEO to comprehend it. After all, do you think the
CEO of GM refuses to engage with the head of manufacturing on supply chain issues
even though it's a highly technical subject? Why, then, is it OK for a CEO to deflect an IT
discussion because it's highly technical?
Now that I think about it, it might be time to turn the whole argument on its head. The
statement shouldn't be that CIOs aren't businesslike enough. It's that too many of today's
CEOs are insufficiently technical.
Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization,
cloud computing and related issues. He is also the author of "Virtualization for Dummies,"
the best-selling book on virtualization to date. Most recently, Wired.com named him one of
the Top 10 Cloud Influencers and Thought Leaders. Follow Bernard Golden on Twitter
@bernardgolden.