8. Gartner “Magic Quadrant for Cloud Infrastructure as a Service,” Lydia Leong, Douglas Toombs, Bob Gill, Gregor Petri, Tiny Haynes, May 28, 2014. This Magic Quadrant graphic was published by Gartner, Inc. as part of a larger research note and should
be evaluated in the context of the entire report. The Gartner report is available at http://aws.amazon.com/resources/analyst-reports/. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise
technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed
or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
2015 Gartner Magic Quadrant
Classifies global cloud (IaaS) providers
Energy, Utilities and Gas:
Hess:
Hess Corporation is a leading global independent energy company engaged in the exploration and production of crude oil and natural gas.
In March 2013, Hess Corporation announced divestiture plans for its downstream businesses and turned to AWS to meet its needs.
The IT department decided to migrate the infrastructure associated with this business to AWS in a way that would completely uncouple dependence from its on-premises data center.
Haven Power:
Haven Power serves the electricity needs of business customers in the East of England.
The company began using AWS for disaster recovery, and has since migrated its billing services and data warehouse to AWS.
By using AWS, the company has seen response times drop from 500 milliseconds to 80 milliseconds and saved significant upfront capital expenditure by deciding not to upgrade its on-premises data center.
Healthcare and Life Sciences:
Emdeon:
Emdeon is a leading provider of revenue and payment cycle management and clinical information exchange solutions in the U.S. healthcare system.
Emdeon is migrating its high performance, transactional and data analytics healthcare IT solutions that operate on a national scale to AWS.
By moving to AWS, Emdeon will accelerate its ability to innovate and provide payers, providers, and pharmacy customers more value than before.
Philips Healthcare:
Philips’ Healthcare Informatics Solutions and Services division manages and analyzes data for health care providers around the world.
Using AWS, Philips Healthcare can stream vitals from more than 190 million patients around the world.
The company built its Philips HealthSuite application on AWS to simplify the diagnosis of patients worldwide.
To date, the platform has generated more than 15 PB of data and grows at the rate of 1 PB per month.
Financial Services:
Pacific Life:
Pacific Life turned to AWS for its hybrid IT strategy, using the AWS cloud in combination with data centers in California and Nebraska to run actuarial workloads used to set insurance pricing and create new product offerings.
The company provides financial services and products to individuals, businesses, and pension plans.
Using AWS, Pacific Life can quickly scale its compute capacity with less cost and IT overhead compared to adding new hardware to its own data centers.
Bankinter:
Bankinter, a leading provider of online banking services in Spain, uses AWS as an integral part of its credit-risk simulation application.
The application uses complex algorithms to perform 5,000,000 simulations.
Using AWS, Bankinter was able to reduce the average time-to-solution from 23 hours to 20 minutes.
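Bankinter's actual models aren't public, but the workload pattern - millions of independent simulations fanned out across elastic compute - can be sketched in miniature. A minimal, hypothetical Python sketch (the Bernoulli default model and all parameters are invented for illustration):

```python
import random
from concurrent.futures import ProcessPoolExecutor

def run_batch(args):
    """Run one batch of toy credit-default simulations.
    The default-probability model here is invented for illustration."""
    seed, n, default_prob = args
    rng = random.Random(seed)  # per-batch seed keeps results reproducible
    return sum(1 for _ in range(n) if rng.random() < default_prob)

def estimate_default_rate(total_sims=5_000_000, workers=8, default_prob=0.05):
    """Fan the simulations out across workers, then pool the counts -
    the same embarrassingly parallel shape that lets a cluster of cloud
    instances turn hours of sequential work into minutes."""
    per_batch = total_sims // workers
    batches = [(seed, per_batch, default_prob) for seed in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        defaults = sum(pool.map(run_batch, batches))
    return defaults / (per_batch * workers)
```

Swapping `ProcessPoolExecutor` for a fleet of EC2 instances behind a queue is the same fan-out/fan-in shape at a larger scale.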
Manufacturing and Industrial:
Samsung:
Samsung is a South Korean multinational conglomerate company.
Samsung runs a hybrid infrastructure, using the AWS cloud to build its Smart Hub application, which allows users of Smart TVs and Blu-ray players to access content from third-party providers, while its financial transactions are handled by its on-premises infrastructure.
Samsung’s cloud deployment strategy has saved the company $34 million in CAPEX and reduced OPEX by 85%.
Unilever:
Unilever is a British-Dutch multinational consumer goods company.
Unilever needed to find a way to standardize its IT infrastructure to support a faster time to market when launching new products.
The company created a standardized marketing platform on AWS to launch new product websites.
Unilever began this project by migrating 500 websites to AWS in a pilot and has since migrated 1,700 websites to AWS. This allows Unilever to launch new products in 75% less time than before.
Media and Entertainment:
Comcast:
Comcast Corporation, formerly registered as Comcast Holdings, is an American multinational mass media company and is the largest broadcasting and largest cable company in the world by revenue.
Comcast’s IT strategy focuses on a hybrid architecture combining its own data centers and AWS, with AWS serving as the cornerstone of the hybrid cloud architecture for its next-generation TV service, X1.
Demand for Comcast’s X1 delivery platform exceeded the capacity of its on-premises data centers, so the company turned to the cloud for the elasticity and flexibility it provides.
By leveraging AWS, Comcast is able to quickly add capacity with Amazon VPC and AWS Direct Connect, expanding its data centers as it scales to provide interactive entertainment on demand.
MLBAM:
Major League Baseball Advanced Media (MLBAM) is the internet and interactive branch of the league.
MLBAM built Statcast on AWS, a new application that measures the position of each player, the umpires, and the ball in near real-time on the field, so that viewers can answer age-old questions like “what could have happened if…”
Statcast uses a missile radar system to measure the ball’s movement more than 2,000 times per second, streams and collects data in real time through Amazon Kinesis, stores the data on Amazon S3, and then performs analytics in Amazon EC2.
The platform will generate nearly 7 TB of data per game and up to 17 PB per season.
Consumer Goods:
McDonald’s:
No public information available to disclose. Only general logo use permitted.
Dole:
Dole Food Company is an American-based agricultural multinational corporation that distributes its products in 90 countries.
Searching for a solution to host its Microsoft SharePoint sites, the company chose AWS to reduce costs and improve operational efficiency.
By running on AWS, Dole can launch a new SharePoint website in minutes, host business intelligence and mobile applications globally, and estimates savings of $350,000 in operating expenses.
Kellogg’s:
Kellogg’s is an American multinational food manufacturing company.
The Kellogg Company needed a more robust application to track and model promotional costs.
The company turned to SAP HANA hosted on AWS to analyze trade spend.
Doing so will allow Kellogg’s to save $900,000 in IT costs over 5 years and speed the analysis of data by up to 90% compared to traditional data center solutions.
Travel and Hospitality:
Expedia:
Expedia uses AWS to develop applications faster, scale to process large volumes of data, and troubleshoot issues quickly.
By using AWS to build a standard deployment model, development teams can quickly create the infrastructure for new initiatives.
Critical applications run in multiple Availability Zones in different Regions to ensure data is always available and to enable disaster recovery.
Qantas:
Qantas is Australia’s largest domestic and international airline.
The airline wanted to develop an in-flight application that would aggregate and present passenger information to cabin crew, improving customer recognition and the customer intelligence captured on board.
They used AWS to build this application and distribute it to more than 1,000 crew supervisors.
With this information, the airline can then refer issues spotted in flight to its customer care team to resolve proactively, before the customer contacts them.
Delaware North Corporation:
Delaware North is a major presence in the food service and hospitality industry, serving more than 500 million customers at 200 venues around the world each year.
The company decided to move most of its corporate data center operations to AWS.
The migration helped Delaware North reduce its server footprint by 91 percent, achieve a projected TCO reduction of at least $3.5 million over five years, improve security compliance and disaster recovery, and vastly streamline the delivery of new services and solutions internally and to its business customers.
Cloud is biggest technology shift in our lifetimes
We've talked about this in the past;
6 Primary reasons
But, doesn't explain why people so passionate;
when talk to Devs, DMs, LOB, and now CIOs at any size company;
what they'll tell you is that it's about _freedom_ and ability to control own destiny;
[Transition:] Prior to the cloud...Builders been constrained for many years
And, we trained builders not to waste any shower cycles inventing;
Because any time had new idea that required Infra or resources to build Infra services;
answer they got was...;
Demoralizing for Builders;
Devs didn't get into CS to do same thing every day;
Devs builders, tinkerers, creators;
Many of best ideas locked in builders' heads, ready to be unlocked if Infra within reach;
Same true for CIOs...need Infra for Co that's orderly and secure, but don't take job...
This Cloud and AWS movement very much about giving builders freedom and control over their own destiny;
And giving builders hope that if come up with idea, can influence biz;
This is why builders so passionate about cloud and why it's taken off so fast;
And let me tell you, once builders have had a taste of it, they're not going back to the old way of doing things
So, if cloud is about freedom, what are the basic freedoms it's producing?;
We'll share 7 key ones with you today
1st freedom is the Freedom to Build, Unfettered;
Hard in this day and age to compete if you can't move fast;
Cloud has made this true b/c of what it enables for start-ups and enterprises;
BUT, AWS not only lets you move fast, but also removes many of the normal blockers/barriers builders face
There are two key pieces to moving fast
If you look at AWS's Infra Tech Platform, have a lot more functionality, by a lot, than any other provider;
won't go through entire platform, but looking at broad strokes..
With AWS you can say Steve can only launch an RDS instance from his laptop at SEA HQ during biz hours, vs. other providers who can only say any DBA can do anything to RDS, anytime, from anywhere; *
[Transition: And, it's why...]
It's b/c they're not;
Platforms are very different with very different capabilities
[Transition:] My first guest works at a co that's well known in the FS space and across the world as an innovator, one that uses data and software to differentiate its customer experience and is making a significant shift to the cloud... to hear more about what they're doing and why they've chosen AWS, pls welcome the CIO of Capital One, Rob Alexander
The cloud and AWS give you a newfound ability to get from idea to market faster than ever
Also gives you freedom to get real value from your data
For years, customers found it cost prohibitive to keep the data they want
CIOs asked to tell the CFO which questions they wanted money to answer … vs. letting gems in the data reveal themselves
With the cloud, never been easier to collect, store, analyze and share data
NTT Docomo: 4 PB DW/RS
Vivaki
- Process/analyze large amounts of data to optimize ROI of marketing campaigns
- Processing more data than before
- Reducing op costs 75%
- Same analytics that took 20 days, taking 6 hours
Philips H/C
- Reinventing h/c for billions of people
- P health suite app on AWS to simplify diagnosis of patients
- Do so by comparing millions of studies together and looking for commonalities
- Using AWS to stream vitals for 190M patients around world → Generating 15 PB of data from 390M studies, growing at 1 PB/month
TRANSITION
- So you can see that the cloud lets customers save and analyze much more data than ever before
- But what you find in companies is that it’s a small group of people with tech skills to use the analytics services
What this really begs is that all employees want access to this data to do their own analytics
Why can't more people access analytics and feed that back into the analytics?
Most people inside the organization want access to the data and want to be able to answer their own questions, and believe me, the technical folks would much rather they could answer their own questions too, instead of submitting yet another ticket to their backlog… so what’s holding this process back today?
there are a couple of key areas that we focused on:
the first was that we wanted the user experience of the service to be as rich as possible, but also as easy as possible to use.
As easy as possible to use for the less-technical members of the team.
Run analyses as quickly as possible
SPICE
- run select query, add the delta to SPICE
- how often do you want it to refresh
- data is current
- tell SPICE to archive, older than 1 year, remove from SPICE
Run analyses as quickly as possible
In websites, blogs, company portals and your own applications.
Available as native apps for iOS and Android.
Now - people get how fast and easy it is to build in the cloud
Get that they can keep more of their data and get more data into the cloud
Can I get more data into the cloud?
Direct upload,
DX (many thousands of customers)
TRANSITION: BUT, these solutions don’t solve all the emerging use cases and needs as more & more customers are trying to move to the cloud
But there are use cases where this still isn't good enough
For example - streaming data, where there are some unique challenges: you want to collect, process and store the data continuously, from hundreds of thousands of sources, at very high throughput, sometimes TBs per hour.
TRANSITION: Our team heard this enough that we built a service for it
Two years ago we introduced Amazon Kinesis, which we now call Kinesis Streams - a solution to these challenges;
it allows you to build custom applications to collect and store streaming data at very high throughput.
Data is ordered, and has sub-second processing latency.
Now, Kinesis Streams is a very powerful, foundational platform, onto which a lot of customers have added their own apps or are using it alongside frameworks such as Spark Streaming or Apache Storm, and it’s found a home in a wide range of industries including ad tech, gaming, financial services, IoT, entertainment and IT services.
A really common use case is to capture the streaming data and load it into S3 and Redshift, but to do that, you still need to manage the stream and write the custom code to load the data.
Any streaming data source (mobile device, web app, telemetry coming from connected devices)
Make a single PUT API call to firehose -> loads data in real time
Can load into S3 or R/S, or both (more to come)
Means you can query data, load it into high-performance clusters, and integrate with the rest of your app and data to get an end-to-end view of your app and environment conditions
Automatically scale up & down capacity needs
Can ask F/H to concatenate by various time or size-of-data intervals
Can ask F/H to compress data using standard compression algorithms to minimize the amount of storage on the endpoint
Can ask F/H to encrypt data using KMS -> means you can encrypt as data arrives and decrypt when the app needs it -> KMS stores keys that can be easily rotated & all tracked in C/T
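The single PUT call pattern above can be sketched with boto3 (the stream name and event shape are hypothetical examples; boto3 is imported lazily inside the sender so the encoding helper can be exercised without AWS credentials):

```python
import json

def encode_record(event: dict) -> bytes:
    """Newline-delimit each JSON event: Firehose concatenates records
    before delivering them, so the delimiter keeps events separable."""
    return (json.dumps(event) + "\n").encode("utf-8")

def send_event(event: dict, stream_name: str = "clickstream-demo") -> None:
    """One PutRecord call per event; Firehose handles the buffering,
    compression, encryption and delivery described above.
    (stream_name is a hypothetical example.)"""
    import boto3  # lazy import: running this for real needs AWS credentials
    firehose = boto3.client("firehose")
    firehose.put_record(
        DeliveryStreamName=stream_name,
        Record={"Data": encode_record(event)},
    )
```

Firehose then buffers, optionally compresses and encrypts, and delivers to S3 and/or Redshift on the intervals you configure.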
Lots of undifferentiated heavy lifting; how can we simplify? Can we make it easier?
What about when want to move large volume of data
Can take lot of time from point A -> B
Even for companies with a gigabit/sec connection -> dedicating 10% (100 megabits/sec) -> takes ~100 days for 100 TB
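The arithmetic behind that back-of-the-envelope figure:

```python
def transfer_days(terabytes: float, megabits_per_sec: float) -> float:
    """Days needed to move `terabytes` (decimal TB) at a sustained rate."""
    bits = terabytes * 10**12 * 8               # decimal TB -> bits
    seconds = bits / (megabits_per_sec * 10**6)
    return seconds / 86_400                     # 86,400 seconds per day

# 100 TB over a 10%-utilized gigabit link (100 Mbit/s):
print(round(transfer_days(100, 100)))  # → 93 days, i.e. roughly the quoted 100
```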
Can change with lots of $ for network upgrade or increase in bandwidth costs, but that’s not usually what customers want to do
Today, we have import/export physical disks that are usually around 1TB (import/export will support up to 16TB), shipped to our facilities
Most other large tech companies are rushing to copy that service
Customers need to manage all the media themselves: purchase and track, and maybe encrypt.
Manageable for small transfers, but much more difficult for big transfers that require many drives
Logistics are hard
- Drives encrypted
- Secured packaging for transport
- Working with courier to ship out
- Creates opportunity for human error
50 TB portable storage appliance
Same 100TB talked about earlier can be moved into AWS in less than a week
Custom built physical appliance
Very simple to load data on these
Automatically encrypted end to end
Secure enclosure
Tamper resistant and secure
We ship to you -> return address and tracking automated
Continuing on the theme of migration, I wanted to talk about the fifth of our basic freedoms - the freedom to back out of bad database relationships.
They say that you’re only dating your operating system, but you’re married to your database, and what we hear from a lot of customers is…
well, unfortunately, it’s not us, it’s you, and they want out.
TRANSITION: And see things like this …
Availability and durability which are at least as good as commercial grade DBs.
We’ve heard loud and clear from customers that they are looking for faster, easier ways to move away from proprietary databases, and they’re looking for additional paths to this basic freedom, so there are really three things that customers are looking to do in their quest towards the freedom from these old databases…
The first, is the opportunity to explore more open database options, and we’ve been focusing on this since virtually day one with RDS, where we have…
Today we’re expanding this with MariaDB
Compatible with MySQL
Open Source, maintained by the community, with a commitment to staying that way
Maintained by the creator of MySQL
Downtime during a migration is hours to days, depending on the size of the database and the rate of change of the data.
Need to encrypt data during the move from on-prem; moving 1 TB could take 20 hours even with DX
You can either not shut down the source, in which case you have to manage all the changes which have happened in that time, or you can shut it down and incur application downtime. It’s also a fiddly process which is hard to get right and validate the first time, so it may require multiple attempts.
If you don’t do the migration perfectly, your application data can become compromised, which may also have regulatory and compliance issues.
There are tools which can help with this, but they are expensive (typically costs $100,000+), which means that procuring the tools for just one or two migrations doesn’t make sense.
The AWS Database Migration Service allows you to avoid taking your application down, or worrying about complex data updates, by continuously replicating your data from your source to your new target RDS database. You have the option of choosing to migrate the full set, just the updates or both.
You can migrate even very large databases continuously in this way, and monitor the entire process in real time from the management console.
In fact, customers can use this service just for replication if they need to.
This means you can take your on-premises databases, or even databases running on EC2 today, and migrate them to the same engine on RDS in the Cloud.
With the AWS Database Migration Service, we make it possible to move a 1TB database for less than $3, with significantly less effort and dramatically reduced downtime for your application.
But migrating your data is only one part, and in some cases just the ‘final mile’ of a migration - a lot of the work, in some cases the majority of the work for entire teams, is migrating the database metadata, the schema, tables, views, and in some cases transforming the data itself.
Also, you need to port and test the stored and embedded procedures.
This is a long term, significant effort for many customers - can take a small team 6 months to a year to completely migrate a database from one type, to another.
The Schema Conversion Tool can be used in conjunction with the AWS Database Migration Service for a smooth, end-to-end migration which takes less time, involves less downtime, and comes at a significantly lower cost.
The Schema Conversion tool takes care of the schema and data transformation between database types; including converting tables, partitions and sequences to their equivalent schema definitions from one database engine to the next (for example, Sequences in Oracle are automatically converted to the appropriate table definition in MySQL).
Secondly, the tool can automatically re-write code which is stored inside the database; views, stored procedures, triggers and functions are automatically evaluated and where possible, we’ll go ahead and convert them to their equivalent in the new database; where we can’t do this accurately, we’ll highlight it and make smart recommendations (providing links to the docs) to guide you through the process manually. So when coupled with the Database Migration Service, the Schema Conversion tool will help you go from whatever database you’re on now, to a more open database in the Cloud with remarkably low effort in terms of skilled DBA time and cost, giving you an ‘out’ for your current database relationship woes.
The Database Migration Service preview starts today, and the Schema Conversion tool is available for you to start using right away.
Call out FINRA:
One of the largest independent securities regulators in the U.S.
Decided to go all-in on AWS with an aggressive 36-month migration plan.
Already moved > 50% of their market regulations systems to AWS
On target to move out of their data centers by end of 2016
[NEXT SPEAKER]
“We see new enterprises every week making these migrations to the cloud. One of those is GE, and to talk about that I’d like to bring out Jim Fowler, CIO of GE…”
So we’ve talked about the freedom to analyze your data, get data into the cloud, the journey to migrate whole businesses, and the ability to migrate your databases smoothly; the next freedom is the ability to secure your cake, and eat it too. So what’s this about? Well..
“SEC Rule 17a-4”
SEC Rule 17a-4(f): SEC regulations for electronic books and records storage requirements
VPC, allows you to provision a logically isolated section of the Cloud where you can launch your resources
WAF, just launched, which helps to protect web applications from attack by blocking web exploits like SQL injection and cross-site scripting, and lets you add your own rules based on network traffic and request headers.
In encryption, the Key Management Service, a fully managed service that makes it easy for you to create and control encryption keys used to encrypt your data
CloudHSM, which helps you meet your corporate and regulatory compliance requirements using a dedicated hardware security module, where you control the encryption keys and cryptographic operations.
Server-side encryption, with services such as S3 allow you to supply your own encryption key as part of the request, and the service will take care of the rest.
Identity, we have fine grained access control policies on IAM, integration with Active Directory, and support for identity federation with SAML.
And we have services specifically designed to help you be more compliant, with Service Catalog, which allows organizations to create and manage catalogs of IT services that have been approved for use on AWS, such as virtual machine images, servers, software and databases.
CloudTrail lets you record your AWS API calls for your account and deliver the log files to S3, including the identity, time, source IP and request parameters of the API call, as well as the response returned by the AWS service.
And AWS Config, a service that provides you with…
Inventory of full list of resources, plus visibility into how they are connected, and how a configuration change to one can affect the others (for example, the impact of a security group change).
But today, customers need to sift through their configuration item notifications in order to identify and take action when a configuration change occurs which potentially puts them out of compliance with their best practices.
A lot of customers would like to be able to take action automatically when configuration changes occur: and we’re adding this today…
Defined guidelines for provisioning and configuring AWS resources and then continuously monitor compliance against those guidelines.
Encrypt volumes, specified using a key from KMS
KMS on slide
Checks that instances belong to a specific VPC.
And something which I know drives a lot of people crazy: the ability to be able to terminate instances which have been launched without the appropriate resource tags.
Config Rules include live and historical reports of your compliance status, and out of the box we have seven pre-built rules for common compliance and best practices for securing your AWS configuration; additionally, you can write your own rules using AWS Lambda, which are either triggered when a configuration change occurs or run periodically against your resources.
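As a sketch of what one of those custom rules can look like - a change-triggered AWS Lambda function that marks resources non-compliant when required tags are missing (the tag names are hypothetical; boto3 is imported lazily so the compliance check itself can be tested offline):

```python
import json

REQUIRED_TAGS = {"Owner", "CostCenter"}   # hypothetical tagging policy

def evaluate(configuration_item: dict) -> str:
    """COMPLIANT iff the resource carries every required tag."""
    tags = configuration_item.get("tags") or {}
    return "COMPLIANT" if REQUIRED_TAGS <= tags.keys() else "NON_COMPLIANT"

def lambda_handler(event, context):
    """Entry point for a change-triggered custom Config rule."""
    item = json.loads(event["invokingEvent"])["configurationItem"]
    import boto3  # lazy import: reporting the verdict needs AWS credentials
    boto3.client("config").put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": item["resourceType"],
            "ComplianceResourceId": item["resourceId"],
            "ComplianceType": evaluate(item),
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )
```

Config invokes the handler with the changed resource’s configuration item, and `put_evaluations` reports the verdict back against the rule’s result token.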
As you enable more people to move more quickly in your organization, it’s good to have a guard rail to make sure they are staying safe and secure, and to be able to take corrective action early and often.
One of the challenges - for anyone who cares about security - we have this inside Amazon, and a lot of our customers have it too - before they deploy an application, they want to be able to do a full security assessment on it; to do that they hire consulting agencies (expensive), which have differing degrees of experience
One of the things they have asked - can you find a way to take your large security team and build that expertise into a service, into how you deploy your applications, both periodically and automatically.
Amazon Inspector automatically assesses applications for vulnerabilities or deviations from best practices, including impacted networks, OS, and attached storage. After performing an assessment, Amazon Inspector produces a detailed report with prioritized steps for remediation.
Amazon Inspector allows you to identify security and compliance issues in applications before they are deployed or while they are running in a production environment.
I’d like to welcome Jorge (“Hor-Hay”) Ortiz, Manager of Infrastructure at Stripe, to the stage to share more about how Stripe, an inventive online payment platform, has built a PCI-compliant payment application while actually increasing the pace at which they are building their applications and growing their business as a result.
Lots of times, what you see with big companies is that as they get larger they have a tendency to find ways to say no to new ideas; not because they are ill-intended, but because they become conservative and afraid of some of the risks; the systems and infrastructure required to build force them to make choices about what they can do.
What’s most unusual about a company like Amazon is that the team is constantly looking for ways to say yes - to consistently reinvent the customer experience - and that makes builders want to work there.
Qantas & Hooroo.com
Hooroo is a subsidiary of the Qantas Group, formed in 2011 to capitalize on hotel booking services in Australia. To launch quickly in a highly competitive environment, Hooroo had only 11 months to develop and launch 4 new web properties. Hooroo was able to build a web infrastructure quickly on AWS, launch the first web property in 3 months and the rest within the given timeframe, and reports an estimated 99.9 percent uptime for the websites. With AWS, Hooroo has also been able to reduce the load time of its pages by 25% and support a 1,400% increase in traffic to its web properties.
Singapore Post
Singapore Post’s ecommerce division built a brand new ecommerce business on AWS in 3 months. This division is now responsible for 27% of the company’s revenues and supports more than 1,000 end-customers. By using AWS, the company is able to save 50% compared to building on-premises.
[TRANSITION] Joe Inzerillo, Executive VP & Chief Technology Officer, MLBAM
Thanks, very cool;
so as we close, what is the essence of what we are really talking about here…
Quote from Jody: “Get these folks who have put their blood, sweat and tears into making our consumer experience grow, and given all the constraints, it’s incredibly liberating for them, and they feel empowered, and they feel like it’s their site. For the first time I heard one of the engineers say ‘this is my website’, and it’s amazing what happens when they actually feel empowered like it’s their own. The technology you’re developing is helping to power that”
We all come to work to be useful, to have impact, want to feel like we’re making a difference
We don’t want to be order takers
Most of us have ideas about what might work or be better for our customers
Nothing more inspiring than having a chance to try your ideas
To have the freedom to see if you’re right and adjust if you’re not
To control your own destiny
That’s what people want
That’s what keeps them engaged and thinking about your company and customers’ problems
And this is a big piece of what’s at the heart of this movement to the cloud and AWS
Incredible opportunity in the next three days
18,500 of your peers
networking
ability to learn how to take control
Hope you take advantage over the next couple of days.