The concept of Web-Scale IT has emerged as a pattern of global-class computing that delivers the capabilities of large cloud service providers to the enterprise IT industry and business sector. Based on a Gartner report, Web-Scale IT is one of the technology trends most likely to have a significant effect on companies over the next three years, by 2017. According to the report, Web-Scale IT is defined as all the things happening at large-scale cloud service firms such as Google, Amazon, Netflix and Facebook that enable them to achieve high levels of agility and scalability through new processes and architectures. This paper scrutinizes how this technology can change the style of business for IoT use in the future. It is expected that the use of Web-Scale IT will be critical at this turning point in how business is conducted as IoT adoption grows. To achieve that aim, the first step toward Web-Scale IT for many organizations should be bringing Development and Operations together. This is the movement known as “DevOps”.
A Study on the Application of Web-Scale IT in Enterprises in IoT Era
1. A Study on the Application of Web-Scale IT in Enterprises in IoT Era
Hassan Keshavarz
Nov 13-15, 2015
MJJIC2015
Yamaguchi University
Ube, Yamaguchi, Japan
3. Introduction
Based on a Gartner report, Web-Scale IT is one of the technology trends highly likely to have a significant
effect on companies over the next three years, or by 2017
4. This study scrutinizes how technology can change the business style for IoT use in the future.
To achieve this aim, the first step toward Web-Scale IT for many organizations should be to bring Development and
Operations together, a movement known as DevOps.
Research Question & Aim
7. Web-Scale Datacenters Are Simple, Scalable & Efficient
Design Goals
• Fractional consumption and predictable scale
• No single point of failure
• Distributed everything
• Always-on systems
• Extensive automation and rich analytics
Fundamental Assumptions
• Unbranded x86 servers: fail-fast systems
• No special purpose appliances
• All intelligence and services in software
• Linear, predictable scale-out
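The "linear, predictable scale-out" assumption above can be sketched with a toy capacity calculation: when every node is an identical commodity x86 box, required cluster size grows linearly with demand, so capacity can be added one node at a time. The per-node figure below is purely illustrative.

```python
# Illustrative sketch of linear, predictable scale-out planning.
# NODE_CAPACITY_VMS is an assumed figure, not a real product spec.
import math

NODE_CAPACITY_VMS = 40  # assumed VMs per commodity x86 node

def nodes_required(total_vms: int) -> int:
    """Return the node count needed for a given VM load."""
    return max(1, math.ceil(total_vms / NODE_CAPACITY_VMS))

# Demand doubling simply doubles the node count -- no forklift upgrade.
print(nodes_required(120))  # -> 3
print(nodes_required(240))  # -> 6
```

This is the contrast with scale-up "big iron": growth is handled by appending interchangeable units, not by replacing a controller with a bigger one.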
8. What We’ve Learned From Web-scale IT
Radical Simplicity
Business Agility
Predictable Scale
Cost Efficiency
Benefits
Infrastructure Strategy
• Intelligence in software layer
• Linear, predictable scale-out
• Fractional consumption
People and Process
• Culture as important as tech
• Launch first, optimize later
• No technology religion
System Design
• Non-disruptive rolling upgrades
• No single point of failure
• Minimal manual intervention
Ingredients
9. What Is Web-Scale?
Hyper-converged on x86 servers
Integrated compute and storage
All intelligence in software
100% software-defined
Distributed everything
Cluster-wide data and services
Self-healing system
Fault isolation with distributed recovery
API-driven automation and rich analytics
Data-driven efficiency
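The "self-healing system" and "API-driven automation" attributes above can be sketched together: every management action is a programmable call rather than a manual console step, so recovery from a node failure can be automated. The `Cluster` class and its methods below are hypothetical stand-ins, not a real product API.

```python
# Minimal sketch of a self-healing, API-driven cluster.
# The class and method names are illustrative assumptions.
class Cluster:
    def __init__(self, nodes):
        self.health = {n: "up" for n in nodes}

    def mark_failed(self, node):
        """Record a node failure, as a monitoring agent would."""
        self.health[node] = "down"

    def self_heal(self):
        """Return failed nodes to service; stands in for a distributed
        rebuild of their data from surviving replicas."""
        healed = [n for n, s in self.health.items() if s == "down"]
        for n in healed:
            self.health[n] = "up"
        return healed

c = Cluster(["node1", "node2", "node3"])
c.mark_failed("node2")
print(c.self_heal())  # -> ['node2']
```

Because the same calls are available to scripts and monitoring tools, recovery needs no human in the loop, which is the point of "minimal manual intervention" on the earlier slide.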
[Diagram: a cluster of five x86 nodes with data distributed in software across them]
A better approach to IT infrastructure pioneered by web companies to achieve unprecedented
agility, predictable scale and low TCO
Web-scale is a set of architectural principles and technology concepts. Infrastructure built around these
principles is called web-scale infrastructure.
11. Why Web-Scale Is For Everyone
Large Enterprise
Mid-Market Enterprise
SMB
Benefits of web-scale are relevant to all businesses
No changes to organization or architecture required
Predictable Scale
Cost Efficiency
Business Agility
12. Enterprise Infrastructure With Web-Scale Virtues
• Agility
• Predictable scale
• Lower TCO
• SLAs
• Privacy and control
• Wide range of workloads
Legacy Infrastructure
Web-scale infrastructure with all of the benefits of the cloud
Uncompromising Simplicity
Speed of Business
Unmatched TCO
Public Cloud
**TCO: Total Cost of Ownership
13. Distributed File System (DFS)
Virtual Storage Control
Virtual Machine/Virtual Disk
Flash HDD
Enterprise Storage
Snapshots, clones, replication, compression, thin provisioning, deduplication
Data Management
Data locality, tiering,
balancing, tunable
resilience
Hypervisor Agnostic
vSphere, KVM,
Hyper-V
23. DevOps Includes Four Adoption Path Steps
1- Plan and measure
This step involves a practice that concentrates on continuous business planning.
2- Develop and test
This step includes two practices: collaborative development and continuous testing. As such, it forms the core of the development and quality assurance capabilities.
3- Release and deploy
The aim of this step is to release new features to customers and users immediately.
4- Monitor and optimize
This step involves practices that allow businesses to be able to monitor 1) how published applications are performing in
the production environment and 2) how to receive feedback from customers. By analyzing the collected data, the business
can react in an agile manner and modify its business plans.
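The four adoption-path steps above can be sketched as an ordered pipeline whose last stage feeds measurements back into planning. The stage names follow the text; the functions themselves are toy stand-ins for illustration only.

```python
# Toy sketch of the four DevOps adoption-path steps as a pipeline.
# Each function is an illustrative stand-in, not a real tool.
def plan_and_measure(backlog):
    """Continuous business planning: prioritize the backlog."""
    return sorted(backlog, key=len)

def develop_and_test(items):
    """Collaborative development plus continuous testing."""
    return [f"built:{i}" for i in items]

def release_and_deploy(artifacts):
    """Push new features to customers immediately."""
    return [a.replace("built", "live") for a in artifacts]

def monitor_and_optimize(released):
    """Watch production and collect feedback for the next plan."""
    return {"deployed": len(released)}

feedback = monitor_and_optimize(
    release_and_deploy(develop_and_test(plan_and_measure(["checkout", "search"])))
)
print(feedback)  # -> {'deployed': 2}
```

The returned `feedback` dictionary is what closes the loop: in step 4 the business analyzes collected data and adjusts the plans that step 1 consumes.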
24. The current study focused on Web-Scale IT as one of the top ten strategic technologies of the future. With regard to the IoT revolution, companies should consider focusing on this turning point in their business.
Web-Scale principles and design can help enterprise companies run private cloud operations that achieve scalability, elasticity, resiliency and agility.
Hyperconvergence assists with the breakdown of IT silos by merging computing and storage. By
placing the data closer to where it is needed, data movement is minimized.
Web-Scale IT would be able to provide the ability to manage computing and storage
holistically.
In sum, the study endeavored to propose DevOps as a strategy for enterprises to help them achieve their goals.
Conclusion
According to Gartner (2014), Web-Scale IT describes how consumer Internet giants like Google, Amazon, and Facebook deliver seamless user service on a massive scale. Web-Scale IT permits companies to reduce the time-to-market of IT services and mitigate infrastructure costs. It can also increase agility, stimulate IT culture change and improve quality of service. Based on the aforementioned report, Gartner predicted that the use of Web-Scale IT as an architectural approach in global enterprises will increase from 10% in 2013 to 50% by 2017 [9].
[9] J.S. Yim, Gartner, "Top 10 Strategic Technology Trends for 2015," December 2014. http://spri.kr/wp-content/uploads/2014/12/20141212_054514.pdf (accessed Sep 11, 2015)
Purpose of the Slide:
Talk about the big challenges enterprise customers face today. Establish a baseline that everyone can agree to.
Key Points:
- Datacenters have become increasingly complex over the years. Every part of the infrastructure lifecycle is complex, from buying and deploying to configuring, managing and scaling infrastructure.
- As infrastructure became more complex, IT organizations also became more siloed. You needed storage experts to manage complex network storage and networking experts to manage enterprise network topologies. ITIL processes emerged to deal with the complexity. All this significantly slowed down the pace of IT deployment. Orgs have to trade off doing it right with doing it fast.
- As demand for resources (compute, network, storage) goes up, organizations want to be able to add capacity incrementally and predictably. Scale-up (big iron) infrastructure makes it difficult to scale in small increments when needed.
Purpose: Talk about where and how web-scale IT originated, and what some of the commonalities are between different web-scale data centers
Key Points:
Web companies like Google and Facebook started pushing the limits of existing infrastructure systems and processes in ways that traditional businesses did not. They needed infrastructure that could support their business requirements (rapid application development cycles, scale on demand, cost containment). They tried using existing infrastructure solutions, but quickly realized that legacy infra was a poor fit for their needs.
Over time, these companies developed an alternate approach to IT that enabled them to get past limitations in infrastructure. Some common traits of web-scale IT:
Infrastructure built from commodity server hardware pooled together using intelligent software. This allows customers to start small and scale one server at a time – true scale-out
The software in the system is distributed across all the nodes. You don’t have central metadata servers or name nodes. You don’t see controller bottlenecks
Embarrassingly parallel operations – everything in the system, including storage functions like deduplication and metadata management and system cleanup, is distributed across all nodes. There are no hotspots or bottlenecks, allowing for massive scale
Compute and storage sit very close to each other. Data does not have to go back and forth between storage and compute over a network. Data has gravity, so co-locating storage and compute eliminates network bottlenecks and system slowdown
Heavy automation eliminates the need for expensive, error-prone manual operations
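The "heavy automation" trait above usually rests on idempotent, desired-state operations: re-running the same step converges the system instead of compounding manual errors. A minimal sketch, assuming a made-up dictionary-based configuration model:

```python
# Minimal sketch of desired-state (idempotent) configuration.
# The config keys are illustrative, not a real system's settings.
def converge(current: dict, desired: dict) -> dict:
    """Apply only the settings that differ; safe to run repeatedly."""
    changes = {k: v for k, v in desired.items() if current.get(k) != v}
    current.update(changes)
    return changes

state = {"ntp": "off", "replicas": 2}
print(converge(state, {"ntp": "on", "replicas": 2}))  # first run applies a change
print(converge(state, {"ntp": "on", "replicas": 2}))  # second run is a no-op
```

Because the second run is a no-op, the operation can be scheduled blindly and at scale, which is what removes expensive, error-prone manual steps.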
Purpose: Summarize what web-scale IT is, its key ingredients and benefits
Key Points:
Web-scale IT is a new approach to designing, deploying and managing infrastructure
At its core, web-scale IT is about bringing simplicity back to every aspect of deploying and managing a data center
Web-scale IT has allowed companies to achieve:
Greater business agility. For example, being able to quickly spin up dev-test environments to support hundreds of software updates and releases, or getting a Hadoop cluster up for data analytics
Predictable scale – this is not just about large scale (e.g., petabytes of storage). It’s more about elasticity, i.e. adding resources predictably when needed
Cost efficiency – all about doing more with less and achieving lower TCO with a smaller footprint and fewer admins to manage infrastructure day to day
Web-scale IT encompasses changes across infrastructure strategy, organization and system design
It’s important to note that web-scale IT is not just about technology. It requires a fundamental shift in the people and processes managing the infrastructure
For example, the organizational culture and the lack of silos is as important as the specific tools being used
OPTIONAL SLIDE:
The brown hexagons are benefits while the peripherals are attributes of web-scale infrastructure
Purpose: Web-scale is not just for web companies. Companies of all sizes can benefit from adopting the principles and technologies of web-scale infrastructure.
Key Points:
There’s a common misconception that web-scale is all about supporting massive amounts of data and millions of concurrent users.
Those are in fact the problems that drove web companies to look for alternatives a few years ago. But at its core, web-scale IT is about simplicity. It is an architectural approach that provides a step-function decrease in costs by eliminating complexity in the data center.
In recent years, this approach is becoming relevant to other businesses as they face similar challenges around business agility, unpredictable growth requirements and increasing cost pressures. It’s a fact that businesses of all sizes can benefit from the advantages that web-scale infrastructure offers
Turnkey solutions can deliver the benefits of web-scale without requiring businesses to learn new skills or overhaul their IT environments. IT can run standard enterprise applications on these turnkey platforms.
Purpose: IT companies deliver the power of web-scale infrastructure to enterprise customers as a turnkey solution
Key Points:
What the IT company does is bring the simplicity, agility and rapid scale that web-scale technologies deliver, but as a turnkey enterprise solution
Customers can run their diverse application workloads without having to build custom applications
Customers don’t have to learn how to use Cassandra, map-reduce, etc.
Talk about “controlled disruption” – the enterprise is building the bridge for enterprise IT to embrace web-scale IT without completely overhauling the way they do things
Key Points:
Enterprise offers versatile building blocks that customers can deploy for a wide range of applications
With the software-defined approach to infrastructure, policies around resilience, data protection, etc. are late-bound in the system
The systems coming out of the factory don’t have rigid restrictions and preset configurations, so customers don’t have to buy different solutions for different workloads.
Taken from: http://dev2ops.org/2010/02/what-is-devops/
Development kicks things off by “tossing” a software release “over the wall” to Operations. Operations picks up the release artifacts and begins preparing for their deployment. Operations manually hacks the deployment scripts provided by the developers or creates their own scripts. They also hand edit configuration files to reflect the production environment, which is significantly different than the Development or QA environments. At best they are duplicating work that was already done in previous environments, at worst they are about to introduce or uncover new bugs.
Operations then embarks on what they understand to be the currently correct deployment process, which at this point is essentially being performed for the first time due to the script, configuration, process, and environment differences between Development and Operations. Of course, somewhere along the way a problem occurs and the developers are called in to help troubleshoot. Operations claims that Development gave them faulty artifacts. Developers respond by pointing out that it worked just fine in their environments, so it must be the case that Operations did something wrong. Developers have a difficult time even diagnosing the problem because the configuration, file locations, and procedure used to get into this state are different than what they expect (if security policies even allow them to access the production servers!).
Time is running out on the change window and, of course, there isn’t a reliable way to roll the environment back to a previously known good state. So what should have been an uneventful deployment ends up being an all-hands-on-deck fire drill where a lot of trial and error finally hacks the production environment into a usable state.
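One common remedy for the hand-edited-config problem in the story above is to ship a single release artifact and inject environment differences from the outside (for example, through environment variables), so Development, QA and Production all run identical code. A minimal sketch; the variable names are illustrative assumptions.

```python
# Sketch of externalized, per-environment configuration.
# The APP_* variable names are made up for illustration.
import os

def load_config() -> dict:
    """Read environment-specific settings instead of hand-editing files."""
    return {
        "db_host": os.environ.get("APP_DB_HOST", "localhost"),
        "debug": os.environ.get("APP_DEBUG", "false") == "true",
    }

os.environ["APP_DB_HOST"] = "db.prod.internal"  # normally set by deploy tooling
print(load_config())  # same code path in every environment
```

With this split, nothing about the artifact changes between environments, so the "it worked in Dev" argument largely disappears: only the injected values differ, and those are visible and versionable.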
Organizations must recognize that people, process, and technology are all interdependent facets of all IT services.
As noted by Gartner above, 80% of operational problems can often be attributed to people and process issues. Only a portion of the remaining 20% is actually technology related – some being external disasters.
Dev: “What’s the point of an Agile development process that produces production-ready code every two weeks, if the code sits for weeks or months waiting to be released?”
IT/Ops: “These frequent releases are killing my team, and impacting our ability to have a stable environment!”
People = Culture
Fundamental attributes of successful cultures:
Shared mission and incentives: infrastructure as code, apps as services, DevOps/all as teams
You need to treat your hardware as a commodity (don’t give your servers names): servers are like farm animals, and it is just harder if you let the kids name them
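The "farm animals, not pets" idea above translates into generating uniform, disposable server names instead of hand-picking them, so any node can be replaced without ceremony. The naming scheme below is an illustrative convention, not a standard.

```python
# Sketch of a cattle-style naming convention: role plus index,
# so nodes are interchangeable and trivially replaceable.
def node_name(role: str, index: int) -> str:
    return f"{role}-{index:04d}"

fleet = [node_name("web", i) for i in range(1, 4)]
print(fleet)  # -> ['web-0001', 'web-0002', 'web-0003']
```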
Build deep instrumentation into services, push complexity up the stack
Rally around agile, shared metrics, CI, service owners on call, etc.
Changing the culture: any change takes time and changing culture is no exception; you can’t do it alone, so exploit compelling events (downtimes, cloud adoption, the DevOps buzz) to change the culture
Process: definition and design, compliance, and continuous improvement
People: responsibilities, management, skills development, and discipline
Products: tools and infrastructure
http://itrevolution.com/a-personal-reinterpretation-of-the-three-ways/
1st Way - IT places Dev as the business representative and Ops as the customer representative, with value flowing in one direction (from the business to the customer). When we can think as a system, we can focus clearly on the business value that flows between our Business, Dev, Ops and the end users. We can see each piece as it fits into the whole, and can identify its constraints. We can also properly define our work, and when we can see and think in terms of the Flow of our system, we see the following benefits:
increased value flow due to the visibility into what it takes to produce our end product
our downstream step always gets what they need, how they need it, when they need it
faster time to market
we bring Ops in earlier in the development process, letting them plan appropriately for the changes that Dev will be making (because we know that all changes can affect how our product is delivered) which leads to less unplanned work or rushed changes
because work is visible, Ops can see the work coming and better prepare
We can identify and address constraints or bottleneck points in our system
2nd Way - It adds a backward-facing channel of communications between Ops and Dev. It enforces the idea that to better the product, we always need to communicate. Dev continually improves as an organization when it better sees the outcomes of its work. This can be small (inviting the other Tribes to our stand-ups) or it can be larger (including Dev in the on-call rotation, tools development, architecture planning and/or incident management process). But to truly increase our Flow and improve the business value being delivered to the customer, our Tribes need to know ‘what happens’ and ‘when it happens’. When we increase our Feedback and create a stable Feedback loop we see the following benefits:
Tribal knowledge grows, and we foster a community of sharing
With sharing comes trust and with trust comes greater levels of collaboration. This collaboration will lead to more stability and better Flow
We better understand all of our customers (Ops as a customer, Dev as a Business, but especially our end users, to whom we deliver value.)
We fix our defects faster, and are more aware of what is needed to make sure that type of problem doesn’t happen again
We adapt our processes as we learn more about the inner workings of our other Tribes
We increase our delivery speeds and decrease unplanned work
3rd Way: When we have achieved the first Two Ways we can feel comfortable knowing that we can push the boundaries. We can experiment, and fail fast, or achieve greatness. We have a constant feedback loop for each small experiment that allows us to validate our theories quickly.
we fail often and sometimes intentionally to learn how to respond properly and where our limits are
we inject faults into the production system as early as possible in the delivery pipeline
we practice for outages and find innovative ways to deal with them
we push ourselves into the unknown more frequently and become comfortable in the uncomfortable
we innovate and iterate in a ‘controlled’ manner, knowing when we should keep pushing and when we should stop
our code commits are more reliable, and production ready
we test our business hypotheses (at the beginning of the product pipeline), and measure the business results
we constantly put pressure into the system, striving to decrease cycle times and improve flow
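The fault-injection practice in the Third Way above can be sketched as a tiny chaos experiment: deliberately fail a dependency and verify that the service degrades gracefully instead of crashing. All of the names below are illustrative.

```python
# Toy chaos experiment: inject a dependency failure and verify
# graceful degradation. Service and function names are made up.
def fetch_recommendations(inject_fault: bool = False) -> list:
    """Stand-in for a downstream service call; the flag simulates an outage."""
    if inject_fault:
        raise TimeoutError("recommendation service unavailable")
    return ["item-a", "item-b"]

def homepage(inject_fault: bool = False) -> dict:
    try:
        recs = fetch_recommendations(inject_fault)
    except TimeoutError:
        recs = []  # graceful degradation: the page still renders
    return {"status": "ok", "recommendations": recs}

print(homepage(inject_fault=True))  # -> {'status': 'ok', 'recommendations': []}
```

Running this kind of intentional failure regularly is how teams "become comfortable in the uncomfortable": the response to an outage is rehearsed code, not an improvised fire drill.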
This is your output to measure
Modern application lifecycle management practices enable teams to support a continuous delivery cadence that balances agility and quality, while removing the traditional silos separating developers from operations and business stakeholders. This improves communication and collaboration within development teams, and drives connections between application and business outcomes. We see three key metrics that are critical to an organization’s ability to enable value delivery with agility and quality. First, the flow of business value must be measured and improved: understanding what provides business value, and delivering those features on a sustained, regular cadence, is key. Second is the ability to identify and remove bottlenecks to shorten cycle times for delivering that business value; it’s not enough to simply deliver regularly, but also efficiently. And finally, identify and reduce sources of rework, such as bugs and incorrectly specified features.
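The cycle-time metric discussed above can be computed directly from work-item timestamps: the time from starting work to delivery, averaged across items. The data below is made up for illustration.

```python
# Sketch of a cycle-time calculation over work items.
# Timestamps are illustrative sample data.
from datetime import datetime

items = [
    {"started": "2015-10-01", "delivered": "2015-10-08"},
    {"started": "2015-10-03", "delivered": "2015-10-06"},
]

def avg_cycle_days(items) -> float:
    """Average days from start of work to delivery."""
    fmt = "%Y-%m-%d"
    spans = [
        (datetime.strptime(i["delivered"], fmt)
         - datetime.strptime(i["started"], fmt)).days
        for i in items
    ]
    return sum(spans) / len(spans)

print(avg_cycle_days(items))  # -> 5.0
```

Tracked release over release, a falling average signals that bottlenecks are being removed; a rising one flags where the flow of business value is stalling.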