The document discusses various technologies related to cloud computing and big data including Docker, Kubernetes, OpenStack, and Hitachi's Hyper Scale-Out Platform (HSP). It provides overviews of what each technology is used for, such as Docker for containerization, Kubernetes for container orchestration, OpenStack for building cloud infrastructure, and HSP for hyperconverged scale-out infrastructure. It also includes diagrams illustrating how these technologies can work together in an enterprise environment to provide solutions for areas like data lakes, analytics, and private clouds.
HIPAS UCP HSP OpenStack, Sascha Oehl
1. What's up with Docker, Kubernetes, OpenStack, HSP
UCP Base Technology and Sales Vision
Sascha Oehl
Senior Manager Presales Germany North & Public
Initiative Lead UCP
4. Accenture study: insurance sales via the internet will double in Europe by 2016, to 25 billion euros in revenue.
Nearly two-thirds of the executives interviewed (64 percent) believe that this development will be driven by companies that are not in this business today, such as the big e-commerce players.
Source: https://de.nachrichten.yahoo.com/accenture-studie-versicherungsabsatz-%C3%BCber-das-internet-verdoppelt-sich-000000482.html
14. Bimodal IT
Mode 1 (RUN, core business systems): stability, efficiency, safety, accuracy, process
Mode 2 (DEVOPS): agility, innovation, fail fast, speed, automation
[Chart comparing budget and workload across the two modes]
22. Network management tasks:
topology planning
bonding and availability
network monitoring
address management
routing and access control
port provisioning
bandwidth provisioning
capacity planning
firmware updates
23. Storage (SAN) management tasks:
SAN zoning
RAID grouping
LUN carving
performance tuning
storage monitoring
topology planning
multi-pathing and availability
port provisioning
bandwidth provisioning
capacity planning
firmware updates
25. Network management tasks (same list as slide 22).
43. Horizon: the management interface / GUI used to provision new instances.
[OpenStack architecture diagram, repeated on the following slides: Horizon on top of Nova, Neutron, Cinder, Glance, Swift, Manila and Keystone, serving instances; release timeline Juno 2014.2, Kilo 2015.1, Liberty 2015.2, Mitaka 2016.1, Newton 2016.2]
44. Instance: the "virtual machine".
[OpenStack architecture diagram as above]
47. Nova: the compute service. Runs instances on hypervisors like Hitachi LPAR, KVM, VMware, Hyper-V and others. Works with Hitachi CB500, CB2500 and HSP.
[OpenStack architecture diagram as above]
49. Cinder: the block storage service. Manages and provisions volumes on e.g. Hitachi HNAS and Hitachi VSP G200-G1000.
[OpenStack architecture diagram as above]
50. Swift: the object storage service. Manages and provisions object store space on e.g. Hitachi HCP.
[OpenStack architecture diagram as above]
51. Manila: the NAS (shared file) storage service. Manages and provisions network share space on e.g. Hitachi HNAS.
[OpenStack architecture diagram as above]
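As a hedged illustration of how these services fit together from a client's point of view, here is a minimal openstacksdk sketch in Python touching Nova, Cinder and Swift. The cloud entry, image, flavor and network names are assumptions for the example, not values from this deck:

```python
# Minimal openstacksdk sketch: boot an instance (Nova), create a volume
# (Cinder), and store an object (Swift). The cloud/image/flavor/network
# names below are placeholders -- adjust to your clouds.yaml environment.
import openstack

conn = openstack.connect(cloud="mycloud")  # hypothetical clouds.yaml entry

# Nova: boot an instance on whatever hypervisor the cloud runs (KVM etc.)
image = conn.image.find_image("cirros")          # assumed image name
flavor = conn.compute.find_flavor("m1.small")    # assumed flavor name
network = conn.network.find_network("private")   # assumed network name
server = conn.compute.create_server(
    name="demo-instance",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)

# Cinder: provision a block volume (backed by e.g. VSP via a vendor driver)
volume = conn.block_storage.create_volume(name="demo-vol", size=10)  # GiB

# Swift: provision object store space (backed by e.g. HCP)
conn.object_store.create_container(name="demo-container")
conn.object_store.upload_object(
    container="demo-container", name="hello.txt", data=b"hello"
)
```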
55. HSP + Scale-Out:
- Hadoop developer kit to build analytics apps; distributed file system with global namespace
- Pre-built analytics apps and solutions for targeted verticals
- Hyper-converged scale-out platform, ideal for private compute clouds
56. ELEVATES the sales discussion from IT to business:
- Data warehouse optimization: offload data to Hadoop on HSP (COST REDUCTION, SCALABILITY)
- Streamlined data refinery: Hadoop on HSP for data pre-processing (BETTER INSIGHT)
- Customer 360: Hitachi Content Platform (HCP) as an insight platform (BETTER INSIGHT)
SERVICES DRIVER for both Hitachi Data Systems and partners
PROVEN GTM models
COMPELLING ROI for customers
57. Hadoop − Bring Data to Apps
[Diagram: two groups of compute nodes, one running Batch App 1 and one running Batch App 2, each group with its own HDFS; the unstructured source data sits on a NAS filer and an object store (durable enterprise-class storage) and must be shipped into HDFS.]
58. HSP − Bring Apps to Data
Scale: spin up compute jobs where the data is, vs. moving terabytes of Hadoop data.
[Diagram: a durable enterprise-class data lake with POSIX/NFS serving and fast ingest of unstructured data; virtual machines running streaming apps, batch apps and other apps spin up directly on the nodes holding the data.]
64. The eScale Distributed File System
[Diagram: multiple file systems hosted on one platform, e.g. an equipment usage file system, a billing file system and a maintenance file system.]
65. Distributed Appliance Management System
[Diagram: an OpenStack management client controlling three groups of VMs: VMs running MARS and HSDP, VMs running Hadoop with Lucene and Solr, and VMs running Hadoop with HBase.]
66. NODE stack, top to bottom:
- Access (NFS, HTTP, etc.)
- Namespace (file system)
- Storage services (OSD)
- Storage access (erasure coding, redundancy)
- DLM
The node serves the file systems shown above (billing, maintenance, equipment usage).
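The storage access layer above names erasure coding. As a minimal, hedged sketch of the idea only (single XOR parity; HSP's actual code and parameters are not described in this deck), in Python:

```python
# Minimal erasure-coding sketch (single XOR parity, RAID-5 style).
# Illustrative only -- real systems use stronger codes (e.g. Reed-Solomon)
# with configurable data/parity block counts.
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data_blocks = [b"AAAA", b"BBBB", b"CCCC"]   # k data blocks
parity = xor_blocks(data_blocks)            # 1 parity block

# Lose any single block; recover it from the survivors plus parity.
lost_index = 1
survivors = [b for i, b in enumerate(data_blocks) if i != lost_index]
recovered = xor_blocks(survivors + [parity])
assert recovered == data_blocks[lost_index]
```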
67. [Diagram, repeated on slides 67-71: a cell of four nodes, each running the full stack of Access/NFS, namespace (file system), storage services (OSD), storage access and DLM; clients connect to the cluster, with Paxos (consensus) and CRUSH (data placement) spanning it; the billing, maintenance and equipment usage file systems are presented in one namespace.]
Global Name Space
68. [Same cell diagram as slide 67]
Distributed data: DLM, MDS, config (patented)
69. [Same cell diagram as slide 67]
Focus: availability, recoverability, massive scalability and performance
70. [Same cell diagram as slide 67]
Self-Configure, Self-Manage, Self-Balance, Self-Repair
71. [Same cell diagram as slide 67]
Efficient (re)distribution of data across nodes (CRUSH)
Quick and easy to add nodes; very efficient failure recovery
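CRUSH is the deterministic, table-free placement scheme popularized by Ceph. As a hedged stand-in for the idea (not HSP's actual algorithm), a rendezvous-hashing sketch in Python shows why deterministic placement makes adding nodes cheap: every node can compute where data lives without a lookup table, and only about 1/n of replica placements move when a node joins:

```python
# Deterministic placement sketch using rendezvous (HRW) hashing.
# Illustrative stand-in for CRUSH-style placement, not HSP's algorithm.
import hashlib

def place(obj_id: str, nodes: list[str], replicas: int = 3) -> list[str]:
    """Return the `replicas` highest-scoring nodes for this object."""
    def score(node: str) -> int:
        h = hashlib.sha256(f"{node}:{obj_id}".encode()).digest()
        return int.from_bytes(h[:8], "big")
    return sorted(nodes, key=score, reverse=True)[:replicas]

nodes = ["node1", "node2", "node3", "node4"]
objects = [f"obj{i}" for i in range(10000)]
before = {o: set(place(o, nodes)) for o in objects}

nodes.append("node5")  # scale out by one node
after = {o: set(place(o, nodes)) for o in objects}

moved = sum(len(after[o] - before[o]) for o in objects)
total = sum(len(before[o]) for o in objects)
print(f"replica placements changed: {moved}/{total}")  # roughly 1/n
```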
In many ways, Big Data and the Internet of Things are the result of a "perfect storm".
There's the digital data evolution: from business transactional data in databases, to human-generated content (Word documents, images, audio, social media and so on), to the data of the Internet of Things. Massive amounts of data, doubling annually.
There's the technology evolution enabling us to store, manage and process that data at scale, and at a price point that makes it viable to do so.
And finally there's the evolution in the technology and techniques to actually analyze that data and derive value from it.
In the past we had a relatively limited amount of data to drive decision making, and much of the analysis was limited to reporting on what had happened.
Technology advancements enabled us to mine more data in real time, to react instantaneously, and to use data to model what might happen in the future for more predictive insight.
But with the ability to mine massive amounts of granular data from all data sources, our observations can now be far more accurate, unlocking hidden or weak signals in the data that give us highly accurate and early prediction of problems before they occur. In other words, rather than just alerting us to problems that have occurred, analytics can prevent them from occurring in the first place.
Analytics has moved from forensics to a weapon of preemptive control!
Talk about the massive growth in all areas: data, users, performance demands, and the number of devices in use. This growth will continue due to Big Data and the analytics that will be run on this data, which demands high performance with instant answers.
Demands on IT are changing and growing every day; this requires an infrastructure that can change and adapt without outages or downtime. Today's infrastructures need to be able to cope with all this, and with whatever comes next.
Customers are using converged infrastructure more and more as a standard building block for their data center or private cloud. According to IDC, the worldwide integrated infrastructure and platforms market is increasing by 20% year over year.
At HDS we like to refer to converged platforms: a converged platform combines multiple components (compute, network and storage capacity) into a single, optimized computing package. We have seen strong demand for our converged platform, Unified Compute Platform.
"The importance of unstructured data in the enterprise is underscored by the fact that beginning in 2015, unstructured data will surpass structured data in terms of both capacity shipped and revenue. IDC estimates that in 2017, unstructured data will account for 79.2% of capacity shipped and 57.3% of revenue," said Ashish Nadkarni, research director, Storage Systems at IDC.
There are three major changes affecting businesses and IT groups around the world.
First, the demands of the business are prompting tremendous changes in IT. An unavoidable business demand is seamless access to data and applications across all devices: anywhere, anytime, uninterrupted, 24/7. Fast application development and speed to market are vital for competitive advantage. IT must be efficient and must enable revenue. If it doesn't, the business may go outside IT for solutions; there are many alternatives available, and the business can't wait if it is to survive.
The second major shift is that the overall role of IT is changing dramatically. The CIO and IT are becoming advisors to the rest of the business on the use of technology solutions from internal and external sources. IT must also provide new capabilities, such as platforms that help companies with digitization and the overall transformation of the business model. IT must facilitate its own transformation: it can no longer rely on efficiency alone, and must move past legacy infrastructure to allow a new way to deliver and consume IT, such as a cloud delivery model.
The third major shift in IT is in the delivery strategy.
Agility: IT needs to make a very big move from a hardware-centric infrastructure to a software-defined approach that is nimble and flexible; an open design that lets IT handle business shifts much faster than waiting two years for a big technology refresh.
Automation: it's the foundation for cloud. It not only saves cost, it offers speed to market, including faster application development.
Lastly, IT needs to help with business outcomes. The best CIOs on the planet are the next CEOs; look how pervasive technology is in every company. Reactive IT groups will fall behind, while the IT people who are bridging the gap between technology and changing business models will be the new business executives.
HSP is distributed, scale-out storage using a distributed architecture and a file system with a global namespace.
First layer: IaaS (for NFS; we don't do SMB).
Second layer: customers simply want a supported Hadoop distro (build and implement).
Third layer: we want this layer, but it won't be there on day 1.
At the core we are a platform for scale-out, with APIs and tools for analytics and virtualized apps, and with the top layer being the solutions and apps that drive customer business.
With traditional Hadoop, you bring your data into the data center. As you use the data for Hadoop, you import it into HDFS and ship it to the corresponding compute nodes, and you send more data as you increase your workload to more nodes. HDFS requires considerable coordination and network transfers to support Hadoop jobs, with the associated compute, network and storage costs.
With Hawaii (HSP), you bring your data into the data center. Once on Hawaii it stays in place; it doesn't need an import beyond the original POSIX ingest. You now spin up VMs where the data is, rather than shipping the data to the compute. The result: minimal network traffic and in-place data handling, which means simplified management. The key is flipping the Hadoop architecture from pushing data to siloed storage to retaining data in place and leveraging central management, storage and IT, as the sketch below illustrates.
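A hedged toy sketch of this "bring apps to data" scheduling idea in Python (not the actual Hawaii scheduler; the block locations and node names are invented for illustration):

```python
# Toy locality-aware scheduler: run each task on a node that already holds
# its input block, falling back to a remote read only when necessary.
# Illustrative sketch only, not the HSP/Hawaii scheduler.

# Which nodes hold a replica of each block (e.g. from the namespace layer).
block_locations = {
    "block-1": {"node1", "node2", "node3"},
    "block-2": {"node2", "node3", "node4"},
    "block-3": {"node1", "node3", "node4"},
}

# Free task slots per node.
free_slots = {"node1": 1, "node2": 1, "node3": 1, "node4": 1}

def schedule(block: str) -> tuple[str, bool]:
    """Return (node, local) for the task reading `block`."""
    # Prefer a node that holds the block locally: no network transfer.
    for node in block_locations[block]:
        if free_slots.get(node, 0) > 0:
            free_slots[node] -= 1
            return node, True
    # Fall back to any free node (implies shipping the block over the net).
    for node, slots in free_slots.items():
        if slots > 0:
            free_slots[node] -= 1
            return node, False
    raise RuntimeError("no free slots")

for blk in block_locations:
    node, local = schedule(blk)
    print(f"{blk} -> {node} ({'local' if local else 'remote'} read)")
```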
Theme for Pentaho 6.0: data-in-place processing makes HSP ideal for running Hadoop. Run analytics applications side by side with Hadoop and leverage the same data.
Quickly host web apps: web apps have become popular due to the widespread availability and convenience of using a web browser as a client. Host web apps on HSP with VM templates that let you run the apps and provide durable storage in a single platform, and analyze the data from those web apps in neighboring analytics applications for business insights.
Resiliency: when you lose a disk, its data is rebuilt on other disks/nodes. This is achieved by making another copy of the data to maintain the three-copy policy.
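A minimal sketch of that repair loop in Python, assuming a simple block-to-replica map (illustrative only, not HSP's repair code):

```python
# Toy re-replication: when a node dies, restore the three-copy policy by
# copying each affected block from a surviving replica to a new node.
# Illustrative only; real systems throttle, prioritize and parallelize this.
import random

REPLICAS = 3
nodes = {"node1", "node2", "node3", "node4", "node5"}
replica_map = {
    "block-1": {"node1", "node2", "node3"},
    "block-2": {"node2", "node4", "node5"},
    "block-3": {"node1", "node3", "node5"},
}

def repair(failed: str) -> None:
    nodes.discard(failed)
    for block, holders in replica_map.items():
        holders.discard(failed)
        while len(holders) < REPLICAS:
            source = next(iter(holders))                     # surviving copy
            target = random.choice(sorted(nodes - holders))  # new home
            # ... copy block data from source to target ...
            holders.add(target)
            print(f"{block}: re-replicated {source} -> {target}")

repair("node1")
assert all(len(h) == REPLICAS for h in replica_map.values())
```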
Showing the HDS "Big Picture for Big Data", where a complete portfolio demonstrates the power of our coverage:
- Multiple data sources act as "tributary" rivers into the HSP data lake
- Pentaho Data Integration "shims" act as the ETL connection between the various data sources
- Run both PDI and BA on the HSP via KVM
- Run Hadoop and other data services on the HSP (Spark, Cassandra, MongoDB, Elasticsearch, etc.)
- Archive for regulatory compliance to the HCP (retention policies, data disposition, chain of custody, privileged delete, immutable data)
- Ensure data lineage and security
The centerpiece of the Hawaii value proposition is fast time to business results. These surrounding attributes and benefits all contribute to it:
- Massive data ingest at high speed, at scale, for data-intensive workloads like Hadoop.
- Data-in-place analysis, i.e. bringing apps to data. Running apps at the data source makes a difference when processing massive data sets and streams from multiple sources, and it avoids the performance loss associated with moving data.
- Automated and self-managing operation is key because of the reduced operational time spent, resulting in OPEX savings.
- The speed of data ingest and the proximity of apps to data make Hadoop and analytics jobs finish faster.
- Reduced hardware via converged compute and storage, plus reduced setup time, save both CAPEX and OPEX by eliminating separate virtualization, server and storage tiers.
- Maximized efficiency for open cloud architectures, by running virtualized apps at the source of the data on a single platform in the cloud.
Useful innovation for the benefit of society is our vision: Social Innovation.
We connect what works today with what you need tomorrow. A long-term vision is vital for this kind of integration over time.