1
An Architectural
Overview of MapR
M. C. Srivas, CTO/Founder
2
 100% Apache Hadoop
 With significant enterprise-
grade enhancements
 Comprehensive
management
 Industry-standard
interfaces
 Higher performance
MapR Distribution for Apache Hadoop
3
MapR: Lights Out Data Center Ready
• Automated stateful failover
• Automated re-replication
• Self-healing from HW and SW
failures
• Load balancing
• Rolling upgrades
• No lost jobs or data
• 99.999% ("five nines") uptime
Reliable Compute Dependable Storage
• Business continuity with snapshots
and mirrors
• Recover to a point in time
• End-to-end checksumming
• Strong consistency
• Built-in compression
• Mirror between two sites by RTO
policy
4
MapR does MapReduce (fast)
TeraSort Record
1 TB in 54 seconds
1003 nodes
MinuteSort Record
1.5 TB in 59 seconds
2103 nodes
5
MapR does MapReduce (faster)
TeraSort Record
1 TB in 54 seconds
1003 nodes
MinuteSort Record
1.5 TB in 59 seconds
2103 nodes
6
Google chose MapR to
provide Hadoop on
Google Compute Engine
The Cloud Leaders Pick MapR
Amazon EMR is the largest
Hadoop provider in revenue
and # of clusters
Deploying OpenStack? MapR partnership with Canonical
and Mirantis on OpenStack support.
7
1. Make the storage reliable
– Recover from disk and node failures
How to make a cluster reliable?
8
1. Make the storage reliable
– Recover from disk and node failures
2. Make services reliable
– Services need to checkpoint their state rapidly
– Restart failed service, possibly on another node
– Move check-pointed state to restarted service, using (1) above
How to make a cluster reliable?
9
1. Make the storage reliable
– Recover from disk and node failures
2. Make services reliable
– Services need to checkpoint their state rapidly
– Restart failed service, possibly on another node
– Move check-pointed state to restarted service, using (1) above
3. Do it fast
– Instant-on … (1) and (2) must happen very, very fast
– Without maintenance windows
• No compactions (eg, Cassandra, Apache HBase)
• No “anti-entropy” that periodically wipes out the cluster (eg, Cassandra)
How to make a cluster reliable?
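A minimal Java sketch of the checkpoint-and-restart pattern described in steps (1)-(3) (illustrative only, not MapR's implementation; the /mapr/mycluster path and the single-counter "state" are hypothetical): the service writes its state to a path assumed to live on replicated cluster storage, so a restarted instance on any node can recover it.
```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

// Illustrative sketch only: a service that checkpoints its state to replicated
// cluster storage and restores it on (re)start, possibly on another node.
public class CheckpointedService {
    private static final Path CHECKPOINT =
            Paths.get("/mapr/mycluster/services/counter.ckpt");  // assumed replicated path

    private long counter;  // the service's only piece of state in this sketch

    void start() throws IOException {
        // Step 2: on restart (anywhere), recover the check-pointed state.
        if (Files.exists(CHECKPOINT)) {
            counter = Long.parseLong(Files.readString(CHECKPOINT).trim());
        }
    }

    void doWork() throws IOException {
        counter++;      // mutate state
        checkpoint();   // checkpoint rapidly after each change
    }

    private void checkpoint() throws IOException {
        // Write to a temp file, then rename, so a crash never leaves a torn checkpoint.
        Files.createDirectories(CHECKPOINT.getParent());
        Path tmp = CHECKPOINT.resolveSibling("counter.ckpt.tmp");
        Files.writeString(tmp, Long.toString(counter), StandardCharsets.UTF_8);
        Files.move(tmp, CHECKPOINT, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        CheckpointedService svc = new CheckpointedService();
        svc.start();
        svc.doWork();
        System.out.println("counter = " + svc.counter);
    }
}
```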
10
 No NVRAM
Reliability With Commodity Hardware
11
 No NVRAM
 Cannot assume special connectivity
– no separate data paths for "online" vs. replica traffic
Reliability With Commodity Hardware
12
 No NVRAM
 Cannot assume special connectivity
– no separate data paths for "online" vs. replica traffic
 Cannot even assume more than 1 drive per node
– no RAID possible
Reliability With Commodity Hardware
13
 No NVRAM
 Cannot assume special connectivity
– no separate data paths for "online" vs. replica traffic
 Cannot even assume more than 1 drive per node
– no RAID possible
 Use replication, but …
– cannot assume peers have equal drive sizes
– what if the drive on one machine is 10x larger than the drive on another?
Reliability With Commodity Hardware
14
 No NVRAM
 Cannot assume special connectivity
– no separate data paths for "online" vs. replica traffic
 Cannot even assume more than 1 drive per node
– no RAID possible
 Use replication, but …
– cannot assume peers have equal drive sizes
– what if the drive on one machine is 10x larger than the drive on another?
 No choice but to replicate for reliability
Reliability With Commodity Hardware
15
 Replication is easy, right? All we have to do is send the same bits
to the master and replica.
Reliability via Replication
(diagram: left, normal replication: clients send writes to the primary server, which forwards them to the replica; right, Cassandra-style replication: clients send writes to the primary server and the replica directly)
16
 When the replica comes back, it is stale
– it must be brought up to date
– until then, exposed to failure
But crashes occur…
(diagram: left, the primary re-syncs the replica when it comes back, raising the question "who re-syncs?"; right, Cassandra-style: the replica remains stale until an "anti-entropy" process is kicked off by the administrator)
17
 HDFS solves the problem a third way
Unless it's Apache HDFS …
18
 HDFS solves the problem a third way
 Make everything read-only
– Nothing to re-sync
Unless it's Apache HDFS …
19
 HDFS solves the problem a third way
 Make everything read-only
– Nothing to re-sync
 Single writer, no reads allowed while writing
Unless it's Apache HDFS …
20
 HDFS solves the problem a third way
 Make everything read-only
– Nothing to re-sync
 Single writer, no reads allowed while writing
 File close is the transaction that allows readers to see data
– unclosed files are lost
– cannot write any further to closed file
Unless it's Apache HDFS …
21
 HDFS solves the problem a third way
 Make everything read-only
– Nothing to re-sync
 Single writer, no reads allowed while writing
 File close is the transaction that allows readers to see data
– unclosed files are lost
– cannot write any further to closed file
 Real-time not possible with HDFS
– to make data visible, must close file immediately after writing
– Too many files is a serious problem with HDFS (a well-documented
limitation)
Unless it's Apache HDFS …
22
 HDFS solves the problem a third way
 Make everything read-only
– Nothing to re-sync
 Single writer, no reads allowed while writing
 File close is the transaction that allows readers to see data
– unclosed files are lost
– cannot write any further to closed file
 Real-time not possible with HDFS
– to make data visible, must close file immediately after writing
– Too many files is a serious problem with HDFS (a well-documented
limitation)
 HDFS therefore cannot do NFS, ever
– No “close” in NFS … can lose data any time
Unless it's Apache HDFS …
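The same semantics show up in the standard Hadoop FileSystem API. A hedged sketch (the path is a placeholder; hflush/hsync exist only on newer HDFS releases): the written bytes become reliably visible to readers only once the stream is closed.
```java
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative sketch of HDFS's single-writer, visible-on-close model.
public class HdfsVisibility {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // cluster settings from the classpath
        FileSystem fs = FileSystem.get(conf);
        Path p = new Path("/tmp/events.log");       // placeholder path

        // Single writer: one client has the file open for write; readers see nothing yet.
        FSDataOutputStream out = fs.create(p, true /* overwrite */);
        out.write("event-1\n".getBytes(StandardCharsets.UTF_8));
        // A crash before close() can lose this data; readers only reliably see it
        // after close() (or an explicit hflush()/hsync() on newer releases).
        out.close();                                // the "commit" that publishes event-1

        // After close the file is effectively frozen; adding more data means a new
        // file, which is why real-time feeds end up creating far too many files.
        fs.close();
    }
}
```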
23
 To support normal apps, need full read/write support
 Let's return to the issue: re-syncing the replica when it comes back
This is the 21st century…
(diagram: left, the primary re-syncs the replica when it comes back, raising the question "who re-syncs?"; right, Cassandra-style: the replica remains stale until an "anti-entropy" process is kicked off by the administrator)
24
 24 TB / server
– @ 1000MB/s = 7 hours
– practical terms, @ 200MB/s = 35 hours
How long to re-sync?
25
 24 TB / server
– @ 1000MB/s = 7 hours
– practical terms, @ 200MB/s = 35 hours
 Did you say you want to do this online?
How long to re-sync?
26
 24 TB / server
– @ 1000MB/s = 7 hours
– practical terms, @ 200MB/s = 35 hours
 Did you say you want to do this online?
– throttle re-sync rate to 1/10th
– 350 hours to re-sync (= 15 days)
How long to re-sync?
27
 24 TB / server
– @ 1000MB/s = 7 hours
– practical terms, @ 200MB/s = 35 hours
 Did you say you want to do this online?
– throttle re-sync rate to 1/10th
– 350 hours to re-sync (= 15 days)
 What is your Mean Time To Data Loss (MTTDL)?
How long to re-sync?
28
 24 TB / server
– @ 1000MB/s = 7 hours
– practical terms, @ 200MB/s = 35 hours
 Did you say you want to do this online?
– throttle re-sync rate to 1/10th
– 350 hours to re-sync (= 15 days)
 What is your Mean Time To Data Loss (MTTDL)?
– how long before a double disk failure?
– a triple disk failure?
How long to re-sync?
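The arithmetic behind these numbers, as a small runnable sketch (decimal units assumed; the slide rounds the results up slightly):
```java
// Back-of-the-envelope re-sync math from this slide.
public class ResyncTime {
    public static void main(String[] args) {
        double bytes = 24e12;              // 24 TB per server

        double wireRate = 1000e6;          // 1000 MB/s
        double diskRate = 200e6;           // 200 MB/s sustained, more realistic
        double throttled = diskRate / 10;  // online, throttled to 1/10th

        System.out.printf("@1000 MB/s : %4.0f hours%n", bytes / wireRate / 3600);
        System.out.printf("@ 200 MB/s : %4.0f hours%n", bytes / diskRate / 3600);
        System.out.printf("throttled  : %4.0f hours (~%.0f days)%n",
                bytes / throttled / 3600, bytes / throttled / 86400);
        // prints ~7 h, ~33 h, ~333 h -- the slide rounds these to 7 / 35 / 350
    }
}
```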
29
 Use dual-ported disk to side-step this problem
Traditional solutions
(diagram: clients connect to a primary server and a replica that share a dual-ported disk array; RAID-6 with idle spares; servers use NVRAM)
30
 Use dual-ported disk to side-step this problem
Traditional solutions
(diagram: clients connect to a primary server and a replica that share a dual-ported disk array; RAID-6 with idle spares; servers use NVRAM)
COMMODITY HARDWARE LARGE SCALE CLUSTERING
31
 Use dual-ported disk to side-step this problem
Traditional solutions
(diagram: clients connect to a primary server and a replica that share a dual-ported disk array; RAID-6 with idle spares; servers use NVRAM)
COMMODITY HARDWARE LARGE SCALE CLUSTERING
32
 Use dual-ported disk to side-step this problem
Traditional solutions
(diagram: clients connect to a primary server and a replica that share a dual-ported disk array; RAID-6 with idle spares; servers use NVRAM)
COMMODITY HARDWARE LARGE SCALE CLUSTERING
Large Purchase Contracts, 5-year spare-parts plan
33
Forget Performance?
(diagram: Traditional Architecture: application servers (function) talk to an RDBMS (function), with all data on a shared SAN/NAS)
34
Forget Performance?
(diagram: Traditional Architecture: application servers (function) talk to an RDBMS (function), with all data on a shared SAN/NAS; Hadoop: data and function are co-located on every node of the cluster)
Geographically dispersed also?
35
 Chop the data on each node to 1000's of pieces
– not millions of pieces, only 1000's
– pieces are called containers
What MapR does
36
 Chop the data on each node to 1000's of pieces
– not millions of pieces, only 1000's
– pieces are called containers
What MapR does
37
 Chop the data on each node to 1000's of pieces
– not millions of pieces, only 1000's
– pieces are called containers
 Spread replicas of each container across the cluster
What MapR does
38
 Chop the data on each node to 1000's of pieces
– not millions of pieces, only 1000's
– pieces are called containers
 Spread replicas of each container across the cluster
What MapR does
39
Why does it improve things?
40
 100-node cluster
 each node holds 1/100th of every node's data
MapR Replication Example
41
 100-node cluster
 each node holds 1/100th of every node's data
 when a server dies
MapR Replication Example
42
 100-node cluster
 each node holds 1/100th of every node's data
 when a server dies
MapR Replication Example
43
 100-node cluster
 each node holds 1/100th of every node's data
 when a server dies
MapR Replication Example
44
 100-node cluster
 each node holds 1/100th of every node's data
 when a server dies
 the entire cluster re-syncs the dead node's data
MapR Replication Example
45
 100-node cluster
 each node holds 1/100th of every node's data
 when a server dies
 the entire cluster re-syncs the dead node's data
MapR Replication Example
46
 99 nodes re-sync'ing in parallel
MapR Re-sync Speed
47
 99 nodes re-sync'ing in parallel
– 99x number of drives
– 99x number of ethernet ports
– 99x CPUs
MapR Re-sync Speed
48
 99 nodes re-sync'ing in parallel
– 99x number of drives
– 99x number of ethernet ports
– 99x CPUs
 Each is resync'ing 1/100th of the
data
MapR Re-sync Speed
49
 99 nodes re-sync'ing in parallel
– 99x number of drives
– 99x number of ethernet ports
– 99x CPUs
 Each is resync'ing 1/100th of the
data
MapR Re-sync Speed
 Net speed up is about 100x
– 350 hours vs. 3.5
50
 99 nodes re-sync'ing in parallel
– 99x number of drives
– 99x number of ethernet ports
– 99x CPUs
 Each is resync'ing 1/100th of the
data
MapR Re-sync Speed
 Net speed up is about 100x
– 350 hours vs. 3.5
 MTTDL is 100x better
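A companion sketch of the speed-up math (assuming the same 200 MB/s drive rate throttled to 1/10th as on the earlier re-sync slide):
```java
// Illustrative math: re-replicating a dead node's 24 TB with 99 surviving peers.
public class ParallelResync {
    public static void main(String[] args) {
        double deadNodeBytes = 24e12;   // data to re-replicate
        double throttledRate = 20e6;    // 20 MB/s per node (200 MB/s throttled to 1/10th)

        double serialHours   = deadNodeBytes / throttledRate / 3600;         // one peer does it all
        double parallelHours = (deadNodeBytes / 100) / throttledRate / 3600; // each peer re-syncs ~1/100th

        System.out.printf("single peer      : %.0f hours%n", serialHours);
        System.out.printf("99 peers, 1/100th: %.1f hours (~%.0fx faster)%n",
                parallelHours, serialHours / parallelHours);
    }
}
```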
51
Why is this so difficult?
52
 Writes are synchronous
 Data is replicated in a "chain"
fashion
– utilizes full-duplex network
 Meta-data is replicated in a
"star" manner
– response time better
MapR's Read-write Replication
(diagram: client1 … clientN writing data through a replication chain, and client1 … clientN writing metadata to a primary that fans out to the replicas in a star)
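An in-process sketch of the two shapes (illustrative only, not MapR's wire protocol): data flows down a chain where each replica forwards to the next, while metadata fans out from the primary in a star.
```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of chain vs. star replication shapes.
public class ReplicationShapes {

    // A replica that applies a write locally and, in chain mode, forwards it on.
    static class Node {
        final String name;
        Node next;                               // next hop in the chain (null = tail)
        final List<String> log = new ArrayList<>();

        Node(String name) { this.name = name; }

        // Chain replication: apply, then forward downstream; the write completes
        // only after the tail has applied it.
        void chainWrite(String record) {
            log.add(record);
            if (next != null) next.chainWrite(record);
        }
    }

    // Star replication: the primary applies the write and pushes it directly to
    // every replica, which keeps response time low for small metadata operations.
    static void starWrite(Node primary, List<Node> replicas, String record) {
        primary.log.add(record);
        for (Node r : replicas) r.log.add(record);
    }

    public static void main(String[] args) {
        Node a = new Node("A"), b = new Node("B"), c = new Node("C");
        a.next = b; b.next = c;                   // chain A -> B -> C for data
        a.chainWrite("data-block-1");

        starWrite(a, List.of(b, c), "meta-op-1"); // star A -> {B, C} for metadata
        System.out.println("C's log: " + c.log);  // [data-block-1, meta-op-1]
    }
}
```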
53
 As data size increases, writes
spread more, like dropping a
pebble in a pond
 Larger pebbles spread the
ripples farther
 Space balanced by moving idle
containers
Container Balancing
• Servers keep a bunch of containers "ready to go".
• Writes get distributed around the cluster.
54
MapR Container Resync
 MapR is 100% random
write
– very tough problem
 On a complete crash, all
replicas diverge from
each other
 On recovery, which one
should be master?
(diagram: complete crash)
55
MapR Container Resync
 MapR can detect exactly where
replicas diverged
– even at 2000 MB/s update rate
 Resync means
– roll the other replicas back to the divergence point
– roll forward to converge with the chosen master
 Done while online
– with very little impact on normal
operations
(diagram: new master chosen after crash)
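A toy sketch of the roll-back/roll-forward idea (illustrative only, not MapR's on-disk format): each replica keeps an ordered operation log, resync finds the point where a replica diverges from the chosen master, rolls it back to that point, and then rolls it forward with the master's operations.
```java
import java.util.ArrayList;
import java.util.List;

// Toy model of roll-back / roll-forward container resync.
public class ContainerResync {

    // Roll the replica back to where it diverges from the master,
    // then roll it forward by re-applying the master's remaining ops.
    static void resync(List<String> master, List<String> replica) {
        int i = 0;                                  // length of the common prefix
        while (i < master.size() && i < replica.size()
                && master.get(i).equals(replica.get(i))) {
            i++;
        }
        // Roll-back: discard the replica's divergent tail.
        replica.subList(i, replica.size()).clear();
        // Roll-forward: re-apply the master's ops from the divergence point.
        replica.addAll(master.subList(i, master.size()));
    }

    public static void main(String[] args) {
        List<String> master  = new ArrayList<>(List.of("op1", "op2", "op3", "op4"));
        List<String> replica = new ArrayList<>(List.of("op1", "op2", "opX"));  // diverged after op2

        resync(master, replica);
        System.out.println(replica);                // [op1, op2, op3, op4]
    }
}
```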
56
 Resync traffic is “secondary”
 Each node continuously measures RTT to all its peers
 More throttle to slower peers
– Idle system runs at full speed
 All automatically
MapR does Automatic Resync Throttling
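A hedged sketch of such a throttling policy (the exact formula and constants are assumptions, not MapR's actual algorithm): resync bandwidth to a peer shrinks as that peer's measured RTT grows, and runs at full speed when the peer looks idle.
```java
import java.util.Map;

// Illustrative resync throttling: slower (higher-RTT) peers get a smaller
// share of resync bandwidth; an idle, low-RTT peer runs at full speed.
public class ResyncThrottle {

    static final double FULL_RATE_MBPS = 200.0;   // assumed full resync rate
    static final double IDLE_RTT_MS    = 1.0;     // RTT of an unloaded peer (assumption)

    // Scale the rate down in proportion to how much the RTT exceeds idle.
    static double rateFor(double rttMs) {
        double factor = Math.min(1.0, IDLE_RTT_MS / Math.max(rttMs, IDLE_RTT_MS));
        return FULL_RATE_MBPS * factor;
    }

    public static void main(String[] args) {
        Map<String, Double> peerRttMs = Map.of("nodeA", 1.0, "nodeB", 4.0, "nodeC", 20.0);
        peerRttMs.forEach((peer, rtt) ->
                System.out.printf("%s rtt=%.0fms -> %.0f MB/s%n", peer, rtt, rateFor(rtt)));
    }
}
```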
57
Where/how does MapR exploit this
unique advantage?
58
MapR's No-NameNode Architecture
HDFS Federation:
• Multiple single points of failure
• Limited to 50-200 million files
• Performance bottleneck
• Commercial NAS required
MapR (distributed metadata):
• HA w/ automatic failover
• Instant cluster restart
• Up to 1T files (> 5000x advantage)
• 10-20x higher performance
• 100% commodity hardware
(diagram: left, HDFS Federation with NameNode pairs for namespace slices A-B, C-D, E-F backed by a NAS appliance, above a row of DataNodes; right, MapR with metadata A-F distributed across the DataNodes themselves)
59
Relative performance and scale
(charts: file creates/s vs. number of files in millions, MapR distribution vs. other distribution)
Benchmark: file creates (100 B)
Hardware: 10 nodes, 2 x 4 cores, 24 GB RAM, 12 x 1 TB 7200 RPM
                   MapR      Other     Advantage
Rate (creates/s)   14-16K    335-360   40x
Scale (files)      6B        1.3M      4615x
60
Where/how does MapR exploit this
unique advantage?
61
MapR’s NFS allows Direct Deposit
Connectors not needed
No extra scripts or clusters to deploy and maintain
Random Read/Write
Compression
Distributed HA
(diagram: web servers, database servers, application servers, … write directly into the cluster over NFS)
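Because the cluster is exported over NFS with random read/write, an ordinary program can deposit data with plain file I/O; a minimal sketch, assuming a hypothetical /mapr/mycluster mount point:
```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

// A plain Java program "direct depositing" log lines into the cluster over
// NFS -- no Hadoop client, connector, or staging script involved.
// The /mapr/mycluster mount point is a hypothetical example.
public class NfsDeposit {
    public static void main(String[] args) throws IOException {
        Path logFile = Paths.get("/mapr/mycluster/apps/web/access.log");
        Files.createDirectories(logFile.getParent());
        String line = System.currentTimeMillis() + " GET /index.html 200\n";
        // Ordinary append; the NFS gateway turns this into cluster writes.
        Files.writeString(logFile, line, StandardCharsets.UTF_8,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}
```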
62
Where/how does MapR exploit this
unique advantage?
63
MapR Volumes
100K volumes are OK,
create as many as
desired!
/projects
/tahoe
/yosemite
/user
/msmith
/bjohnson
Volumes dramatically simplify the
management of Big Data
• Replication factor
• Scheduled mirroring
• Scheduled snapshots
• Data placement control
• User access and tracking
• Administrative permissions
64
Where/how does MapR exploit this
unique advantage?
65
M7 Tables
 M7 tables integrated into storage
– always available on every node, zero admin
 Unlimited number of tables
– Apache HBase is typically 10-20 tables (max 100)
 No compactions
 Instant-On
– zero recovery time
 5-10x better perf
 Consistent low latency
– at the 95th and 99th percentiles
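M7 tables are exposed through the Apache HBase API, so a standard HBase client sketch applies (table, column-family, and qualifier names below are made up; this is an illustrative sketch, not MapR-specific code):
```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Standard HBase-API client sketch; "clicks", "f" and "url" are made-up names.
public class M7TableExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("clicks"))) {

            Put put = new Put(Bytes.toBytes("row-1"));
            put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("url"), Bytes.toBytes("/index.html"));
            table.put(put);

            Result r = table.get(new Get(Bytes.toBytes("row-1")));
            System.out.println(Bytes.toString(r.getValue(Bytes.toBytes("f"), Bytes.toBytes("url"))));
        }
    }
}
```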
66
M7 vs. CDH: 50-50 Mix (Reads)
67
M7 vs. CDH: 50-50 load (read latency)
68
Where/how does MapR exploit this
unique advantage?
69
 ALL Hadoop components are Highly Available, eg, YARN
MapR makes Hadoop truly HA
70
 ALL Hadoop components are Highly Available, eg, YARN
 ApplicationMaster (old JT) and TaskTracker record their state in
MapR
MapR makes Hadoop truly HA
71
 ALL Hadoop components are Highly Available, eg, YARN
 ApplicationMaster (old JT) and TaskTracker record their state in
MapR
 On node-failure, AM recovers its state from MapR
– Works even if entire cluster restarted
MapR makes Hadoop truly HA
72
 ALL Hadoop components are Highly Available, eg, YARN
 ApplicationMaster (old JT) and TaskTracker record their state in
MapR
 On node-failure, AM recovers its state from MapR
– Works even if entire cluster restarted
 All jobs resume from where they were
– Only from MapR
MapR makes Hadoop truly HA
73
 ALL Hadoop components are Highly Available, eg, YARN
 ApplicationMaster (old JT) and TaskTracker record their state in
MapR
 On node-failure, AM recovers its state from MapR
– Works even if entire cluster restarted
 All jobs resume from where they were
– Only from MapR
 Allows pre-emption
– MapR can pre-empt any job, without losing its progress
– ExpressLane™ feature in MapR exploits it
MapR makes Hadoop truly HA
74
Where/how can YOU exploit
MapR’s unique advantage?
 ALL your code can easily be scale-out HA
75
Where/how can YOU exploit
MapR’s unique advantage?
 ALL your code can easily be scale-out HA
 Save service-state in MapR
 Save data in MapR
76
Where/how can YOU exploit
MapR’s unique advantage?
 ALL your code can easily be scale-out HA
 Save service-state in MapR
 Save data in MapR
 Use Zookeeper to notice service failure
77
Where/how can YOU exploit
MapR’s unique advantage?
 ALL your code can easily be scale-out HA
 Save service-state in MapR
 Save data in MapR
 Use Zookeeper to notice service failure
 Restart anywhere, data+state will move there automatically
78
Where/how can YOU exploit
MapR’s unique advantage?
 ALL your code can easily be scale-out HA
 Save service-state in MapR
 Save data in MapR
 Use Zookeeper to notice service failure
 Restart anywhere, data+state will move there automatically
 That’s what we did!
79
Where/how can YOU exploit
MapR’s unique advantage?
 ALL your code can easily be scale-out HA
 Save service-state in MapR
 Save data in MapR
 Use Zookeeper to notice service failure
 Restart anywhere, data+state will move there automatically
 That’s what we did!
 Only from MapR: HA for Impala, Hive, Oozie, Storm, MySQL,
SOLR/Lucene, Kafka, …
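A hedged sketch of this recipe using the standard Apache ZooKeeper Java client (the znode path, mount path, and state format are hypothetical; parent-znode creation and leader-election races are omitted): the active instance holds an ephemeral znode and keeps its state on shared cluster storage; a standby watches the znode and takes over with the same state when the active dies.
```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

// Sketch of the recipe above: ephemeral znode for liveness, shared storage for state.
// Assumes the parent znodes already exist; leader-election races are omitted.
public class HaService implements Watcher {
    private static final String ZNODE = "/services/myservice/active";               // hypothetical
    private static final Path STATE = Paths.get("/mapr/mycluster/myservice/state"); // hypothetical

    private final ZooKeeper zk;
    private final CountDownLatch failedOver = new CountDownLatch(1);

    HaService(String zkHosts) throws Exception {
        zk = new ZooKeeper(zkHosts, 30_000, this);
    }

    // Active instance: advertise liveness and keep service state in the cluster.
    void runAsActive(String state) throws Exception {
        zk.create(ZNODE, "active".getBytes(), Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
        Files.createDirectories(STATE.getParent());
        Files.writeString(STATE, state);   // data + state live on shared storage
    }

    // Standby instance: watch the znode and take over when the active dies.
    void runAsStandby() throws Exception {
        if (zk.exists(ZNODE, true) == null) {
            takeOver();
        } else {
            failedOver.await();            // wait for the NodeDeleted event
        }
    }

    @Override
    public void process(WatchedEvent event) {
        if (event.getType() == Event.EventType.NodeDeleted) {
            try {
                takeOver();
            } catch (Exception e) {
                e.printStackTrace();
            }
            failedOver.countDown();
        }
    }

    private void takeOver() throws Exception {
        String recovered = Files.exists(STATE) ? Files.readString(STATE) : "";
        zk.create(ZNODE, "active".getBytes(), Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
        System.out.println("took over with state: " + recovered);
    }
}
```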
80
Build cluster brick by brick, one node at a time
 Use commodity hardware at rock-bottom prices
 Get enterprise-class reliability: instant-restart, snapshots,
mirrors, no-single-point-of-failure, …
 Export via NFS, ODBC, Hadoop and other std protocols
MapR: Unlimited Scale
# files, # tables: trillions
# rows per table: trillions
Data: 1-10 exabytes
# nodes: 10,000+