Modeling with Hadoop

Vijay K. Narayanan
Principal Scientist, Yahoo! Labs, Yahoo!

Milind Bhandarkar
Chief Architect, Greenplum Labs, EMC2
Session 1: Overview of Hadoop

•  Motivation

•  Hadoop

  •  Map-Reduce

  •  Distributed File System

  •  Next Generation MapReduce

•  Q & A
Session 2: Modeling with Hadoop

•  Types of learning in MapReduce

•  Algorithms in MapReduce framework

  •  Data parallel algorithms

  •  Sequential algorithms

•  Challenges and Enhancements
Session 3: Hands-On Exercise

•  Spin up a single-node Hadoop cluster in a Virtual Machine

•  Write a regression trainer

•  Train the model on a dataset
Overview of Apache Hadoop
Hadoop At Yahoo! (Some Statistics)

•  40,000+ machines in 20+ clusters

•  Largest cluster is 4,000 machines

•  170 Petabytes of storage

•  1000+ users

•  1,000,000+ jobs/month
BEHIND
EVERY CLICK
Who Uses Hadoop ?
Why Hadoop ?	



Big Datasets
(Data-Rich Computing theme proposal. J. Campbell, et al, 2007)
Cost Per Gigabyte                             
(http://www.mkomo.com/cost-per-gigabyte)
Storage Trends
(Graph by Adam Leventhal, ACM Queue, Dec 2009)
Motivating Examples	



Yahoo! Search Assist
Search Assist	

•  Insight: Related concepts appear close
  together in text corpus	

•  Input: Web pages	

  •  1 Billion Pages, 10K bytes each	

  •  10 TB of input data	

•  Output: List(word, List(related words))	

Search Assist	

// Input: List(URL, Text)	
foreach URL in Input :
    Words = Tokenize(Text(URL));
    foreach word in Words :
        Insert (word, Next(word, Words)) in Pairs;
        Insert (word, Previous(word, Words)) in Pairs;
// Result: Pairs = List (word, RelatedWord)	
Group Pairs by word;	
// Result: List (word, List(RelatedWords))
foreach word in Pairs :	
    Count RelatedWords in GroupedPairs;	
// Result: List (word, List(RelatedWords, count))	
foreach word in CountedPairs :	
    Sort Pairs(word, *) descending by count;	
    choose Top 5 Pairs;	
// Result: List (word, Top5(RelatedWords))	

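The pseudocode above can be sketched in plain Python on a toy corpus. This is a single-machine illustration, not Hadoop code; the function name and the toy inputs are illustrative:

```python
from collections import Counter, defaultdict

def related_words(pages, top_n=5):
    """For each word, count the words that appear next to it and keep the top_n."""
    pairs = defaultdict(Counter)  # word -> Counter of neighboring words
    for text in pages:
        tokens = text.split()
        for i, word in enumerate(tokens):
            if i + 1 < len(tokens):
                pairs[word][tokens[i + 1]] += 1   # Next(word)
            if i > 0:
                pairs[word][tokens[i - 1]] += 1   # Previous(word)
    return {w: [rw for rw, _ in c.most_common(top_n)] for w, c in pairs.items()}
```

For example, `related_words(["apache hadoop", "apache hadoop", "apache pig"])` ranks "hadoop" above "pig" as a neighbor of "apache", because the pair occurs twice.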
People You May Know
People You May Know	

•  Insight: You might also know Joe Smith if a
  lot of folks you know, know Joe Smith	

  •  if you don't know Joe Smith already

•  Numbers:	

  •  100 MM users	

  •  Average connections per user is 100	

People You May Know	


// Input: List(UserName, List(Connections))	
	
foreach u in UserList : // 100 MM	
    foreach x in Connections(u) : // 100	
        foreach y in Connections(x) : // 100	
            if (y not in Connections(u)) :	
                Count(u, y)++; // 1 Trillion Iterations	
    Sort (u,y) in descending order of Count(u,y);	
    Choose Top 3 y;	
    Store (u, {y0, y1, y2}) for serving;	
	




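A minimal single-machine sketch of this loop in Python (names are illustrative; unlike the pseudocode above, it also skips suggesting a user to themselves):

```python
from collections import Counter

def people_you_may_know(connections, top_n=3):
    """connections: dict mapping user -> set of that user's direct connections."""
    suggestions = {}
    for u, friends in connections.items():
        counts = Counter()
        for x in friends:
            for y in connections.get(x, ()):   # friends of friends
                if y != u and y not in friends:
                    counts[y] += 1             # number of mutual connections
        suggestions[u] = [y for y, _ in counts.most_common(top_n)]
    return suggestions
```

On a tiny graph where `a` knows `b` and `c`, and both know `d`, the top suggestion for `a` is `d` (two mutual connections).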
Performance	

•  101 Random accesses for each user	

  •  Assume 1 ms per random access	

  •  100 ms per user	

•  100 MM users	

  •  100 days on a single machine	

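The estimate above is quick arithmetic (a sanity check, not part of the original deck):

```python
users = 100_000_000       # 100 MM users
seconds_per_user = 0.100  # 101 random accesses at ~1 ms each, rounded to 100 ms

total_seconds = users * seconds_per_user
days = total_seconds / 86_400
print(round(days))        # ~116 days, i.e. the slide's "100 days on a single machine"
```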
MapReduce Paradigm	



Map & Reduce

•  Primitives in Lisp (& other functional languages), 1970s

•  Google paper, 2004

  •  http://labs.google.com/papers/mapreduce.html
Map

Output_List = Map (Input_List)

Square (1, 2, 3, 4, 5, 6, 7, 8, 9, 10) =
(1, 4, 9, 16, 25, 36, 49, 64, 81, 100)
Reduce

Output_Element = Reduce (Input_List)

Sum (1, 4, 9, 16, 25, 36, 49, 64, 81, 100) = 385
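Python's built-in `map` and `functools.reduce` mirror these two primitives directly:

```python
from functools import reduce

# Map: apply Square to each list element independently
squares = list(map(lambda x: x * x, range(1, 11)))
# [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]

# Reduce: fold the list into a single element with Sum
total = reduce(lambda acc, x: acc + x, squares)
# 385
```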
Parallelism	

•  Map is inherently parallel	

  •  Each list element processed
    independently	

•  Reduce is inherently sequential	

  •  Unless processing multiple lists	

•  Grouping to produce multiple lists	

Search Assist Map	

// Input: http://hadoop.apache.org	
	
Pairs = Tokenize_And_Pair ( Text ( Input ) )	
	




Output = {	
(apache, hadoop) (hadoop, mapreduce) (hadoop, streaming)
(hadoop, pig) (apache, pig) (hadoop, DFS) (streaming,
commandline) (hadoop, java) (DFS, namenode) (datanode,
block) (replication, default)...	
}	
	

Search Assist Reduce	

// Input: GroupedList (word, GroupedList(words))	
	
CountedPairs = CountOccurrences (word, RelatedWords)	




Output = {	
(hadoop, apache, 7) (hadoop, DFS, 3) (hadoop, streaming,
4) (hadoop, mapreduce, 9) ...	
}	



Issues with Large Data	


•  Map Parallelism: Chunking input data	

•  Reduce Parallelism: Grouping related data	

•  Dealing with failures & load imbalance



Apache Hadoop	


•  January 2006: Subproject of Lucene	

•  January 2008: Top-level Apache project	

•  Stable Version: 0.20.203	

•  Latest Version: 0.22 (Coming soon)	


Apache Hadoop	

•  Reliable, performant distributed file system

•  MapReduce Programming framework	

•  Ecosystem: HBase, Hive, Pig, Howl, Oozie,
  Zookeeper, Chukwa, Mahout, Cascading,
  Scribe, Cassandra, Hypertable, Voldemort,
  Azkaban, Sqoop, Flume, Avro ...	



Problem: Bandwidth to Data

•  Scan 100TB Datasets on 1000 node cluster

  •  Remote storage @ 10MB/s = 165 mins

  •  Local storage @ 50-200MB/s = 33-8 mins

•  Moving computation is more efficient than moving data

  •  Need visibility into data placement
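The scan times follow from each node scanning its share of the data in parallel (a sanity check; the slide rounds ~167 to 165):

```python
dataset_mb = 100 * 1000 * 1000     # 100 TB expressed in MB
nodes = 1000
mb_per_node = dataset_mb / nodes   # 100 GB scanned by each node in parallel

def scan_minutes(mb_per_s):
    return mb_per_node / mb_per_s / 60

print(round(scan_minutes(10)))     # remote storage @ 10 MB/s: ~167 min
print(round(scan_minutes(50)))     # local disk @ 50 MB/s: 33 min
print(round(scan_minutes(200)))    # local disk @ 200 MB/s: 8 min
```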
Problem: Scaling Reliably

•  Failure is not an option, it's a rule!

  •  1000 nodes, MTBF < 1 day

  •  4000 disks, 8000 cores, 25 switches, 1000 NICs, 2000 DIMMS (16TB RAM)

•  Need fault tolerant store with reasonable availability guarantees

  •  Handle hardware faults transparently
Hadoop Goals

•  Scalable: Petabytes (10^15 bytes) of data on thousands of nodes

•  Economical: Commodity components only

•  Reliable

  •  Engineering reliability into every application is expensive
Hadoop MapReduce	



Think MapReduce	


•  Record = (Key, Value)	

•  Key : Comparable, Serializable	

•  Value: Serializable	

•  Input, Map, Shuffle, Reduce, Output	


Seems Familiar ?

cat /var/log/auth.log* |
grep "session opened" | cut -d' ' -f10 |
sort |
uniq -c > ~/userlist
Map

•  Input: (Key1, Value1)

•  Output: List(Key2, Value2)

•  Projections, Filtering, Transformation
Shuffle

•  Input: List(Key2, Value2)

•  Output

  •  Sort(Partition(List(Key2, List(Value2))))

•  Provided by Hadoop
Reduce

•  Input: List(Key2, List(Value2))

•  Output: List(Key3, Value3)

•  Aggregation
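The Map → Shuffle → Reduce flow on the preceding slides can be sketched in Python as a single-machine word-count toy (not Hadoop code; in a real job the shuffle is provided by the framework):

```python
from itertools import groupby
from operator import itemgetter

def map_phase(records):
    for _, text in records:                 # (Key1, Value1)
        for word in text.split():
            yield (word, 1)                 # List(Key2, Value2)

def shuffle(pairs):
    ordered = sorted(pairs, key=itemgetter(0))
    for key, group in groupby(ordered, key=itemgetter(0)):
        yield key, [v for _, v in group]    # List(Key2, List(Value2))

def reduce_phase(grouped):
    for key, values in grouped:
        yield key, sum(values)              # List(Key3, Value3)

records = [(1, "hadoop mapreduce"), (2, "hadoop hdfs")]
result = dict(reduce_phase(shuffle(map_phase(records))))
# {'hadoop': 2, 'mapreduce': 1, 'hdfs': 1}
```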
Hadoop Streaming	

•  Hadoop is written in Java	

  •  Java MapReduce code is native 	

•  What about Non-Java Programmers ?	

  •  Perl, Python, Shell, R	

  •  grep, sed, awk, uniq as Mappers/Reducers	

•  Text Input and Output	

Hadoop Streaming

•  Thin Java wrapper for Map & Reduce Tasks

•  Forks actual Mapper & Reducer

•  IPC via stdin, stdout, stderr

•  Key.toString() \t Value.toString() \n

•  Slower than Java programs

  •  Allows for quick prototyping / debugging
Hadoop Streaming

$ bin/hadoop jar hadoop-streaming.jar \
      -input in-files -output out-dir \
      -mapper mapper.sh -reducer reducer.sh

# mapper.sh

sed -e 's/ /\n/g' | grep .

# reducer.sh

uniq -c | awk '{print $2 "\t" $1}'
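The same mapper and reducer could be written in Python for streaming; this is an illustrative sketch, where in a real job each function would read `sys.stdin` and print tab-separated lines:

```python
def mapper(lines):
    """Streaming mapper: emit one word (key) per input token, like the sed above."""
    for line in lines:
        for word in line.split():
            yield word

def reducer(sorted_words):
    """Streaming reducer: count runs of identical keys, like uniq -c | awk.
    Input arrives already sorted by key, courtesy of the shuffle."""
    prev, count = None, 0
    for word in sorted_words:
        if word == prev:
            count += 1
        else:
            if prev is not None:
                yield prev, count
            prev, count = word, 1
    if prev is not None:
        yield prev, count
```

Because the reducer only compares adjacent keys, it streams over arbitrarily large sorted input in constant memory.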
Hadoop Distributed File System (HDFS)
HDFS	

•  Data is organized into files and directories	

•  Files are divided into uniform sized blocks
  (default 128MB) and distributed across
  cluster nodes	

•  HDFS exposes block placement so that
  computation can be migrated to data	



HDFS	

•  Blocks are replicated (default 3) to handle
  hardware failure	

•  Replication for performance and fault
  tolerance (Rack-Aware placement)	

•  HDFS keeps checksums of data for
  corruption detection and recovery	



HDFS	


•  Master-Worker Architecture	

•  Single NameNode	

•  Many (Thousands) DataNodes	



HDFS Master (NameNode)

•  Manages filesystem namespace

•  File metadata (i.e. "inode")

•  Mapping inode to list of blocks + locations

•  Authorization & Authentication

•  Checkpoint & journal namespace changes
Namenode	

•  Mapping of datanode to list of blocks	

•  Monitor datanode health	

•  Replicate missing blocks	

•  Keeps ALL namespace in memory	

•  60M objects (File/Block) in 16GB	

Datanodes

•  Handle block storage on multiple volumes & block integrity

•  Clients access the blocks directly from datanodes

•  Periodically send heartbeats and block reports to Namenode

•  Blocks are stored as underlying OS's files
HDFS Architecture
Next Generation MapReduce
MapReduce Today	

    (Courtesy: Arun Murthy, Hortonworks)
Why ?	

•  Scalability Limitations today	

  •  Maximum cluster size: 4000 nodes	

  •  Maximum Concurrent tasks: 40,000	

•  Job Tracker SPOF	

•  Fixed map and reduce containers (slots)	

  •  Punishes pleasantly parallel apps	

Why ? (contd)	

•  MapReduce is not suitable for every
  application	

•  Fine-Grained Iterative applications	

  •  HaLoop: Hadoop in a Loop	

•  Message passing applications	

  •  Graph Processing	

Requirements	


•  Need scalable cluster resource manager

•  Separate scheduling from resource
  management	

•  Multi-Lingual Communication Protocols	


Bottom Line	

•  @techmilind #mrng (MapReduce, Next
  Gen) is in reality, #rmng (Resource
  Manager, Next Gen)	

•  Expect different programming paradigms to
  be implemented	

 •  Including MPI (soon)	

Architecture	

  (Courtesy: Arun Murthy, Hortonworks)
The New World

•  Resource Manager

  •  Allocates resources (containers) to applications

•  Node Manager

  •  Manages containers on nodes

•  Application Master

  •  Specific to paradigm, e.g. MapReduce application master, MPI application master, etc.
Container

•  In current terminology: A Task Slot

•  Slice of the node's hardware resources

•  # of cores, virtual memory, disk size, disk and network bandwidth, etc.

  •  Currently, only memory usage is sliced
Questions ?	




Why device, WIFI, and ISP insights are crucial to supporting remote Microsoft...
 
UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to Hero
 
The State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxThe State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptx
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directions
 
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity Plan
 
Connecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdfConnecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdf
 
Generative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdfGenerative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdf
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 
2024 April Patch Tuesday
2024 April Patch Tuesday2024 April Patch Tuesday
2024 April Patch Tuesday
 
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
 
Manual 508 Accessibility Compliance Audit
Manual 508 Accessibility Compliance AuditManual 508 Accessibility Compliance Audit
Manual 508 Accessibility Compliance Audit
 
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyesHow to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
 
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
 
Decarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityDecarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a reality
 

Hadoop Overview kdd2011

  • 1. Modeling with Hadoop Vijay K. Narayanan Principal Scientist, Yahoo! Labs, Yahoo! Milind Bhandarkar Chief Architect, Greenplum Labs, EMC2
  • 2. Session 1: Overview of Hadoop •  Motivation •  Hadoop •  Map-Reduce •  Distributed File System •  Next Generation MapReduce •  Q & A 2
  • 3. Session 2: Modeling with Hadoop •  Types of learning in MapReduce •  Algorithms in MapReduce framework •  Data parallel algorithms •  Sequential algorithms •  Challenges and Enhancements 3
  • 4. Session 3: Hands On Exercise •  Spin-up Single Node Hadoop cluster in a Virtual Machine •  Write a regression trainer •  Train model on a dataset 4
  • 6. Hadoop At Yahoo! (Some Statistics) •  40,000 + machines in 20+ clusters •  Largest cluster is 4,000 machines •  170 Petabytes of storage •  1000+ users •  1,000,000+ jobs/month 6
  • 11. Big Datasets (Data-Rich Computing theme proposal. J. Campbell, et al, 2007)
  • 12. Cost Per Gigabyte (http://www.mkomo.com/cost-per-gigabyte)
  • 13. Storage Trends (Graph by Adam Leventhal, ACM Queue, Dec 2009)
  • 16. Search Assist •  Insight: Related concepts appear close together in text corpus •  Input: Web pages •  1 Billion Pages, 10K bytes each •  10 TB of input data •  Output: List(word, List(related words)) 16
  • 17. Search Assist
// Input: List(URL, Text)
foreach URL in Input :
    Words = Tokenize(Text(URL));
    foreach word in Words :
        Insert (word, Next(word, Words)) in Pairs;
        Insert (word, Previous(word, Words)) in Pairs;
// Result: Pairs = List (word, RelatedWord)
Group Pairs by word;
// Result: List (word, List(RelatedWords))
foreach word in Pairs :
    Count RelatedWords in GroupedPairs;
// Result: List (word, List(RelatedWords, count))
foreach word in CountedPairs :
    Sort Pairs(word, *) descending by count;
    choose Top 5 Pairs;
// Result: List (word, Top5(RelatedWords)) 17
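The pseudocode in this slide can be sketched as a small in-memory Python pipeline; the `related_words` name, the whitespace tokenizer, and the two-document corpus are illustrative, not from the deck:

```python
from collections import Counter, defaultdict

def related_words(corpus, top_n=5):
    """For each word, count its immediate neighbors and keep the top_n."""
    counts = defaultdict(Counter)
    for text in corpus:
        tokens = text.lower().split()  # toy tokenizer
        for i, word in enumerate(tokens):
            if i + 1 < len(tokens):
                counts[word][tokens[i + 1]] += 1  # Next(word, Words)
            if i > 0:
                counts[word][tokens[i - 1]] += 1  # Previous(word, Words)
    # group, count, sort descending, choose Top 5
    return {w: [rw for rw, _ in c.most_common(top_n)]
            for w, c in counts.items()}

pairs = related_words(["apache hadoop mapreduce", "apache hadoop streaming"])
```

On a real 10 TB corpus the pairing step would run as the Map phase and the grouping/counting as the Reduce phase, as the later slides show.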
  • 19. People You May Know •  Insight: You might also know Joe Smith if a lot of folks you know, know Joe Smith •  if you don't know Joe Smith already •  Numbers: •  100 MM users •  Average connections per user is 100 19
  • 20. People You May Know
// Input: List(UserName, List(Connections))
foreach u in UserList :                   // 100 MM
    foreach x in Connections(u) :         // 100
        foreach y in Connections(x) :     // 100
            if (y not in Connections(u)) :
                Count(u, y)++;            // 1 Trillion Iterations
    Sort (u,y) in descending order of Count(u,y);
    Choose Top 3 y;
    Store (u, {y0, y1, y2}) for serving; 20
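A minimal single-machine sketch of the same algorithm; the `people_you_may_know` name and the three-user graph are made up for illustration, and a `y != u` guard is added since a user's connection list may not contain the user:

```python
from collections import Counter

def people_you_may_know(connections, top_n=3):
    """connections: dict mapping user -> set of direct connections."""
    suggestions = {}
    for u, friends in connections.items():
        count = Counter()
        for x in friends:                       # friends of u
            for y in connections.get(x, set()): # friends of friends
                if y != u and y not in friends:
                    count[y] += 1               # mutual-connection count
        suggestions[u] = [y for y, _ in count.most_common(top_n)]
    return suggestions

graph = {"ann": {"bob"}, "bob": {"ann", "joe"}, "joe": {"bob"}}
suggestions = people_you_may_know(graph)
```

The next slide explains why this loop is hopeless on one machine at 100 MM users.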
  • 21. Performance •  101 Random accesses for each user •  Assume 1 ms per random access •  100 ms per user •  100 MM users •  100 days on a single machine 21
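The slide's estimate is easy to verify; the figures are from the slide and the arithmetic works out to roughly 117 days, i.e. the quoted "100 days" order of magnitude:

```python
users = 100_000_000        # 100 MM users
accesses_per_user = 101    # 1 read for u plus 100 for u's connections
ms_per_access = 1          # assume 1 ms per random access

total_seconds = users * accesses_per_user * ms_per_access / 1000
days = total_seconds / (24 * 3600)  # roughly 117 days on one machine
```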
  • 23. Map Reduce •  Primitives in Lisp (and other functional languages), 1970s •  Google Paper, 2004 •  http://labs.google.com/papers/mapreduce.html 23
  • 24. Map Output_List = Map (Input_List) Square (1, 2, 3, 4, 5, 6, 7, 8, 9, 10) = (1, 4, 9, 16, 25, 36, 49, 64, 81, 100) 24
  • 25. Reduce Output_Element = Reduce (Input_List) Sum (1, 4, 9, 16, 25, 36, 49, 64, 81, 100) = 385 25
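Python's built-in `map` and `functools.reduce` express these two primitives directly, reproducing the Square and Sum examples from the slides:

```python
from functools import reduce

# Map: apply a function to every element independently
squares = list(map(lambda x: x * x, range(1, 11)))

# Reduce: fold a list down to a single element
total = reduce(lambda a, b: a + b, squares)  # Sum
```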
  • 26. Parallelism •  Map is inherently parallel •  Each list element processed independently •  Reduce is inherently sequential •  Unless processing multiple lists •  Grouping to produce multiple lists 26
  • 27. Search Assist Map // Input: http://hadoop.apache.org Pairs = Tokenize_And_Pair ( Text ( Input ) ) Output = { (apache, hadoop) (hadoop, mapreduce) (hadoop, streaming) (hadoop, pig) (apache, pig) (hadoop, DFS) (streaming, commandline) (hadoop, java) (DFS, namenode) (datanode, block) (replication, default)... } 27
  • 28. Search Assist Reduce // Input: GroupedList (word, GroupedList(words)) CountedPairs = CountOccurrences (word, RelatedWords) Output = { (hadoop, apache, 7) (hadoop, DFS, 3) (hadoop, streaming, 4) (hadoop, mapreduce, 9) ... } 28
  • 29. Issues with Large Data •  Map Parallelism: Chunking input data •  Reduce Parallelism: Grouping related data •  Dealing with failures and load imbalance 29
  • 31. Apache Hadoop •  January 2006: Subproject of Lucene •  January 2008: Top-level Apache project •  Stable Version: 0.20.203 •  Latest Version: 0.22 (Coming soon) 31
  • 32. Apache Hadoop •  Reliable, Performant Distributed file system •  MapReduce Programming framework •  Ecosystem: HBase, Hive, Pig, Howl, Oozie, Zookeeper, Chukwa, Mahout, Cascading, Scribe, Cassandra, Hypertable, Voldemort, Azkaban, Sqoop, Flume, Avro ... 32
  • 33. Problem: Bandwidth to Data •  Scan 100TB Datasets on 1000 node cluster •  Remote storage @ 10MB/s = 165 mins •  Local storage @ 50-200MB/s = 33-8 mins •  Moving computation is more efficient than moving data •  Need visibility into data placement 33
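Spelling out the slide's arithmetic: 100 TB over 1,000 nodes is 100 GB per node, and each node scans its share in parallel, so total time is set by per-node bandwidth:

```python
data_per_node_mb = 100e12 / 1000 / 1e6  # 100 TB / 1000 nodes = 100,000 MB each

remote_minutes = data_per_node_mb / 10 / 60        # @ 10 MB/s  -> ~165 min
local_slow_minutes = data_per_node_mb / 50 / 60    # @ 50 MB/s  -> ~33 min
local_fast_minutes = data_per_node_mb / 200 / 60   # @ 200 MB/s -> ~8 min
```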
  • 34. Problem: Scaling Reliably •  Failure is not an option, it's a rule! •  1000 nodes, MTBF < 1 day •  4000 disks, 8000 cores, 25 switches, 1000 NICs, 2000 DIMMS (16TB RAM) •  Need fault tolerant store with reasonable availability guarantees •  Handle hardware faults transparently 34
  • 35. Hadoop Goals •  Scalable: Petabytes (10^15 Bytes) of data on thousands of nodes •  Economical: Commodity components only •  Reliable •  Engineering reliability into every application is expensive 35
  • 37. Think MapReduce •  Record = (Key, Value) •  Key : Comparable, Serializable •  Value: Serializable •  Input, Map, Shuffle, Reduce, Output 37
  • 38. Seems Familiar ? cat /var/log/auth.log* | grep 'session opened' | cut -d' ' -f10 | sort | uniq -c > ~/userlist 38
  • 39. Map •  Input: (Key1, Value1) •  Output: List(Key2, Value2) •  Projections, Filtering, Transformation 39
  • 40. Shuffle •  Input: List(Key2, Value2) •  Output •  Sort(Partition(List(Key2, List(Value2)))) •  Provided by Hadoop 40
  • 41. Reduce •  Input: List(Key2, List(Value2)) •  Output: List(Key3, Value3) •  Aggregation 41
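The Map / Shuffle / Reduce contract in these three slides can be simulated in a few lines of Python, with word count as the example; `run_mapreduce` is a toy stand-in for the framework, and the sort-and-group step is what Hadoop's shuffle provides:

```python
from itertools import groupby
from operator import itemgetter

def run_mapreduce(records, mapper, reducer):
    # Map: (k1, v1) -> List(k2, v2)
    mapped = [kv for rec in records for kv in mapper(*rec)]
    # Shuffle: sort and group by k2 (provided by Hadoop)
    mapped.sort(key=itemgetter(0))
    grouped = [(k, [v for _, v in g])
               for k, g in groupby(mapped, key=itemgetter(0))]
    # Reduce: (k2, List(v2)) -> List(k3, v3)
    return [kv for k, vs in grouped for kv in reducer(k, vs)]

wc_map = lambda _, line: [(w, 1) for w in line.split()]
wc_reduce = lambda word, ones: [(word, sum(ones))]

counts = run_mapreduce([(1, "a b a"), (2, "b a")], wc_map, wc_reduce)
```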
  • 42. Hadoop Streaming •  Hadoop is written in Java •  Java MapReduce code is native •  What about Non-Java Programmers ? •  Perl, Python, Shell, R •  grep, sed, awk, uniq as Mappers/Reducers •  Text Input and Output 42
  • 43. Hadoop Streaming •  Thin Java wrapper for Map & Reduce Tasks •  Forks actual Mapper & Reducer •  IPC via stdin, stdout, stderr •  Key.toString() \t Value.toString() \n •  Slower than Java programs •  Allows for quick prototyping / debugging 43
  • 44. Hadoop Streaming
$ bin/hadoop jar hadoop-streaming.jar -input in-files -output out-dir -mapper mapper.sh -reducer reducer.sh
# mapper.sh
sed -e 's/ /\n/g' | grep .
# reducer.sh
uniq -c | awk '{print $2 "\t" $1}' 44
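An equivalent streaming mapper and reducer in Python (one of the languages the deck lists for streaming), written here as functions over line iterators so they can be exercised without a cluster; in a real job each would read `sys.stdin` and print its output lines:

```python
from itertools import groupby

def mapper(lines):
    # one word per output line, mirroring mapper.sh
    for line in lines:
        for word in line.split():
            yield word

def reducer(lines):
    # input arrives sorted by key after the shuffle; mirror reducer.sh
    for word, group in groupby(line.strip() for line in lines):
        yield "%s\t%d" % (word, sum(1 for _ in group))
```

The job would then be launched with `-mapper mapper.py -reducer reducer.py` instead of the shell scripts above.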
  • 46. HDFS •  Data is organized into files and directories •  Files are divided into uniform sized blocks (default 128MB) and distributed across cluster nodes •  HDFS exposes block placement so that computation can be migrated to data 46
  • 47. HDFS •  Blocks are replicated (default 3) to handle hardware failure •  Replication for performance and fault tolerance (Rack-Aware placement) •  HDFS keeps checksums of data for corruption detection and recovery 47
  • 48. HDFS •  Master-Worker Architecture •  Single NameNode •  Many (Thousands) DataNodes 48
  • 49. HDFS Master (NameNode) •  Manages filesystem namespace •  File metadata (i.e. "inode") •  Mapping inode to list of blocks + locations •  Authorization & Authentication •  Checkpoint & journal namespace changes 49
  • 50. Namenode •  Mapping of datanode to list of blocks •  Monitor datanode health •  Replicate missing blocks •  Keeps ALL namespace in memory •  60M objects (File/Block) in 16GB 50
  • 51. Datanodes •  Handle block storage on multiple volumes & block integrity •  Clients access the blocks directly from data nodes •  Periodically send heartbeats and block reports to Namenode •  Blocks are stored as underlying OS's files 51
  • 53. Next Generation MapReduce 53
  • 54. MapReduce Today (Courtesy: Arun Murthy, Hortonworks)
  • 55. Why ? •  Scalability Limitations today •  Maximum cluster size: 4000 nodes •  Maximum Concurrent tasks: 40,000 •  Job Tracker SPOF •  Fixed map and reduce containers (slots) •  Punishes pleasantly parallel apps 55
  • 56. Why ? (contd) •  MapReduce is not suitable for every application •  Fine-Grained Iterative applications •  HaLoop: Hadoop in a Loop •  Message passing applications •  Graph Processing 56
  • 57. Requirements •  Need scalable cluster resources manager •  Separate scheduling from resource management •  Multi-Lingual Communication Protocols 57
  • 58. Bottom Line •  @techmilind #mrng (MapReduce, Next Gen) is in reality, #rmng (Resource Manager, Next Gen) •  Expect different programming paradigms to be implemented •  Including MPI (soon) 58
  • 59. Architecture (Courtesy: Arun Murthy, Hortonworks)
  • 60. The New World •  Resource Manager •  Allocates resources (containers) to applications •  Node Manager •  Manages containers on nodes •  Application Master •  Specific to paradigm, e.g. MapReduce application master, MPI application master, etc. 60
  • 61. Container •  In current terminology: A Task Slot •  Slice of the node's hardware resources •  # of cores, virtual memory, disk size, disk and network bandwidth, etc. •  Currently, only memory usage is sliced 61