Apache Hadoop and HBase
Todd Lipcon
todd@cloudera.com
@tlipcon / @cloudera
August 9th, 2011
Outline
- Why should you care? (Intro)
- What is Apache Hadoop?
- How does it work?
- What is Apache HBase?
- Use Cases
Introductions
- Software Engineer at Cloudera
- Committer and PMC member on Apache HBase, HDFS, MapReduce, and Thrift
- Previously: systems programming, operations, large scale data analysis
- I love data and data systems
Data is the difference.
“Every two days we create as much information as we did from the dawn of civilization up until 2003.”
Eric Schmidt (Chairman of Google)
“I keep saying that the sexy job in the next 10 years will be statisticians. And I’m not kidding.”
Hal Varian (Google’s chief economist)
Are you throwing away data?
- Data comes in many shapes and sizes: relational tuples, log files, semistructured textual data (e.g., e-mail), …
- Are you throwing it away because it doesn’t ‘fit’?
So, what’s Hadoop?
(Image: The Little Prince, Antoine de Saint-Exupéry, Irene Testot-Ferry)
Apache Hadoop is an open-source system to reliably store and process GOBS of data across many commodity computers.
Two Core Components
- HDFS (Store): self-healing, high-bandwidth clustered storage
- MapReduce (Process): fault-tolerant distributed processing
What makes Hadoop special?
Falsehood #1: Machines can be reliable…
(Image: MadMan the Mighty, CC BY-NC-SA)
Hadoop separates distributed-system fault-tolerance code from application logic.
(Diagram labels: Systems Programmers, Unicorns, Statisticians)
Falsehood #2: Machines deserve identities...
(Image: Laughing Squid, CC BY-NC-SA)
Hadoop lets you interact with a cluster, not a bunch of machines.
(Image: Yahoo! Hadoop cluster, OSCON ’07)
Falsehood #3: Your analysis fits on one machine…
(Image: Matthew J. Stinson, CC BY-NC)
Hadoop scales linearly with data size or analysis complexity.
Data-parallel or compute-parallel. For example:
- Extensive machine learning on <100GB of image data
- Simple SQL queries on >100TB of clickstream data
Hadoop works for both applications!
Hadoop sounds like magic. How is it possible?
A Typical Look...
- 5-4000 commodity servers (8-core, 24GB RAM, 4-12 TB, gig-E)
- 2-level network architecture
- 20-40 nodes per rack
Cluster nodes
Master nodes (1 each):
- NameNode (metadata server and database)
- JobTracker (scheduler)
Slave nodes (1-4000 each):
- DataNodes (block storage)
- TaskTrackers (task execution)
HDFS Data Storage
(Diagram: the NameNode records that /logs/weblog.txt (158MB) is split into three blocks of 64MB, 64MB, and 30MB, with IDs blk_19231, blk_29232, and blk_329432; each block is replicated across DataNodes DN 1-4.)
HDFS Write Path
HDFS has split the file into 64MB blocks and stored it on the DataNodes.
Now, we want to process that data.
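To make the write path concrete, here is a minimal client-side sketch using the HDFS Java API. This example is not from the slides; it assumes a Configuration whose default filesystem (fs.default.name) points at the cluster's NameNode, and it reuses the talk's file name and log line for illustration.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
  public static void main(String[] args) throws Exception {
    // Reads core-site.xml etc.; fs.default.name should point at the NameNode.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // The client asks the NameNode to create the file, then streams block
    // data directly to the DataNodes; the NameNode only records metadata.
    FSDataOutputStream out = fs.create(new Path("/logs/weblog.txt"));
    out.writeBytes("127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "
        + "\"GET /userimage/123 HTTP/1.0\" 200 2326\n");
    out.close();   // on close, the NameNode persists the file's metadata

    fs.close();
  }
}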
The MapReduce Programming Model
You specify map() and reduce() functions. The framework does the rest.
map()
map: (K₁, V₁) -> list(K₂, V₂)
Input:
  Key:   byte offset 193284
  Value: “127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /userimage/123 HTTP/1.0" 200 2326”
Output:
  Key:   userimage
  Value: 2326 bytes
The map function runs on the same node where the data is stored!
InputFormat
Wait! HDFS is not a key-value store!
An InputFormat interprets bytes as a Key and a Value:
  127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /userimage/123 HTTP/1.0" 200 2326
becomes
  Key:   log offset 193284
  Value: “127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /userimage/123 HTTP/1.0" 200 2326”
The Shuffle
- Each map output is assigned to a “reducer” based on its key
- Map output is grouped and sorted by key
reduce()
reduce: (K₂, iter(V₂)) -> list(K₃, V₃)
Input:
  Key:    userimage
  Values: 2326 bytes (from map task 0001),
          1000 bytes (from map task 0008),
          3020 bytes (from map task 0120)
Reducer output:
  Key:   userimage
  Value: 6346 bytes
TextOutputFormat writes: userimage  6346
Putting it together...
Note: you are not limited to just one reducer. The result set may be many TB!
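To make the model concrete, here is a hedged sketch of the talk's log example written against the Hadoop Java API of that era (org.apache.hadoop.mapreduce). The class names, the simple whitespace parsing, and the driver settings are illustrative assumptions, not code from the slides.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ImageBytes {

  // map: (byte offset, log line) -> (image type, bytes transferred)
  public static class LogMapper
      extends Mapper<LongWritable, Text, Text, LongWritable> {
    @Override
    protected void map(LongWritable offset, Text line, Context ctx)
        throws IOException, InterruptedException {
      // e.g. 127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700]
      //      "GET /userimage/123 HTTP/1.0" 200 2326
      String[] fields = line.toString().split(" ");
      String path = fields[6];                       // "/userimage/123"
      long bytes = Long.parseLong(fields[fields.length - 1]);
      String imageType = path.split("/")[1];         // "userimage"
      ctx.write(new Text(imageType), new LongWritable(bytes));
    }
  }

  // reduce: (image type, [2326, 1000, 3020, ...]) -> (image type, total)
  public static class SumReducer
      extends Reducer<Text, LongWritable, Text, LongWritable> {
    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context ctx)
        throws IOException, InterruptedException {
      long sum = 0;
      for (LongWritable v : values) {
        sum += v.get();
      }
      ctx.write(key, new LongWritable(sum));         // e.g. userimage 6346
    }
  }

  public static void main(String[] args) throws Exception {
    // TextInputFormat (the default) supplies (offset, line) pairs;
    // TextOutputFormat writes tab-separated "userimage<TAB>6346" lines.
    Job job = new Job(new Configuration(), "image bytes");
    job.setJarByClass(ImageBytes.class);
    job.setMapperClass(LogMapper.class);
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(LongWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}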
Hadoop is not NoSQL (NoNoSQL? Sorry…)
- The Hive project adds SQL support to Hadoop
- HiveQL (a SQL dialect) compiles to a query plan
- The query plan executes as MapReduce jobs
Hive Example

CREATE TABLE movie_rating_data (
  userid INT, movieid INT, rating INT, unixtime STRING
) ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '\t'
  STORED AS TEXTFILE;

LOAD DATA INPATH '/datasets/movielens' INTO TABLE movie_rating_data;

CREATE TABLE average_ratings AS
SELECT movieid, AVG(rating) FROM movie_rating_data
GROUP BY movieid;
The Hadoop Ecosystem
(Diagram of ecosystem projects; one component is labeled “Column DB”.)
Hadoop in the Wild (yes, it’s used in production)
- Yahoo! Hadoop clusters: >82PB, >40k machines (as of Jun ’11)
- Facebook: 15TB new data per day; 1200+ machines, 30PB in one cluster
- Twitter: >1TB per day, ~120 nodes
- Lots of 5-40 node clusters at companies without petabytes of data (web, retail, finance, telecom, research, government)
What about real-time access?
- MapReduce is a batch system; the fastest MR job takes 15+ seconds
- HDFS just stores bytes, and is append-only
- Not about to serve data for your next web site.
Apache HBase
HBase is an open source, distributed, sorted map modeled after Google’s BigTable.
Open Source
- Apache 2.0 License
- Committers and contributors from diverse organizations: Cloudera, Facebook, StumbleUpon, Trend Micro, etc.
Distributed
- Store and access data on 1-1000 commodity servers
- Automatic failover based on Apache ZooKeeper
- Linear scaling of capacity and IOPS by adding servers
Sorted Map Datastore
- Tables consist of rows, each of which has a primary key (row key)
- Each row may have any number of columns, like a Map<byte[], byte[]>
- Rows are stored in sorted order
Sorted Map Datastore (logical view as “records”)
- Implicit PRIMARY KEY in RDBMS terms
- Data is all byte[] in HBase
- Different types of data are separated into different “column families”
- Different rows may have different sets of columns (the table is sparse); useful for *-to-many mappings
- A single cell might have different values at different timestamps
Sorted Map Datastore (physical view as “cells”)
- Cells belong to a column family (here, “info” and “roles”)
- Sorted on disk by row key, column key, descending timestamp
- Timestamps are milliseconds since the Unix epoch
Cost Transparency
Column Families
- Different sets of columns may have different properties and access patterns
- Configurable by column family:
  - Block compression (none, gzip, LZO, Snappy)
  - Version retention policies
  - Cache priority
- CFs are stored separately on disk: access one without wasting IO on the other (a sketch of per-family configuration follows)
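As a sketch of how such per-family settings might be declared (this is not from the talk), here is the HBase admin API roughly as it looked circa 0.90; the table name, family names, and chosen settings are illustrative assumptions.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.io.hfile.Compression;

public class CreateTableExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    HTableDescriptor desc = new HTableDescriptor("users");

    // "info" family: block compression, 3 retained versions, block cache on
    HColumnDescriptor info = new HColumnDescriptor("info");
    info.setCompressionType(Compression.Algorithm.GZ);
    info.setMaxVersions(3);
    info.setBlockCacheEnabled(true);
    desc.addFamily(info);

    // "roles" family: stored in its own files on disk, so scans over
    // "info" never pay IO for "roles" data
    desc.addFamily(new HColumnDescriptor("roles"));

    admin.createTable(desc);
  }
}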
HBase API
- get(row)
- put(row, Map<column, value>)
- scan(key range, filter)
- increment(row, columns)
- … (checkAndPut, delete, etc.)
- MapReduce/Hive
(A minimal Java sketch of these calls follows.)
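Here is a minimal, illustrative sketch of those calls using the HBase Java client of that era (circa 0.90); the table name, row keys, and columns are made-up examples, not from the talk.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseApiExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "users");

    // put(row, Map<column, value>): a column is family + qualifier
    Put put = new Put(Bytes.toBytes("row1"));
    put.add(Bytes.toBytes("info"), Bytes.toBytes("email"),
        Bytes.toBytes("frank@example.com"));
    table.put(put);

    // get(row)
    Result row = table.get(new Get(Bytes.toBytes("row1")));
    System.out.println(row);

    // scan(key range): rows come back in sorted key order
    Scan scan = new Scan(Bytes.toBytes("row1"), Bytes.toBytes("row9"));
    ResultScanner scanner = table.getScanner(scan);
    for (Result r : scanner) {
      System.out.println(r);
    }
    scanner.close();

    // increment(row, columns): atomic counter update on one cell
    table.incrementColumnValue(Bytes.toBytes("row1"),
        Bytes.toBytes("info"), Bytes.toBytes("visits"), 1L);

    table.close();
  }
}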
Accessing HBase
- Java API (thick client)
- REST/HTTP
- Apache Thrift (any language)
- Hive/Pig for analytics
High Level Architecture
(Diagram: your PHP application reaches HBase through the Thrift/REST gateway; your Java application uses the Java client directly; MapReduce and Hive/Pig jobs also read and write HBase. HBase coordinates via ZooKeeper and stores its data in HDFS.)
HBase vs other systems
HBase vs just HDFS
If you need neither random writes nor random reads, stick to HDFS!
HBase vs RDBMS
HBase vs other “NoSQL”
- Favors strict consistency over availability (but availability is good in practice!)
- Great Hadoop integration (very efficient bulk loads, MapReduce analysis)
- Ordered range partitions (not hash)
- Automatically shards/scales (just turn on more servers; really proven at petabyte scale)
- Sparse column storage (not key-value)
HBase in Numbers
- Largest cluster: ~1000 nodes, ~1PB
- Most clusters: 5-20 nodes, 100GB-4TB
- Writes: 1-3ms, 1k-10k writes/sec per node
- Reads: 0-3ms cached, 10-30ms disk
- 10-40k reads/second/node from cache
- Cell size: 0-3MB preferred
HBase in Production
- Facebook (Messages, analytics, operational datastore, more on the way) [see SIGMOD paper]
- StumbleUpon / http://su.pr
- Mozilla (receives crash reports)
- Yahoo (stores a copy of the web)
- Twitter (stores users and tweets for analytics)
- … many others
Ok, fine, what next?
- Get Hadoop!
  - CDH, Cloudera’s Distribution including Apache Hadoop: http://cloudera.com/
  - http://hadoop.apache.org/
- Try it out! (Locally, in a VM, or on EC2)
- Watch free training videos on http://cloudera.com/
Thanks!
todd@cloudera.com
@tlipcon
(feedback? yes!)
(hiring? yes!)

Editor's Notes

  1. Given the nature of this meetup, I imagine that most people already know this. Data is very important, and it is getting more important every day.
  2. Google wrote an interesting article last year about this, called “The Unreasonable Effectiveness of Data”. In this paper, they talk about the algorithm for Google Translate, which has done very well in various competitions. They say that it is “unreasonable” that it works so well, since they do not use an advanced algorithm. Instead, they feed a simple algorithm with more data than anyone else, since they collect the entire web. With more data, they can do a better job even without doing anything fancy.
  3. For example, if you are a credit card company, you can use transaction data to determine how risky a loan is. If a customer drinks a lot of beer, he is probably risky. If the customer buys equipment for a dentist’s office, he is probably less risky. If a credit card company can do a better job of predicting risk, it will save them billions of dollars per year.
  4. One good quote is this, from Hal Varian, Google’s chief economist. He is saying that engineering is important, but what will differentiate businesses is their ability to extract information from data.
  6. So, what is Hadoop?
  8. Hadoop is an open source project hosted by the Apache Software Foundation that can reliably store and process a lot of data. It does this using commodity computers, like typical servers from Dell, HP, SuperMicro, etc. Here is a screenshot of a Hadoop cluster that has a capacity of 1.5 petabytes. This is not the largest Hadoop cluster! Hadoop can easily store many petabytes of information.
  9. Hadoop has two main components. The first is HDFS, the Hadoop Distributed File System, which stores data. The second is MapReduce, a fault tolerant distributed processing system, which processes the data stored in HDFS.
  10. The first thing is that Hadoop takes care of the distributed systems for you. As we said earlier, statisticians are the ones who need to be looking at data, but there are not many statisticians who are also systems programmers. Hadoop takes care of the systems problems so that the analysts can look at the data.
  11. Hadoop is also different because it harnesses the power of a cluster, while not making users interact separately with a bunch of machines. A user can write one piece of code and submit it to the cluster, and Hadoop will automatically deploy and run the code on all of the machines.
  12. Hadoop is also special because it really scales linearly, both in terms of data size and analysis complexity. For example, you may not have a lot of data, but you may want to do something very complicated with it – for example, detecting faces in a lot of images. Or, you may have a huge amount of data and just want to summarize or aggregate some metrics. Hadoop can work for both kinds of applications.
  13. Hadoop sounds great – it can make 4000 servers look like one big computer, and it can store and process petabytes of information. Let’s look at how it works.
  14. Let’s look at a typical Hadoop cluster. Most production clusters have at least 5 servers, though you can run it on a laptop for development. A typical server probably has 8 cores, 24GB of RAM, 4-12TB of disk, and gigabit ethernet, for example something like a Dell R410 or an HP SL170. On larger clusters, the machines are spread out in multiple racks, with 20 or 40 nodes per rack. The largest Hadoop clusters have about 4000 servers in them.
  15. Hadoop has 4 main types of nodes. There are a few special “master” nodes. The NameNode stores metadata about the filesystem – for example the names and permissions of all of the files on HDFS. The JobTracker acts as a scheduler for processing being done on the cluster. Then there are the slave nodes, which run on every machine in the cluster. The DataNodes store the actual file data, and the TaskTrackers run the actual analysis code that a user submits.
  16. Let’s look more closely at HDFS. As I said, the NameNode is responsible for metadata storage. Here we see that the NameNode has a file called /logs/weblog.txt. When it is written, it is automatically split into pieces called “blocks” which each have a numeric ID. The default block size is 64MB, but if a file is not a multiple of 64MB, a smaller block is used, so space is not wasted. Each block is then replicated out to three datanodes, so that if any datanode crashes, the data is not lost.
  17. This is a simplified diagram of how data is written on HDFS. First, the client asks the NameNode to create a new file. Then it directly accesses the datanodes to write the data – this is so that the NameNode is not a bottleneck when loading data. When the data has been completely written, the client informs the NameNode, and the NameNode saves the metadata.
  18. So now, we’ve uploaded a file into HDFS. HDFS has split it into chunks for us, and spread those chunks around the cluster on the DataNodes. But we don’t want to just store the data – we want to process it, too.
  19. This is where MapReduce comes in. I imagine some of the earlier presenters already covered MapReduce, so I’ll try to move quickly.
  20. The basic idea of MapReduce is simple. You provide just two functions, map, and reduce, and the framework takes care of the rest. That means you don’t need to worry about fault tolerance, or figuring out where the data is stored, for example.
  21. First, let’s look at map(). Map is a function that takes a key/value pair, and outputs a list of keys and values. For this example, we’re going to look at a MapReduce job that tells us how many bytes were transferred for each type of image in our Apache web logs. The input here has a key which is just the offset in the log file. The value is the text of the line itself. Our map function parses the line, and outputs simply the image type and the number of bytes transferred. The MapReduce framework will automatically run this function on the same nodes where the data is stored, on all of the nodes in the cluster at once.
  22. But wait! I said earlier that HDFS just stores bytes, but the Map function acts on keys and values. MapReduce uses a class called InputFormat to convert between bytes and key/value pairs. This is very powerful, since it means you don’t need to figure out a schema ahead of time – you can just load data and write an input format that parses it however you need.
  23. For each output from the map function, MapReduce will assign it to a “reducer”. So, if two different log files have data for the same image, the byte counts will still end up at the same reducer.
  24. The reducer function takes a key, and then a list of all of the values for that key. Here we see that user images were requested 3 times. The reducer calculates a sum. The default output format is TextOutputFormat, which produces a tab separated output file. In this case we found out that 6346 bytes of bandwidth were used for user images.
  25. Here’s a diagram of MapReduce from beginning to end. On the left you can see the input on HDFS, which has been split into 5 pieces. The pieces get assigned to map functions on three different nodes. Each of these then outputs some keys, which get grouped and sent to two different reducers, which also run on the cluster. The reducers then write their output back to HDFS. If at any point any node fails, the MapReduce framework will reassign the work to a different node.
  26. Now I have to apologize to you. I am speaking at a NoSQL event, but Hadoop is not NoSQL! There is a project called Hive which adds SQL support to Hadoop. Hive takes a query written in HiveQL, a dialect of SQL, and compiles it into a query plan. The query plan is in terms of MapReduce jobs, which are automatically executed on the cluster. The results can be returned to a client or written directly back to the cluster.
  27. Here’s an example Hive query. First, we create a table. Note that Hive’s tables can be stored as text – here it is just tab-separated values: “fields terminated by '\t'”. Next, we load a particular dataset into the table. Then we can create a new summary table by issuing a query over the first table. This is a simple example, but Hive can do most common SQL functions.
  28. In addition to Hive, Hadoop has a number of other projects in its ecosystem. For example, Sqoop can import or export data from relational databases, and Pig is another high-level scripting language to help write MapReduce jobs quickly.
  29. Hadoop is also heavily used in production. Here are a few examples of companies that use Hadoop. In addition to these very big clusters at companies like Yahoo and Facebook, there are hundreds of smaller companies with clusters between 5 and 40 nodes.
  30. So, we just saw how MapReduce can be used to do analysis on a large dataset of Apache logs. MapReduce is a batch system, though – the very fastest MapReduce job takes about 24 seconds to run, even if the dataset is tiny. Also, HDFS is an append-only file system – it doesn’t support editing existing files. So, it is not like a database that will serve a web site.
  31. HBase is a project that solves this problem. In a sentence, HBase is an open source, distributed, sorted map modeled after Google’s BigTable. Open-source: Apache HBase is an open source project with an Apache 2.0 license. Distributed: HBase is designed to use multiple machines to store and serve data. Sorted Map: HBase stores data as a map, and guarantees that adjacent keys will be stored next to each other on disk. HBase is modeled after BigTable, a system that is used for hundreds of applications at Google.
  32. Earlier, I said that HBase is a big sorted map. Here is an example of a table. The map key is (row key + column + timestamp). The value is the cell contents. The rows in the map are sorted by key. In this example, Row1 has 3 columns in the "info" column family. Row2 only has a single column. A column can also be empty. Each row has a timestamp. By default, the timestamp is set to the current time (in milliseconds since the Unix Epoch, January 1st 1970) when the row is inserted. A client can specify a timestamp when inserting or retrieving data, and specify how many versions of each cell should be maintained. Data in HBase is non-typed; everything is an array of bytes. Rows are sorted lexicographically. This order is maintained on disk, so Row1 and Row2 can be read together in just one disk seek.
  35. Given that HBase stores a large sorted map, the API looks similar to a map. You can get or put individual rows, or scan a range of rows. There is also a very efficient way of incrementing a particular cell; this can be useful for maintaining high-performance counters or statistics. Lastly, it’s possible to write MapReduce jobs that analyze the data in HBase.
  39. One of the interesting things about NoSQL is that the different systems don’t usually compete directly; we have all picked different tradeoffs. HBase is a strongly consistent system, so it does not have as good availability as an eventual-consistency system like Cassandra. But we find that availability is good in practice! Since HBase is built on top of Hadoop, it has very good integration: for example, we have a very efficient bulk load feature, and the ability to run MapReduce into or out of HBase tables. HBase’s partitioning is range based, and data is sorted by key on disk. This is different from other systems which use a hash function to distribute keys. This can be useful for guaranteeing that, for a given user account, all of that user’s data can be read with just one disk seek. HBase automatically reshards when necessary, and regions automatically reassign if servers die. Adding more servers is simple: just turn them on. There is no “reshard” step. HBase is not just a key-value store; it is similar to Cassandra in that each row has a sparse set of columns which are efficiently stored.
  41. Data layout: a traditional RDBMS uses a fixed schema and a row-oriented storage model. This has drawbacks if the number of columns per row varies drastically. A semi-structured, column-oriented store handles this case very well. Transactions: a benefit that an RDBMS offers is strict ACID compliance with full transaction support. HBase currently offers transactions on a per-row basis. There is work being done to expand HBase's transactional support. Query language: RDBMSs support SQL, a full-featured language for doing filtering, joining, aggregating, sorting, etc. HBase does not support SQL*. There are two ways to find rows in HBase: get a row by key, or scan a table. Security: in version 0.20.4, authentication and authorization are not yet available for HBase. Indexes: in a typical RDBMS, indexes can be created on arbitrary columns. HBase does not have any traditional indexes**. The rows are stored sorted, with a sparse index of row offsets. This means it is very fast to find a row by its row key. Max data size: most RDBMS architectures are designed to store GBs or TBs of data. HBase can scale to much larger data sizes. Read/write throughput limits: typical RDBMS deployments can scale to thousands of queries/second. There is virtually no upper bound to the number of reads and writes HBase can handle. (* Hive/HBase integration is being worked on. ** There are contrib packages for building indexes on HBase tables.)
  43. People often want to know “the numbers” about a storage system. I would recommend that you test it yourself; benchmarks always lie. But here are some general numbers about HBase. The largest cluster I’ve seen is 600 nodes, storing around 600TB. Most clusters are much smaller, only 5-20 nodes, hosting a few hundred gigabytes. Generally, writes take a few ms, and throughput is on the order of thousands of writes per node per second, but of course it depends on the size of the writes. Reads are a few milliseconds if the data is in cache, or 10-30ms if disk seeks are required. Generally we don’t recommend that you store very large values in HBase. It is not efficient if the values stored are more than a few MB.
  44. HBase is currently used in production at a number of companies. Here are a few examples. Facebook is using HBase for a new user-facing product which is going to launch very soon. They are also using HBase for analytics. StumbleUpon hosts large parts of its website from HBase, and also built an advertising platform based on HBase. Mozilla’s crash reporting infrastructure is based on HBase. If your browser crashes and you submit the crash to Mozilla, it is stored in HBase for later analysis by the Firefox developers.
  45. So, if you are interested in Hadoop and HBase, here are some resources. The easiest way to install Hadoop is to use Cloudera’s Distribution for Hadoop from cloudera.com. You can also download the Apache source directly from hadoop.apache.org. You can get started on your laptop, in a VM, or running on EC2. I also recommend our free training videos from our website. The book Hadoop: The Definitive Guide is also really great; it’s also available translated in Japanese.
  46. Thanks very much for having me! If you have any questions, please feel free to ask now or send me an email. Also, we’re hiring both in the USA and in Japan, so if you’re interested in working on Hadoop or Hbase, please get in touch.