Cloud Deployments with Apache Hadoop and Apache HBase
1. Cloud Deployments with Apache Hadoop and Apache HBase
8/23/11 NoSQLNow! 2011 Conference
Jonathan Hsieh
jon@cloudera.com
@jmhsieh
2. Who Am I?
• Cloudera:
  • Software Engineer on the Platform Team
  • Apache HBase contributor
  • Apache Flume (incubating) founder/committer
  • Apache Sqoop (incubating) committer
• U of Washington:
  • Research in Distributed Systems and Programming Languages
3. Who is Cloudera?
Cloudera, the leader in Apache Hadoop‐based software and services, enables enterprises to easily derive business value from all their data.
7. “Every two days we create as much information as we did from the dawn of civilization up until 2003.”
Eric Schmidt

“I keep saying that the sexy job in the next 10 years will be statisticians. And I’m not kidding.”
Hal Varian (Google’s chief economist)
8. Outline
• Motivation
• Enter Apache Hadoop
• Enter Apache HBase
• Real‐World Applications
• System Architecture
• Deployment (in the Cloud)
• Conclusions
9. Outline
• Motivation
• Enter Apache Hadoop
• Enter Apache HBase
• Real‐World Applications
• System Architecture
• Deployment (in the Cloud)
• Conclusions
10. What is Apache HBase?
Apache HBase is an open source, horizontally scalable, sorted map data store built on top of Apache Hadoop.
11. What is Apache Hadoop?
Apache Hadoop is an open source, horizontally scalable system for reliably storing and processing massive amounts of data across many commodity servers.
12. Open Source
• Apache 2.0 License
• A community project with committers and contributors from diverse organizations.
  • Cloudera, Facebook, Yahoo!, eBay, StumbleUpon, Trend Micro, …
• The code license means anyone can modify and use the code.
13. Horizontally Scalable
[Chart: performance (IOPs / storage / throughput) rising linearly with the number of servers]
• Store and access data on 1‐1000’s of commodity servers.
• Adding more servers should linearly increase performance and capacity:
  • Storage capacity
  • Processing capacity
  • Input/output operations
14. Commodity Servers (circa 2010)
• 2 quad‐core CPUs, running at least 2‐2.5GHz
• 16‐24GB of RAM (24‐32GB if you’re considering HBase)
• 4x 1TB hard disks in a JBOD (Just a Bunch Of Disks) configuration
• Gigabit Ethernet
• $5k‐10k / machine
16. We’ll use Apache Whirr
Apache Whirr is a set of tools and libraries for deploying clusters on cloud services in a cloud‐neutral way.
18. Done. That’s it.
You can go home now.
Ok, ok, we’ll come back to this later.
19. “The future is already here — it’s just not very evenly distributed.”
William Gibson
21. Building a web index
• Download all of the web.
• Store all of it.
• Analyze all of it to build the index and rankings.
• Repeat.
22. Size of Google’s Web Index
• Let’s assume 50KB per webpage, 500 bytes per URL.
• According to Google*:
  • 1998: 26Mn indexed pages (1.3TB)
  • 2000: 1Bn indexed pages (500TB)
  • 2008: ~40Bn indexed pages (20,000TB)
  • 2008: 1Tn URLs (~500TB just in url names!)
[Chart: estimated size of Google’s web storage in TB, log scale, by year]
* http://googleblog.blogspot.com/2008/07/we‐knew‐web‐was‐big.html
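The back-of-the-envelope math for the 1998 index and the URL-name figures can be checked directly under the slide's stated assumptions (50KB per page, 500 bytes per URL, decimal units); this is just a sketch of the estimate, not data from Google:

```python
PAGE_SIZE = 50 * 1000            # assumed 50KB per stored webpage
URL_SIZE = 500                   # assumed 500 bytes per URL

def tb(n_bytes):
    """Convert bytes to terabytes (decimal)."""
    return n_bytes / 1000**4

pages_1998 = 26_000_000          # 26Mn indexed pages in 1998
urls_2008 = 1_000_000_000_000    # 1Tn URLs in 2008

print(tb(pages_1998 * PAGE_SIZE))  # 1.3 TB for the 1998 index
print(tb(urls_2008 * URL_SIZE))    # 500.0 TB just for URL names
```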
23. Volume, Variety, and Velocity
• The web has a massive amount of data.
  • How do we store this massive volume?
• Raw web data is diverse, dirty, and semi‐structured.
  • How do we handle all the compute necessary to clean and process this variety of data?
• There is new content being created all the time!
  • How do we keep up with the velocity of new data coming in?
25. Did you try scaling vertically?
• Upgrading to a beefier machine can be quick.
  • (upgrade that m1.large to an m2.4xlarge)
• This is probably a good idea.
  • Not quite time for HBase.
• What if this isn’t enough?
26. Changed your schema and queries?
• Remove text search queries (LIKE).
  • These are expensive.
• Remove joins.
  • Normalization is more expensive today.
  • Multiple seeks are more expensive than sequential reads/writes.
• Remove foreign keys and encode your own relations.
  • Avoids constraint checks.
• Just put all parts of a query in a single table.
• Lots of full table scans?
  • Good time for Hadoop.
• This might be a good time to consider HBase.
28. Need to scale writes?
• Unfortunately, eventually you may need more writes.
• Let’s shard and federate the DB.
  • Loses consistency and order of operations.
  • Replication has diminishing returns with more writes.
  • HA operational complexity!
• Gah!
[Chart: performance (IOPs) flattening out as servers are added]
• This is definitely a good time to consider HBase.
29. Wait – we “optimized the DB” by discarding some fundamental SQL/relational database features?
32. Massive amounts of storage
• How much data could you collect today?
  • Many companies easily collect 200GB of logs per day.
  • Facebook claims to collect >15TB per day.
• How do you handle this problem?
  • Just keep a few days’ worth of data and then /dev/null.
  • Sample your data.
  • Move data to write‐only media (tape).
• If you want to analyze all your data, you are going to need to use multiple machines.
34. Interact with a cluster, not a bunch of machines.
Image: Yahoo! Hadoop cluster [OSCON ’07]
36. Disk failures are very common
• USENIX FAST ‘07:
  • tl;dr: 2‐8% of new disk drives fail per year.
• For a 100‐node cluster with 1200 drives:
  • a drive failure every 15‐60 days
37. Outline
• Motivation
• Enter Apache Hadoop
• Enter Apache HBase
• Real‐World Applications
• System Architecture
• Cluster Deployment
• Cloud Deployment
• Demo
• Conclusions
38. What is Apache Hadoop?
Apache Hadoop is an open source, horizontally scalable system for reliably storing and processing massive amounts of data across many commodity servers.
41. What did Google do?
• SOSP 2003: the Google File System (GFS) paper
• OSDI 2004: the MapReduce paper
42. Origin of Apache Hadoop
[Timeline, 2002-2010]
• 2002: Open source web crawler project created by Doug Cutting
• 2003: Google publishes the GFS paper
• 2004: Google publishes the MapReduce paper
• 2005: Open source MapReduce & HDFS project created by Doug Cutting
• 2008: Hadoop wins the Terabyte sort benchmark; runs a 4,000‐node Hadoop cluster
• 2009: Launches SQL support for Hadoop
• 2010: Cloudera releases CDH3 and Cloudera Enterprise
43. Apache Hadoop in production today
• Yahoo! Hadoop clusters: >82PB, >40k machines (as of Jun ‘11)
• Facebook: 15TB new data per day; 1200+ machines, 30PB in one cluster
• Twitter: >1TB per day, ~120 nodes
• Lots of 5‐40 node clusters at companies without petabytes of data (web, retail, finance, telecom, research, government)
44. Case studies: Hadoop World ‘10
• eBay: Hadoop at eBay
• Twitter: The Hadoop Ecosystem at Twitter
• Yale University: MapReduce and Parallel Database Systems
• General Electric: Sentiment Analysis powered by Hadoop
• Facebook: HBase in Production
• Bank of America: The Business of Big Data
• AOL: AOL’s Data Layer
• Raytheon: SHARD: Storing and Querying Large‐Scale Data
• StumbleUpon: Mixing Real‐Time and Batch Processing
More info at http://www.cloudera.com/company/press‐center/hadoop‐world‐nyc/
45. What is HDFS?
• HDFS is a file system.
  • Just stores (a lot of) bytes as files.
  • Distributes storage across many machines and many disks.
  • Reliable: replicates blocks across machines throughout the cluster.
  • Horizontally scalable ‐‐ just add more machines and disks.
  • Optimized for large sequential writes.
• Features
  • Unix‐style permissions.
  • Kerberos‐based authentication.
46. HDFS’s File API
• Dir
  • List files
  • Remove files
  • Copy files
  • Put / get files
• File
  • Open
  • Close
  • Read
  • Write/Append
47. Ideal use cases
• Great for storage of massive amounts of raw or uncooked data
  • Massive data files
  • All of your logs
49. Hadoop gives you agility
• Schema on write
  • Traditional DBs require cleaning data and applying a schema up front.
  • Great if you can plan your schema well in advance.
• Schema on read
  • HDFS enables you to store all of your raw data.
  • Great if you have ad hoc queries on ad hoc data.
  • If you don’t know your schema, you can try new ones.
  • Great if you are exploring schemas and transforming data.
50. Analyzing data with MapReduce
• Apache Hadoop MapReduce
  • Simplified distributed programming model.
  • Just specify a “map” and a “reduce” function.
• MapReduce is a batch processing system.
  • Optimized for throughput, not latency.
  • The fastest MR job takes 15+ seconds.
• You are not going to use this to directly serve data for your next web site.
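The "just specify a map and a reduce function" model can be sketched with a toy, in-memory word count. This only illustrates the programming model; real Hadoop jobs are typically written in Java and the framework handles distribution, shuffle, and fault tolerance:

```python
from collections import defaultdict

def map_fn(line):
    """Map: emit a (word, 1) pair for every word in an input line."""
    for word in line.split():
        yield (word, 1)

def reduce_fn(word, counts):
    """Reduce: sum all the counts emitted for one word."""
    return (word, sum(counts))

def run_job(lines):
    """A stand-in for the framework: run maps, shuffle by key, run reduces."""
    grouped = defaultdict(list)
    for line in lines:
        for key, value in map_fn(line):
            grouped[key].append(value)
    return dict(reduce_fn(k, v) for k, v in grouped.items())

print(run_job(["hbase on hadoop", "hadoop hadoop"]))
# {'hbase': 1, 'on': 1, 'hadoop': 3}
```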
51. Don’t like programming Java?
• Apache Pig
  • Higher‐level dataflow language
  • Good for data transformations
  • Generally preferred by programmers
• Apache Hive
  • SQL‐like language for querying data in HDFS
  • Generally preferred by data scientists and business analysts
52. There is a catch…
• Files are append‐only.
  • No update or random writes.
  • Data is not available for read until the file is flushed.
• Files are ideally large.
  • Enables storage of 10’s of petabytes of data.
  • HDFS splits the file into 64MB or 128MB blocks!
53. Outline
• Motivation
• Enter Apache Hadoop
• Enter Apache HBase
• Real‐World Applications
• System Architecture
• Deployment (in the Cloud)
• Conclusions
54. What is Apache HBase?
Apache HBase is an open source, horizontally scalable, sorted map data store built on top of Apache Hadoop.
56. Inspiration: Google BigTable
• OSDI 2006 paper
• Goal: quick random read/write access to massive amounts of structured data.
  • It was the data store for Google’s crawler web table, orkut, analytics, earth, blogger, …
57. Sorted Map Datastore
• It really is just a big table!
• Tables consist of rows, each of which has a primary key (row key).
• Each row may have any number of columns.
• Rows are stored in sorted order.
[Diagram: a table of row keys 0000000000 through 7777777777, stored in sorted order]
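The "sorted map" idea can be modeled with an ordinary dictionary plus sorting (a toy model, not the HBase API): rows live in row-key order, and each row is just a map of columns to values:

```python
# Toy model of a sorted map datastore: row key -> {column: value}.
table = {
    "2222222222": {"info:state": "CA"},
    "0000000000": {"info:height": "9ft", "roles:ASF": "Director"},
    "7777777777": {"info:height": "5ft7"},
}

def scan(table, start, stop):
    """Return rows with start <= row key < stop, in sorted row-key order."""
    return [(k, table[k]) for k in sorted(table) if start <= k < stop]

# Rows come back in lexicographic row-key order, regardless of insert order.
print([k for k, _ in scan(table, "0", "9")])
# ['0000000000', '2222222222', '7777777777']
```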
58. Anatomy of a Row
• Each row has a row key (think: primary key)
  • Lexicographically sorted byte[]
  • Timestamp associated for keeping multiple versions of data
• A row is made up of columns.
  • Each column has a cell.
  • The contents of a cell are an untyped byte[].
  • Apps must “know” the types and handle them.
• Columns are logically like a Map<byte[] column, byte[] value>
• Row edits are atomic and changes are strongly consistent (replicas are in sync)
59. Sorted Map Datastore (logical view)
The row key is the implicit PRIMARY KEY in RDBMS terms; data is all stored as byte[].

Row key: cutting
  ‘info:height’: ‘9ft’, ‘info:state’: ‘CA’
  ‘roles:ASF’: ‘Director’, ‘roles:Hadoop’: ‘Founder’
Row key: tlipcon
  ‘info:height’: ‘5ft7’, ‘info:state’: ‘CA’
  ‘roles:Hadoop’: ‘Committer’@ts=2010, ‘roles:Hadoop’: ‘PMC’@ts=2011
  ‘roles:Hive’: ‘Contributor’

• Different rows may have different sets of columns (the table is sparse) ‐‐ useful for *‐to‐many mappings.
• A single cell might have different values at different timestamps.
60. Apache HBase Depends upon HDFS
• Relies on HDFS for data durability and reliability.
• Uses HDFS to store its Write‐Ahead Log (WAL).
• Needs flush/sync support in HDFS in order to prevent data loss.
61. HBase in Numbers
• Largest cluster: ~1000 nodes, ~1PB
• Most clusters: 5‐20 nodes, 100GB‐4TB
• Writes: 1‐3ms, 1k‐10k writes/sec per node
• Reads: 0‐3ms cached, 10‐30ms disk
• 10‐40k reads / second / node from cache
• Cell size: 0‐3MB preferred
62. Access data via an API. There is “noSQL”*
• HBase API
  • get(row)
  • put(row, Map<column, value>)
  • scan(key range, filter)
  • increment(row, columns)
  • … (checkAndPut, delete, etc.)
• *Ok, that’s a slight lie.
  • There is work on integrating Apache Hive, a SQL‐like query language, with HBase.
  • This is not optimal; ~5x slower than normal Hive+HDFS.
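Those call shapes can be sketched with a toy, in-memory table. This is only a model of the semantics listed above; the real client API is Java (and the class and method details here are illustrative, not HBase's):

```python
import bisect

class ToyTable:
    """In-memory stand-in for the HBase call shapes: get/put/scan/increment."""

    def __init__(self):
        self.rows = {}  # row key -> {column: value}

    def put(self, row, columns):
        """Set several columns on one row atomically (in this toy, trivially so)."""
        self.rows.setdefault(row, {}).update(columns)

    def get(self, row):
        """Fetch one row by key; missing rows come back empty."""
        return self.rows.get(row, {})

    def scan(self, start, stop):
        """Range scan over the sorted row-key space: start <= key < stop."""
        keys = sorted(self.rows)
        lo, hi = bisect.bisect_left(keys, start), bisect.bisect_left(keys, stop)
        return [(k, self.rows[k]) for k in keys[lo:hi]]

    def increment(self, row, column, amount=1):
        """Atomically bump a counter column and return the new value."""
        cols = self.rows.setdefault(row, {})
        cols[column] = cols.get(column, 0) + amount
        return cols[column]

t = ToyTable()
t.put("cutting", {"info:state": "CA"})
t.increment("cutting", "clicks:total")
print(t.get("cutting"))  # {'info:state': 'CA', 'clicks:total': 1}
```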
63. Cost Transparency
• Goal: predictable latency for random read and write operations.
  • To do this, you have to understand some of the physical layout of your datastore.
  • Efficiencies are based on locality.
• A few physical concepts help:
  • Column families
  • Regions
64. Column Families
• Just a set of related columns.
• Each may have different columns and access patterns.
• Each may have parameters set per column family:
  • Block compression (none, gzip, LZO, Snappy)
  • Version retention policies
  • Cache priority
• Improves read performance.
  • CFs are stored separately: access one without wasting IO on the other.
  • Store related data together for better compression.
[Diagram: one table split into two column families, each holding the same row keys]
65. Sparse Columns
• Provides schema flexibility
  • Add columns later; no need to transform the entire schema.
  • Use for writing aggregates atomically (“prejoins”).
• Improves performance
  • Null columns don’t take space. You don’t need to read what is not there.
  • If you have a traditional DB table with lots of nulls, your data will probably fit well.
[Diagram: sparse column families where only some row keys carry values]
66. Regions
• Tables are divided into sets of rows called regions.
• Read and write load are scaled by spreading across many regions.
[Diagram: one table of row keys 0000000000 through 7777777777 split into multiple regions]
67. Sorted Map Datastore (physical view)

info column family:
  Row key | Column key   | Timestamp     | Cell value
  cutting | info:height  | 1273516197868 | 9ft
  cutting | info:state   | 1043871824184 | CA
  tlipcon | info:height  | 1273878447049 | 5ft7
  tlipcon | info:state   | 1273616297446 | CA

roles column family:
  Row key | Column key   | Timestamp     | Cell value
  cutting | roles:ASF    | 1273871823022 | Director
  cutting | roles:Hadoop | 1183746289103 | Founder
  tlipcon | roles:Hadoop | 1300062064923 | PMC
  tlipcon | roles:Hadoop | 1293388212294 | Committer
  tlipcon | roles:Hive   | 1273616297446 | Contributor

• Sorted on disk by row key, column key, descending timestamp.
• Timestamps are milliseconds since the unix epoch.
68. HBase purposely doesn’t have everything
• No atomic multi‐row operations
• No global time ordering
• No built‐in SQL query language
• No query optimizer
69. HBase vs. just HDFS

                        Plain HDFS/MR                      HBase
Write pattern           Append‐only                        Random write, bulk incremental
Read pattern            Full table scan,                   Random read, small range
                        partition table scan               scan, or table scan
Hive (SQL) performance  Very good                          4‐5x slower
Structured storage      Do‐it‐yourself / TSV /             Sparse column‐family data
                        SequenceFile / Avro / ?            model
Max data size           30+ PB                             ~1PB

If you have neither random write nor random read, stick to HDFS!
70. What if I don’t know what my schema should be?
• MR and HBase complement each other.
  • Use HDFS for long sequential writes.
  • Use MR for large batch jobs.
  • Use HBase for random writes and reads.
• Applications need HBase to have data structured in a certain way.
  • Save raw data to HDFS and then experiment.
  • MR for data transformation and ETL‐like jobs from raw data.
  • Use bulk import from MR to HBase.
71. Outline
• Motivation
• Enter Apache Hadoop
• Enter Apache HBase
• Real‐World Applications
• System Architecture
• Deployment (in the Cloud)
• Conclusions
72. Apache HBase in Production
• Facebook: Messages
• StumbleUpon: http://su.pr
• Mozilla: Socorro ‐‐ receives crash reports
• Yahoo!: Web Crawl Cache
• Twitter: stores users and tweets for analytics
• … many others
73. High Level Architecture
[Diagram: a PHP application reaches HBase through a Thrift/REST gateway; a Java application uses the Java client; MapReduce and Hive/Pig also access HBase. HBase coordinates through ZooKeeper and stores its data in HDFS.]
74. Data‐centric schema design?
• Entity relationship model.
  • Design the schema in “normalized form”.
  • Figure out your queries.
  • The DBA sets primary/foreign keys and indexes once the queries are known.
• Issues:
  • Difficult and expensive to change the schema.
  • Difficult and expensive to add columns.
75. Query‐centric schema design
• Know your queries, then design your schema.
• Pick row keys to spread region load.
  • Spreading load can increase read and write efficiency.
• Pick column‐family members for better reads.
  • Create these by knowing the fields needed by queries.
  • It’s better to have a few than many.
• Notice:
  • App developers optimize the queries, not DBAs.
  • If you’ve done the relational DB query optimizations, you are mostly there already!
76. Schema design exercises
• URL shortener
  • Bit.ly, goo.gl, su.pr, etc.
• Web table
  • Google BigTable’s example, Yahoo!’s Web Crawl Cache
• Facebook Messages
  • Conversations and inbox search
• Transition strategies
77. Url Shortener Service
• Look up hash, track click, and forward to the full url.
• Enter a new long url, generate a short url, and store it to the user’s mapping.
• Look up all of a user’s shortened urls and display them.
• Track historical click counts over time.
78. Url Shortener schema
• All queries have at least one join.
• Constraints when adding new urls, and short urls.
• How do we delete users?
79. Url Shortener HBase schema
• Most common queries are single gets.
• Use compression settings on content column families.
• Use a composite row key to group all of a user’s shortened urls.
80. Web Tables
• Goal: manage web crawls and their data by keeping snapshots of the web.
• Google used BigTable for its web table example.
• Yahoo! uses HBase for its Web Crawl Cache.
[Diagram: full‐scan applications read from HDFS; random‐access applications read from HBase]
81. Web Table queries
• Crawler continuously updating links and pages
• Want to track individual pages over time
• Want to group related pages from the same site
• Want to calculate PageRank (links and backlinks)
• Want to build a search index
• Want to do ad‐hoc analytical queries on page content
83. Web table Schema Design
• Want to keep related pages together
  • Reverse the url so that related pages are near each other.
  • archive.cloudera.com => com.cloudera.archive
  • www.cloudera.com => com.cloudera.www
• Want to track pages over time
  • reverseurl‐crawltimestamp: put all versions of the same url together.
  • Just scan a localized set of pages.
• Want to calculate PageRank (links and backlinks)
  • Just need links, so put raw content in a different column family.
  • Avoid having to do IO to read unused raw content.
• Want to index newer pages
  • Use MapReduce on the most recently crawled content.
• Want to do analytical queries
  • We’ll do a full scan with filters.
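The reversed-url trick is plain string manipulation; a minimal sketch (the helper name is illustrative):

```python
def reverse_domain(host):
    """archive.cloudera.com -> com.cloudera.archive, so related hosts sort together."""
    return ".".join(reversed(host.split(".")))

hosts = ["www.cloudera.com", "archive.cloudera.com", "www.example.org"]
keys = sorted(reverse_domain(h) for h in hosts)
print(keys)
# ['com.cloudera.archive', 'com.cloudera.www', 'org.example.www']
```

After reversal, both cloudera.com hosts are adjacent in the sorted key space, so a prefix scan on "com.cloudera." picks up the whole site.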
84. Facebook Messages (as of 12/10)
• Email messages: 15Bn/month at ~1KB each = 14TB
• Other messages: 120Bn/month at ~100 bytes each = 11TB
• Queries:
  • Create a new message/conversation
  • Keyword search of messages
  • Show the full conversation
  • List the most recent conversations
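Those monthly volume figures check out as rough binary-unit arithmetic (assuming ~1KB per email message and ~100 bytes per short message, as on the slide):

```python
TIB = 2 ** 40  # binary terabyte

email_bytes = 15_000_000_000 * 1024  # 15Bn messages at ~1KB each
short_bytes = 120_000_000_000 * 100  # 120Bn messages at ~100 bytes each

print(round(email_bytes / TIB))  # ~14 TB/month
print(round(short_bytes / TIB))  # ~11 TB/month
```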
85. Possible Schema Design
• Show my most recent conversations
  • Have a “conversation” table using user‐revTimeStamp as the key.
  • Have a metadata column family.
  • Metadata contains the date, to/from, and one line of the most recent message.
• Show me the full conversation
  • Use the same “conversation” table.
  • A content column family contains the conversation.
  • We already have the full row key from the previous query, so this is just a quick lookup.
• Search my inbox for keywords
  • Have a separate “inboxSearch” table.
  • Row key design: userId‐word‐messageId‐lastUpdateRevTimestamp
  • Works for type‐ahead / partial messages.
  • Show top N message ids with the word.
• Send a new message
  • Update both tables, and both users’ rows.
  • Update recent conversations and the keyword index.
86. Facebook MySQL to HBase transition
• Initially a normalized MySQL email schema sharded over 1000 production servers with 500M users.
• How do we export users’ emails?
• Direct approach:
  • Big join ‐‐ point table for TBs of data (500M users!)
  • This would kill the production servers.
• Incremental approach:
  • Snapshot copy via naïve bulk load into a migration HBase cluster.
  • Incremental fetches from the db for new live data.
  • Use MR on the migration HBase cluster to do the join, writing to the final cluster.
  • The app writes to both places until migration is complete.
87. Row Key tricks
• Row key design for schemas is critical.
  • Reasonable number of regions.
  • Make sure the key distributes to spread write load.
  • Take advantage of lexicographic sort order.
• Numeric keys and lexicographic sort
  • Store numbers big‐endian.
  • Pad ASCII numbers with 0’s.
• Use reversal to put the most significant traits first.
  • Reverse URLs.
  • Reverse timestamps to get most recent first.
• Use composite keys to make the key distribute nicely and work well with sub‐scans.
  • Ex: User‐ReverseTimeStamp
  • Do not use the current timestamp as the first part of the row key!
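A few of these tricks in code. This is a sketch: the key formats and helper names are illustrative, not a standard HBase encoding, and the reverse-timestamp idiom assumes 64-bit millisecond timestamps:

```python
import struct

LONG_MAX = 2**63 - 1  # Java Long.MAX_VALUE

def padded(n, width=10):
    """Zero-pad ASCII numbers so lexicographic order matches numeric order."""
    return str(n).zfill(width)

def big_endian(n):
    """Big-endian binary encoding also sorts numerically as raw bytes."""
    return struct.pack(">Q", n)

def reverse_ts(millis):
    """LONG_MAX - timestamp, so newer rows sort first."""
    return LONG_MAX - millis

def composite_key(user, millis):
    """User first (distributes write load), reverse timestamp second (newest first)."""
    return f"{user}-{padded(reverse_ts(millis), 19)}"

# Raw ASCII numbers sort wrong; padded ones sort right.
assert sorted(["10", "2"]) == ["10", "2"]
assert sorted([padded(10), padded(2)]) == [padded(2), padded(10)]

print(composite_key("tlipcon", 1300062064923))
# tlipcon-9223370736792710884
```

A scan starting at `"tlipcon-"` then walks that user's rows from newest to oldest.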
88. Key Take‐aways
• A denormalized schema localizes data for single lookups.
• The row key is critical for lookups and subset scans.
• Make sure row keys distribute the write load.
• Use bulk loads and MapReduce to re‐organize or change your schema (during down time).
• Use multiple clusters for different workloads if you can afford it.
89. Outline
• Motivation
• Enter Apache Hadoop
• Enter Apache HBase
• Real‐World Applications
• System Architecture
• Deployment (in the Cloud)
• Conclusions
92. Name Node and Secondary Name Node
• NameNode
  • The most critical node in the system.
  • Stores file system metadata on disk and in memory.
    • Directory structures, permissions
  • Modifications are stored as an edit log.
  • Fault tolerant, but not highly available yet.
• Secondary NameNode
  • Not a hot standby!
  • Gets a copy of the file system metadata and edit log.
  • Periodically compacts the image and edit log and ships them to the NameNode.
• Make sure your DNS is set up properly!
93. Data nodes
• HDFS splits files into 64MB (or 128MB) blocks.
• Data nodes store and serve these blocks.
• By default, writes are pipelined to 3 different machines.
  • By default: the local machine, plus machines on another rack.
• Locality helps significantly with subsequent reads and computation scheduling.
94. Job Tracker and Task Trackers
• Now, we want to process that data!
• Job Tracker
  • Schedules work and resource usage throughout the cluster.
  • Makes sure work gets done.
  • Controls retry, speculative execution, etc.
• Task Trackers
  • These slaves do the “map” and “reduce” work.
  • Co‐located with data nodes.
96. HMaster and ZooKeeper
• HMaster
  • Controls which regions are served by which region servers.
  • Assigns regions to new region servers when they arrive or go down.
  • Can have a hot standby master in case the main master goes down.
  • All region state is kept in ZooKeeper.
• Apache ZooKeeper
  • Highly available system for coordination.
  • Generally 3 or 5 machines (always an odd number).
  • Uses consensus to guarantee common shared state.
  • Writes are considered expensive.
97. Region Server
• Tables are chopped up into regions.
  • A region is only served by one region server at a time.
• Regions are served by a “region server”.
  • Load balancing if a region server goes down.
• Co‐locate region servers with data nodes.
  • Takes advantage of HDFS file locality.
• Important that clocks are in reasonable sync. Use NTP!
98. Stability and Tuning Hints
• Monitor your cluster.
• Avoid memory swapping.
  • Do not oversubscribe memory.
  • Can suffer from cascading failures.
• Mostly scan jobs?
  • Small read cache, low swappiness.
• Large max region size for large column families.
  • Avoid costly “region splits”.
• Make the ZK timeout higher.
  • Longer to recover from failure, but prevents cascading failures.
99. Outline
• Motivation
• Enter Apache Hadoop
• Enter Apache HBase
• Real‐World Applications
• System Architecture
• Deployment (in the Cloud)
• Conclusions
101. We’ll use Apache Whirr
Apache Whirr is a set of tools and libraries for deploying clusters on cloud services in a cloud‐neutral way.
102. This is great for setting up a cluster...
jon@grimlock:~/whirr-0.5.0-incubating$ bin/whirr launch-cluster --config recipes/hbase-ec2.properties
jon@grimlock:~/whirr-0.5.0-incubating$ bin/whirr launch-cluster --config recipes/scm-ec2.properties
103. and an easy way to tear down a cluster.
jon@grimlock:~/whirr-0.5.0-incubating$ bin/whirr destroy-cluster --config recipes/hbase-ec2.properties
jon@grimlock:~/whirr-0.5.0-incubating$ bin/whirr destroy-cluster --config recipes/scm-ec2.properties
106. What did we just do?
• Whirr
  • Provisioned the machines on EC2.
  • Installed SCM on all the machines.
• Cloudera Service and Configuration Manager (SCM) Express
  • Orchestrated the deployment of the services in the proper order.
    • ZooKeeper, Hadoop HDFS, HBase, and Hue
  • Set service configurations.
  • Free download for kicking off up to 50 nodes!
    http://www.cloudera.com/products‐services/scm‐express/
107. To Cloud or not to Cloud?
• The key feature of a cloud deployment:
  • Elasticity: the ability to expand and contract the number of machines being used on demand.
• Things to consider:
  • Economics of machines and people.
  • Capital expenses vs. operational expenses.
  • Workload requirements: performance / SLAs.
  • Previous investments.
108. Economics of a cluster
Economics of a cluster
EC2 Cloud deployment
EC2 Cloud deployment Private Cluster
Private Cluster
• 10 small instances • 10 commodity servers
• 1 core, 1.7GB ram, 160GB disk
• $0 085/hr/mchn => $7 446/yr
$0.085/hr/mchn => $7,446/yr
• 8 core, 24 GB ram, 6TB disk
• Reserved $227.50/yr/machine • $6500 /machine => $65,000
+$0.03/hr/mchn=> $4903/yr. • + physical space
• + networking gear
+ networking gear
• 10 Dedicated‐Reserved Quad
XLs instances • + power
• 8 core, 23GB ram, 1690GB disk • + admin costs
• $6600/yr/mchn + + • + more setup time
$0.84/hr/mchn +
$10/hr/region => 66,000 +
73,584 + 87,600 =>
$227,184/yr
$227 184/yr
109. Pros of using the cloud
• With virtualized machines, you can install any software you want!
• Great for bursty or occasional loads that expand and shrink.
• Great if your apps and data are already in the cloud.
• Logs already live in S3 for example.
• Great if you don’t have a large ops team.
• Save money on people dealing with colos, hardware failures.
• Steadier ops-team personnel requirements (barring catastrophic failure).
• Great for experimentation.
• Great for testing/QA clusters at scale.
110. Cons of using the cloud
• Getting data in and out of EC2.
• Not the cost, but the amount of time.
• AWS Direct Connect can help.
• Virtualization causes varying network connectivity.
• ZooKeeper timeouts can cause cascading failure.
• Some connections fast, others slow.
• Dedicated or Cluster Compute instances could improve this.
• Virtualization causes unpredictable IO performance.
• EBS is like a SAN and an eventual bottleneck.
• Ephemeral disks perform better but not recoverable on failures.
• Still need to deal with Disaster recovery.
• What happens if an EC2 availability zone or a whole region goes down? (4/21/11)
111. Cloudera’s Experience with Hadoop in the Cloud
• Some Enterprise Hadoop/MR use the Cloud.
• Good for daily jobs with moderate amounts of data (GBs) that are generally computationally expensive.
• Ex: Periodic matching or recommendation applications.
• Spin up cluster.
• Upload a data set to S3.
• Do an n² matching or recommendation job.
• Mapper expands data.
• Reducer gets small amount of data back.
• Write to S3.
• Download result set.
• Tear down the cluster.
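The cycle above maps onto a short sequence of Whirr and Hadoop commands. A dry-run sketch that just prints the sequence (the S3 bucket, job jar, and HDFS paths are hypothetical; a real run would also need AWS credentials configured):

```shell
# Dry-run sketch of the spin-up / compute / tear-down cycle from the slide.
# Commands are echoed rather than executed; names are illustrative.
RECIPE=recipes/hadoop-ec2.properties
BUCKET=s3n://my-bucket                                          # hypothetical S3 bucket

cycle=(
  "bin/whirr launch-cluster --config $RECIPE"                   # spin up cluster
  "hadoop distcp $BUCKET/input hdfs:///user/jon/input"          # upload data set from S3
  "hadoop jar matching-job.jar /user/jon/input /user/jon/out"   # n² matching job
  "hadoop distcp hdfs:///user/jon/out $BUCKET/output"           # write results back to S3
  "bin/whirr destroy-cluster --config $RECIPE"                  # tear down cluster
)
printf '%s\n' "${cycle[@]}"
```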
112. Cloudera’s Experience with HBase in the cloud
• Almost all enterprise HBase users use physical hardware.
• Some initially used the cloud, but transitioned to physical
hardware.
• One story:
• EC2: 40 nodes on EC2 XL instances.
• Bought 10 physical machines and got similar or better performance.
• Why?
• Physical hardware gives more control over machine build-out, network
infrastructure, and locality, which are critical for performance.
• HBase is up all the time and usually grows over time.
113. Outline
• Motivation
• Enter Apache Hadoop
• Enter Apache HBase
• Real‐World Applications
• System Architecture
• Deployment (in the Cloud)
• Conclusions
114. Key takeaways
• Apache HBase is not a database! There are other scalable databases.
• Query‐centric schema design, not data‐centric schema
design.
• In production at hundreds-of-TB scale at several large enterprises.
• If you are restructuring your SQL DB to optimize it, you may
be a candidate for HBase.
• HBase complements and depends upon Hadoop.
• Hadoop makes sense in the cloud for some production workloads.
• HBase runs in the cloud for experimentation but generally on physical
hardware in production.
115. HBase vs RDBMS
                              RDBMS                          HBase
Data layout                   Row-oriented                   Column-family-oriented
Transactions                  Multi-row ACID                 Single row only
Query language                SQL                            get/put/scan/etc.*
Security                      Authentication/Authorization   Work in progress
Indexes                       On arbitrary columns           Row-key only*
Max data size                 TBs                            ~1PB
Read/write throughput limits  1000s of queries/second        Millions of “queries”/second
116. HBase vs other “NoSQL”
• Favors Strict Consistency over Availability (but
availability is good in practice!)
• Great Hadoop integration (very efficient bulk loads,
MapReduce analysis)
• Ordered range partitions (not hash)
• Automatically shards/scales (just turn on more servers,
really proven at petabyte scale)
• Sparse column storage (not key-value)
117. HBase vs just HDFS
                        Plain HDFS/MR                           HBase
Write pattern           Append-only                             Random write, bulk incremental
Read pattern            Full table scan, partition table scan   Random read, small range scan, or table scan
Hive (SQL) performance  Very good                               4-5x slower
Structured storage      Do-it-yourself: TSV / SequenceFile /    Sparse column-family data model
                        Avro / ?
Max data size           30+ PB                                  ~1PB
If you have neither random write nor random read, stick to HDFS!
118. More resources?
• Download Hadoop and HBase!
• CDH ‐ Cloudera’s Distribution including
Apache Hadoop
http://cloudera.com/
• http://hadoop.apache.org/
• Try it out! (Locally, VM, or EC2)
• Watch free training videos on
http://cloudera.com/