Intro to HBase
                      Internals &
                     Schema Design
                           (for HBase Users)
                     Alex Baranau, Sematext International, 2012




Monday, July 9, 12
About Me


                     Software Engineer at Sematext International

                     http://blog.sematext.com/author/abaranau

                     @abaranau

                     http://github.com/sematext (abaranau)




Agenda


                     Logical view

                     Physical view

                     Schema design

                     Other/Advanced topics




Why?
                     Why should I (an HBase user) care about
                     HBase internals?

                       HBase will not automatically adjust
                       cluster settings to optimal values
                       based on usage patterns

                       Schema design, table settings
                       (defined upon creation), etc.
                       depend on HBase implementation
                       aspects


Logical View




Logical View: Regions
                     An HBase cluster serves multiple tables,
                     distinguished by name

                     Each table consists of rows

                     Each row contains cells:
                     (row key, column family, column, timestamp) -> value

                     A table is split into Regions (table shards, each
                     containing full rows), defined by start and end row keys
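Since each Region covers a contiguous, sorted range of row keys, the Region owning a key can be found with a simple binary search over region start keys. A minimal Python sketch of the idea (illustrative only, not the actual HBase client logic):

```python
import bisect

def find_region(start_keys, row_key):
    """Return the index of the region whose [start, next_start) range
    holds row_key. start_keys must be sorted; the first region starts
    at the empty key ''. Illustrative sketch only."""
    # bisect_right finds the first start key strictly greater than
    # row_key; the region owning row_key is the one just before it.
    return bisect.bisect_right(start_keys, row_key) - 1

# Three regions: ['', 'g'), ['g', 'p'), ['p', <end>)
starts = ["", "g", "p"]
print(find_region(starts, "alex"))   # -> 0
print(find_region(starts, "otis"))   # -> 1
print(find_region(starts, "scott"))  # -> 2
```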




Logical View: Regions are
                              Shards
                     Regions are “atoms of distribution”

                     Each region is assigned to a single RegionServer
                     (an HBase cluster slave)

                       Rows of a particular Region are served by a
                       single RS (cluster slave)

                       Regions are distributed evenly across RSs

                       A Region has a configurable max size

                     When a region reaches its max size (or on request)
                     it is split into two smaller regions, which
                     can be assigned to different RSs



Logical View: Regions on
                              Cluster
                     [Diagram: a client talks to the ZooKeeper quorum and the
                     HMaster; each RegionServer hosts several Regions, spread
                     across the cluster slaves]
Logical View: Regions Load

                     It is essential for Regions under load
                     to be evenly distributed across
                     the cluster

                     It is the HBase user’s job to make sure
                     the above is true. Note: even
                     distribution of Regions over the cluster
                     doesn’t imply that the load is evenly
                     distributed



Logical View: Regions Load

                     Take into account that rows are stored in sorted
                     order by row key

                     Make sure you don’t write rows with sequential
                     keys, to avoid RS hotspotting*
                        When writing data with monotonically increasing/decreasing
                        keys, data is written to one RS at a time

                     Pre-split the table upon creation
                        Starting with a single region means using one RS for some time

                     In general, splitting can be expensive

                     Increase the max region size to reduce splits
      * see https://github.com/sematext/HBaseWD
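A common remedy for hotspotting (and the idea behind HBaseWD) is to prefix sequential keys with a bounded, hash-derived “salt”, spreading consecutive writes over N key buckets. A hypothetical sketch of the idea, not the actual HBaseWD API (`N_BUCKETS` and `salted_key` are illustrative names):

```python
import hashlib

N_BUCKETS = 8  # number of salt prefixes; assumed/tunable

def salted_key(row_key: str) -> str:
    """Prefix the key with a deterministic bucket id so that
    monotonically increasing keys land on different regions."""
    bucket = int(hashlib.md5(row_key.encode()).hexdigest(), 16) % N_BUCKETS
    return f"{bucket}_{row_key}"

# Sequential keys scatter across buckets:
keys = [salted_key(f"login_2012-03-01.00:00:{i:02d}") for i in range(5)]
print(keys)
# To scan the original key range, a reader issues N_BUCKETS parallel
# scans, one per prefix, and merges the results.
```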



Logical View: Slow RSs
                      When load is distributed evenly, watch for the
                      slowest RSs (HBase slaves)

                        Since every region is served by a single RS,
                        one slow RS can drag down cluster
                        performance, e.g. when:

                           data is written to multiple RSs at an
                           even pace (random value-based row keys)

                           data is being read from many RSs when
                           doing a scan



Physical View




Physical View: Write/Read Flow
                     [Diagram: a write goes from the client (optionally via
                     the client-side HTable buffer) to the RegionServer
                     hosting the target Region; each Region has one Store
                     per CF, with an in-memory MemStore that is flushed to
                     HFiles on HDFS; every write is also recorded in the
                     Write-Ahead Log. Reads consult the MemStore and the
                     HFiles of each Store.]
Physical: Speed up Writing

                     Enabling & increasing the client-side buffer reduces
                     the number of RPC operations

                       warn: possible loss of buffered data

                          in case of client failure; design for failover

                          in case of write failure (networking/server-
                          side issues); can be handled on the client

                     Disabling the WAL increases write speed

                       warn: possible data loss in case of RS failure

                     Use the bulk import functionality (writes HFiles
                     directly, which can later be added to HBase)
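The client-side buffer simply batches mutations and ships them in one RPC once a size threshold is crossed. A pure-Python sketch of the mechanism, not the actual HBase client code (class and parameter names are illustrative):

```python
class BufferedWriter:
    """Batch puts and flush them together once the buffer exceeds
    max_buffer_bytes: fewer round trips, at the cost of losing
    unflushed puts if the client dies. Illustrative sketch."""

    def __init__(self, send_rpc, max_buffer_bytes=2 * 1024 * 1024):
        self.send_rpc = send_rpc          # callable taking a list of puts
        self.max_buffer_bytes = max_buffer_bytes
        self.buffer, self.buffered_bytes = [], 0
        self.rpc_count = 0

    def put(self, row_key: bytes, value: bytes):
        self.buffer.append((row_key, value))
        self.buffered_bytes += len(row_key) + len(value)
        if self.buffered_bytes >= self.max_buffer_bytes:
            self.flush()

    def flush(self):
        if self.buffer:
            self.send_rpc(self.buffer)    # one RPC for the whole batch
            self.rpc_count += 1
            self.buffer, self.buffered_bytes = [], 0

sent = []
w = BufferedWriter(sent.append, max_buffer_bytes=100)
for i in range(20):
    w.put(b"row%02d" % i, b"x" * 20)      # 25 bytes per put
w.flush()                                  # flush the tail explicitly
print(w.rpc_count, sum(len(batch) for batch in sent))  # -> 5 20
```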




Physical: Memstore Flushes
                     When a memstore is flushed, N HFiles are created
                     (one per CF)

                     The memstore size which triggers flushing is configured
                     on two levels:

                        per RS: % of heap occupied by memstores

                        per table: size in MB of a single memstore (per CF)
                        of a Region

                     When a Region’s memstores flush, the memstores of all
                     CFs are flushed

                        Uneven data amounts between CFs cause too many
                        flushes & the creation of too many HFiles (one per
                        CF every time)

                        In most cases having one CF is the best design
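To see how the two levels interact, here is back-of-the-envelope arithmetic. The numbers below (40% of heap for memstores, 128 MB per-region flush size) are assumptions for illustration only; check the defaults of your HBase version:

```python
heap_gb = 8
memstore_heap_pct = 40   # assumed per-RS limit (% of heap)
flush_size_mb = 128      # assumed per-region (per-CF) flush size

global_limit_mb = heap_gb * 1024 * memstore_heap_pct // 100
# Roughly how many regions can fill their memstores before the
# RS-wide limit forces flushes regardless of per-region size:
regions_before_pressure = global_limit_mb // flush_size_mb
print(global_limit_mb, regions_before_pressure)  # -> 3276 25
```

With many more active regions than that, flushes happen under global memory pressure, producing many small HFiles.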

Physical: Memstore Flushes
                     Important: there are memstore size
                     thresholds which cause writes to be blocked,
                     so slow memstore flushes and overuse of
                     memory by memstores can cause write perf
                     degradation

                       Hint: watch the flush queue size metric on
                       RSs

                     At the same time, the more memory the memstore
                     uses, the better the writing/reading perf
                     (unless it reaches those “write blocking”
                     thresholds)



Physical: Memstore Flushes

                        [Screenshot: an example of a healthy memstore
                        flush pattern, as seen in SPM monitoring*]

                * http://sematext.com/spm/index.html
Physical: HFiles Compaction
                     HFiles are periodically compacted into bigger
                     HFiles containing the same data

                       Reading from fewer HFiles is faster

                     Important: there’s a configured max number of files
                     in a Store which, when reached, causes writes to block

                       Hint: watch the compaction queue size metric on
                       RSs

                     [Diagram: reads hit the Store’s MemStore (per CF)
                     and its HFiles; compaction merges the HFiles]
Physical: Data Locality
                       RSs are usually collocated with HDFS DataNodes

                     [Diagram: each slave node runs a RegionServer (HBase),
                     a TaskTracker (MapReduce) and a DataNode (HDFS)]
Physical: Data Locality
                     HBase tries to assign Regions to RSs so that
                     Region data is stored physically on the same node,
                     but this sometimes fails:

                       after a Region splits there’s no guarantee that
                       there’s a node that has all blocks (HDFS level)
                       of the new Region, and

                       no guarantee that HBase will not re-assign this
                       Region to a different RS in the future (even
                       distribution of Regions takes preference over
                       data locality)

                     There’s ongoing work towards better preserving
                     data locality


Physical: Data Locality
                     Also, data locality can break when:

                        Adding new slaves to the cluster

                        Removing slaves from the cluster

                            Incl. node failures

                     Hint: look at the networking IO between slaves when
                     writing/reading data; it should be minimal

                     Important:

                        make sure HDFS is well balanced (use the balancer tool)

                        try to rebalance Regions in the HBase cluster if possible
                        (an HBase Master restart will do that) to regain data
                        locality

                        pre-split the table on creation to limit (ideally avoid)
                        splits and region movement; managing splits manually
                        sometimes helps




Schema Design
                       (very briefly)




Schema: row keys
                     Using the row key (or a key range) is the most
                     efficient way to retrieve data from HBase

                       Row key design is a major part of schema design

                       Note: no secondary indices are available out of
                       the box


                                  Row Key                  Data
                      ‘login_2012-03-01.00:09:17’   d:{‘user’:‘alex’}
                                    ...                    ...
                      ‘login_2012-03-01.23:59:35’   d:{‘user’:‘otis’}
                      ‘login_2012-03-02.00:00:21’   d:{‘user’:‘david’}
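Row keys like the ones above combine an entity and a timestamp so that related rows sort and scan together. A small sketch of building and range-scanning such keys; the `login_<timestamp>` format mirrors the table, while the helper names and the dict-based scan are illustrative stand-ins for real HBase scans:

```python
def login_key(ts: str) -> str:
    """Compose a row key so that all logins sort chronologically."""
    return f"login_{ts}"

rows = {
    login_key("2012-03-01.00:09:17"): {"user": "alex"},
    login_key("2012-03-01.23:59:35"): {"user": "otis"},
    login_key("2012-03-02.00:00:21"): {"user": "david"},
}

def scan(rows, start, stop):
    """Emulate an HBase scan: all keys in [start, stop), sorted."""
    return [(k, v) for k, v in sorted(rows.items()) if start <= k < stop]

# All logins on 2012-03-01, selected purely by key range:
day = scan(rows, "login_2012-03-01", "login_2012-03-02")
print([v["user"] for _, v in day])  # -> ['alex', 'otis']
```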




Schema: row keys
                     Redundancy is OK!
                        warn: changing two rows in HBase is not an atomic operation



                                  Row Key                       Data
                     ‘login_2010-01-01.00:09:17’          d:{‘user’:‘alex’}
                                    ...                          ...
                     ‘login_2012-03-01.23:59:35’          d:{‘user’:‘otis’}
                     ‘alex_2010-01-01.00:09:17’         d:{‘action’:‘login’}
                                    ...                          ...
                     ‘otis_2012-03-01.23:59:35’         d:{‘action’:‘login’}
                     ‘alex_login_2010-01-01.00:09:17’     d:{‘device’:’pc’}
                                    ...                          ...
                     ‘otis_login_2012-03-01.23:59:35’   d:{‘device’:‘mobile’}
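Since there are no secondary indices, the same event can be written under several row keys, one per access pattern, exactly as in the table above. A sketch of producing the redundant keys for one login event (the helper name is hypothetical); remember that the resulting writes are not atomic across rows:

```python
def index_keys(user: str, action: str, ts: str) -> list:
    """Row keys for the same event, one per query pattern:
    by action+time, by user+time, and by user+action+time."""
    return [
        f"{action}_{ts}",         # scan all logins by time
        f"{user}_{ts}",           # scan one user's activity
        f"{user}_{action}_{ts}",  # scan one user's logins
    ]

print(index_keys("alex", "login", "2010-01-01.00:09:17"))
# -> ['login_2010-01-01.00:09:17',
#     'alex_2010-01-01.00:09:17',
#     'alex_login_2010-01-01.00:09:17']
```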




Schema: Relations
                        Not relational

                        No joins

                        Denormalization is OK! Use ‘nested entities’

                              Row Key              Data

                        ‘student_abaranau’    d:{
                                                student_firstname:Alex,
                                                student_lastname:Baranau,
                                                professor_math_firstname:David,
                                                professor_math_lastname:Smart,
                                                professor_cs_firstname:Jack,
                                                professor_cs_lastname:Weird,
                                              }

                        ‘prof_dsmart’         d:{...}

                     (student and professor have a many-to-many relation;
                     the student row nests its professors’ data, and each
                     professor also has a row of his own)

Schema: row key/CF/qual size

                     HBase stores cells individually

                        great for “sparse” data

                        the row key, CF name and column name are stored with
                        each cell, which may affect the amount of data to be
                        stored and managed

                           keep them short

                           serialize and store many values in a single cell

                              Row Key                     Data
                                           d:{
                                           s:Alex#Baranau#cs#2009,
                            ‘s_abaranau’   p_math:David#Smart,
                                           p_cs:Jack#Weird,
                                           }
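Packing several values into one cell trades per-cell key overhead for a small (de)serialization step. A sketch using the ‘#’-separated format from the table above (assuming, as the format requires, that no value contains the separator):

```python
SEP = "#"

def pack(*values: str) -> str:
    """Serialize several values into one cell value."""
    assert not any(SEP in v for v in values), "separator must not occur in values"
    return SEP.join(values)

def unpack(cell: str) -> list:
    """Split a packed cell value back into its fields."""
    return cell.split(SEP)

cell = pack("Alex", "Baranau", "cs", "2009")
print(cell)           # -> Alex#Baranau#cs#2009
print(unpack(cell))   # -> ['Alex', 'Baranau', 'cs', '2009']
```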



Other/Advanced
                         Topics




Advanced: Co-Processors
                     The Coprocessors API (HBase 0.92.0+) allows you to:

                       execute (querying/aggregation/etc.)
                       logic on the server side (you may think of
                       it as stored procedures in an RDBMS)

                       perform auditing of actions performed
                       server-side (you may think of it as
                       triggers in an RDBMS)

                       apply security rules for data access

                       and much more


Other: Use Compression
                      Using compression:

                         reduces the amount of data stored on disk

                         reduces the amount of data transferred when an RS
                         reads data from a non-local replica

                         increases CPU usage, but CPU isn’t usually the
                         bottleneck

                      Favor compression speed over compression ratio

                         SNAPPY is good

                      Use wisely:

                         e.g. avoid wasting CPU cycles on compressing images

                             compression can be configured on a per-CF basis,
                             so storing non-compressible data in a separate CF
                             sometimes helps

                         data blocks are uncompressed in memory; make sure
                         this doesn’t cause OOMEs

                         note: when scanning (seeking data to return for the
                         scan), many data blocks may be uncompressed even if
                         none of the data from those blocks is returned
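The speed-vs-ratio tradeoff is easy to see empirically. HBase would use SNAPPY or LZO; the stdlib `zlib` below is only a stand-in to illustrate that a faster, lighter compression level typically yields a somewhat worse ratio in exchange for less CPU:

```python
import zlib

data = ("row key, CF name and column name stored with each cell " * 2000).encode()

fast = zlib.compress(data, level=1)   # favors speed
best = zlib.compress(data, level=9)   # favors ratio
print(len(data), len(fast), len(best))
# Both round-trip back to the original bytes:
assert zlib.decompress(fast) == zlib.decompress(best) == data
```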


Other: Use Monitoring



                        Ganglia, Cacti, others*. Just use it!




                * http://sematext.com/spm/index.html

Qs?




                     Sematext is hiring!
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQL
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity Plan
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine Tuning
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easy
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and Cons
 
DMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special EditionDMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special Edition
 
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo DayH2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024
 

Intro to HBase Internals & Schema Design (for HBase users)

  • 1. Intro to HBase Internals & Schema Design (for HBase Users)
       Alex Baranau, Sematext International, 2012
       Monday, July 9, 12
  • 2. About Me
       Software Engineer at Sematext International
       http://blog.sematext.com/author/abaranau
       @abaranau
       http://github.com/sematext (abaranau)
  • 3. Agenda
       Logical view
       Physical view
       Schema design
       Other/Advanced topics
  • 4. Why?
       Why should I (an HBase user) care about HBase internals?
       HBase will not automatically adjust cluster settings to the optimum based on usage patterns
       Schema design, table settings (defined upon creation), etc. depend on HBase implementation aspects
  • 6. Logical View: Regions
       An HBase cluster serves multiple tables, distinguished by name
       Each table consists of rows
       Each row contains cells: (row key, column family, column, timestamp) -> value
       A table is split into Regions (table shards, each containing full rows), defined by start and end row keys
  • 7. Logical View: Regions are Shards
       Regions are “atoms of distribution”
       Each region is assigned to a single RegionServer (RS, an HBase cluster slave)
       Rows of a particular Region are served by a single RS
       Regions are distributed evenly across RSs
       A region has a configurable max size
       When a region reaches max size (or on request) it is split into two smaller regions, which can be assigned to different RSs
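Because each region covers a contiguous [start key, end key) range, locating the region for a row key is just a sorted lookup over region start keys. A minimal sketch of that idea (not HBase's actual client code, which caches region locations fetched from the META table; the region names and boundaries here are made up):

```python
import bisect

# Hypothetical region boundaries: each region covers [start_key, next_start_key).
# The first region's start key is the empty string (open-ended on the left).
region_start_keys = ["", "row-d", "row-m", "row-t"]
region_names = ["region-1", "region-2", "region-3", "region-4"]

def find_region(row_key):
    """Return the region whose key range contains row_key."""
    # bisect_right finds the first start key greater than row_key;
    # the region just before that position serves the row.
    idx = bisect.bisect_right(region_start_keys, row_key) - 1
    return region_names[idx]
```

For example, `find_region("row-f")` falls between the "row-d" and "row-m" boundaries and resolves to "region-2".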
  • 8. Logical View: Regions on Cluster
       (cluster diagram: client talks to a ZooKeeper quorum and HMaster; Regions are hosted across multiple RegionServers)
  • 9. Logical View: Regions Load
       It is essential for Regions under load to be evenly distributed across the cluster
       It is the HBase user’s job to make sure the above is true
       Note: even distribution of Regions over the cluster doesn’t imply that the load is evenly distributed
  • 10. Logical View: Regions Load
        Take into account that rows are stored in an ordered manner
        Make sure you don’t write rows with sequential keys, to avoid RS hotspotting*
          When writing data with monotonically increasing/decreasing keys, data is written to one RS at a time
        Pre-split the table upon creation
          Starting with a single region means using one RS for some time
          In general, splitting can be expensive
        Increase max region size
        * see https://github.com/sematext/HBaseWD
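One common way around hotspotting with monotonically increasing keys (the approach taken by HBaseWD, referenced above) is to prefix each key with a small deterministic “salt” bucket so consecutive writes spread across regions; the cost is that a logical range scan must fan out over all buckets. A minimal sketch; the bucket count, hashing scheme, and key format below are illustrative, not HBaseWD’s actual format:

```python
NUM_BUCKETS = 8  # illustrative; in practice roughly proportional to the region count

def salted_key(row_key: str) -> str:
    # Derive the bucket deterministically from the key itself, so the
    # same logical key always lands under the same salted key.
    bucket = sum(row_key.encode()) % NUM_BUCKETS
    return f"{bucket}-{row_key}"

def scan_prefixes(logical_prefix: str):
    # A scan over one logical key range becomes NUM_BUCKETS scans,
    # one per possible salt prefix.
    return [f"{b}-{logical_prefix}" for b in range(NUM_BUCKETS)]
```

The write path distributes sequential timestamps across buckets, while reads reconstruct every possible prefix and merge the results.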
  • 11. Logical View: Slow RSs
        When load is distributed evenly, watch for the slowest RSs (HBase slaves)
        Since every region is served by a single RS, one slow RS can drag down overall cluster performance, e.g. when:
          data is written into multiple RSs at an even pace (random value-based row keys)
          data is being read from many RSs when doing a scan
  • 13. Physical View: Write/Read Flow
        (diagram: HTable client with client-side buffer; writes go to a RegionServer’s Region, i.e. a per-CF Store whose MemStore flushes to HFiles, and to the Write-Ahead Log; reads are served from MemStores and HFiles; files live in HDFS)
  • 14. Physical: Speed up Writing
        Enabling & increasing the client-side buffer reduces the number of RPC operations
          warn: possible loss of buffered data in case of client failure; design for failover
          in case of write failure (networking/server-side issues), this can be handled on the client
        Disabling the WAL increases write speed
          warn: possible data loss in case of RS failure
        Use the bulk import functionality (writes HFiles directly, which can later be added to HBase)
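The client-side buffer trade-off above can be sketched as a tiny batching writer: puts accumulate locally and are shipped in one RPC when the buffer fills, which is exactly why unflushed data is lost if the client dies. The class and parameter names here are illustrative, not the real HTable API:

```python
class BufferedWriter:
    """Illustrative client-side write buffer (not the actual HBase client)."""

    def __init__(self, send_rpc, buffer_size=3):
        self.send_rpc = send_rpc        # callable that ships one batch to the server
        self.buffer_size = buffer_size  # real clients size this in bytes, not rows
        self.buffer = []

    def put(self, row_key, value):
        self.buffer.append((row_key, value))
        if len(self.buffer) >= self.buffer_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.send_rpc(list(self.buffer))  # one RPC for the whole batch
            self.buffer.clear()

# Usage: count how many "RPCs" 5 puts cost with a buffer of 3.
rpcs = []
w = BufferedWriter(rpcs.append, buffer_size=3)
for i in range(5):
    w.put(f"row{i}", b"v")
w.flush()  # explicit flush: otherwise the last 2 puts would sit (and could be lost) in the buffer
```

Five puts cost two RPCs instead of five; the explicit final `flush()` is the part client code tends to forget.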
  • 15. Physical: Memstore Flushes
        When a memstore is flushed, N HFiles are created (one per CF)
        The memstore size which triggers flushing is configured on two levels:
          per RS: % of heap occupied by memstores
          per table: size in MB of a single memstore (per CF) of a Region
        When a Region’s memstores flush, the memstores of all its CFs are flushed
        Uneven data amounts between CFs cause too many flushes & creation of too many HFiles (one per CF every time)
        In most cases having one CF is the best design
  • 16. Physical: Memstore Flushes
        Important: there are memstore size thresholds which cause writes to be blocked, so slow memstore flushes and memstore overuse of memory can degrade write performance
        Hint: watch the flush queue size metric on RSs
        At the same time, the more memory the memstore uses, the better for write/read performance (unless it reaches those “write blocking” thresholds)
  • 17. Physical: Memstore Flushes
        (monitoring screenshot*: example of a healthy flush pattern)
        * http://sematext.com/spm/index.html
  • 18. Physical: HFiles Compaction
        HFiles are periodically compacted into bigger HFiles containing the same data
        Reading from fewer HFiles is faster
        Important: there’s a configured max number of files in a Store which, when reached, causes writes to block
        Hint: watch the compaction queue size metric on RSs
        (diagram: reads hit the per-CF MemStore and the Store’s HFiles)
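Conceptually, a compaction merges several sorted key-to-(timestamp, value) files into one, keeping the newest cell per key, which is why subsequent reads touch fewer files. A simplified sketch (real HFiles are block-structured binary files, and real compactions also honor version limits, TTLs, and delete markers):

```python
def compact(hfiles):
    """Merge lists of (row_key, timestamp, value) cells, keeping the newest per key."""
    newest = {}
    for hfile in hfiles:
        for key, ts, value in hfile:
            # Keep the cell with the highest timestamp for each row key.
            if key not in newest or ts > newest[key][0]:
                newest[key] = (ts, value)
    # Output stays sorted by row key, like a real HFile.
    return [(k, ts, v) for k, (ts, v) in sorted(newest.items())]

# Two "HFiles" with an overlapping row: the newer write to row1 wins.
hfile1 = [("row1", 100, "a"), ("row2", 100, "b")]
hfile2 = [("row1", 200, "a2"), ("row3", 150, "c")]
merged = compact([hfile1, hfile2])
```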
  • 19. Physical: Data Locality
        RSs are usually collocated with HDFS DataNodes
        (diagram: each slave node runs a RegionServer, a TaskTracker and a DataNode; HDFS, MapReduce and HBase layers)
  • 20. Physical: Data Locality
        HBase tries to assign Regions to RSs so that Region data is stored physically on the same node, but this sometimes fails:
          after Region splits, there’s no guarantee that any single node has all blocks (at the HDFS level) of the new Region
          and no guarantee that HBase will not re-assign this Region to a different RS in the future (even distribution of Regions takes preference over data locality)
        There’s ongoing work towards better preserving data locality
  • 21. Physical: Data Locality
        Also, data locality can break when:
          adding new slaves to the cluster
          removing slaves from the cluster, incl. node failures
        Hint: look at networking IO between slaves when writing/reading data; it should be minimal
        Important:
          make sure HDFS is well balanced (use the balancer tool)
          try to rebalance Regions in the HBase cluster if possible (an HBase Master restart will do that) to regain data locality
          pre-split the table on creation to limit (ideally avoid) splits and region movement; managing splits manually sometimes helps
  • 22. Schema Design (very briefly)
  • 23. Schema: row keys
        Using a row key (or key range) is the most efficient way to retrieve data from HBase
        Row key design is a major part of schema design
        Note: no secondary indices are available out of the box

        Row Key                          Data
        ‘login_2012-03-01.00:09:17’      d:{‘user’:‘alex’}
        ...                              ...
        ‘login_2012-03-01.23:59:35’      d:{‘user’:‘otis’}
        ‘login_2012-03-02.00:00:21’      d:{‘user’:‘david’}
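Because rows are stored sorted by key, the `login_<timestamp>` layout above turns “all logins in a time window” into a cheap range scan between two keys. A sketch of that idea over a plain sorted list (HBase does the same lookup against its sorted store files):

```python
import bisect

# Rows as HBase would store them: sorted lexicographically by row key.
rows = sorted([
    ("login_2012-03-01.00:09:17", {"user": "alex"}),
    ("login_2012-03-01.23:59:35", {"user": "otis"}),
    ("login_2012-03-02.00:00:21", {"user": "david"}),
])

def scan(start_key, stop_key):
    """Return rows with start_key <= key < stop_key, like an HBase Scan."""
    keys = [k for k, _ in rows]
    lo = bisect.bisect_left(keys, start_key)
    hi = bisect.bisect_left(keys, stop_key)
    return rows[lo:hi]

# All logins on 2012-03-01: scan from that day's prefix to the next day's.
march_first = scan("login_2012-03-01", "login_2012-03-02")
```

Note how the stop key is simply the next day’s prefix: lexicographic ordering of fixed-format timestamps makes the range boundaries trivial to construct.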
  • 24. Schema: row keys
        Redundancy is OK!
        warn: changing two rows in HBase is not an atomic operation

        Row Key                                 Data
        ‘login_2010-01-01.00:09:17’             d:{‘user’:‘alex’}
        ...                                     ...
        ‘login_2012-03-01.23:59:35’             d:{‘user’:‘otis’}
        ‘alex_2010-01-01.00:09:17’              d:{‘action’:‘login’}
        ...                                     ...
        ‘otis_2012-03-01.23:59:35’              d:{‘action’:‘login’}
        ‘alex_login_2010-01-01.00:09:17’        d:{‘device’:’pc’}
        ...                                     ...
        ‘otis_login_2012-03-01.23:59:35’        d:{‘device’:‘mobile’}
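Maintaining those redundant rows is the application’s job: each event is written under several row keys, one per access pattern. A sketch of such a helper, with the key formats taken from the table above (remember the caveat on the slide: the resulting multiple puts are not atomic in HBase):

```python
def index_rows(user, action, timestamp, payload):
    """Build the redundant row keys for one event, one per query pattern."""
    return {
        f"{action}_{timestamp}": dict(payload, user=user),   # query by action + time
        f"{user}_{timestamp}": dict(payload, action=action), # query by user + time
        f"{user}_{action}_{timestamp}": dict(payload),       # query by user + action + time
    }

# One login event fans out into three rows.
rows = index_rows("alex", "login", "2010-01-01.00:09:17", {"device": "pc"})
```

Each dict entry would become one put; a client failure between puts leaves the “index” rows partially written, which reads must tolerate.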
  • 25. Schema: Relations
        Not relational
        No joins
        Denormalization is OK! Use ‘nested entities’

        Row Key                Data
        ‘student_abaranau’     d:{ student_firstname:Alex, student_lastname:Baranau,
                                   professor_math_firstname:David, professor_math_lastname:Smart,
                                   professor_cs_firstname:Jack, professor_cs_lastname:Weird }
        ‘prof_dsmart’          d:{...}
  • 26. Schema: row key/CF/qual size
        HBase stores cells individually; great for “sparse” data
        The row key, CF name and column name are stored with each cell, which may affect the amount of data to be stored and managed
          keep them short
          serialize and store many values into a single cell

        Row Key          Data
        ‘s_abaranau’     d:{ s:Alex#Baranau#cs#2009, p_math:David#Smart, p_cs:Jack#Weird }
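Packing several fields into one cell, as on the slide, can be sketched with a simple delimiter scheme; real applications often use Avro, Protobuf, or another serializer that avoids ambiguous delimiters altogether:

```python
DELIM = "#"

def pack(*fields):
    # Assumes no field contains the delimiter; a real serializer should
    # escape fields or length-prefix them instead of relying on this.
    return DELIM.join(fields)

def unpack(cell_value):
    return cell_value.split(DELIM)

# One cell carries four fields, so the row key / CF / qualifier
# overhead is paid once instead of four times.
cell = pack("Alex", "Baranau", "cs", "2009")
```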
  • 27. Other/Advanced Topics
  • 28. Advanced: Co-Processors
        The CoProcessors API (HBase 0.92.0+) allows you to:
          execute (querying/aggregation/etc.) logic on the server side (you may think of it as stored procedures in an RDBMS)
          perform auditing of actions performed on the server side (you may think of it as triggers in an RDBMS)
          apply security rules for data access
          and much more cool stuff
  • 29. Other: Use Compression
        Using compression:
          reduces the amount of data to be stored on disks
          reduces the amount of data to be transferred when an RS reads data not from a local replica
          increases the amount of CPU used, but CPU isn’t usually a bottleneck
        Favor compression speed over compression ratio; SNAPPY is good
        Use wisely:
          e.g. avoid wasting CPU cycles on compressing images
          compression can be configured on a per-CF basis, so storing non-compressible data in a separate CF sometimes helps
          data blocks are uncompressed in memory; don’t let this cause an OOME
          note: when scanning (seeking data to return for the scan), many data blocks can be uncompressed even if none of the data from those blocks will be returned
  • 30. Other: Use Monitoring
        Ganglia, Cacti, others*
        Just use it!
        * http://sematext.com/spm/index.html
  • 31. Qs?
        Sematext is hiring!