2. Agenda
• Big Data Overview
• Hadoop Theory and Practice
• MapReduce in Action
• NoSQL
• MPP Databases
• What's hot?
3. Big Five IT Trends
• Mobile
• Social Media
• Cloud Computing
• Consumerization of IT
• Big Data
4. Big Data Era
• "The coming of the Big Data Era is a chance for everyone in the technology world to decide into which camp they fall, as this era will bring the biggest opportunity for companies and individuals in technology since the dawn of the Internet."
– Rob Thomas, IBM Vice President, Business Development
6. Big Data – a growing torrent
• 2 billion Internet users worldwide.
• 5 billion mobile phones in use in 2010.
• 30 billion pieces of content shared on Facebook every month.
• 7 TB of data processed by Twitter every day.
• 10 TB of data processed by Facebook every day.
• 40% projected annual growth in global data generated.
• 235 TB of data collected by the US Library of Congress by April 2011.
• 15 out of 17 sectors in the US have more data stored per company than the US Library of Congress.
• 90% of the data in the world today was created in the last two years alone.
7. Data-Rich World
• Data capture and collection
– Sensor data, mobile devices, social networks, web clickstreams, traffic monitoring, multimedia content, smart energy meters, DNA analysis, industrial machines in the age of the Internet of Things. Consumer activities – communicating, browsing, buying, sharing, searching – create enormous trails of data.
• Data storage
– Cost of storage has fallen tremendously
– Seagate 3 TB Barracuda @ $149.99 from Amazon.com (~4.9¢/GB)
8. The technology world has changed
• Users: 2,000 users vs. a potential user base of 2 billion
• Applications: online transaction systems vs. web applications
• Application architecture: centralized scale-up vs. distributed scale-out
• Infrastructure: a commodity box has more computational power than a supercomputer a decade ago
• 80% of the world's information is unstructured.
• Unstructured information is growing at 15 times the rate of structured information.
• Database architecture has not kept pace.
9. A Sample Case – Big Data
• ShopSavvy 5 – a mobile shopping app
– 40,000+ retailers
– Millions of shoppers
– Millions of retail store locations
– 240M+ product pictures and user action shots
– 3,040M+ product attributes (color, size, features, etc.)
– 14,720M+ prices from retailers
– 100+ price requests per second
– Delivers real-time inventory and price information
10. A Sample Case – Big Data (Cont.)
• ShopSavvy architecture
– An entirely new platform, ProductCloud, leverages the latest Big Data tools such as Cassandra, Hadoop, and Mahout, and maintains huge histories of prices, products, scans and locations numbering in the hundreds of billions of items.
– The open architecture layers tools like Mahout on top of the platform to enable new features such as price prediction, user recommendations, product categorization and product resolution.
13. What is "Big Data"?
• The term Big Data applies to information that can't be processed or analyzed using traditional processes or tools.
• Big Data creates value in several ways:
– Creating transparency
– Enabling experimentation to discover needs, expose variability, and improve performance
– Segmenting populations to customize actions
– Replacing/supporting human decision making with machine algorithms
– Innovating new business models, products, and services, e.g. risk estimation
14. Big Data = Big Value
• $300 billion potential annual value to US health care – more than double the total annual health care spending in Spain.
• $350 billion potential annual value to Europe's public-sector administration – more than the GDP of Greece.
• $600 billion potential annual consumer surplus from using personal location data globally.
• 60% potential increase in retailers' operating margins possible with big data.
• 140,000 to 190,000 more deep-analytics talent positions, and 1.5 million data-savvy managers, needed to take full advantage of big data in the United States.
• Gartner predicts that "Big Data will deliver transformational benefits to enterprises within 2 to 5 years."
15. Characteristics of Big Data
• Volume – terabytes → zettabytes
• Variety – structured, semi-structured, unstructured data
• Velocity – batch → streaming data, real-time
17. Traditional Data Warehouse vs. Big Data
• Traditional warehouses
– Mostly ideal for analyzing structured data and producing insights with known and relatively stable measurements.
• Big Data solutions
– Ideal for analyzing not only raw structured data, but also semi-structured and unstructured data from a wide variety of sources.
– Ideal when all of the data needs to be analyzed, versus a sample of the data.
– Ideal for iterative and exploratory analysis when business measures are not predetermined.
18. CAP Theorem
• CAP
– Consistency
– Availability
– Tolerance to network Partitions
• Consistency models
– Strong consistency
– Weak consistency
– Eventual consistency
• Architectures
– CA: traditional relational databases
– AP: NoSQL databases
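Eventual consistency in an AP system can be illustrated with a toy simulation: two replicas accept writes independently during a partition, then converge via last-write-wins merging once they can sync. This is a minimal sketch under assumed semantics (class and method names are hypothetical, not from any real database):

```python
class Replica:
    """Toy replica using last-write-wins (LWW) conflict resolution.

    Each value carries a timestamp; when replicas exchange state,
    the newer write wins, so all replicas eventually converge."""

    def __init__(self):
        self.store = {}  # key -> (value, timestamp)

    def put(self, key, value, ts):
        current = self.store.get(key)
        if current is None or ts >= current[1]:
            self.store[key] = (value, ts)

    def get(self, key):
        entry = self.store.get(key)
        return entry[0] if entry else None

    def sync_from(self, other):
        # Anti-entropy: merge the other replica's entries into ours.
        for key, (value, ts) in other.store.items():
            self.put(key, value, ts)

# Both replicas stay available under partition (the "AP" choice) ...
a, b = Replica(), Replica()
a.put("cart", ["book"], ts=1)          # write reaches replica a only
b.put("cart", ["book", "pen"], ts=2)   # later write reaches replica b only
# ... so reads may briefly disagree (weak consistency), but after the
# partition heals an anti-entropy sync converges both replicas.
a.sync_from(b)
b.sync_from(a)
assert a.get("cart") == b.get("cart") == ["book", "pen"]
```

Real systems such as Dynamo and Cassandra use richer conflict-resolution machinery (vector clocks, per-cell timestamps), but the convergence idea is the same.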
19. ACID vs. BASE
• ACID
– Atomicity
– Consistency
– Isolation
– Durability
• BASE
– Basically available
– Soft-state
– Eventual consistency
20. Lower Priorities
• No complex querying functionality
– No support for SQL
– CRUD operations through database-specific APIs
• No support for joins
– Materialize simple join results in the relevant row
– Give up normalization of data?
• No support for transactions
– Most data stores support single-row transactions
– Tunable consistency and availability (e.g., Dynamo)
⇒ Achieve high scalability
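"Materializing the join in the relevant row" means duplicating related data into each row at write time instead of joining at read time. A small sketch of the trade-off (all table and field names are hypothetical):

```python
# Normalized, relational style: related data lives in separate tables,
# so a read-time "join" is needed to assemble an order with its user.
users = {"u1": {"name": "Ana"}}
orders = {"o1": {"user_id": "u1", "item": "book"}}

def order_with_user(order_id):
    order = orders[order_id]
    return {**order, "user": users[order["user_id"]]}  # join at query time

# Denormalized, NoSQL style: the join result is materialized into the
# row itself, so a single-key read returns everything (at the cost of
# duplicated user data that must be kept in sync on update).
orders_denorm = {
    "o1": {"item": "book", "user": {"name": "Ana"}}
}

assert order_with_user("o1")["user"] == orders_denorm["o1"]["user"]
```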
21. Why sacrifice Consistency?
• It is a simple solution
– Nobody understands what sacrificing P means
– Sacrificing A is unacceptable on the Web
– Possible to push the problem to the app developer
• C is not needed in many applications
– Banks do not implement ACID (the classic example is wrong)
– Airline reservations only transact reads (huh?)
– MySQL et al. ship by default at a lower isolation level
• Data is noisy and inconsistent anyway
– Making it, say, 1% worse does not matter
22. Important Design Goals
• Scale out: designed for scale
– Commodity hardware
– Low-latency updates
– Sustain high update/insert throughput
• Elasticity – scale up and down with load
• High availability – downtime implies lost revenue
– Replication (with multi-mastering)
– Geographic replication
– Automated failure recovery
23. A Brief History of Hadoop
• Hadoop is an open source project of the Apache Software Foundation.
• Hadoop has its origins in Apache Nutch, an open source web search engine, itself a part of the Lucene project.
• In 2003, Google published a paper that described the architecture of Google's distributed filesystem, called GFS.
• In 2004, Google published the paper that introduced MapReduce.
• It is a framework written in Java, originally developed by Doug Cutting, the creator of Apache Lucene, who named it after his son's toy elephant.
• 2004 – Initial versions of what are now the Hadoop Distributed Filesystem and MapReduce implemented.
• January 2006 – Doug Cutting joins Yahoo!.
• February 2006 – Adoption of Hadoop by the Yahoo! Grid team.
• April 2006 – Sort benchmark (10 GB/node) run on 188 nodes in 47.9 hours.
24. A Brief History of Hadoop (Cont.)
• January 2007 – Research cluster reaches 900 nodes.
• In January 2008, Hadoop was made its own top-level project at Apache. By this time, Hadoop was being used by many other companies, such as Facebook and the New York Times.
• In February 2008, Yahoo! announced that its production search index was being generated by a 10,000-core Hadoop cluster.
• In April 2008, Hadoop broke a world record to become the fastest system to sort a terabyte of data.
• March 2009 – 17 clusters with a total of 24,000 nodes.
• April 2009 – Won the minute sort by sorting 500 GB in 59 seconds (on 1,400 nodes) and the 100-terabyte sort in 173 minutes (on 3,400 nodes).
25. Hadoop Ecosystem
• Common – A set of components for distributed filesystems and general I/O
• Avro – A serialization system for efficient data storage
• MapReduce – A distributed data processing model and execution environment that runs on large clusters of commodity machines
• HDFS – A distributed filesystem
• Pig – A data flow language for exploring very large datasets
• Hive – A distributed data warehouse system
• HBase – A distributed, column-oriented database
• ZooKeeper – A distributed, highly available coordination service
• Sqoop – A tool for efficiently moving data between relational databases and HDFS
26. Hadoop Distributed File System – HDFS
• Hadoop filesystem that runs on top of the existing OS file system
• Designed to handle very large files with streaming data access patterns
• Uses blocks to store a file or parts of a file
– 64 MB (default), 128 MB (recommended) – compare to 4 KB in UNIX
– One HDFS block is backed by multiple operating-system blocks
• Advantages of blocks
– High throughput
– Fixed size – easy to calculate how many fit on a disk
– A file can be larger than any single disk in the network
– Fits well with replication to provide fault tolerance and availability
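Because blocks are fixed-size, working out how many blocks a file occupies is simple arithmetic; a quick Python sketch using the default block size from this slide (the function name is hypothetical):

```python
import math

BLOCK_SIZE = 64 * 1024**2  # HDFS default block size: 64 MB

def num_blocks(file_size_bytes):
    """Blocks needed for a file; the last block may be partially filled.

    Unlike a fixed-size disk partition, a partial HDFS block only
    occupies as much underlying storage as its actual data."""
    return math.ceil(file_size_bytes / BLOCK_SIZE)

# A 1 GB file occupies 16 full 64 MB blocks
assert num_blocks(1024**3) == 16
# A 200 MB file needs 4 blocks: 3 full, plus 1 holding the last 8 MB
assert num_blocks(200 * 1024**2) == 4
```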
29. Hadoop Node Types
• HDFS nodes
– NameNode
• One per cluster; manages the filesystem namespace and metadata; large memory requirements; keeps the entire filesystem metadata in memory
– DataNode
• Many per cluster; manages blocks of data and serves them to clients
• MapReduce nodes
– JobTracker
• One per cluster; receives job requests, schedules and monitors MapReduce jobs on TaskTrackers
– TaskTracker
• Many per cluster; each TaskTracker spawns Java Virtual Machines to run your map or reduce tasks
32. Before MapReduce…
• Large-scale data processing was difficult!
– Managing hundreds or thousands of processors
– Managing parallelization and distribution
– I/O scheduling
– Status and monitoring
– Fault/crash tolerance
• MapReduce provides all of these, easily!
33. MapReduce Overview
• What is it?
– Programming model used by Google
– A combination of the Map and Reduce models with an associated implementation
– Used for processing and generating large data sets
• How does it solve the previously mentioned problems?
– MapReduce is highly scalable and can be used across many computers.
– Many small machines can be used to process jobs that normally could not be processed by a large machine.
37. Map Abstraction
• Input is a key/value pair
– Key is a reference to the input value
– Value is the data set on which to operate
• Evaluation
– Function defined by the user
– Applied to every value in the input
– Might need to parse the input
• Produces a new list of key/value pairs
– Can be a different type from the input pair
39. Reduce Abstraction
• Starts with intermediate key/value pairs
• Ends with finalized key/value pairs
• Starting pairs are sorted by key
• An iterator supplies the values for a given key to the Reduce function
• Typically a function that:
– Starts with a large number of key/value pairs
• One key/value pair for each word in all files being grepped (including multiple entries for the same word)
– Ends with very few key/value pairs
• One key/value pair for each unique word across all the files, with the number of instances summed into this entry
• Broken up so a given worker works with input of the same key
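The Map and Reduce abstractions above can be made concrete with the classic word-count example. The following is a minimal single-process Python simulation of the three phases (no Hadoop involved; all function names are illustrative, not a real API):

```python
from collections import defaultdict

def map_fn(key, value):
    """Map: (filename, file contents) -> list of (word, 1) pairs."""
    return [(word, 1) for word in value.split()]

def reduce_fn(key, values):
    """Reduce: (word, iterator of counts) -> (word, total count)."""
    return (key, sum(values))

def run_mapreduce(inputs):
    # Map phase: apply map_fn to every input key/value pair.
    intermediate = []
    for key, value in inputs.items():
        intermediate.extend(map_fn(key, value))
    # Shuffle/sort phase: group intermediate values by key, sorted,
    # so each reduce call sees all values for exactly one key.
    groups = defaultdict(list)
    for key, value in intermediate:
        groups[key].append(value)
    # Reduce phase: one call per unique key; values come via an iterator.
    return dict(reduce_fn(k, iter(v)) for k, v in sorted(groups.items()))

counts = run_mapreduce({"a.txt": "the cat sat", "b.txt": "the dog"})
assert counts == {"cat": 1, "dog": 1, "sat": 1, "the": 2}
```

In a real cluster the map calls run in parallel across many machines and the shuffle moves data over the network, but the programmer writes only the two small functions.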
42. Why is this approach better?
• Creates an abstraction for dealing with complex overhead
– The computations are simple; the overhead is messy
• Removing the overhead makes programs much smaller and thus easier to use
– Less testing is required as well. The MapReduce libraries can be assumed to work properly, so only user code needs to be tested.
• Division of labor is also handled by the MapReduce libraries, so programmers only need to focus on the actual computation
43. MapReduce Advantages
• Automatic parallelization:
– Depending on the size of the raw input data ⇒ instantiate multiple map tasks
– Similarly, depending upon the number of intermediate <key, value> partitions ⇒ instantiate multiple reduce tasks
• The run-time handles:
– Data partitioning
– Task scheduling
– Machine failures
– Inter-machine communication
• Completely transparent to the programmer/analyst/user
44. MapReduce: A step backwards?
• Don't need 1,000 nodes to process petabytes:
– Parallel DBs do it in fewer than 100 nodes
• No support for schemas:
– Sharing data across multiple MR programs is difficult
• No indexing:
– Wasteful access to unnecessary data
• Non-declarative programming model:
– Requires highly skilled programmers
• No support for JOINs:
– Requires multiple MR phases for the analysis
45. MapReduce vs. Parallel DB
• Web application data is inherently distributed across a large number of sites:
– Funneling data to DB nodes is a failed strategy
• Distributed and parallel programs are difficult to develop:
– Failures and dynamics in the cloud
• Indexing:
– Sequential disk access is 10 times faster than random access
– Not clear whether indexing is the right strategy
• Complex queries:
– The DB community needs to JOIN hands with MR
46. NoSQL Movement
• Initially used for: "open-source relational databases that did not expose a SQL interface"
• Popularly used for: "non-relational, distributed data stores that often did not attempt to provide ACID guarantees"
• Gained widespread popularity through a number of open source projects
– HBase, Cassandra, MongoDB, Redis, …
• Scale-out, elasticity, flexible data model, high availability
47. Data in the Real World
• There are real data sets that don't fit the relational model, nor modern ACID databases.
• Fit what into where?
– Trees
– Semi-structured data
– Web content
– Multi-dimensional cubes
– Graphs
48. NoSQL Database Technology
• Not only SQL
– No schema; more dynamic data model
– Denormalization; no joins
– CAP theorem
– Auto-sharding (elasticity)
– Distributed query support
– Integrated caching
50. Key-Value Stores
• Key-value data model
– Key is the unique identifier
– Key is the granularity for consistent access
– Value can be structured or unstructured
• Gained widespread popularity
– In house: Bigtable (Google), PNUTS (Yahoo!), Dynamo (Amazon)
– Open source: HBase, Hypertable, Cassandra, Voldemort
• Popular choice for the modern breed of web applications
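The key-value data model reduces to three operations on opaque values; a toy in-memory sketch (the class and its API are hypothetical, not modeled on any specific store):

```python
class KeyValueStore:
    """Toy in-memory key-value store: the key is the unit of access
    and of consistency; the value is opaque to the store."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        # Single-key write: the granularity for consistent access.
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

    def delete(self, key):
        self._data.pop(key, None)

store = KeyValueStore()
# Values may be unstructured (a blob) or structured (a nested document).
store.put("user:42", {"name": "Ana", "cart": ["book", "pen"]})
assert store.get("user:42")["name"] == "Ana"
store.delete("user:42")
assert store.get("user:42") is None
```

Production systems layer partitioning, replication, and persistence behind this same narrow get/put/delete interface.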
51. Cassandra – A NoSQL Database
• An open source, distributed store for structured data that scales out on cheap, commodity hardware
• Simplicity of operations
• Transparency
• Very high availability
• Painless scale-out
• Solid, predictable performance on commodity and cloud servers
53. Column-Oriented Data Structure
• Tuples: { "row key": { "column name": ("value", timestamp) } }
• insert("carol", { "car": "daewoo", 2011/11/15 15:00 })
• Sample rows – each cell carries its own timestamp, and rows are sparse (not every row has every column):

Row key | Columns
jim     | age: 36 (2011/01/01 12:35), car: camaro (2011/01/01 12:35), gender: M (2011/01/01 12:35)
carol   | age: 37 (2011/01/01 12:35), car: subaru (2011/01/01 12:35), gender: F (2011/01/01 12:35)
johnny  | age: 12 (2011/01/01 12:35), gender: M (2011/01/01 12:35)
suzy    | age: 10 (2011/01/01 12:35), gender: F (2011/01/01 12:35)
54. Massively Parallel Processing (MPP) DBs
• Vertica (HP)
• Greenplum (EMC)
• Netezza (IBM)
• Teradata (NCR)
• Kognitio
– In-memory analytics
– No need for data partitioning or indexing
– Scans data in excess of 650 million rows per second per server; linear scalability means 100 nodes can scan over 65 billion rows per second!
55. Vertica
• Supports logical relational models, SQL, ACID transactions, JDBC
• Columnar store architecture
– 50x–1,000x faster by eliminating costly disk I/O
– Offers aggressive data compression to reduce storage costs by up to 90%
• 20x–100x faster than traditional RDBMS data warehouses; runs on commodity hardware
• Scale-out MPP architecture
• Real-time loading and querying
• In-database analytics
• Automatic high availability
• Natively supports grid computing
• Natively supports MapReduce and Hadoop
56. Machine Learning
• Machine learning systems automate decision making on data, automatically producing outputs like product recommendations or groupings.
• WEKA – a Java-based framework and GUI for machine learning algorithms.
• Mahout – an open source framework that can run common machine learning algorithms on massive datasets.
59. References
• Big Data: The Next Frontier for Innovation, Competition, and Productivity, McKinsey Global Institute, May 2011
• Understanding Big Data, IBM, 2012
• NoSQL Database Technology whitepaper, Couchbase
• Big Data and Cloud Computing: Current State and Future Opportunities, 2011
• Hadoop: The Definitive Guide
• How Do I Cassandra?, Nov 2011
• BigDataUniversity.com
• youtube.com/ibmetinfo
• …