This document discusses frameworks in the context of big data solutions. It makes several key points:
1. Hadoop provides a stable core infrastructure for building big data solutions, with layers for resource management, distributed processing, the file system, and coordination.
2. When going beyond the Hadoop core, select frameworks with a stable approach, flexible functionality, and an active community, and prefer contributing to existing solutions over creating new ones.
3. In large clusters, framework performance overhead is paid for directly with additional computing resources, so frameworks should be chosen with their overhead in mind. A newly created framework limits your future flexibility more the more users it has.
4.
CAN CHIMPS DO BIG DATA?
A real book with a shocking title, available for pre-order. This is exactly what is happening in the Big Data industry right now.
Roses are red.
Violets are blue.
We do Hadoop.
What about YOU?
8.
FRAMEWORK
An essential supporting structure of a building, vehicle, or object.
In computer programming, a software framework is an abstraction in which software providing generic functionality can be selectively changed by additional user-written code, thus providing application-specific software.
9.
FRAMEWORKS DICTATE APPROACH
Frameworks exist to reduce the amount of work through reuse. The more you can reuse, the better. But complex frameworks are too massive to be flexible. They limit your solutions. Doing Big Data, you usually build a unique solution.
10.
SO DO I NEED A UNIQUE FRAMEWORK FOR EVERY BIG DATA PROJECT?
13.
3 SIMPLE HADOOP PRINCIPLES: INFRASTRUCTURE
An OPEN SOURCE framework for big data, covering both distributed storage and processing.
Provides RELIABILITY and fault tolerance by SOFTWARE design. Example: the file system uses a replication factor of 3 as the default.
Horizontal scalability from a single computer up to thousands of nodes.
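The default replication factor mentioned above is just an HDFS configuration property; a minimal `hdfs-site.xml` fragment making it explicit might look like this (the property name is standard, and 3 is the stock default):

```xml
<configuration>
  <!-- Number of copies HDFS keeps of each block; 3 is the stock default. -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
```

This is how "reliability by software design" shows up in practice: losing a node loses nothing, because two more copies of every block exist elsewhere.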
14.
HADOOP INFRASTRUCTURE AS A FRAMEWORK
• Formed from a large number of unified nodes.
• Nodes are replaceable.
• Simple hardware without sophisticated I/O.
• Reliability by software.
• Horizontal scalability.
16.
How everyone (who usually sells something) depicts Hadoop complexity:
[Diagram: a SMALL, CUTE CORE surrounded by a GREAT BIG INFRASTRUCTURE, with YOUR APPLICATION kept SAFE and FRIENDLY.]
17.
How it looks from the real user's point of view:
[Diagram: the Hadoop CORE sits inside a COMPLETELY UNKNOWN INFRASTRUCTURE; YOUR APPLICATION is the only thing you understand, with fear of the rest and a feeling that something is wrong.]
19.
WHAT BRICKS SHOULD WE TAKE TO BUILD A BIG DATA SOLUTION?
• We should build unique solutions using the same approaches.
• So bricks have to be flexible.
20.
WHAT BRICKS SHOULD WE TAKE TO BUILD A BIG DATA SOLUTION?
• We should build a robust solution with high reliability.
• Bricks have to be simple and replaceable.
21.
WHAT BRICKS SHOULD WE TAKE TO BUILD A BIG DATA SOLUTION?
• We should be able to change our solution over time.
• Bricks have to be small.
22.
WHAT BRICKS SHOULD WE TAKE TO BUILD A BIG DATA SOLUTION?
• As flexible as possible.
• Focused on a specific aspect, without requiring a large infrastructure.
• Simple and interchangeable.
23.
HADOOP 2.x CORE AS A FRAMEWORK: BASIC BLOCKS
• ZooKeeper as the coordination service.
• HDFS as the file system layer.
• YARN as resource management.
• MapReduce as the basic distributed processing option.
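For readers new to the last block: the MapReduce model can be sketched in plain Java, without any Hadoop dependency. This is a hypothetical word-count illustration of the map/shuffle/reduce phases only; real Hadoop jobs implement `Mapper` and `Reducer` classes and run distributed.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Plain-Java sketch of the MapReduce model: a map phase emits
// (key, value) pairs, a shuffle groups them by key, and a reduce
// phase folds each group into a single value.
public class WordCountSketch {
    static Map<String, Long> wordCount(List<String> lines) {
        return lines.stream()
                // map: split each input line into words (the keys)
                .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\s+")))
                .filter(w -> !w.isEmpty())
                // shuffle + reduce: group identical words, count each group
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
    }

    public static void main(String[] args) {
        System.out.println(wordCount(Arrays.asList("to be or not to be")));
    }
}
```

The value of the framework is that it runs exactly this pattern across thousands of nodes with fault tolerance, which is precisely the part you do not want to re-implement.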
26.
Hadoop: don't do it yourself. REUSE AS IS.
• The BASIC infrastructure is quite reusable to build with, at least unless you know it very well.
• Do you have the manpower to re-implement it? You'd better contribute in that case.
30.
WHAT DO WE USUALLY EXPECT FROM A NEW FRAMEWORK?
• FASTER: frameworks provide a higher layer of abstraction, so coding goes faster.
• CHEAPER: some part of the work is already done.
• BETTER: top framework contributors are usually top engineers.
31.
OOOPS...
• FASTER: frameworks provide a higher layer of abstraction, so coding goes faster.
• CHEAPER: some part of the work is already done.
• BETTER: top framework contributors are usually top engineers.
But also:
• Additional cost of new framework maintenance.
• Additional time to learn the new approach.
• Lots of defects due to lack of experience with the new framework.
32.
(The same expectations and costs as on the previous slide, now with a verdict.)
NONEXISTENT
ONLY TWO?
33.
JUST A FEW EXAMPLES
• Spring Batch: the main thread that started the Spring context forgot to check the task completion status.
• Apache Spark: persistence to disk was limited to 2 GB due to the ByteBuffer int limitation.
• Apache HBase still has no effective guard against client RPC timeouts.
• What about binary data like hashes? Still no effective out-of-the-box support.
ONLY REAL EXPERIENCE.
NEW FRAMEWORKS ARE ALWAYS A HEADACHE.
38.
SO BIG DATA TECHNOLOGY BOOKS ARE ALWAYS OUTDATED
Great books, but by the time they are printed they are already old. Read the original e-books with updates.
40.
FRAMEWORKS IN BIG DATA: HAMSTERS vs HIPSTERS
"We hate frameworks! Only hardcore, only JDK!"
"Give me a framework for every step!"
41.
FRAMEWORKS IN BIG DATA: HAMSTERS vs HIPSTERS
Apache HBase is the top OLTP solution for Hadoop, and Hive can provide an SQL connector to it.
• Hive over HBase is the simplest way to access your HBase data for analytics, but it carries significant overhead even compared to MapReduce access.
• Rule of thumb: HBase direct RPC for OLTP; MapReduce or Spark when you need performance; Hive when you need faster implementation.
Crazy idea: Hive running over HBase table snapshots.
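The Hive-over-HBase connector mentioned above is set up with Hive's HBase storage handler. A hedged sketch of the DDL (table, column family, and column names here are hypothetical; the storage handler class and mapping syntax are the standard Hive HBase integration):

```sql
-- Expose an existing HBase table 'events' to Hive as an external table.
-- ':key' maps the HBase row key; 'cf:payload' maps one column of family 'cf'.
CREATE EXTERNAL TABLE hbase_events (rowkey STRING, payload STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,cf:payload')
TBLPROPERTIES ('hbase.table.name' = 'events');
```

One statement buys you SQL analytics over OLTP data, which is why it is the fastest path to implement, and also why every query pays the handler's scan-and-deserialize overhead.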
43.
ETL: FRAMEWORKS COST
• We do object transformations when we do ETL from SQL to NoSQL objects.
• Practically any ORM framework eats at least 10% of the CPU resource.
• Is that a small or a big amount? It depends on who pays...
[Diagram: an SQL server JOINs Table1 through Table4 and feeds parallel ETL streams into BIG DATA shards.]
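Where that ORM tax comes from can be sketched in a few lines: the same SQL-row-to-NoSQL-document transformation done by hand and done generically via reflection, the mechanism most ORM-like frameworks lean on. Class and field names are hypothetical; the point is that the generic path does extra work per object, and a large ETL stream pays it millions of times.

```java
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

// Two ways to turn an SQL row object into a NoSQL-style key/value document.
public class EtlMappingSketch {
    public static class Row {
        public long id;
        public String name;
        public Row(long id, String name) { this.id = id; this.name = name; }
    }

    // Hand-written mapping: one direct field access per column, no framework.
    static Map<String, Object> mapDirect(Row r) {
        Map<String, Object> doc = new LinkedHashMap<>();
        doc.put("id", r.id);
        doc.put("name", r.name);
        return doc;
    }

    // Generic, framework-style mapping: reflection per field, per object.
    static Map<String, Object> mapReflective(Object o) {
        Map<String, Object> doc = new LinkedHashMap<>();
        try {
            for (Field f : o.getClass().getFields()) {
                doc.put(f.getName(), f.get(o));
            }
        } catch (IllegalAccessException e) {
            throw new IllegalStateException(e);
        }
        return doc;
    }

    public static void main(String[] args) {
        Row r = new Row(42L, "shard-1");
        // Same resulting document, different per-object cost.
        System.out.println(mapDirect(r).equals(mapReflective(r)));
    }
}
```

Both paths produce the same document; the reflective one trades CPU for not having to write (and maintain) a mapper per table, which is exactly the trade the next slides price out.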
44.
10% overhead...
• Single desktop application: computers usually have unused CPU power, so a 10% overhead is hardly noticeable and the user accepts it.
• The user pays for the electricity and hardware.
45.
10% overhead...
• Lots of mobile clients. They can tolerate a 10% performance degradation; the application still works.
• But all of your users pay for your 10% performance overhead.
46.
10% overhead...
• Single server solution. OK, you usually have 10% to spare.
• So you pay for the overhead, but you don't notice it until the capacity is needed. You still have the same 1 server.
47.
10% overhead...
• A 10% overhead on 1000 servers with a properly distributed job means up to 100 additional servers needed.
• That is a direct maintenance cost.
IN CLUSTERS YOU DIRECTLY PAY FOR OVERHEAD WITH ADDITIONAL CLUSTER NODES.
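The arithmetic behind that slide is a one-liner, shown here as a sketch (the numbers are the slide's example; the assumption is that the job is evenly distributed, so a fixed relative overhead converts directly into extra nodes):

```java
// Back-of-the-envelope cost of framework overhead in a cluster:
// a fixed relative overhead on a well-distributed job becomes extra nodes.
public class OverheadCost {
    // Extra nodes needed to absorb `overhead` (e.g. 0.10 for 10%) on `nodes` servers.
    static long extraNodes(long nodes, double overhead) {
        return (long) Math.ceil(nodes * overhead);
    }

    public static void main(String[] args) {
        System.out.println(extraNodes(1000, 0.10)); // prints 100
    }
}
```

On one desktop the same 10% disappears into idle CPU; at 1000 nodes it is a hundred machines on your maintenance bill.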
49.
MAKING YOUR OWN FRAMEWORK
• The most common reason for your own framework is ... growing complexity and support cost.
• New framework development and migration can be cheaper than supporting the existing solutions.
• You don't want to depend on an existing framework's development.
50.
MAKING A FRAMEWORK: LAZY STYLE
• First build multiple solutions, then integrate them into a single approach.
• GOOD: you only integrate what is already used, so less wasted work.
• BAD: you act reactively.
51.
MAKING A FRAMEWORK: PROACTIVE STYLE
• You improve the framework before the actual need arises.
• GOOD: you are guided by approach, not need, so you usually have a cleaner design.
• BAD: you are more likely to build things that are not needed.
52.
OUTSIDE YOUR TEAM
• Great, you have additional workforce. But from now on you have external support tickets.
• Usually you can still control your users, so major changes remain possible, but harder.
• Pay more attention to documentation and training for other teams. It pays back.
53.
OUTSIDE YOUR COMPANY
• You receive additional workforce; people start contributing to your framework. Don't be so optimistic.
• Community support is good, but you have to support community applications.
• You are no longer flexible: you don't control the users of your framework.
54.
LESSONS LEARNED: CORE
• Avoid inventing a unique approach for every Big Data solution. It is critical to have good, relatively stable ground.
• Your Big Data CORE architecture should be a layered infrastructure constructed from small, simple, unified, replaceable components (the UNIX way).
• Be ready for packaging issues, but try to reuse as much as possible at the CORE layer.
55.
LESSONS LEARNED: BEYOND THE CORE
• When selecting frameworks to extend your big data core, prefer solutions with a stable approach, flexible functionality, and a healthy community. Revise your approaches, as the world changes fast.
• Prefer contributing to a good existing solution over starting your own.
• The more frequently you change something, the higher-layer tool you need for it. But in big data you directly pay for any performance overhead.
• If you have started your own framework: the more popular it becomes, the less freedom you have to modify it, so flexibility alone is a bad reason to start one.