Pig vs. MapReduce
By Donald Miner
NYC Pig User Group
August 21, 2013
About Don
@donaldpminer
dminer@clearedgeit.com
I’ll be talking about
What is Java MapReduce good for?
Why is Pig better in some ways?
When should I use which?
When do I use Pig?? Let's get to the point:
Can I use Pig to do this?
  YES → USE PIG!
  NO → TRY TO USE PIG ANYWAYS! Did that work?
    YES → great, you used Pig after all
    NO → OK… use Java MapReduce
Why?
• If you can do it with Pig, save yourself the pain
• Almost always, developer time is worth more than machine time
• Trying something out in Pig is not risky (time-wise) – you might learn something about your problem
  – Ok, so it turned out to look a bit like a hack, but who cares?
  – Ok, so it ended up being slow, but who cares?
Use the right tool for the job
(tools pictured on the slide: Pig, Java MapReduce, HTML)
Get the job done faster and better on the Big Data Problem™
Which is faster, Pig or Java MapReduce?
Hypothetically, any Pig job could be rewritten using MapReduce… so Java MR can only be faster.
The TRUE battle is the Pig optimizer vs. the developer.
Are you better than the Pig optimizer at figuring out how to string multiple jobs together (and other things)?
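For a concrete picture of what you would be competing against, here is a minimal, hedged sketch (the file, relation, and field names are all made up): both summaries read the same input, and Pig's multi-query execution can share the load and much of the map-side work between them instead of running two fully separate pipelines.

-- Hypothetical script: two summaries over one data set.
logs = LOAD 'weblogs' AS (user:chararray, url:chararray, bytes:long);

by_user    = GROUP logs BY user;
user_bytes = FOREACH by_user GENERATE group AS user, SUM(logs.bytes) AS total_bytes;
STORE user_bytes INTO 'bytes_per_user';

by_url   = GROUP logs BY url;
url_hits = FOREACH by_url GENERATE group AS url, COUNT(logs) AS hits;
STORE url_hits INTO 'hits_per_url';

Run as one script, Pig plans both STOREs together; translated naively by hand (or run as two scripts), you pay for the input scan twice.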
Things that are hard to express in Pig
• When something is hard to express succinctly in Pig, you are going to end up with a slow job
  (i.e., building something up out of several primitives)
• Some examples:
  – Tricky groupings or joins
  – Combining lots of data sets
  – Tricky usage of the distributed cache (replicated join)
  – Tricky cross products
  – Doing crazy stuff in nested FOREACH
• In these cases, Pig is going to spawn off a bunch of MapReduce jobs, which could have been done with less
This is a change in "speed" that doesn't just come from the cost of abstraction.
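To make "several primitives" concrete, here is a hedged sketch (all relation, field, and path names are hypothetical) of the kind of analytic that fans out into a chain of MapReduce jobs: a join, a grouping with a nested FOREACH, and a global sort each contribute their own job or reduce phase.

-- Hypothetical sketch: each step below maps onto MapReduce work,
-- so Pig chains several jobs to express one analytic.
events = LOAD 'events' AS (user:chararray, item:chararray, ts:long);
users  = LOAD 'users'  AS (user:chararray, region:chararray);

joined  = JOIN events BY user, users BY user;   -- a default join is its own reduce-side job
grouped = GROUP joined BY users::region;
top_items = FOREACH grouped {                   -- nested FOREACH over each region's bag
    sorted = ORDER joined BY events::ts DESC;
    recent = LIMIT sorted 10;
    GENERATE group AS region, recent;
};
ranked = ORDER top_items BY region;             -- a global ORDER adds more jobs (sampling plus the sort)
STORE ranked INTO 'top_recent_items_by_region';

None of this is wrong, but when a problem is mostly made of these corner cases, a single carefully written MapReduce job can sometimes cover what Pig spreads across several.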
The Fancy MAPREDUCE keyword!
To the rescue: Pig has a relational operator called MAPREDUCE that allows you to plug in a Java MapReduce job!
Use this to replace only the tricky things… don't throw out all the stuff Pig is good at.
B = MAPREDUCE 'wordcount.jar' STORE A INTO 'inputDir' LOAD 'outputDir'
    AS (word:chararray, count:int) `org.myorg.WordCount inputDir outputDir`;
Have the best of both worlds!
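For a sense of how this fits into a real script, here is a hedged sketch: the MAPREDUCE line is the slide's own example, while the load, filter, and store steps around it (paths and field names included) are hypothetical.

-- Ordinary Pig up front: load and trim the data.
raw = LOAD 'docs' AS (id:chararray, text:chararray);
A   = FOREACH raw GENERATE text;

-- Hand off just the tricky part to a native MapReduce job.
B = MAPREDUCE 'wordcount.jar'
        STORE A INTO 'inputDir' LOAD 'outputDir'
        AS (word:chararray, count:int)
        `org.myorg.WordCount inputDir outputDir`;

-- Back in Pig for the easy stuff.
frequent = FILTER B BY count > 100;
STORE frequent INTO 'frequent_words';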
Somewhat related:
Is developer time worthless?
Does speed really matter?
On one side of the scale:
  time spent writing the Pig job + (runtime of the Pig job × times the job is run) + time spent maintaining the Pig job
On the other:
  time spent writing the MR job + (runtime of the MR job × times the job is run) + time spent maintaining the MR job
When does the scale tip in one direction or the other?
Will the job run many times? Or once?
Are your Java programmers sloppy?
Is the Java MR significantly faster in this case?
Is 14 minutes really that different from 20 minutes?
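A rough back-of-envelope using the slide's numbers and an assumed schedule: if the Pig version takes 20 minutes per run and the hand-written MapReduce version 14, a job that runs once a day saves 6 minutes × 365 ≈ 36 hours of machine time over a year. If the Java version costs even a few extra developer-days to write, debug, and maintain, the scale tips toward Pig; if the job runs every hour across a large cluster, the savings multiply and it may tip the other way.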
Why is development so much faster in Pig?
• Fewer Java-level bugs to work out
  … but the bugs you do hit might be harder to figure out
• Fewer lines of code simply means less typing
• Compilation and deployment can significantly slow down incremental improvements
• Easier to read: the purpose of the analytic is more straightforward (the context is self-evident)
Avoiding Java!
• Not everyone is a Java expert
  … especially all those SQL guys you are repurposing
• The higher level of abstraction makes Pig easier to learn and read
  – I've had both software engineers and SQL developers become productive in Pig in <4 days
Oh, you want to learn Hadoop? Read this first!
But can I really?
not really.
Pig is good at moving data sets between states… but not so good at manipulating the data itself.
Examples: advanced string operations, math, complex aggregates, dates, NLP, model building.
You need user-defined functions (UDFs)
I’ve seen too many people try to avoid UDFs
UDFs are powerful:
manipulate bags after a GROUP BY
Plug into external libraries like NLTK or OpenNLP
Loaders for complex custom data types
Exploiting the order of data
Ok, so I still want to avoid Java
Do you work by yourself???
Give someone else the task of writing you a UDF!
(they are bite-size little projects)
Current UDF support in 0.11.1:
Java, Python, JavaScript, Ruby, Groovy
These can help you avoid Java if you simply don’t like it (me)
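A hedged sketch of what that looks like in practice (the script, file, and function names are made up): the Pig side just registers the UDF and calls it on the bags that come out of a GROUP BY, while the actual string-mangling or NLP lives in a small Python (Jython) function that someone else can write for you.

-- Hypothetical: my_udfs.py defines top_terms(bag), which returns
-- the most common terms in a bag of text fields.
REGISTER 'my_udfs.py' USING jython AS my_udfs;

docs    = LOAD 'documents' AS (author:chararray, text:chararray);
by_auth = GROUP docs BY author;

-- The UDF does the heavy lifting on each grouped bag.
summary = FOREACH by_auth GENERATE group AS author,
                                   my_udfs.top_terms(docs.text);
STORE summary INTO 'terms_by_author';

The point is the division of labor: the script stays readable Pig, and the bite-size UDF is the only piece that needs a general-purpose language.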
Why did you write a book on MR Design Patterns if you think you should do stuff in Pig??
Good question!
• I've seen plenty of devs do DUMB stuff in Pig just because there is a keyword for it
  (e.g., silly joins, ordering, using the PARALLEL keyword wrong)
• Knowing how MapReduce works will result in you writing better Pig
• In particular: how do Pig optimizations and relational keywords translate into MapReduce design patterns?
SCENARIO #1:
JUST CHANGE THAT ONE LITTLE LINE
A STORY ABOUT MAINTAINABILITY

IT guy: IT guy here. Your MapReduce job is blowing up the cluster, how do I fix this thing?
Developer: Ah, that's pretty easy to fix. Just comment out that first line in the mapper function.
IT guy: Ok, how do I do that?
Developer: Oh, that's easy. First, check the code out of git. Then, download, install and configure Eclipse. Don't forget to set your CLASSPATH! Ok, now comment out line #851 in /home/itguy/java/src/com/hadooprus/hadoop/hadoop/mapreducejobs/jobs/codes/analytic/mymapreducejob/mapper.java. Now, compile the .jar. And ship the .jar to the cluster, replacing the old one. Ok, now run the hadoop jar command. Don't forget the CLASSPATH! Did that work?
IT guy: No.
Developer: … Ah, let's try something else and do that again!
SCENARIO #2:
JUST CHANGE THAT ONE LITTLE LINE
(this time with Pig)

IT guy: IT guy here. Your MapReduce job is blowing up the cluster, how do I fix this thing?
Developer: Ah, that's pretty easy to fix. Just comment out that line that says "FILTER blah blah" and save the file.
IT guy: Ok, thanks!
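For a sense of why this version of the conversation is so short, here is a hedged sketch of a hypothetical script where the fix really is commenting out one line, saving the file, and re-running, with no compile or deploy step:

logs = LOAD 'weblogs' AS (user:chararray, url:chararray, bytes:long);
-- logs = FILTER logs BY bytes > 1000000;   -- the "FILTER blah blah" line: comment it out and save
grouped = GROUP logs BY user;
totals  = FOREACH grouped GENERATE group AS user, SUM(logs.bytes) AS total_bytes;
STORE totals INTO 'bytes_per_user';

Because the FILTER reassigns the same alias, nothing downstream has to change when the line disappears.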
Pig: Deployment & Maintainability
• Don't have to worry about version mismatch (for the most part)
• You can have multiple Pig client libraries installed at once
• Takes compilation out of the build and deployment process
• Can make changes to scripts in place if you have to
• Can iteratively tweak scripts during development and debugging
• Fewer chances for the developer to write Java-level bugs
Some Caveats
• Hadoop Streaming provides some of these same benefits
• Big problems in both are still going to take time
• If you are using Java UDFs, you still need to compile them (which is why I use Python)
Unstructured Data
• Delimited data is pretty easy
• Pig has issues dealing with these out of the box:
  – Media: images, videos, audio
  – Time series: utilizing order of data, lists
  – Ambiguously delimited text
  – Log data: rows with different context/meaning/format
You can write custom loaders and tons of UDFs… but what's the point?
What about semi-structured data?
• Some forms are more natural than others
  – Well-defined JSON/XML schemas are usually OK
• Pig has trouble dealing with:
  – Complex operations on unbounded lists of objects (e.g., bags)
  – Very flexible schemas (think BigTable/HBase)
  – Poorly designed JSON/XML
Sometimes, it's just more pain than it's worth to try to do it in Pig.
Pig vs. Hive vs. MapReduce
• Same arguments apply for Hive vs. Java MR
• Using Pig or Hive doesn’t make that big of a difference
… but pick one because UDFs/Storage functions aren’t easily interchangeable
• I think you’ll like Pig better than Hive
(just like everyone likes emacs more than vi)
WRAP UP: AN ANALOGY (#1)
Pig is a scripting language; Hadoop's MapReduce is a compiled language.
Pig : MapReduce :: Python : C
WRAP UP: AN ANALOGY (#2)
Pig is a higher level of abstraction; Hadoop's MapReduce is a lower level of abstraction.
Pig : MapReduce :: SQL : C
A lot of the same arguments apply!
• Compilation
– Don’t have to compile Pig
• Efficiency of code
– Pig will be a bit less efficient (but…)
• Lines of code and verbosity
– Pig will have fewer lines of code
• Optimization
– Pig has more opportunities to do automatic optimization of queries
• Code portability
– The same Pig script will work across versions (for the most part)
• Code readability
– It should be easier to understand a Pig script
• Underlying bugs
– Underlying bugs in Pig can cause frustrating problems (thanks be to God for open source)
• Amount of control and space of possibilities
– There are fewer things you CAN do in Pig
Editor's Notes
1. Donald's talk will cover how to use native MapReduce in conjunction with Pig, including a detailed discussion of when users might be best served to use one or the other.