   “Cloud” refers to large Internet services
    like Google, Yahoo, etc that run on 10,000’s
    of machines
   More recently, “cloud computing” refers to
    services by these companies that let
    external customers rent computing cycles
    on their clusters
    • Amazon EC2: virtual machines at 10¢/hour, billed hourly
    • Amazon S3: storage at 15¢/GB/month

   Attractive features:
    • Scale: up to 100’s of nodes
    • Fine-grained billing: pay only for what you use
    • Ease of use: sign up with credit card, get root access
   Simple data-parallel programming
    model designed for scalability and fault-
    tolerance
   Pioneered by Google
    • Processes 20 petabytes of data per day


   Popularized by open-source Hadoop
    project
    • Used at Yahoo!, Facebook, Amazon, …
   At Google:
    • Index construction for Google Search
    • Article clustering for Google News
    • Statistical machine translation
   At Yahoo!:
    • “Web map” powering Yahoo! Search
    • Spam detection for Yahoo! Mail
   At Facebook:
    • Data mining
    • Ad optimization
    • Spam detection
[Slides: screenshots of Facebook Lexicon, www.facebook.com/lexicon]
   In research:
    • Astronomical image analysis (Washington)
    • Bioinformatics (Maryland)
    • Analyzing Wikipedia conflicts (PARC)
    • Natural language processing (CMU)
    • Particle physics (Nebraska)
    • Ocean climate simulation (Washington)
    • <Your application here>
   MapReduce architecture
   Example applications
   Getting started with Hadoop
   Higher-level languages over Hadoop: Pig
    and Hive
   Amazon Elastic MapReduce
1.   Scalability to large data volumes:
     • 1000’s of machines, 10,000’s of disks


2.   Cost-efficiency:
     • Commodity machines (cheap, but unreliable)
     • Commodity network
     • Automatic fault-tolerance (fewer administrators)
     • Easy to use (fewer programmers)
[Diagram: typical Hadoop cluster — racks of nodes, each rack connected through a rack switch to an aggregation switch]

   40 nodes/rack, 1000-4000 nodes in cluster
   1 Gbps bandwidth within rack, 8 Gbps out of rack
   Node specs (Yahoo terasort):
    8 x 2GHz cores, 8 GB RAM, 4 disks (= 4 TB?)

                 Image from http://wiki.apache.org/hadoop-data/attachments/HadoopPresentations/attachments/YahooHadoopIntro-apachecon-us-2008.pdf
Image from http://wiki.apache.org/hadoop-data/attachments/HadoopPresentations/attachments/aw-apachecon-eu-2009.pdf
1.   Cheap nodes fail, especially if you have
     many
     • Mean time between failures for 1 node = 3 years
     • Mean time between failures for 1000 nodes = 1 day (see the quick calculation after this list)
     • Solution: Build fault-tolerance into system

2.   Commodity network = low bandwidth
     • Solution: Push computation to the data

3.   Programming distributed systems is hard
     • Solution: Data-parallel programming model: users
      write “map” & “reduce” functions, system distributes
      work and handles faults
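A quick sanity check of the failure-rate claim above (my arithmetic, not from the original slides): with one failure per node every ~3 years, a 1000-node cluster sees a failure roughly daily.

# Back-of-the-envelope MTBF arithmetic for the claim above (illustrative sketch).
node_mtbf_days = 3 * 365          # one node fails about every 3 years
nodes = 1000
cluster_mtbf_days = node_mtbf_days / nodes
print(cluster_mtbf_days)          # ~1.1 days between failures somewhere in the cluster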
   Distributed file system (HDFS)
    • Single namespace for entire cluster
    • Replicates data 3x for fault-tolerance

   MapReduce framework
    • Executes user jobs specified as “map” and
      “reduce” functions
    • Manages work distribution & fault-tolerance
   Files split into 128MB blocks
   Blocks replicated across several datanodes (usually 3)
   Single namenode stores metadata (file names, block locations, etc)
   Optimized for large files, sequential reads
   Files are append-only

[Diagram: the namenode tracks File1’s blocks 1-4; each block is stored on three of the four datanodes]
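As a rough illustration of the block layout described above — a sketch with a made-up file size, made-up datanode names, and a simple rotating placement (real HDFS placement is rack-aware):

# Sketch: splitting a file into 128 MB blocks and assigning 3 replicas per block.
import math

BLOCK_SIZE = 128 * 1024 * 1024             # 128 MB
REPLICATION = 3
datanodes = ["dn1", "dn2", "dn3", "dn4"]   # hypothetical datanodes

def block_layout(file_size_bytes):
    num_blocks = math.ceil(file_size_bytes / BLOCK_SIZE)
    layout = {}
    for b in range(num_blocks):
        # rotate through the datanode list to pick 3 distinct replicas
        layout[b] = [datanodes[(b + r) % len(datanodes)] for r in range(REPLICATION)]
    return layout

print(block_layout(500 * 1024 * 1024))     # a 500 MB file -> 4 blocks, 3 replicas each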
   Data type: key-value records

   Map function:
           (Kin, Vin) → list(Kinter, Vinter)

   Reduce function:
        (Kinter, list(Vinter)) → list(Kout, Vout)
def mapper(line):
    foreach word in line.split():
        output(word, 1)


def reducer(key, values):
    output(key, sum(values))
[Diagram: Word Count dataflow through Input, Map, Shuffle & Sort, Reduce, Output. Three input lines ("the quick brown fox", "the fox ate the mouse", "how now brown cow") are mapped to (word, 1) pairs, grouped by key in the shuffle, and reduced to: ate 1, brown 2, cow 1, fox 2, how 1, mouse 1, now 1, quick 1, the 3]
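The dataflow above can be mimicked in a few lines of plain Python; this is a toy in-memory simulation of the map, shuffle/sort, and reduce phases (my sketch, not Hadoop code):

# Toy local simulation of MapReduce Word Count: map -> shuffle & sort -> reduce.
from collections import defaultdict

def mapper(line):
    for word in line.split():
        yield (word, 1)

def reducer(key, values):
    yield (key, sum(values))

def run_mapreduce(inputs, mapper, reducer):
    groups = defaultdict(list)            # shuffle & sort: group values by key
    for line in inputs:
        for key, value in mapper(line):
            groups[key].append(value)
    output = []
    for key in sorted(groups):            # reduce each key group
        output.extend(reducer(key, groups[key]))
    return output

lines = ["the quick brown fox", "the fox ate the mouse", "how now brown cow"]
print(run_mapreduce(lines, mapper, reducer))
# [('ate', 1), ('brown', 2), ('cow', 1), ('fox', 2), ('how', 1),
#  ('mouse', 1), ('now', 1), ('quick', 1), ('the', 3)]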
   Single master controls job execution on
    multiple slaves

   Mappers preferentially placed on same
    node or same rack as their input block
    • Minimizes network usage

   Mappers save outputs to local disk before
    serving them to reducers
    • Allows recovery if a reducer crashes
    • Allows having more reducers than nodes
   A combiner is a local aggregation function
    for repeated keys produced by same map
   Works for associative functions like sum,
    count, max
   Decreases size of intermediate data
   Example: map-side aggregation for Word
    Count:
            def combiner(key, values):
                output(key, sum(values))
[Diagram: the same Word Count dataflow with a combine step (Input, Map & Combine, Shuffle & Sort, Reduce, Output). The inputs are unchanged, but the middle map task now emits the, 2 after local combining instead of two separate the, 1 records; the final output is identical]
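To see the effect of the combiner, a map task's output can be pre-aggregated locally before the shuffle. A minimal sketch (real combiners run inside Hadoop's map task, and Hadoop may apply them zero or more times):

# Sketch: combine repeated keys produced by one map task before shuffling.
from collections import defaultdict

def combine(map_output):
    local = defaultdict(int)
    for key, value in map_output:
        local[key] += value               # works because sum is associative
    return list(local.items())

map_task_output = [("the", 1), ("fox", 1), ("ate", 1), ("the", 1), ("mouse", 1)]
print(combine(map_task_output))
# [('the', 2), ('fox', 1), ('ate', 1), ('mouse', 1)] -- 4 records shuffled instead of 5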
1. If a task crashes:
  • Retry on another node
     OK for a map because it has no dependencies
     OK for reduce because map outputs are on disk
  • If the same task fails repeatedly, fail the job or ignore that
    input block (user-controlled)


 Note: For these fault tolerance features to work,
  your map and reduce tasks must be side-effect-free
2. If a node crashes:
  • Re-launch its current tasks on other nodes
  • Re-run any maps the node previously ran
    Necessary because their output files were lost along
     with the crashed node
3. If a task is going slowly (straggler):
  • Launch second copy of task on another node
    (“speculative execution”)
  • Take the output of whichever copy finishes first, and kill
    the other


 Surprisingly important in large clusters
  • Stragglers occur frequently due to failing hardware,
    software bugs, misconfiguration, etc
  • Single straggler may noticeably slow down a job
   By providing a data-parallel
    programming model, MapReduce can
    control job execution in useful ways:
    • Automatic division of job into tasks
    • Automatic placement of computation near data
    • Automatic load balancing
    • Recovery from failures & stragglers


   User focuses on application, not on
    complexities of distributed computing
   MapReduce architecture
   Example applications
   Getting started with Hadoop
   Higher-level languages over Hadoop:
    Pig and Hive
   Amazon Elastic MapReduce
   Input: (lineNumber, line) records
   Output: lines matching a given pattern

   Map:
                 if(line matches pattern):
                     output(line)

   Reduce: identity function
    • Alternative: no reducer (map-only job)
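A concrete version of this map-only grep job, written in the style of a Hadoop Streaming mapper (a sketch; the pattern below is made up for illustration):

# grep_mapper.py -- map-only grep in Hadoop Streaming style (sketch).
import re
import sys

PATTERN = re.compile(r"error")            # hypothetical pattern; substitute your own

for line in sys.stdin:
    if PATTERN.search(line):
        sys.stdout.write(line)            # no reducer: matching lines are the output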
   Input: (key, value) records
   Output: same records, sorted by key

   Map: identity function

   Reduce: identity function

   Trick: Pick partitioning function h such that
    k1 < k2  =>  h(k1) < h(k2)

[Diagram: three map tasks emit animal names (ant, bee, zebra; cow, pig; aardvark, elephant, sheep, yak); Reduce [A-M] outputs aardvark, ant, bee, cow, elephant and Reduce [N-Z] outputs pig, sheep, yak, zebra]
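A minimal sketch of the partitioning trick: an order-preserving partition function sends smaller keys to lower-numbered reducers, so concatenating the reducers' sorted outputs gives a globally sorted result. The cut point below is hand-picked for illustration (Hadoop's TotalOrderPartitioner samples the input to choose cut points):

# Sketch: order-preserving partitioner for the distributed sort example.
import bisect

cut_points = ["n"]                        # one cut point -> two reducers, [A-M] and [N-Z]

def h(key):
    # bisect preserves ordering: k1 < k2  =>  h(k1) <= h(k2)
    return bisect.bisect_right(cut_points, key[:1].lower())

for k in ["ant", "cow", "pig", "zebra"]:
    print(k, "-> reducer", h(k))
# ant -> reducer 0, cow -> reducer 0, pig -> reducer 1, zebra -> reducer 1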
   Input: (filename, text) records
   Output: list of files containing each word
   Map:
             foreach word in text.split():
                output(word, filename)
   Combine: uniquify filenames for each word
   Reduce:
           def reduce(word, filenames):
               output(word, sort(filenames))
[Diagram: hamlet.txt ("to be or not to be") and 12th.txt ("be not afraid of greatness") are mapped to (word, filename) pairs; the reduce output is afraid (12th.txt), be (12th.txt, hamlet.txt), greatness (12th.txt), not (12th.txt, hamlet.txt), of (12th.txt), or (hamlet.txt), to (hamlet.txt)]
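A local sketch of the inverted-index job, using a set to play the role of the "uniquify" combine step (in-memory simulation, not Hadoop code):

# Sketch: inverted index -- map emits (word, filename), reduce sorts the unique filenames.
from collections import defaultdict

docs = {
    "hamlet.txt": "to be or not to be",
    "12th.txt": "be not afraid of greatness",
}

index = defaultdict(set)                  # the set acts as the uniquifying combiner
for filename, text in docs.items():
    for word in text.split():             # map: output(word, filename)
        index[word].add(filename)

for word in sorted(index):                # reduce: output(word, sort(filenames))
    print(word, sorted(index[word]))
# afraid ['12th.txt'], be ['12th.txt', 'hamlet.txt'], ...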
   Input: (filename, text) records
   Output: top 100 words occurring in the most files
   Two-stage solution:
    • Job 1:
       Create inverted index, giving (word, list(file)) records
    • Job 2:
       Map each (word, list(file)) to (count, word)
       Sort these records by count as in sort job

   Optimizations:
    • Map to (word, 1) instead of (word, file) in Job 1
    • Count files in job 1’s reducer rather than job 2’s mapper
    • Estimate count distribution in advance and drop rare words
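A compact local sketch of the two-stage plan, applying the first optimization (emit (word, 1) once per file in Job 1); the data is toy-sized and "top 100" becomes top 2 here:

# Sketch: two chained "jobs" to find the words appearing in the most files.
from collections import defaultdict

docs = {
    "hamlet.txt": "to be or not to be",
    "12th.txt": "be not afraid of greatness",
}

# Job 1: count, for each word, the number of files it appears in
file_counts = defaultdict(int)
for filename, text in docs.items():
    for word in set(text.split()):        # (word, 1) at most once per file
        file_counts[word] += 1

# Job 2: map to (count, word) and sort by count, descending
ranked = sorted(((c, w) for w, c in file_counts.items()), reverse=True)
print(ranked[:2])                         # e.g. [(2, 'not'), (2, 'be')]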
   Input: (start, end) records for sub-ranges to
    integrate
    • Easy using custom InputFormat
   Output: integral of f(x) dx over entire range
   Map:
                def map(start, end):
                    sum = 0
                    for(x = start; x < end; x += step):
                        sum += f(x) * step
                    output("", sum)
   Reduce:
               def reduce(key, values):
                   output(key, sum(values))
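A quick local check of this pattern (a sketch: the function f, the step size, and the sub-range boundaries below are chosen only for illustration):

# Sketch: integrate f(x) = x*x over [0, 1] by summing map outputs from sub-ranges.
def f(x):
    return x * x

step = 0.001

def map_range(start, end):
    total = 0.0
    x = start
    while x < end:
        total += f(x) * step
        x += step
    return ("", total)                    # every map emits the same key, as in the slide

records = [(0.0, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)]
partials = [map_range(s, e) for s, e in records]
print(sum(v for _, v in partials))        # ~0.333, the exact integral being 1/3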
   MapReduce architecture
   Example applications
   Getting started with Hadoop
   Higher-level languages over Hadoop:
    Pig and Hive
   Amazon Elastic MapReduce
   Download from hadoop.apache.org
   To install locally, unzip and set JAVA_HOME
   Details: hadoop.apache.org/core/docs/current/quickstart.html

   Three ways to write jobs:
    • Java API
    • Hadoop Streaming (for Python, Perl, etc)
    • Pipes API (C++)
public class MapClass extends MapReduceBase
   implements Mapper<LongWritable, Text, Text, IntWritable> {
     private final static IntWritable ONE = new IntWritable(1);
     public void map(LongWritable key, Text value,
                     OutputCollector<Text, IntWritable> out,
                     Reporter reporter) throws IOException {
       String line = value.toString();
       StringTokenizer itr = new StringTokenizer(line);
       while (itr.hasMoreTokens()) {
      out.collect(new Text(itr.nextToken()), ONE);
       }
     }
 }
public class ReduceClass extends MapReduceBase
   implements Reducer<Text, IntWritable, Text, IntWritable> {
     public void reduce(Text key, Iterator<IntWritable> values,
                        OutputCollector<Text, IntWritable> out,
                        Reporter reporter) throws IOException {
       int sum = 0;
       while (values.hasNext()) {
         sum += values.next().get();
       }
       out.collect(key, new IntWritable(sum));
     }
 }
public static void main(String[] args) throws Exception {
   JobConf conf = new JobConf(WordCount.class);
   conf.setJobName("wordcount");

     conf.setMapperClass(MapClass.class);
     conf.setCombinerClass(ReduceClass.class);
     conf.setReducerClass(ReduceClass.class);

     FileInputFormat.setInputPaths(conf, args[0]);
     FileOutputFormat.setOutputPath(conf, new Path(args[1]));

     conf.setOutputKeyClass(Text.class); // out keys are words (strings)
     conf.setOutputValueClass(IntWritable.class); // values are counts

     JobClient.runJob(conf);
 }
Mapper.py:    import sys
              for line in sys.stdin:
                for word in line.split():
                  print(word.lower() + "\t" + "1")


Reducer.py:   import sys
              counts = {}
              for line in sys.stdin:
                word, count = line.split("\t")
                counts[word] = counts.get(word, 0) + int(count)
              for word, count in counts.items():
                print(word + "\t" + str(count))
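Before submitting the Streaming job to a cluster, the two scripts' logic can be checked locally; this sketch emulates the cat-to-mapper-to-sort-to-reducer pipeline in one plain Python file (the input lines are made up):

# local_test.py -- sketch: emulate the Streaming pipeline without a cluster.
lines = ["the quick brown fox", "the fox ate the mouse"]

# "Mapper.py" logic
intermediate = []
for line in lines:
    for word in line.split():
        intermediate.append(word.lower() + "\t" + "1")

# "Reducer.py" logic (Hadoop sorts records by key between map and reduce)
counts = {}
for rec in sorted(intermediate):
    word, count = rec.split("\t")
    counts[word] = counts.get(word, 0) + int(count)

for word, count in counts.items():
    print(word + "\t" + str(count))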
   MapReduce architecture
   Example applications
   Getting started with Hadoop
   Higher-level languages over Hadoop:
    Pig and Hive
   Amazon Elastic MapReduce
   Many parallel algorithms can be
    expressed by a series of MapReduce
    jobs

   But MapReduce is fairly low-level: must
    think about keys, values, partitioning, etc

   Can we capture common “job building
    blocks”?
   Started at Yahoo! Research
   Runs about 30% of Yahoo!’s jobs
   Features:
    • Expresses sequences of MapReduce jobs
    • Data model: nested “bags” of items
    • Provides relational (SQL) operators (JOIN,
      GROUP BY, etc)
    • Easy to plug in Java functions
    • Pig Pen development environment for Eclipse
Suppose you have user data in one file, page view data in another, and you
need to find the top 5 most visited pages by users aged 18 - 25.

[Dataflow diagram: Load Users → Filter by age; Load Pages; then Join on name → Group on url → Count clicks → Order by clicks → Take top 5]

        Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt
[Slide: the same job written directly against the Hadoop MapReduce API (MRExample.java) — roughly 170 lines of Java defining the mappers, reducers, and five chained JobConfs (load pages; load and filter users; join; group and count URLs; take the top results), wired together with JobControl. The code is reproduced on the slide only to contrast its length with the Pig Latin script that follows.]

                                                       Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt
Users    = load ‘users’ as (name, age);
Filtered = filter Users by
                  age >= 18 and age <= 25;
Pages    = load ‘pages’ as (user, url);
Joined   = join Filtered by name, Pages by user;
Grouped = group Joined by url;
Summed   = foreach Grouped generate group,
                   count(Joined) as clicks;
Sorted   = order Summed by clicks desc;
Top5     = limit Sorted 5;


store Top5 into ‘top5sites’;


         Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt
Notice how naturally the components of the job translate into Pig Latin.

[Diagram: each dataflow step paired with its Pig Latin statement — Load Users / Users = load …; Filter by age / Filtered = filter …; Load Pages / Pages = load …; Join on name / Joined = join …; Group on url / Grouped = group …; Count clicks / Summed = … count()…; Order by clicks / Sorted = order …; Take top 5 / Top5 = limit …]

                   Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt
Notice how naturally the components of the job translate into Pig Latin.

[Diagram: the same mapping, with the operators grouped into the MapReduce jobs Pig generates — roughly, Job 1: load, filter, and join on name; Job 2: group on url and count clicks; Job 3: order by clicks and take top 5]

                        Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt
   Developed at Facebook
   Used for majority of Facebook jobs
   “Relational database” built on Hadoop
    •   Maintains list of table schemas
    •   SQL-like query language (HQL)
    •   Can call Hadoop Streaming scripts from HQL
    •   Supports table partitioning, clustering, complex
        data types, some optimizations
• Find top 5 pages visited by users aged 18-25:
    SELECT p.url, COUNT(1) as clicks
    FROM users u JOIN page_views p ON (u.name = p.user)
    WHERE u.age >= 18 AND u.age <= 25
    GROUP BY p.url
    ORDER BY clicks DESC
    LIMIT 5;


• Filter page views through Python script:
    SELECT TRANSFORM(p.user, p.date)
    USING 'map_script.py'
    AS dt, uid CLUSTER BY dt
    FROM page_views p;
   MapReduce architecture
   Example applications
   Getting started with Hadoop
   Higher-level languages over Hadoop:
    Pig and Hive
   Amazon Elastic MapReduce
   Provides a web-based interface and command-
    line tools for running Hadoop jobs on Amazon
    EC2
   Data stored in Amazon S3
   Monitors job and shuts down machines after use
   Small extra charge on top of EC2 pricing


   If you want more control over how Hadoop
    runs, you can launch a Hadoop cluster on EC2
    manually using the scripts in src/contrib/ec2
   MapReduce programming model hides the
    complexity of work distribution and fault tolerance
   Principal design philosophies:
    • Make it scalable, so you can throw hardware at problems
    • Make it cheap, lowering hardware, programming and admin
      costs
   MapReduce is not suitable for all problems, but
    when it works, it may save you quite a bit of time
   Cloud computing makes it straightforward to start
    using Hadoop (or other parallel software) at scale
   Hadoop: http://hadoop.apache.org/core/
   Pig: http://hadoop.apache.org/pig
   Hive: http://hadoop.apache.org/hive
   Video tutorials:
    http://www.cloudera.com/hadoop-training
   Amazon Web Services: http://aws.amazon.com/
   Amazon Elastic MapReduce guide:
    http://docs.amazonwebservices.com/ElasticMa
    pReduce/latest/GettingStartedGuide/
   My email: e.vedavyas@gmail.com

Weitere ähnliche Inhalte

Ähnlich wie Cloud computing-with-map reduce-and-hadoop

Introduction to Hadoop and MapReduce
Introduction to Hadoop and MapReduceIntroduction to Hadoop and MapReduce
Introduction to Hadoop and MapReduceDr Ganesh Iyer
 
WOOster: A Map-Reduce based Platform for Graph Mining
WOOster: A Map-Reduce based Platform for Graph MiningWOOster: A Map-Reduce based Platform for Graph Mining
WOOster: A Map-Reduce based Platform for Graph Miningaravindan_raghu
 
L19CloudMapReduce introduction for cloud computing .ppt
L19CloudMapReduce introduction for cloud computing .pptL19CloudMapReduce introduction for cloud computing .ppt
L19CloudMapReduce introduction for cloud computing .pptMaruthiPrasad96
 
[Harvard CS264] 08b - MapReduce and Hadoop (Zak Stone, Harvard)
[Harvard CS264] 08b - MapReduce and Hadoop (Zak Stone, Harvard)[Harvard CS264] 08b - MapReduce and Hadoop (Zak Stone, Harvard)
[Harvard CS264] 08b - MapReduce and Hadoop (Zak Stone, Harvard)npinto
 
Distributed computing the Google way
Distributed computing the Google wayDistributed computing the Google way
Distributed computing the Google wayEduard Hildebrandt
 
Introduction to MapReduce using Disco
Introduction to MapReduce using DiscoIntroduction to MapReduce using Disco
Introduction to MapReduce using DiscoJim Roepcke
 
Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)
Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)
Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)Matthew Lease
 
Intro to threp
Intro to threpIntro to threp
Intro to threpHong Wu
 
Hadoop Overview & Architecture
Hadoop Overview & Architecture  Hadoop Overview & Architecture
Hadoop Overview & Architecture EMC
 
Brief introduction on Hadoop,Dremel, Pig, FlumeJava and Cassandra
Brief introduction on Hadoop,Dremel, Pig, FlumeJava and CassandraBrief introduction on Hadoop,Dremel, Pig, FlumeJava and Cassandra
Brief introduction on Hadoop,Dremel, Pig, FlumeJava and CassandraSomnath Mazumdar
 
Introduction to Hadoop
Introduction to HadoopIntroduction to Hadoop
Introduction to Hadoopbigdatasyd
 
Hanborq optimizations on hadoop map reduce 20120221a
Hanborq optimizations on hadoop map reduce 20120221aHanborq optimizations on hadoop map reduce 20120221a
Hanborq optimizations on hadoop map reduce 20120221aSchubert Zhang
 
12-BigDataMapReduce.pptx
12-BigDataMapReduce.pptx12-BigDataMapReduce.pptx
12-BigDataMapReduce.pptxShree Shree
 

Ähnlich wie Cloud computing-with-map reduce-and-hadoop (20)

Introduction to Hadoop and MapReduce
Introduction to Hadoop and MapReduceIntroduction to Hadoop and MapReduce
Introduction to Hadoop and MapReduce
 
WOOster: A Map-Reduce based Platform for Graph Mining
WOOster: A Map-Reduce based Platform for Graph MiningWOOster: A Map-Reduce based Platform for Graph Mining
WOOster: A Map-Reduce based Platform for Graph Mining
 
L19CloudMapReduce introduction for cloud computing .ppt
L19CloudMapReduce introduction for cloud computing .pptL19CloudMapReduce introduction for cloud computing .ppt
L19CloudMapReduce introduction for cloud computing .ppt
 
[Harvard CS264] 08b - MapReduce and Hadoop (Zak Stone, Harvard)
[Harvard CS264] 08b - MapReduce and Hadoop (Zak Stone, Harvard)[Harvard CS264] 08b - MapReduce and Hadoop (Zak Stone, Harvard)
[Harvard CS264] 08b - MapReduce and Hadoop (Zak Stone, Harvard)
 
Hadoop ecosystem
Hadoop ecosystemHadoop ecosystem
Hadoop ecosystem
 
Hadoop data analysis
Hadoop data analysisHadoop data analysis
Hadoop data analysis
 
Hadoop
HadoopHadoop
Hadoop
 
Hadoop
HadoopHadoop
Hadoop
 
Distributed computing the Google way
Distributed computing the Google wayDistributed computing the Google way
Distributed computing the Google way
 
Introduction to MapReduce using Disco
Introduction to MapReduce using DiscoIntroduction to MapReduce using Disco
Introduction to MapReduce using Disco
 
Hadoop Overview kdd2011
Hadoop Overview kdd2011Hadoop Overview kdd2011
Hadoop Overview kdd2011
 
Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)
Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)
Lecture 7: Data-Intensive Computing for Text Analysis (Fall 2011)
 
Intro to threp
Intro to threpIntro to threp
Intro to threp
 
Hadoop Overview & Architecture
Hadoop Overview & Architecture  Hadoop Overview & Architecture
Hadoop Overview & Architecture
 
Hadoop
HadoopHadoop
Hadoop
 
Brief introduction on Hadoop,Dremel, Pig, FlumeJava and Cassandra
Brief introduction on Hadoop,Dremel, Pig, FlumeJava and CassandraBrief introduction on Hadoop,Dremel, Pig, FlumeJava and Cassandra
Brief introduction on Hadoop,Dremel, Pig, FlumeJava and Cassandra
 
Introduction to Hadoop
Introduction to HadoopIntroduction to Hadoop
Introduction to Hadoop
 
Hanborq optimizations on hadoop map reduce 20120221a
Hanborq optimizations on hadoop map reduce 20120221aHanborq optimizations on hadoop map reduce 20120221a
Hanborq optimizations on hadoop map reduce 20120221a
 
Hadoop
HadoopHadoop
Hadoop
 
12-BigDataMapReduce.pptx
12-BigDataMapReduce.pptx12-BigDataMapReduce.pptx
12-BigDataMapReduce.pptx
 

Cloud computing-with-map reduce-and-hadoop

  • 1.
  • 2. “Cloud” refers to large Internet services like Google, Yahoo, etc that run on 10,000’s of machines  More recently, “cloud computing” refers to services by these companies that let external customers rent computing cycles on their clusters • Amazon EC2: virtual machines at 10¢/hour, billed hourly • Amazon S3: storage at 15¢/GB/month  Attractive features: • Scale: up to 100’s of nodes • Fine-grained billing: pay only for what you use • Ease of use: sign up with credit card, get root access
  • 3. Simple data-parallel programming model designed for scalability and fault- tolerance  Pioneered by Google • Processes 20 petabytes of data per day  Popularized by open-source Hadoop project • Used at Yahoo!, Facebook, Amazon, …
  • 4. At Google: • Index construction for Google Search • Article clustering for Google News • Statistical machine translation  At Yahoo!: • “Web map” powering Yahoo! Search • Spam detection for Yahoo! Mail  At Facebook: • Data mining • Ad optimization • Spam detection
  • 7. In research: • Astronomical image analysis (Washington) • Bioinformatics (Maryland) • Analyzing Wikipedia conflicts (PARC) • Natural language processing (CMU) • Particle physics (Nebraska) • Ocean climate simulation (Washington) • <Your application here>
  • 8. MapReduce architecture  Example applications  Getting started with Hadoop  Higher-level languages over Hadoop: Pig and Hive  Amazon Elastic MapReduce
  • 9. 1. Scalability to large data volumes: • 1000’s of machines, 10,000’s of disks 1. Cost-efficiency: • Commodity machines (cheap, but unreliable) • Commodity network • Automatic fault-tolerance (fewer administrators) • Easy to use (fewer programmers)
  • 10. Aggregation switch Rack switch  40 nodes/rack, 1000-4000 nodes in cluster  1 Gbps bandwidth within rack, 8 Gbps out of rack  Node specs (Yahoo terasort): 8 x 2GHz cores, 8 GB RAM, 4 disks (= 4 TB?) Image from http://wiki.apache.org/hadoop-data/attachments/HadoopPresentations/attachments/YahooHadoopIntro-apachecon-us-2008.pdf
  • 12. 1. Cheap nodes fail, especially if you have many • Mean time between failures for 1 node = 3 years • Mean time between failures for 1000 nodes = 1 day • Solution: Build fault-tolerance into system 1. Commodity network = low bandwidth • Solution: Push computation to the data 1. Programming distributed systems is hard • Solution: Data-parallel programming model: users write “map” & “reduce” functions, system distributes work and handles faults
  • 13. Distributed file system (HDFS) • Single namespace for entire cluster • Replicates data 3x for fault-tolerance  MapReduce framework • Executes user jobs specified as “map” and “reduce” functions • Manages work distribution & fault-tolerance
  • 14. Files split into 128MB Namenode blocks File1 11  Blocks replicated across 22 33 several datanodes 44 (usually 3)  Single namenode stores metadata (file names, block locations, etc) 11 22 11 33  Optimized for large files, 22 44 11 33 44 33 22 44 sequential reads Datanodes  Files are append-only
  • 15. Data type: key-value records  Map function: (Kin, Vin)  list(Kinter, Vinter)  Reduce function: (Kinter, list(Vinter))  list(Kout, Vout)
  • 16. def mapper(line): foreach word in line.split(): output(word, 1) def reducer(key, values): output(key, sum(values))
  • 17. Input Map Shuffle & Sort Reduce Output the, 1 brown, 1 the quick fox, 1 brown, 2 Map brown fox fox, 2 Reduce how, 1 the, 1 fox, 1 now, 1 the, 1 the, 3 the fox ate Map the mouse quick, 1 how, 1 ate, 1 ate, 1 now, 1 mouse, 1 brown, 1 Reduce cow, 1 how now mouse, 1 Map cow, 1 brown cow quick, 1
  • 18. Single master controls job execution on multiple slaves  Mappers preferentially placed on same node or same rack as their input block • Minimizes network usage  Mappers save outputs to local disk before serving them to reducers • Allows recovery if a reducer crashes • Allows having more reducers than nodes
  • 19. A combiner is a local aggregation function for repeated keys produced by same map  Works for associative functions like sum, count, max  Decreases size of intermediate data  Example: map-side aggregation for Word Count: def combiner(key, values): output(key, sum(values))
  • 20. Input Map & Combine Shuffle & Sort Reduce Output the, 1 brown, 1 the quick fox, 1 brown, 2 Map brown fox fox, 2 Reduce how, 1 the, 2 now, 1 fox, 1 the, 3 the fox ate Map the mouse quick, 1 how, 1 ate, 1 ate, 1 now, 1 mouse, 1 brown, 1 Reduce cow, 1 how now mouse, 1 Map cow, 1 brown cow quick, 1
  • 21. 1. If a task crashes: • Retry on another node  OK for a map because it has no dependencies  OK for reduce because map outputs are on disk • If the same task fails repeatedly, fail the job or ignore that input block (user-controlled)  Note: For these fault tolerance features to work, your map and reduce tasks must be side-effect-free
  • 22. 2. If a node crashes: • Re-launch its current tasks on other nodes • Re-run any maps the node previously ran  Necessary because their output files were lost along with the crashed node
  • 23. 3. If a task is going slowly (straggler): • Launch second copy of task on another node (“speculative execution”) • Take the output of whichever copy finishes first, and kill the other  Surprisingly important in large clusters • Stragglers occur frequently due to failing hardware, software bugs, misconfiguration, etc • Single straggler may noticeably slow down a job
  • 24. By providing a data-parallel programming model, MapReduce can control job execution in useful ways: • Automatic division of job into tasks • Automatic placement of computation near data • Automatic load balancing • Recovery from failures & stragglers  User focuses on application, not on complexities of distributed computing
  • 25. MapReduce architecture  Example applications  Getting started with Hadoop  Higher-level languages over Hadoop: Pig and Hive  Amazon Elastic MapReduce
  • 26. Input: (lineNumber, line) records  Output: lines matching a given pattern  Map: if(line matches pattern): output(line)  Reduce: identify function • Alternative: no reducer (map-only job)
  • 27. Input: (key, value) records  Output: same records, sorted by key  Map: identity function ant, bee  Reduce: identify function Map Reduce [A-M] zebra aardvark ant  Trick: Pick partitioning cow bee cow function h such that Map pig elephant k1<k2 => h(k1)<h(k2) aardvark, Reduce [N-Z] elephant pig sheep Map sheep, yak yak zebra
  • 28. Input: (filename, text) records  Output: list of files containing each word  Map: foreach word in text.split(): output(word, filename)  Combine: uniquify filenames for each word  Reduce: def reduce(word, filenames): output(word, sort(filenames))
  • 29. hamlet.txt to, hamlet.txt to be or be, hamlet.txt not to be or, hamlet.txt afraid, (12th.txt) not, hamlet.txt be, (12th.txt, hamlet.txt) greatness, (12th.txt) not, (12th.txt, hamlet.txt) of, (12th.txt) be, 12th.txt or, (hamlet.txt) 12th.txt not, 12th.txt to, (hamlet.txt) be not afraid, 12th.txt afraid of of, 12th.txt greatness greatness, 12th.txt
  • 30. Input: (filename, text) records  Output: top 100 words occurring in the most files  Two-stage solution: • Job 1:  Create inverted index, giving (word, list(file)) records • Job 2:  Map each (word, list(file)) to (count, word)  Sort these records by count as in sort job  Optimizations: • Map to (word, 1) instead of (word, file) in Job 1 • Count files in job 1’s reducer rather than job 2’s mapper • Estimate count distribution in advance and drop rare words
  • 31. Input: (start, end) records for sub-ranges to integrate • Easy using custom InputFormat  Output: integral of f(x) dx over entire range  Map: def map(start, end): sum = 0 for(x = start; x < end; x += step): sum += f(x) * step output(“”, sum)  Reduce: def reduce(key, values): output(key, sum(values))
  • 32. MapReduce architecture  Example applications  Getting started with Hadoop  Higher-level languages over Hadoop: Pig and Hive  Amazon Elastic MapReduce
  • 33. Download from hadoop.apache.org  To install locally, unzip and set JAVA_HOME  Details: hadoop.apache.org/core/docs/current/quickstart.html  Three ways to write jobs: • Java API • Hadoop Streaming (for Python, Perl, etc) • Pipes API (C++)
  • 34. public class MapClass extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> { private final static IntWritable ONE = new IntWritable(1); public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> out, Reporter reporter) throws IOException { String line = value.toString(); StringTokenizer itr = new StringTokenizer(line); while (itr.hasMoreTokens()) { out.collect(new text(itr.nextToken()), ONE); } } }
  • 35. public class ReduceClass extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> { public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> out, Reporter reporter) throws IOException { int sum = 0; while (values.hasNext()) { sum += values.next().get(); } out.collect(key, new IntWritable(sum)); } }
  • 36. public static void main(String[] args) throws Exception { JobConf conf = new JobConf(WordCount.class); conf.setJobName("wordcount"); conf.setMapperClass(MapClass.class); conf.setCombinerClass(ReduceClass.class); conf.setReducerClass(ReduceClass.class); FileInputFormat.setInputPaths(conf, args[0]); FileOutputFormat.setOutputPath(conf, new Path(args[1])); conf.setOutputKeyClass(Text.class); // out keys are words (strings) conf.setOutputValueClass(IntWritable.class); // values are counts JobClient.runJob(conf); }
  • 37. Mapper.py: import sys for line in sys.stdin: for word in line.split(): print(word.lower() + "t" + 1) Reducer.py: import sys counts = {} for line in sys.stdin: word, count = line.split("t”) dict[word] = dict.get(word, 0) + int(count) for word, count in counts: print(word.lower() + "t" + 1)
  • 38. MapReduce architecture  Example applications  Getting started with Hadoop  Higher-level languages over Hadoop: Pig and Hive  Amazon Elastic MapReduce
  • 39. Many parallel algorithms can be expressed by a series of MapReduce jobs  But MapReduce is fairly low-level: must think about keys, values, partitioning, etc  Can we capture common “job building blocks”?
  • 40. Started at Yahoo! Research  Runs about 30% of Yahoo!’s jobs  Features: • Expresses sequences of MapReduce jobs • Data model: nested “bags” of items • Provides relational (SQL) operators (JOIN, GROUP BY, etc) • Easy to plug in Java functions • Pig Pen development environment for Eclipse
  • 41. Suppose you have Load Users Load Pages user data in one file, page view Filter by age data in another, Join on name and you need to Group on url find the top 5 most visited pages by Count clicks users aged 18 - 25. Order by clicks Take top 5 Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt
• 42. // MRExample.java: the hand-coded Hadoop MapReduce version of
// "find the top sites visited by users aged 18 to 25".
import java.io.IOException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.KeyValueTextInputFormat;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.SequenceFileInputFormat;
import org.apache.hadoop.mapred.SequenceFileOutputFormat;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.jobcontrol.Job;
import org.apache.hadoop.mapred.jobcontrol.JobControl;
import org.apache.hadoop.mapred.lib.IdentityMapper;

public class MRExample {
    public static class LoadPages extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, Text> {

        public void map(LongWritable k, Text val,
                OutputCollector<Text, Text> oc,
                Reporter reporter) throws IOException {
            // Pull the key out
            String line = val.toString();
            int firstComma = line.indexOf(',');
            String key = line.substring(0, firstComma);
            String value = line.substring(firstComma + 1);
            Text outKey = new Text(key);
            // Prepend an index to the value so we know which file
            // it came from.
            Text outVal = new Text("1" + value);
            oc.collect(outKey, outVal);
        }
    }

    public static class LoadAndFilterUsers extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, Text> {

        public void map(LongWritable k, Text val,
                OutputCollector<Text, Text> oc,
                Reporter reporter) throws IOException {
            // Pull the key out
            String line = val.toString();
            int firstComma = line.indexOf(',');
            String value = line.substring(firstComma + 1);
            int age = Integer.parseInt(value);
            if (age < 18 || age > 25) return;
            String key = line.substring(0, firstComma);
            Text outKey = new Text(key);
            // Prepend an index to the value so we know which file
            // it came from.
            Text outVal = new Text("2" + value);
            oc.collect(outKey, outVal);
        }
    }

    public static class Join extends MapReduceBase
            implements Reducer<Text, Text, Text, Text> {

        public void reduce(Text key,
                Iterator<Text> iter,
                OutputCollector<Text, Text> oc,
                Reporter reporter) throws IOException {
            // For each value, figure out which file it's from and
            // store it accordingly.
            List<String> first = new ArrayList<String>();
            List<String> second = new ArrayList<String>();

            while (iter.hasNext()) {
                Text t = iter.next();
                String value = t.toString();
                if (value.charAt(0) == '1') first.add(value.substring(1));
                else second.add(value.substring(1));
                reporter.setStatus("OK");
            }

            // Do the cross product and collect the values
            for (String s1 : first) {
                for (String s2 : second) {
                    String outval = key + "," + s1 + "," + s2;
                    oc.collect(null, new Text(outval));
                    reporter.setStatus("OK");
                }
            }
        }
    }

    public static class LoadJoined extends MapReduceBase
            implements Mapper<Text, Text, Text, LongWritable> {

        public void map(
                Text k,
                Text val,
                OutputCollector<Text, LongWritable> oc,
                Reporter reporter) throws IOException {
            // Find the url (the second comma-separated field)
            String line = val.toString();
            int firstComma = line.indexOf(',');
            int secondComma = line.indexOf(',', firstComma + 1);
            String key = line.substring(firstComma + 1, secondComma);
            // drop the rest of the record, I don't need it anymore,
            // just pass a 1 for the combiner/reducer to sum instead.
            Text outKey = new Text(key);
            oc.collect(outKey, new LongWritable(1L));
        }
    }

    public static class ReduceUrls extends MapReduceBase
            implements Reducer<Text, LongWritable, WritableComparable, Writable> {

        public void reduce(
                Text key,
                Iterator<LongWritable> iter,
                OutputCollector<WritableComparable, Writable> oc,
                Reporter reporter) throws IOException {
            // Add up all the values we see
            long sum = 0;
            while (iter.hasNext()) {
                sum += iter.next().get();
                reporter.setStatus("OK");
            }
            oc.collect(key, new LongWritable(sum));
        }
    }

    public static class LoadClicks extends MapReduceBase
            implements Mapper<WritableComparable, Writable, LongWritable, Text> {

        public void map(
                WritableComparable key,
                Writable val,
                OutputCollector<LongWritable, Text> oc,
                Reporter reporter) throws IOException {
            oc.collect((LongWritable)val, (Text)key);
        }
    }

    public static class LimitClicks extends MapReduceBase
            implements Reducer<LongWritable, Text, LongWritable, Text> {

        int count = 0;

        public void reduce(
                LongWritable key,
                Iterator<Text> iter,
                OutputCollector<LongWritable, Text> oc,
                Reporter reporter) throws IOException {
            // Only output the first 100 records
            while (count < 100 && iter.hasNext()) {
                oc.collect(key, iter.next());
                count++;
            }
        }
    }

    public static void main(String[] args) throws IOException {
        JobConf lp = new JobConf(MRExample.class);
        lp.setJobName("Load Pages");
        lp.setInputFormat(TextInputFormat.class);
        lp.setOutputKeyClass(Text.class);
        lp.setOutputValueClass(Text.class);
        lp.setMapperClass(LoadPages.class);
        FileInputFormat.addInputPath(lp, new Path("/user/gates/pages"));
        FileOutputFormat.setOutputPath(lp,
            new Path("/user/gates/tmp/indexed_pages"));
        lp.setNumReduceTasks(0);
        Job loadPages = new Job(lp);

        JobConf lfu = new JobConf(MRExample.class);
        lfu.setJobName("Load and Filter Users");
        lfu.setInputFormat(TextInputFormat.class);
        lfu.setOutputKeyClass(Text.class);
        lfu.setOutputValueClass(Text.class);
        lfu.setMapperClass(LoadAndFilterUsers.class);
        FileInputFormat.addInputPath(lfu, new Path("/user/gates/users"));
        FileOutputFormat.setOutputPath(lfu,
            new Path("/user/gates/tmp/filtered_users"));
        lfu.setNumReduceTasks(0);
        Job loadUsers = new Job(lfu);

        JobConf join = new JobConf(MRExample.class);
        join.setJobName("Join Users and Pages");
        join.setInputFormat(KeyValueTextInputFormat.class);
        join.setOutputKeyClass(Text.class);
        join.setOutputValueClass(Text.class);
        join.setMapperClass(IdentityMapper.class);
        join.setReducerClass(Join.class);
        FileInputFormat.addInputPath(join,
            new Path("/user/gates/tmp/indexed_pages"));
        FileInputFormat.addInputPath(join,
            new Path("/user/gates/tmp/filtered_users"));
        FileOutputFormat.setOutputPath(join, new Path("/user/gates/tmp/joined"));
        join.setNumReduceTasks(50);
        Job joinJob = new Job(join);
        joinJob.addDependingJob(loadPages);
        joinJob.addDependingJob(loadUsers);

        JobConf group = new JobConf(MRExample.class);
        group.setJobName("Group URLs");
        group.setInputFormat(KeyValueTextInputFormat.class);
        group.setOutputKeyClass(Text.class);
        group.setOutputValueClass(LongWritable.class);
        group.setOutputFormat(SequenceFileOutputFormat.class);
        group.setMapperClass(LoadJoined.class);
        group.setCombinerClass(ReduceUrls.class);
        group.setReducerClass(ReduceUrls.class);
        FileInputFormat.addInputPath(group, new Path("/user/gates/tmp/joined"));
        FileOutputFormat.setOutputPath(group, new Path("/user/gates/tmp/grouped"));
        group.setNumReduceTasks(50);
        Job groupJob = new Job(group);
        groupJob.addDependingJob(joinJob);

        JobConf top100 = new JobConf(MRExample.class);
        top100.setJobName("Top 100 sites");
        top100.setInputFormat(SequenceFileInputFormat.class);
        top100.setOutputKeyClass(LongWritable.class);
        top100.setOutputValueClass(Text.class);
        top100.setOutputFormat(SequenceFileOutputFormat.class);
        top100.setMapperClass(LoadClicks.class);
        top100.setCombinerClass(LimitClicks.class);
        top100.setReducerClass(LimitClicks.class);
        FileInputFormat.addInputPath(top100, new Path("/user/gates/tmp/grouped"));
        FileOutputFormat.setOutputPath(top100,
            new Path("/user/gates/top100sitesforusers18to25"));
        top100.setNumReduceTasks(1);
        Job limit = new Job(top100);
        limit.addDependingJob(groupJob);

        JobControl jc = new JobControl("Find top 100 sites for users 18 to 25");
        jc.addJob(loadPages);
        jc.addJob(loadUsers);
        jc.addJob(joinJob);
        jc.addJob(groupJob);
        jc.addJob(limit);
        jc.run();
    }
}
Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt
• 43. Users = load 'users' as (name, age);
Filtered = filter Users by age >= 18 and age <= 25;
Pages = load 'pages' as (user, url);
Joined = join Filtered by name, Pages by user;
Grouped = group Joined by url;
Summed = foreach Grouped generate group, count(Joined) as clicks;
Sorted = order Summed by clicks desc;
Top5 = limit Sorted 5;
store Top5 into 'top5sites';
Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt
• 44. Notice how naturally the components of the job translate into Pig Latin:
Load Users      ->  Users = load …
Load Pages      ->  Pages = load …
Filter by age   ->  Filtered = filter …
Join on name    ->  Joined = join …
Group on url    ->  Grouped = group …
Count clicks    ->  Summed = … count() …
Order by clicks ->  Sorted = order …
Take top 5      ->  Top5 = limit …
Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt
• 45. Notice how naturally the components of the job translate into Pig Latin, and how Pig groups them into MapReduce jobs:
Job 1:  Load Users      ->  Users = load …
        Load Pages      ->  Pages = load …
        Filter by age   ->  Filtered = filter …
        Join on name    ->  Joined = join …
Job 2:  Group on url    ->  Grouped = group …
        Count clicks    ->  Summed = … count() …
Job 3:  Order by clicks ->  Sorted = order …
        Take top 5      ->  Top5 = limit …
Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt
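The Pig Latin script on slide 43 is normally submitted through the Grunt shell or the pig command, but it can also be embedded in a Java program via Pig's PigServer API. The following is only a minimal sketch of that, assuming a configured Pig and Hadoop installation; the class name is hypothetical, and the built-in COUNT is written in its upper-case form:

// Minimal sketch (not from the slides): running the slide-43 dataflow
// from Java with Pig's PigServer API. Class name is hypothetical.
import org.apache.pig.ExecType;
import org.apache.pig.PigServer;

public class Top5SitesPig {
    public static void main(String[] args) throws Exception {
        PigServer pig = new PigServer(ExecType.MAPREDUCE);
        pig.registerQuery("Users = load 'users' as (name, age);");
        pig.registerQuery("Filtered = filter Users by age >= 18 and age <= 25;");
        pig.registerQuery("Pages = load 'pages' as (user, url);");
        pig.registerQuery("Joined = join Filtered by name, Pages by user;");
        pig.registerQuery("Grouped = group Joined by url;");
        // COUNT is the built-in; the slide writes it lowercase for readability
        pig.registerQuery("Summed = foreach Grouped generate group, COUNT(Joined) as clicks;");
        pig.registerQuery("Sorted = order Summed by clicks desc;");
        pig.registerQuery("Top5 = limit Sorted 5;");
        // store() triggers planning and execution of the whole dataflow
        pig.store("Top5", "top5sites");
    }
}

Nothing in the script changes when it is embedded this way; Pig still decides how many MapReduce jobs to generate and how to wire them together.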
  • 46. Developed at Facebook  Used for majority of Facebook jobs  “Relational database” built on Hadoop • Maintains list of table schemas • SQL-like query language (HQL) • Can call Hadoop Streaming scripts from HQL • Supports table partitioning, clustering, complex data types, some optimizations
• 47. • Find top 5 pages visited by users aged 18-25: SELECT p.url, COUNT(1) as clicks FROM users u JOIN page_views p ON (u.name = p.user) WHERE u.age >= 18 AND u.age <= 25 GROUP BY p.url ORDER BY clicks DESC LIMIT 5; • Filter page views through Python script: SELECT TRANSFORM(p.user, p.date) USING 'map_script.py' AS dt, uid CLUSTER BY dt FROM page_views p;
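Queries like the first one above do not have to be typed into the Hive CLI; they can also be submitted from Java through Hive's JDBC driver. Below is a rough sketch assuming the classic HiveServer is listening on localhost:10000; the class name is made up, and only the table and column names come from the slide:

// Rough sketch: running the "top 5 pages" query over Hive's JDBC driver
// (classic HiveServer). Assumes a Hive server on localhost:10000.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveTop5Client {   // hypothetical class name
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
        Connection con = DriverManager.getConnection(
            "jdbc:hive://localhost:10000/default", "", "");
        Statement stmt = con.createStatement();
        ResultSet res = stmt.executeQuery(
            "SELECT p.url, COUNT(1) AS clicks " +
            "FROM users u JOIN page_views p ON (u.name = p.user) " +
            "WHERE u.age >= 18 AND u.age <= 25 " +
            "GROUP BY p.url ORDER BY clicks DESC LIMIT 5");
        while (res.next()) {
            // url in column 1, click count (a bigint) in column 2
            System.out.println(res.getString(1) + "\t" + res.getLong(2));
        }
        con.close();
    }
}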
  • 48. MapReduce architecture  Example applications  Getting started with Hadoop  Higher-level languages over Hadoop: Pig and Hive  Amazon Elastic MapReduce
• 49. Provides a web-based interface and command-line tools for running Hadoop jobs on Amazon EC2  Data stored in Amazon S3  Monitors jobs and shuts down machines after use  Small extra charge on top of EC2 pricing  If you want more control over how Hadoop runs, you can launch a Hadoop cluster on EC2 manually using the scripts in src/contrib/ec2
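The slides cover the web console and the bundled command-line tools; Elastic MapReduce can also be driven programmatically. The sketch below uses the AWS SDK for Java, which is not mentioned on the slide, to start a small job flow: every name, bucket path, and instance setting is hypothetical, and exact class names can vary between SDK releases.

// Hedged sketch: starting an Elastic MapReduce job flow with the AWS SDK
// for Java. All names, S3 paths, and instance sizes are hypothetical.
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClient;
import com.amazonaws.services.elasticmapreduce.model.HadoopJarStepConfig;
import com.amazonaws.services.elasticmapreduce.model.JobFlowInstancesConfig;
import com.amazonaws.services.elasticmapreduce.model.RunJobFlowRequest;
import com.amazonaws.services.elasticmapreduce.model.RunJobFlowResult;
import com.amazonaws.services.elasticmapreduce.model.StepConfig;

public class StartJobFlow {   // hypothetical class name
    public static void main(String[] args) {
        AmazonElasticMapReduceClient emr = new AmazonElasticMapReduceClient(
            new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

        // One step: run a job jar (e.g. the MRExample job) against data in S3
        HadoopJarStepConfig jarStep = new HadoopJarStepConfig()
            .withJar("s3://my-bucket/jobs/mrexample.jar")
            .withArgs("s3://my-bucket/input/", "s3://my-bucket/output/");
        StepConfig step = new StepConfig("Top sites", jarStep)
            .withActionOnFailure("TERMINATE_JOB_FLOW");

        RunJobFlowRequest request = new RunJobFlowRequest()
            .withName("Top sites for users 18 to 25")
            .withLogUri("s3://my-bucket/logs/")
            .withSteps(step)
            .withInstances(new JobFlowInstancesConfig()
                .withInstanceCount(4)
                .withMasterInstanceType("m1.small")
                .withSlaveInstanceType("m1.small")
                .withKeepJobFlowAliveWhenNoSteps(false));

        // Machines are shut down automatically once the step finishes
        RunJobFlowResult result = emr.runJobFlow(request);
        System.out.println("Started job flow " + result.getJobFlowId());
    }
}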
  • 54. MapReduce programming model hides the complexity of work distribution and fault tolerance  Principal design philosophies: • Make it scalable, so you can throw hardware at problems • Make it cheap, lowering hardware, programming and admin costs  MapReduce is not suitable for all problems, but when it works, it may save you quite a bit of time  Cloud computing makes it straightforward to start using Hadoop (or other parallel software) at scale
• 55. Hadoop: http://hadoop.apache.org/core/  Pig: http://hadoop.apache.org/pig  Hive: http://hadoop.apache.org/hive  Video tutorials: http://www.cloudera.com/hadoop-training  Amazon Web Services: http://aws.amazon.com/  Amazon Elastic MapReduce guide: http://docs.amazonwebservices.com/ElasticMapReduce/latest/GettingStartedGuide/  My email: e.vedavyas@gmail.com

Editor's notes

1. My point in putting in the Java code isn't to actually walk through it. It's just to show that you have to hand-code a fair amount of Java. That way I can just point and say "look how much Java you'd have to write to accomplish this."
2. Rather than using capital letters, which makes Pig Latin look like SQL, I added Eclipse-style highlighting instead. Hopefully this makes it clear which words are keywords without making the slide look like a Matisse painting.
3. No need to think about how many MapReduce jobs this decomposes into, or about connecting data flows between MapReduce jobs, or even that this is taking place on top of MapReduce at all.
4. No need to think about how many MapReduce jobs this decomposes into, or about connecting data flows between MapReduce jobs, or even that this is taking place on top of MapReduce at all.