Data-Intensive Computing for Text Analysis
                 CS395T / INF385T / LIN386M
            University of Texas at Austin, Fall 2011



                       Lecture 7
                    October 6, 2011

        Jason Baldridge                      Matt Lease
   Department of Linguistics           School of Information
  University of Texas at Austin    University of Texas at Austin
Jasonbaldridge at gmail dot com   ml at ischool dot utexas dot edu




Acknowledgments
        Course design and slides based on
     Jimmy Lin’s cloud computing courses at
    the University of Maryland, College Park

Some figures courtesy of the following
excellent Hadoop books (order yours today!)
• Chuck Lam’s Hadoop In Action (2010)
• Tom White’s Hadoop: The Definitive Guide,
  2nd Edition (2010)
Today’s Agenda
• Hadoop Counters
• Graph Processing in MapReduce
   – Representing/Encoding Graphs
      • Adjacency matrices vs. Lists
   – Example: Single Source Shortest Path
   – Example: PageRank
• Themes
   – No shared memory → redundant computation
      • More computational capability overcomes less efficiency
   – Iterate MapReduce computations until convergence
   – Use non-MapReduce driver for over-arching control
      • Not just for pre- and post-processing
      • Opportunity for global synchronization between iterations
• In-class exercise
Hadoop Counters
Lam p. 98, White pp. 226-227




White p. 172


  Hadoop Counters & Global State
• Hadoop’s Counters provide its only means for
  sharing/modifying global distributed state
  – Built-in safeguards for distributed modification
     • e.g. two tasks try to increment a counter simultaneously
  – Lightweight: only a long (8 bytes) per counter
  – Limited control
     • create, read, and increment
     • no destroy, arbitrary set, or decrement
• Advertised use: progress tracking and logging
• To what extent might we “abuse” counters for
  tracking/updating interesting shared state?
How high (and precisely) can you count?
• How precise?
  • Integer representation
  • To approximate fractional values, scale and truncate (Lin & Dyer p. 99)
• How high?
  – “8-byte integers” (Lin & Dyer p. 99 ): really only one byte?
  – Old API: org.apache.hadoop.mapred.Counters
      • long getCounter(…), incrCounter(…, long amount)
  – New API: org.apache.hadoop.mapreduce.Counter
      • long getValue(),   increment(long incr)
• How many?
  – Old API: static int MAX_COUNTER_LIMIT (next slide…)
  – New API: ???? (int countCounters() )
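The scale-and-truncate trick from Lin & Dyer can be illustrated with plain Python integers standing in for Hadoop's long-valued counters (a minimal sketch; SCALE is an arbitrary choice, not anything Hadoop defines):

```python
# Counters hold only integers, so a fractional quantity must be scaled up,
# truncated on write, and scaled back down on read.
SCALE = 1_000_000  # 6 decimal digits of precision

def to_counter(value: float) -> int:
    """Encode a fractional value as an integer counter increment."""
    return int(value * SCALE)  # truncates toward zero

def from_counter(count: int) -> float:
    """Decode an accumulated counter back to a fractional value."""
    return count / SCALE

# Tasks can only *increment*, so each task adds its scaled contribution.
counter = 0
for contribution in [0.125, 0.25, 0.0625]:
    counter += to_counter(contribution)

total = from_counter(counter)
```

With 8-byte longs and a scale of 10^6, this still leaves roughly 9 * 10^12 units of headroom before overflow.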
White p. 173, 227-231
•   incrCounter(…)
•   getCounters(…)
•   getCounter(…)
•   findCounter(…)




• http://developer.yahoo.com/hadoop/tutorial/module5.html#metrics
White p. 172


          Counters and Global State
Counter values are definitive only once a job has successfully completed
       - White p. 227

What about while a job is running?
• If a task reports progress, it sets a flag to indicate that its status
  change should be sent to the TaskTracker
    – The flag is checked in a separate thread every 3s, and if set, the
      TaskTracker is notified
    – What about counter updates?
• The TaskTracker sends heartbeats to the JobTracker (at least every 5s)
  which include the status of all tasks being run by the TaskTracker...
    – Counters (which can be relatively large) are sent less frequently
• JobClient receives the latest status by polling the JobTracker every 1s
• Clients can call JobClient’s getJob() to obtain a RunningJob instance
  with the latest status information (at time of the call?)
Representing Graphs
What’s a graph?
   Graphs are ubiquitous
       The Web (pages and hyperlink structure)
       Computer networks (computers and connections)
       Highways and railroads (cities and roads/tracks)
       Social networks
   G = (V,E), where
       V: the set of vertices (nodes)
       E: the set of edges (links)
       Either/Both may contain additional information
         • e.g. edge weights (e.g. cost, time, distance)
         • e.g. node values (e.g. PageRank)
   Graph types
       Directed vs. undirected
       Cyclic vs. acyclic
Some Graph Problems
   Finding shortest paths
       Routing Internet traffic and UPS trucks
   Finding minimum spanning trees
       Telco laying down fiber
   Finding Max Flow
       Airline scheduling
   Identify “special” nodes and communities
       Breaking up terrorist cells, spread of avian flu
   Bipartite matching
       Monster.com, Match.com
   And of course... PageRank
Graphs and MapReduce
   MapReduce graph processing typically involves
       Performing computations at each node
         • e.g. using node features, edge features, and local link structure
       Propagating computations
         • “traversing” the graph
   Key questions
       How do you represent graph data in MapReduce?
       How do you traverse a graph in MapReduce?
Graph Representation
   How do we encode graph structure suitably for
       computation
       propagation
   Two common approaches
        Adjacency matrix
        Adjacency list

   [Figure: example directed graph over nodes 1–4, used on the next slides]
Adjacency Matrices
Represent a graph as an |V| x |V| square matrix M
       Mjk = w → directed edge of weight w from node j to node k
         • w = 0 → no edge exists
         • Mii: main diagonal gives self-loop weights from node i to itself


      If undirected, use only top-right of matrix (symmetry)

                  1     2     3     4
             1    0     1     0     1
             2    1     0     1     1
             3    1     0     0     0
             4    1     0     1     0

   [Figure: the corresponding directed graph over nodes 1–4]
Adjacency Matrices: Critique
   Advantages:
       Amenable to mathematical manipulation
       Easy iteration for computation over out-links and in-links
          • Row Mj*: iterate over all out-links from node j
          • Column M*k: iterate over all in-links to node k


   Disadvantages
       Sparsity: wasted computations, wasted space
Adjacency Lists
Take adjacency matrices… and throw away all the zeros
   Hmm… look familiar…?



           1   2   3   4
       1   0   1   0   1             1: 2, 4
       2   1   0   1   1             2: 1, 3, 4
       3   1   0   0   0             3: 1
                                     4: 1, 3
       4   1   0   1   0
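"Throwing away the zeros" is a one-liner; this sketch rebuilds the adjacency lists on the right from the matrix on the left (nodes numbered 1–4 as on the slide):

```python
# The slide's 4-node adjacency matrix; row j, column k holds edge j -> k.
M = [
    [0, 1, 0, 1],   # node 1 -> 2, 4
    [1, 0, 1, 1],   # node 2 -> 1, 3, 4
    [1, 0, 0, 0],   # node 3 -> 1
    [1, 0, 1, 0],   # node 4 -> 1, 3
]

# Keep only the nonzero entries of each row: that IS the adjacency list.
adj = {
    j + 1: [k + 1 for k, w in enumerate(row) if w != 0]
    for j, row in enumerate(M)
}
```

For this graph, `adj` comes out as `{1: [2, 4], 2: [1, 3, 4], 3: [1], 4: [1, 3]}`, matching the lists on the slide.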
Inverted Index: Boolean Retrieval
 Doc 1                    Doc 2                 Doc 3            Doc 4
 one fish, two fish       red fish, blue fish   cat in the hat   green eggs and ham



                 1    2    3      4

         blue         1                                   blue       2

          cat              1                              cat        3

         egg                      1                       egg        4

         fish    1    1                                   fish       1   2

         green                    1                      green       4

         ham                      1                       ham        4

          hat              1                              hat        3

         one     1                                        one        1

          red         1                                   red        2

         two     1                                        two        1
Adjacency Lists: Critique
    Vs. Adjacency matrix
        Sparsity: More compact, fewer wasted computations
        Easy to compute over out-links
        What about computation over in-links?




             1   2    3   4
         1   0   1    0   1                  1: 2, 4
         2   1   0    1   1                  2: 1, 3, 4
         3   1   0    0   0                  3: 1
                                             4: 1, 3
         4   1   0    1   0
Single Source Shortest Path
Problem
   Find shortest path from a source node to one or more
    target nodes
       Shortest may mean lowest weight or cost, etc.
   Classic approach
       Dijkstra’s Algorithm
          • Maintain a global priority queue over all (node, distance) pairs
              • Sort queue by min distance to reach each node from the source node
              • Initialization: distance to source node = 0, all others = ∞
          • Visit nodes in order of (monotonically) increasing path length
              • Whenever a node is visited, no shorter path to it exists
          • When a node is visited
              • update its neighbours in the queue
              • remove it from the queue
Edsger W. Dijkstra
    May 11, 1930 – August 6, 2002
    Received the 1972 Turing Award
    Schlumberger Centennial Chair of Computer Science at
     UT Austin (1984-2000)


    http://en.wikipedia.org/wiki/Dijkstra’s_algorithm
        Wikipedia has nice animation of it in action
Dijkstra’s Algorithm
   Maintain global priority queue over all (node, distance) pairs
       Sort queue by min distance to reach each node from the source node
   Initialization
       distance to source node = 0
        distance to all other nodes = ∞
   While queue not empty
       visit next node (i.e. the node with shortest path length in the queue)
         • Output distance to it if desired
         • Update distance to each of its neighbours in the queue
         • Remove it from the queue
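The procedure above can be sketched in Python with a binary heap. One caveat: `heapq` has no decrease-key, so instead of updating neighbours in place this sketch pushes duplicate queue entries and skips stale ones on pop, which is equivalent. The example graph is hypothetical, chosen to be consistent with the final distances in the CLR example that follows:

```python
import heapq

def dijkstra(adj, source):
    """Shortest distances from source. adj: {node: [(neighbor, weight), ...]}."""
    dist = {source: 0}
    visited = set()
    pq = [(0, source)]                  # priority queue of (distance, node)
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:                # stale entry; a shorter path was found
            continue
        visited.add(u)                  # first visit => d is final for u
        for v, w in adj.get(u, ()):
            nd = d + w
            if v not in visited and nd < dist.get(v, float("inf")):
                dist[v] = nd            # "update its neighbours in the queue"
                heapq.heappush(pq, (nd, v))
    return dist

# Hypothetical 5-node weighted graph (labels s, t, x, y, z are made up).
graph = {
    "s": [("t", 10), ("y", 5)],
    "t": [("x", 1), ("y", 2)],
    "y": [("t", 3), ("x", 9), ("z", 2)],
    "x": [("z", 4)],
    "z": [("s", 7), ("x", 6)],
}
```

Running `dijkstra(graph, "s")` visits s, y, z, t, x in order of increasing path length and returns distances 0, 5, 7, 8, 9.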
Dijkstra’s Algorithm Example

   [Figure sequence: Dijkstra’s algorithm run step by step on a five-node
    weighted directed graph. The source starts at distance 0 and all other
    nodes at ∞; tentative distances tighten each time a node is visited,
    ending at the final shortest-path distances 0, 5, 7, 8, 9.]

Example from CLR
Problem
   Find shortest path from a source node to one or more
    target nodes
       Shortest may mean lowest weight or cost, etc.
   Classic approach
       Dijkstra’s Algorithm
Problem
   Find shortest path from a source node to one or more
    target nodes
       Shortest may mean lowest weight or cost, etc.
   Classic approach
       Dijkstra’s Algorithm
   MapReduce approach
       Parallel Breadth-First Search (BFS)
Finding the Shortest Path
   Assume unweighted graph (for now…)
   General Inductive Approach
       Initialization
          • DISTANCETO(source s) = 0
          • For any node n connected to s, DISTANCETO(n) = 1
          • Else DISTANCETO(any other node p) = ∞
        For each iteration
          • For every node n
               • For every neighbor m ∈ M(n),
                   DISTANCETO(m) = 1 + min( DISTANCETO(n) )
                              [Figure: node n reachable from s via intermediate
                               nodes m1, m2, m3, themselves at distances
                               d1, d2, d3 from s]
Visualizing Parallel BFS

   [Figure: example graph over nodes n0–n9, showing how parallel BFS expands
    the frontier one hop per iteration]
From Intuition to Algorithm
    Representation
        Key: node n
        Value: d (distance from start)
          • Also: adjacency list (list of nodes reachable from n)
        Initialization: d =  for all nodes except start node
    Mapper
        ∀ m ∈ adjacency list: emit (m, d + 1)
    Sort/Shuffle
        Groups distances by reachable nodes
    Reducer
        Selects minimum distance path for each reachable node
        Additional bookkeeping needed to keep track of actual path
BFS Pseudo-Code




              What type should we use for the values?
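The missing pseudo-code can be approximated by a single-machine sketch of one iteration. Tagged tuples play the role of the value type the question above hints at: each value is either the node structure or a candidate distance (illustrative Python, not the Hadoop API):

```python
from collections import defaultdict

INF = float("inf")

def bfs_iteration(nodes):
    """One parallel-BFS iteration over {nid: (distance, adjacency_list)}."""
    # Map: re-emit each node's structure (to preserve the graph), plus a
    # tentative distance d + 1 for every neighbor of a discovered node.
    emitted = defaultdict(list)
    for nid, (d, adj) in nodes.items():
        emitted[nid].append(("node", (d, adj)))
        if d < INF:
            for m in adj:
                emitted[m].append(("dist", d + 1))
    # Shuffle groups by nid; Reduce recovers the structure and keeps the
    # minimum distance seen for each node.
    result = {}
    for nid, values in emitted.items():
        d, adj, dmin = INF, [], INF
        for tag, v in values:
            if tag == "node":
                d, adj = v
            else:
                dmin = min(dmin, v)
        result[nid] = (min(d, dmin), adj)
    return result

# Tiny example: source node 1 at distance 0, everything else undiscovered.
graph0 = {1: (0, [2, 4]), 2: (INF, [3]), 3: (INF, []), 4: (INF, [3])}
step1 = bfs_iteration(graph0)   # discovers nodes 2 and 4 at distance 1
```

A driver would call `bfs_iteration` repeatedly until no distance changes; a second call here discovers node 3 at distance 2.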
Multiple Iterations Needed
    Each iteration advances the “frontier” by one hop
        Subsequent iterations find more reachable nodes
        Multiple iterations are needed to explore entire graph
    Preserving graph structure
        Problem: Where did the adjacency list go?
        Solution: mapper emits (n, adjacency list) as well
Stopping Criterion
    How many iterations are needed?
    Convince yourself: when a node is first “discovered”,
     we’ve found the shortest path
    Now answer the question...
        Six degrees of separation?
    Practicalities of implementation in MapReduce
Comparison to Dijkstra
   Dijkstra’s algorithm is more efficient
       At any step it only pursues edges from the minimum-cost path
        inside the frontier
   MapReduce explores all paths in parallel
       Lots of “waste”
       Useful work is only done at the “frontier”
   Why can’t we do better using MapReduce?
Weighted Edges
   Now consider non-unit, positive edge weights
       Why can’t edge weights be negative?
   Adjacency list now includes a weight w for each edge
        In mapper, emit (m, d + w) instead of (m, d + 1) for each neighbor m,
         where w is the weight of the edge to m
   Is that all?
Stopping Criterion
    How many iterations are needed in parallel BFS (positive
     edge weight case)?
    Convince yourself: when a node is first “discovered”,
     we’ve found the shortest path
Additional Complexities



   [Figure: search frontier on a weighted graph (nodes s, p, q, r, n1–n9;
    edge weights 1 and 10). Node r is first reached from s via a weight-10
    edge, but a longer hop-count path through p and q over weight-1 edges is
    cheaper, so with weighted edges a node’s first discovery need not be its
    shortest path]
Stopping Criterion
    How many iterations are needed in parallel BFS (positive
     edge weight case)?
    Practicalities of implementation in MapReduce
        Unrelated to stopping… where have we seen min/max before?
In General: Graphs and MapReduce
   Graph algorithms typically involve
       Performing computations at each node: based on node features,
        edge features, and local link structure
       Propagating computations: “traversing” the graph
   Generic recipe
       Represent graphs as adjacency lists
       Perform local computations in mapper
       Pass along partial results via outlinks, keyed by destination node
       Perform aggregation in reducer on inlinks to a node
       Iterate until convergence: controlled by external “driver”
       Don’t forget to pass the graph structure between iterations
PageRank
Random Walks Over the Web
   Random surfer model
       User starts at a random Web page
       User randomly clicks on links, surfing from page to page
   PageRank
       Characterizes the amount of time spent on any given page
       Mathematically, a probability distribution over pages
   PageRank captures notions of page importance
       Correspondence to human intuition?
       One of thousands of features used in web search
       Note: query-independent
PageRank: Defined
Given page x with inlinks t1…tn, where
       C(t) is the out-degree of t
       α is the probability of a random jump
       N is the total number of nodes in the graph

         PR(x) = α (1/N) + (1 − α) Σi=1..n PR(ti) / C(ti)




                         [Figure: page x with inlinks from pages t1, t2, …, tn]
Computing PageRank
   Properties of PageRank
       Can be computed iteratively
       Effects at each iteration are local
   Sketch of algorithm:
       Start with seed PRi values
       Each page distributes PRi “credit” to all pages it links to
       Each target page adds up “credit” from multiple in-bound links to
        compute PRi+1
       Iterate until values converge
Simplified PageRank
   First, tackle the simple case:
       No random jump factor
       No dangling links
   Then, factor in these complexities…
       Why do we need the random jump?
       Where do dangling links come from?
Sample PageRank Iteration (1)



  Iteration 1

   [Figure: each node starts with PR = 0.2 and sends an equal share along each
    out-link (e.g. n5 sends 0.066 to each of its three out-links); summing the
    incoming shares gives n1 = 0.066, n2 = 0.166, n3 = 0.166, n4 = 0.3,
    n5 = 0.3]
Sample PageRank Iteration (2)



  Iteration 2

   [Figure: repeating the distribution from the iteration-1 values gives
    n1 = 0.1, n2 = 0.133, n3 = 0.183, n4 = 0.2, n5 = 0.383]
PageRank in MapReduce


   [Figure: Map re-emits each node’s structure (n1 [n2, n4], n2 [n3, n5],
    n3 [n4], n4 [n5], n5 [n1, n2, n3]) and sends PageRank mass keyed by each
    out-link; shuffle/sort groups values by destination node; Reduce sums the
    incoming mass and re-emits each node’s structure]
PageRank Pseudo-Code
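A single-machine sketch of one simplified-PageRank iteration (no random jump, no dangling nodes; illustrative Python, not the Hadoop API), following the same structure-plus-mass value convention as the BFS recipe:

```python
from collections import defaultdict

def pagerank_iteration(nodes):
    """One simplified-PageRank iteration over {nid: (rank, adjacency_list)}."""
    emitted = defaultdict(list)
    for nid, (p, adj) in nodes.items():
        emitted[nid].append(("node", adj))             # pass graph structure
        for m in adj:
            emitted[m].append(("mass", p / len(adj)))  # distribute credit
    result = {}
    for nid, values in emitted.items():
        adj, rank = [], 0.0
        for tag, v in values:
            if tag == "node":
                adj = v
            else:
                rank += v                              # sum incoming credit
        result[nid] = (rank, adj)
    return result

# The five-node example graph from the lecture, ranks initialized to 1/5.
graph = {
    "n1": (0.2, ["n2", "n4"]),
    "n2": (0.2, ["n3", "n5"]),
    "n3": (0.2, ["n4"]),
    "n4": (0.2, ["n5"]),
    "n5": (0.2, ["n1", "n2", "n3"]),
}
ranks1 = pagerank_iteration(graph)
```

One call reproduces the "Sample PageRank Iteration (1)" values (n1 ≈ 0.066, n4 = 0.3, …), and with no dangling nodes the total mass stays at 1.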
Complete PageRank
   Two additional complexities
       What is the proper treatment of dangling nodes?
       How do we factor in the random jump factor?
   Solution:
       Second pass to redistribute “missing PageRank mass” and
        account for random jumps
                 1          m    
         p'      (1   )  p 
                G           G    
                                 
       p is PageRank value from before, p' is updated PageRank value
       |G| is the number of nodes in the graph
       m is the missing PageRank mass
   How to perform bookkeeping for dangling nodes?
   How to implement this 2nd pass in Hadoop?
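The second-pass update can be checked numerically (a sketch; alpha, m, and the rank values below are made-up numbers): after redistributing the missing mass and applying the random jump, the total mass sums back to 1.

```python
def second_pass(p, m, G, alpha):
    """p' = alpha*(1/|G|) + (1 - alpha)*(m/|G| + p), where |G| is the number
    of nodes and m is the PageRank mass lost to dangling nodes."""
    return alpha * (1.0 / G) + (1.0 - alpha) * (m / G + p)

# Three nodes whose ranks sum to 0.6, so mass m = 0.4 went missing.
ranks = [0.3, 0.2, 0.1]
updated = [second_pass(p, m=0.4, G=3, alpha=0.15) for p in ranks]
```

Summing `updated` gives 1.0: the alpha terms contribute α in total, and the (1 − α) factor multiplies (m + Σp) = 1.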
PageRank Convergence
   Alternative convergence criteria
       Iterate until PageRank values don’t change
       Iterate until PageRank rankings don’t change
       Fixed number of iterations
   Convergence for web graphs?
Local Aggregation

   [Figure: a mapper processing several nodes m1, m2, m3 that all have
    out-links to the same destination node n]

    Use combiners
        BFS uses min, PageRank uses sum
          • associative and commutative

       In-mapper combining design pattern also applicable
       Opportunity for aggregation when mapper sees multiple nodes
        with out-links to same destination node
   How do we maximize opportunities for local aggregation?
       Partition the dataset into clusters with many internal and few
        external links
       Chicken-and-egg problem: don’t we need MapReduce to do this?
         • Use cheap heuristics
             • e.g. social network: zip code or school
             • e.g. for web: language or domain name
             • etc.
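The in-mapper combining pattern mentioned above can be sketched as follows (illustrative class, not the Hadoop API; the names are hypothetical): partial sums are buffered per destination node and emitted once when the mapper finishes, instead of one emit per (node, out-link).

```python
from collections import defaultdict

class PageRankMapper:
    """In-mapper combining: buffer partial PageRank sums per destination."""

    def __init__(self):
        self.buffer = defaultdict(float)

    def map(self, nid, rank, adj):
        share = rank / len(adj)
        for m in adj:
            # Local aggregation is safe because sum is associative/commutative.
            self.buffer[m] += share

    def close(self):
        # One emit per distinct destination seen by this mapper.
        return sorted(self.buffer.items())

# Two nodes with out-links to the same destination n4: their shares are
# combined locally, so n4 gets a single emitted value.
mapper = PageRankMapper()
mapper.map("n1", 0.2, ["n2", "n4"])
mapper.map("n3", 0.2, ["n4"])
combined = dict(mapper.close())
```

Here `combined["n4"]` is 0.1 + 0.2 = 0.3 in one record, which is exactly the intermediate data the shuffle no longer has to move.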
Limitations of MapReduce
   Amount of intermediate data (to shuffle) is proportional to
    number of edges in graph
   We have considered sparse graphs (i.e. with few edges),
    minimizing such intermediate data
   For dense graphs with O(n^2) edges, runtime would be
    dominated by copying intermediate data
   Consequently, MapReduce algorithms are often
    impractical on large, dense graphs
   But isn’t data-intensive computing exactly what
    MapReduce is supposed to help us with??
   See (Lin and Dyer, p. 101)
In-class Exercise:

 All Pairs PBFS
Single-source parallel BFS:

1: class Mapper
2: method Map( Node N )
3:      d = N.Distance
4:      Emit( N.id, N )
5:      for all nid m in N.AdjacencyList do
6:        Emit( m, d + 1 )

1: class Reducer
2: method Reduce( nid m, [d1, d2, ...] )
3:      dmin = ∞
4:      Node M = null
5:      for all d in [d1, d2, ...] do
6:        if IsNode(d) then
7:          M = d
8:        else if d < dmin then
9:          dmin = d
10:     M.Distance = dmin
11:     Emit( M )

All-pairs PBFS (one source id s per copy of the graph):

1: class Mapper
2: method Map( sid s, Node N )
3:      d = N[s].Distance
4:      Emit( Pair(sid, N.id), N )
5:      for all nid m in N.AdjacencyList do
6:        Emit( Pair(sid, m), d + 1 )

1: class Reducer
2: method Reduce( Pair(sid s, nid m), [d1, d2, ...] )
3:      dmin = ∞
4:      M = null
5:      for all d in [d1, d2, ...] do
6:        if IsNode(d) then
7:          M = d
8:        else if d < dmin then
9:          dmin = d
10:     M[s].Distance = dmin
11:     Emit( M )

All-pairs PBFS, emitting the graph structure only once (for sid = 0):

1: class Mapper
2: method Map( sid s, Node N )
3:      d = N[s].Distance
4:      if sid = 0 then
5:        Emit( Pair(sid, N.id), N )
6:      for all nid m in N.AdjacencyList do
7:        Emit( Pair(sid, m), d + 1 )

Partition: all pairs with the same 2nd nid go to the same reducer
KeyComp: order by sid, then nid, sorting sid = 0 first

1: class Reducer
2: M = null
3: method Reduce( Pair(sid s, nid m), [d1, d2, ...] )
4:      dmin = ∞
5:      for all d in [d1, d2, ...] do
6:        if IsNode(d) then
7:          M = d
8:        else if d < dmin then
9:          dmin = d
10:     M[s].Distance = dmin
11:     Emit( M )

Adventures in Crowdsourcing : Toward Safer Content Moderation & Better Suppor...Matthew Lease
 
AI & Work, with Transparency & the Crowd
AI & Work, with Transparency & the Crowd AI & Work, with Transparency & the Crowd
AI & Work, with Transparency & the Crowd Matthew Lease
 
Designing Human-AI Partnerships to Combat Misinfomation
Designing Human-AI Partnerships to Combat Misinfomation Designing Human-AI Partnerships to Combat Misinfomation
Designing Human-AI Partnerships to Combat Misinfomation Matthew Lease
 
Designing at the Intersection of HCI & AI: Misinformation & Crowdsourced Anno...
Designing at the Intersection of HCI & AI: Misinformation & Crowdsourced Anno...Designing at the Intersection of HCI & AI: Misinformation & Crowdsourced Anno...
Designing at the Intersection of HCI & AI: Misinformation & Crowdsourced Anno...Matthew Lease
 
But Who Protects the Moderators?
But Who Protects the Moderators?But Who Protects the Moderators?
But Who Protects the Moderators?Matthew Lease
 
Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact...
Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact...Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact...
Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact...Matthew Lease
 
Mix and Match: Collaborative Expert-Crowd Judging for Building Test Collectio...
Mix and Match: Collaborative Expert-Crowd Judging for Building Test Collectio...Mix and Match: Collaborative Expert-Crowd Judging for Building Test Collectio...
Mix and Match: Collaborative Expert-Crowd Judging for Building Test Collectio...Matthew Lease
 
Fact Checking & Information Retrieval
Fact Checking & Information RetrievalFact Checking & Information Retrieval
Fact Checking & Information RetrievalMatthew Lease
 
Your Behavior Signals Your Reliability: Modeling Crowd Behavioral Traces to E...
Your Behavior Signals Your Reliability: Modeling Crowd Behavioral Traces to E...Your Behavior Signals Your Reliability: Modeling Crowd Behavioral Traces to E...
Your Behavior Signals Your Reliability: Modeling Crowd Behavioral Traces to E...Matthew Lease
 
What Can Machine Learning & Crowdsourcing Do for You? Exploring New Tools for...
What Can Machine Learning & Crowdsourcing Do for You? Exploring New Tools for...What Can Machine Learning & Crowdsourcing Do for You? Exploring New Tools for...
What Can Machine Learning & Crowdsourcing Do for You? Exploring New Tools for...Matthew Lease
 
Deep Learning for Information Retrieval: Models, Progress, & Opportunities
Deep Learning for Information Retrieval: Models, Progress, & OpportunitiesDeep Learning for Information Retrieval: Models, Progress, & Opportunities
Deep Learning for Information Retrieval: Models, Progress, & OpportunitiesMatthew Lease
 
Systematic Review is e-Discovery in Doctor’s Clothing
Systematic Review is e-Discovery in Doctor’s ClothingSystematic Review is e-Discovery in Doctor’s Clothing
Systematic Review is e-Discovery in Doctor’s ClothingMatthew Lease
 
The Rise of Crowd Computing (July 7, 2016)
The Rise of Crowd Computing (July 7, 2016)The Rise of Crowd Computing (July 7, 2016)
The Rise of Crowd Computing (July 7, 2016)Matthew Lease
 
The Rise of Crowd Computing - 2016
The Rise of Crowd Computing - 2016The Rise of Crowd Computing - 2016
The Rise of Crowd Computing - 2016Matthew Lease
 
The Rise of Crowd Computing (December 2015)
The Rise of Crowd Computing (December 2015)The Rise of Crowd Computing (December 2015)
The Rise of Crowd Computing (December 2015)Matthew Lease
 
Toward Better Crowdsourcing Science
 Toward Better Crowdsourcing Science Toward Better Crowdsourcing Science
Toward Better Crowdsourcing ScienceMatthew Lease
 
Beyond Mechanical Turk: An Analysis of Paid Crowd Work Platforms
Beyond Mechanical Turk: An Analysis of Paid Crowd Work PlatformsBeyond Mechanical Turk: An Analysis of Paid Crowd Work Platforms
Beyond Mechanical Turk: An Analysis of Paid Crowd Work PlatformsMatthew Lease
 

Mehr von Matthew Lease (20)

Automated Models for Quantifying Centrality of Survey Responses
Automated Models for Quantifying Centrality of Survey ResponsesAutomated Models for Quantifying Centrality of Survey Responses
Automated Models for Quantifying Centrality of Survey Responses
 
Key Challenges in Moderating Social Media: Accuracy, Cost, Scalability, and S...
Key Challenges in Moderating Social Media: Accuracy, Cost, Scalability, and S...Key Challenges in Moderating Social Media: Accuracy, Cost, Scalability, and S...
Key Challenges in Moderating Social Media: Accuracy, Cost, Scalability, and S...
 
Explainable Fact Checking with Humans in-the-loop
Explainable Fact Checking with Humans in-the-loopExplainable Fact Checking with Humans in-the-loop
Explainable Fact Checking with Humans in-the-loop
 
Adventures in Crowdsourcing : Toward Safer Content Moderation & Better Suppor...
Adventures in Crowdsourcing : Toward Safer Content Moderation & Better Suppor...Adventures in Crowdsourcing : Toward Safer Content Moderation & Better Suppor...
Adventures in Crowdsourcing : Toward Safer Content Moderation & Better Suppor...
 
AI & Work, with Transparency & the Crowd
AI & Work, with Transparency & the Crowd AI & Work, with Transparency & the Crowd
AI & Work, with Transparency & the Crowd
 
Designing Human-AI Partnerships to Combat Misinfomation
Designing Human-AI Partnerships to Combat Misinfomation Designing Human-AI Partnerships to Combat Misinfomation
Designing Human-AI Partnerships to Combat Misinfomation
 
Designing at the Intersection of HCI & AI: Misinformation & Crowdsourced Anno...
Designing at the Intersection of HCI & AI: Misinformation & Crowdsourced Anno...Designing at the Intersection of HCI & AI: Misinformation & Crowdsourced Anno...
Designing at the Intersection of HCI & AI: Misinformation & Crowdsourced Anno...
 
But Who Protects the Moderators?
But Who Protects the Moderators?But Who Protects the Moderators?
But Who Protects the Moderators?
 
Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact...
Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact...Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact...
Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact...
 
Mix and Match: Collaborative Expert-Crowd Judging for Building Test Collectio...
Mix and Match: Collaborative Expert-Crowd Judging for Building Test Collectio...Mix and Match: Collaborative Expert-Crowd Judging for Building Test Collectio...
Mix and Match: Collaborative Expert-Crowd Judging for Building Test Collectio...
 
Fact Checking & Information Retrieval
Fact Checking & Information RetrievalFact Checking & Information Retrieval
Fact Checking & Information Retrieval
 
Your Behavior Signals Your Reliability: Modeling Crowd Behavioral Traces to E...
Your Behavior Signals Your Reliability: Modeling Crowd Behavioral Traces to E...Your Behavior Signals Your Reliability: Modeling Crowd Behavioral Traces to E...
Your Behavior Signals Your Reliability: Modeling Crowd Behavioral Traces to E...
 
What Can Machine Learning & Crowdsourcing Do for You? Exploring New Tools for...
What Can Machine Learning & Crowdsourcing Do for You? Exploring New Tools for...What Can Machine Learning & Crowdsourcing Do for You? Exploring New Tools for...
What Can Machine Learning & Crowdsourcing Do for You? Exploring New Tools for...
 
Deep Learning for Information Retrieval: Models, Progress, & Opportunities
Deep Learning for Information Retrieval: Models, Progress, & OpportunitiesDeep Learning for Information Retrieval: Models, Progress, & Opportunities
Deep Learning for Information Retrieval: Models, Progress, & Opportunities
 
Systematic Review is e-Discovery in Doctor’s Clothing
Systematic Review is e-Discovery in Doctor’s ClothingSystematic Review is e-Discovery in Doctor’s Clothing
Systematic Review is e-Discovery in Doctor’s Clothing
 
The Rise of Crowd Computing (July 7, 2016)
The Rise of Crowd Computing (July 7, 2016)The Rise of Crowd Computing (July 7, 2016)
The Rise of Crowd Computing (July 7, 2016)
 
The Rise of Crowd Computing - 2016
The Rise of Crowd Computing - 2016The Rise of Crowd Computing - 2016
The Rise of Crowd Computing - 2016
 
The Rise of Crowd Computing (December 2015)
The Rise of Crowd Computing (December 2015)The Rise of Crowd Computing (December 2015)
The Rise of Crowd Computing (December 2015)
 
Toward Better Crowdsourcing Science
 Toward Better Crowdsourcing Science Toward Better Crowdsourcing Science
Toward Better Crowdsourcing Science
 
Beyond Mechanical Turk: An Analysis of Paid Crowd Work Platforms
Beyond Mechanical Turk: An Analysis of Paid Crowd Work PlatformsBeyond Mechanical Turk: An Analysis of Paid Crowd Work Platforms
Beyond Mechanical Turk: An Analysis of Paid Crowd Work Platforms
 

Kürzlich hochgeladen

Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clashcharlottematthew16
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brandgvaughan
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenHervé Boutemy
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piececharlottematthew16
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 3652toLead Limited
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupFlorian Wilhelm
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):comworks
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek SchlawackFwdays
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfAlex Barbosa Coqueiro
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfAddepto
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsMiki Katsuragi
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity PlanDatabarracks
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsRizwan Syed
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyAlfredo García Lavilla
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostZilliz
 
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo DayH2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo DaySri Ambati
 

Kürzlich hochgeladen (20)

Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clash
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brand
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache Maven
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piece
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdf
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering Tips
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity Plan
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easy
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
 
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo DayH2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
 

Data-Intensive Computing for Text Analysis

  • 6. Hadoop Counters & Global State (White p. 172)
    • Hadoop's Counters provide its only means for sharing/modifying global distributed state
      – Built-in safeguards for distributed modification
        • e.g. two tasks try to increment a counter simultaneously
      – Lightweight: only long bytes… per counter
      – Limited control
        • create, read, and increment
        • no destroy, arbitrary set, or decrement
    • Advertised use: progress tracking and logging
    • To what extent might we "abuse" counters for tracking/updating interesting shared state?
  • 7. How high (and precisely) can you count?
    • How precise? Integer representation only
      – To approximate fractional values, scale and truncate (Lin & Dyer p. 99)
    • How high?
      – "8-byte integers" (Lin & Dyer p. 99): really only one byte?
      – Old API: org.apache.hadoop.mapred.Counters
        • long getCounter(…), incrCounter(…, long amount)
      – New API: org.apache.hadoop.mapreduce.Counter
        • long getValue(), increment(long incr)
    • How many?
      – Old API: static int MAX_COUNTER_LIMIT (next slide…)
      – New API: ???? (int countCounters())
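Slide 7's point that counters hold only integers, so fractional values must be scaled and truncated (Lin & Dyer p. 99), can be sketched in plain Python. The scale factor below is an arbitrary illustrative choice, not one from the slides:

```python
SCALE = 1_000_000  # fixed-point scale: six decimal digits of precision

def to_counter(value: float) -> int:
    """Encode a fractional value as an integer counter increment."""
    return int(value * SCALE)  # truncates toward zero

def from_counter(count: int) -> float:
    """Decode an accumulated counter back to a float."""
    return count / SCALE

# Tasks increment the counter with scaled values; the framework sums them.
total = to_counter(0.25) + to_counter(0.5) + to_counter(0.125)
print(from_counter(total))  # 0.875
```

Truncation means precision is bounded by the chosen scale, so the decoded sum is an approximation in general.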
  • 8. (screenshot slide, no text)
  • 9. (screenshot slide, no text)
  • 10. White p. 173, 227-231
    • incrCounter(…)
    • getCounters(…)
    • getCounter(…)
    • findCounter(…)
    • http://developer.yahoo.com/hadoop/tutorial/module5.html#metrics
  • 11. Counters and Global State (White p. 172)
    "Counter values are definitive only once a job has successfully completed" - White p. 227
    What about while a job is running?
    • If a task reports progress, it sets a flag to indicate a status change should be sent to the TaskTracker
      – The flag is checked in a separate thread every 3s, and if set, the TaskTracker is notified
      – What about counter updates?
    • The TaskTracker sends heartbeats to the JobTracker (at least every 5s) which include the status of all tasks being run by the TaskTracker…
      – Counters (which can be relatively larger) are sent less frequently
    • JobClient receives the latest status by polling the JobTracker every 1s
    • Clients can call JobClient's getJob() to obtain a RunningJob instance with the latest status information (at time of the call?)
  • 13. What's a graph?
    • Graphs are ubiquitous
      – The Web (pages and hyperlink structure)
      – Computer networks (computers and connections)
      – Highways and railroads (cities and roads/tracks)
      – Social networks
    • G = (V, E), where
      – V: the set of vertices (nodes)
      – E: the set of edges (links)
      – Either/both may contain additional information
        • e.g. edge weights (cost, time, distance)
        • e.g. node values (PageRank)
    • Graph types
      – Directed vs. undirected
      – Cyclic vs. acyclic
  • 14. Some Graph Problems
    • Finding shortest paths: routing Internet traffic and UPS trucks
    • Finding minimum spanning trees: telco laying down fiber
    • Finding max flow: airline scheduling
    • Identifying "special" nodes and communities: breaking up terrorist cells, spread of avian flu
    • Bipartite matching: Monster.com, Match.com
    • And of course… PageRank
  • 15. Graphs and MapReduce
    • MapReduce graph processing typically involves
      – Performing computations at each node, e.g. using node features, edge features, and local link structure
      – Propagating computations: "traversing" the graph
    • Key questions
      – How do you represent graph data in MapReduce?
      – How do you traverse a graph in MapReduce?
  • 16. Graph Representation
    • How do we encode graph structure suitably for computation and propagation?
    • Two common approaches
      – Adjacency matrix
      – Adjacency list
    (figure: example directed graph over nodes 1–4)
  • 17. Adjacency Matrices
    Represent a graph as an |V| x |V| square matrix M
    • Mjk = w: directed edge of weight w from node j to node k
      – w = 0: no edge exists
      – Mii: main diagonal gives self-loop weights from node i to itself
    • If undirected, use only the top-right of the matrix (symmetry)

        1 2 3 4
      1 0 1 0 1
      2 1 0 1 1
      3 1 0 0 0
      4 1 0 1 0
  • 18. Adjacency Matrices: Critique
    • Advantages:
      – Amenable to mathematical manipulation
      – Easy iteration for computation over out-links and in-links
        • row Mj*: all out-links from node j
        • column M*k: all in-links to node k
    • Disadvantages
      – Sparsity: wasted computations, wasted space
  • 19. Adjacency Lists
    Take adjacency matrices… and throw away all the zeros
    • Hmm… look familiar…?

        1 2 3 4        1: 2, 4
      1 0 1 0 1        2: 1, 3, 4
      2 1 0 1 1        3: 1
      3 1 0 0 0        4: 1, 3
      4 1 0 1 0
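The "throw away all the zeros" conversion is mechanical; a plain-Python sketch on the same 4-node example (0-indexed nodes):

```python
def to_adjacency_list(M):
    """Keep, for each node, only the targets of its nonzero entries."""
    return {j: [k for k, w in enumerate(row) if w] for j, row in enumerate(M)}

# The 4-node example matrix from the slides.
M = [[0, 1, 0, 1],
     [1, 0, 1, 1],
     [1, 0, 0, 0],
     [1, 0, 1, 0]]
adj = to_adjacency_list(M)
print(adj)  # {0: [1, 3], 1: [0, 2, 3], 2: [0], 3: [0, 2]}
```

Note the lists only make out-links cheap to enumerate, which is exactly the critique on the next slide.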
  • 20. Inverted Index: Boolean Retrieval
    Doc 1: one fish, two fish; Doc 2: red fish, blue fish; Doc 3: cat in the hat; Doc 4: green eggs and ham
    Postings: blue → 2; cat → 3; egg → 4; fish → 1, 2; green → 4; ham → 4; hat → 3; one → 1; red → 2; two → 1
  • 21. Adjacency Lists: Critique
    • Vs. adjacency matrix
      – Sparsity: more compact, fewer wasted computations
      – Easy to compute over out-links
      – What about computation over in-links?
    (same example matrix and lists as slide 19)
  • 23. Problem
    • Find the shortest path from a source node to one or more target nodes
      – Shortest may mean lowest weight, cost, etc.
    • Classic approach: Dijkstra's Algorithm
      – Maintain a global priority queue over all (node, distance) pairs
      – Sort queue by min distance to reach each node from the source node
      – Initialization: distance to source node = 0, all others = ∞
      – Visit nodes in order of (monotonically) increasing path length
        • Whenever a node is visited, no shorter path to it exists
        • When each node is visited: update its neighbours in the queue, then remove the node from the queue
  • 24. Edsger W. Dijkstra
    • May 11, 1930 – August 6, 2002
    • Received the 1972 Turing Award
    • Schlumberger Centennial Chair of Computer Science at UT Austin (1984-2000)
    • http://en.wikipedia.org/wiki/Dijkstra’s_algorithm (Wikipedia has a nice animation of it in action)
  • 25. Dijkstra's Algorithm
    • Maintain a global priority queue over all (node, distance) pairs
    • Sort queue by min distance to reach each node from the source node
    • Initialization
      – distance to source node = 0
      – distance to all other nodes = ∞
    • While queue not empty
      – visit next node (i.e. the node with shortest path length in the queue)
        • Output its distance if desired
        • Update the distance to each of its neighbours in the queue
        • Remove it from the queue
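The steps above can be sketched in plain Python with a lazy-deletion priority queue (stale queue entries are skipped rather than updated in place). The weighted graph below is a made-up example, not the CLR figure from the slides:

```python
import heapq

def dijkstra(adj, source):
    """adj: {node: [(neighbor, weight), ...]}. Returns shortest distances."""
    dist = {source: 0}
    queue = [(0, source)]      # priority queue of (distance, node) pairs
    visited = set()
    while queue:
        d, n = heapq.heappop(queue)
        if n in visited:
            continue           # stale entry: node already finalized
        visited.add(n)         # no shorter path to n can exist now
        for m, w in adj.get(n, []):
            nd = d + w
            if nd < dist.get(m, float("inf")):
                dist[m] = nd
                heapq.heappush(queue, (nd, m))
    return dist

# Hypothetical weighted graph, not the CLR example from the slides.
adj = {"s": [("a", 10), ("b", 5)],
       "b": [("a", 3), ("c", 9)],
       "a": [("c", 1)],
       "c": []}
print(dijkstra(adj, "s"))  # distances: s=0, b=5, a=8, c=9
```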
  • 26.–31. Dijkstra's Algorithm Example, one step per slide (figures; example from CLR)
  • 32. Problem
    • Find the shortest path from a source node to one or more target nodes
      – Shortest may mean lowest weight, cost, etc.
    • Classic approach: Dijkstra's Algorithm
  • 33. Problem
    • Find the shortest path from a source node to one or more target nodes
      – Shortest may mean lowest weight, cost, etc.
    • Classic approach: Dijkstra's Algorithm
    • MapReduce approach: parallel breadth-first search (BFS)
  • 34. Finding the Shortest Path
    • Assume an unweighted graph (for now…)
    • General inductive approach
      – Initialization
        • DistanceTo(source s) = 0
        • For any node n connected to s, DistanceTo(n) = 1
        • Else DistanceTo(any other node p) = ∞
      – For each iteration, for every node n, for every neighbor m ∈ M(n):
        • DistanceTo(m) = 1 + min(DistanceTo(n)) over all n linking to m
    (figure: source s reaching node m via paths of lengths d1, d2, d3)
  • 35. Visualizing Parallel BFS (figure: example graph with nodes n0–n9)
  • 36. From Intuition to Algorithm
    • Representation
      – Key: node n
      – Value: d (distance from start), plus the adjacency list (nodes reachable from n)
      – Initialization: d = ∞ for all nodes except the start node
    • Mapper: for each m in the adjacency list, emit (m, d + 1)
    • Sort/shuffle: groups distances by reachable node
    • Reducer: selects the minimum-distance path for each reachable node
      – Additional bookkeeping is needed to keep track of the actual path
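One parallel-BFS iteration can be simulated outside Hadoop. The sketch below mimics map, shuffle, and reduce in plain Python on a made-up unweighted graph; in a real job these would be Hadoop mapper/reducer classes and the driver would resubmit the job each iteration:

```python
from collections import defaultdict

INF = float("inf")

def bfs_map(node, dist, neighbors):
    """Mapper: pass the graph structure along, and propagate d+1 to neighbors."""
    yield node, ("graph", neighbors)
    if dist < INF:
        for m in neighbors:
            yield m, ("dist", dist + 1)

def bfs_reduce(node, values):
    """Reducer: recover the structure and keep the minimum distance seen."""
    neighbors, best = [], INF
    for kind, v in values:
        if kind == "graph":
            neighbors = v
        else:
            best = min(best, v)
    return node, best, neighbors

def bfs_iteration(state):
    """state: {node: (dist, neighbors)}. One full map-shuffle-reduce pass."""
    grouped = defaultdict(list)           # the "shuffle": group by key
    for node, (dist, neighbors) in state.items():
        for key, value in bfs_map(node, dist, neighbors):
            grouped[key].append(value)
    new_state = {}
    for node, values in grouped.items():
        n, d, nbrs = bfs_reduce(node, values)
        # keep the previous distance if this pass found nothing shorter
        new_state[n] = (min(d, state[n][0]), nbrs)
    return new_state

# Hypothetical graph; source 'a' starts at 0, all other nodes at infinity.
state = {"a": (0, ["b", "c"]), "b": (INF, ["d"]),
         "c": (INF, ["d"]), "d": (INF, [])}
for _ in range(3):   # driver: each iteration advances the frontier one hop
    state = bfs_iteration(state)
print({n: d for n, (d, _) in state.items()})  # a:0, b:1, c:1, d:2
```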
  • 37. BFS Pseudo-Code: what type should we use for the values?
  • 38. Multiple Iterations Needed
    • Each iteration advances the "frontier" by one hop
      – Subsequent iterations find more reachable nodes
      – Multiple iterations are needed to explore the entire graph
    • Preserving graph structure
      – Problem: where did the adjacency list go?
      – Solution: the mapper emits (n, adjacency list) as well
  • 39. Stopping Criterion
    • How many iterations are needed?
      – Convince yourself: when a node is first "discovered", we've found the shortest path
    • Now answer the question…
      – Six degrees of separation?
      – Practicalities of implementation in MapReduce
  • 40. Comparison to Dijkstra
    • Dijkstra's algorithm is more efficient
      – At any step it only pursues edges from the minimum-cost path inside the frontier
    • MapReduce explores all paths in parallel
      – Lots of "waste": useful work is only done at the "frontier"
      – Why can't we do better using MapReduce?
  • 41. Weighted Edges
    • Now consider non-unit, positive edge weights
      – Why can't edge weights be negative?
    • The adjacency list now includes a weight w for each edge
      – In the mapper, emit (m, d + w) instead of (m, d + 1) for each node m
    • Is that all?
  • 42. Stopping Criterion
    • How many iterations are needed in parallel BFS (positive edge weight case)?
    • Convince yourself: when a node is first "discovered", we've found the shortest path
  • 43. Additional Complexities (figure: search frontier example where the first-discovered path to a node is not the shortest)
  • 44. Stopping Criterion
    • How many iterations are needed in parallel BFS (positive edge weight case)?
    • Practicalities of implementation in MapReduce
      – Unrelated to stopping… where have we seen min/max before?
  • 45. In General: Graphs and MapReduce
    • Graph algorithms typically involve
      – Performing computations at each node: based on node features, edge features, and local link structure
      – Propagating computations: "traversing" the graph
    • Generic recipe
      – Represent graphs as adjacency lists
      – Perform local computations in the mapper
      – Pass along partial results via out-links, keyed by destination node
      – Perform aggregation in the reducer on the in-links to a node
      – Iterate until convergence: controlled by an external "driver"
      – Don't forget to pass the graph structure between iterations
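The "external driver" part of the recipe is just an ordinary loop outside MapReduce that resubmits a job and checks a global stopping condition between iterations (e.g. via counters). A generic plain-Python sketch, where the toy step function stands in for one MapReduce job:

```python
def converged(old, new):
    """Driver's global check between iterations (in Hadoop: read counters)."""
    return old == new

def run_until_convergence(step, state, max_iters=50):
    """Repeatedly apply one 'job' until the state stops changing."""
    for i in range(max_iters):
        new_state = step(state)
        if converged(state, new_state):
            return new_state, i + 1
        state = new_state
    return state, max_iters

# Toy stand-in for a job: halve every value (integer division) to a fixpoint.
step = lambda s: {k: v // 2 if v > 1 else v for k, v in s.items()}
final, iters = run_until_convergence(step, {"x": 8, "y": 3})
print(final, iters)  # {'x': 1, 'y': 1} after 4 iterations
```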
  • 47. Random Walks Over the Web
    • Random surfer model
      – User starts at a random Web page
      – User randomly clicks on links, surfing from page to page
    • PageRank
      – Characterizes the amount of time spent on any given page
      – Mathematically, a probability distribution over pages
    • PageRank captures notions of page importance
      – Correspondence to human intuition?
      – One of thousands of features used in web search
      – Note: query-independent
• 48. PageRank: Defined  Given page x with inlinks t1…tn, where  C(t) is the out-degree of t  α is the probability of a random jump  N is the total number of nodes in the graph:  PR(x) = α(1/N) + (1 − α) Σ_{i=1..n} PR(t_i)/C(t_i)  [Figure: page x with inlinks t1, t2, …, tn]
  • 49. Computing PageRank  Properties of PageRank  Can be computed iteratively  Effects at each iteration are local  Sketch of algorithm:  Start with seed PRi values  Each page distributes PRi “credit” to all pages it links to  Each target page adds up “credit” from multiple in-bound links to compute PRi+1  Iterate until values converge
  • 50. Simplified PageRank  First, tackle the simple case:  No random jump factor  No dangling links  Then, factor in these complexities…  Why do we need the random jump?  Where do dangling links come from?
• 51. Sample PageRank Iteration (1)  [Figure: five-node graph; all nodes seeded with PR = 0.2; after iteration 1: n1 = 0.066, n2 = 0.166, n3 = 0.166, n4 = 0.3, n5 = 0.3]
• 52. Sample PageRank Iteration (2)  [Figure: same five-node graph; after iteration 2: n1 = 0.1, n2 = 0.133, n3 = 0.183, n4 = 0.2, n5 = 0.383]
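The two sample iterations on slides 51–52 can be verified with a short Python script implementing the simplified algorithm (no random jump, no dangling nodes); the graph and seed values are taken from the slides:

```python
# Adjacency lists of the five-node example graph from the slides.
graph = {'n1': ['n2', 'n4'], 'n2': ['n3', 'n5'], 'n3': ['n4'],
         'n4': ['n5'], 'n5': ['n1', 'n2', 'n3']}

def iterate(pr):
    """Each page distributes its PageRank credit evenly over its out-links;
    each target page sums the credit arriving on its in-links."""
    new = {n: 0.0 for n in graph}
    for node, out in graph.items():
        share = pr[node] / len(out)
        for m in out:
            new[m] += share
    return new

pr = {n: 0.2 for n in graph}  # uniform seed values
pr = iterate(pr)  # iteration 1: n1≈0.066, n2≈0.166, n3≈0.166, n4=0.3, n5=0.3
pr = iterate(pr)  # iteration 2: n1=0.1, n2≈0.133, n3≈0.183, n4=0.2, n5≈0.383
```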
• 53. PageRank in MapReduce  [Figure: map/reduce dataflow over the adjacency lists n1 → [n2, n4], n2 → [n3, n5], n3 → [n4], n4 → [n5], n5 → [n1, n2, n3]: Map emits each node’s structure plus a share of its mass to each out-link; the shuffle groups records by destination node; Reduce sums the incoming mass and re-emits each node with its adjacency list]
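A minimal Python sketch of the map/reduce pair this slide diagrams, for the simplified case (no jump factor, no dangling nodes). The shuffle is simulated with a dict, and record types are distinguished with isinstance as a stand-in for Hadoop's typed values:

```python
from collections import defaultdict

def pagerank_round(nodes):
    """One simplified PageRank iteration.
    nodes: dict nid -> (pagerank, adjacency_list); assumes every node
    has at least one out-link (no dangling nodes)."""
    shuffle = defaultdict(list)
    # Map: pass the graph structure along, and send mass down each out-link.
    for nid, (pr, adj) in nodes.items():
        shuffle[nid].append(adj)              # the node's structure
        for m in adj:
            shuffle[m].append(pr / len(adj))  # PageRank mass for m
    # Reduce: recover the structure and sum the incoming mass.
    out = {}
    for nid, msgs in shuffle.items():
        adj = next(v for v in msgs if isinstance(v, list))
        mass = sum(v for v in msgs if not isinstance(v, list))
        out[nid] = (mass, adj)
    return out
```

Note that the mapper re-emits the adjacency list alongside the mass: if it didn't, the graph structure would be lost after one iteration, which is the "don't forget to pass the graph structure" point from slide 45.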
• 55. Complete PageRank  Two additional complexities  What is the proper treatment of dangling nodes?  How do we factor in the random jump factor?  Solution:  Second pass to redistribute “missing PageRank mass” and account for random jumps:  p' = α(1/|G|) + (1 − α)(m/|G| + p)  p is PageRank value from before, p' is updated PageRank value  |G| is the number of nodes in the graph  m is the missing PageRank mass  How to perform bookkeeping for dangling nodes?  How to implement this 2nd pass in Hadoop?
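The second-pass update is a one-liner in code. The sketch below (with an assumed jump probability α = 0.15, chosen only for illustration) also checks the key property that redistribution restores the total mass to 1:

```python
def redistribute(p, m, G, alpha=0.15):
    """Fold the missing mass m and the random jump back into one page's value.
    p: the page's PageRank after the main pass; G: number of nodes;
    alpha: jump probability (0.15 is an assumed value for illustration)."""
    return alpha * (1.0 / G) + (1 - alpha) * (m / G + p)

# If dangling nodes caused the main pass to leave total mass 1 - m,
# the second pass brings the total back to exactly 1:
G, m = 4, 0.2
pages = [0.2, 0.2, 0.2, 0.2]  # sums to 1 - m = 0.8
total = sum(redistribute(p, m, G) for p in pages)
```

In Hadoop, m itself has to be collected during the main pass, e.g. via a counter or a side file written by the reducers, which is the bookkeeping question the slide raises.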
  • 56. PageRank Convergence  Alternative convergence criteria  Iterate until PageRank values don’t change  Iterate until PageRank rankings don’t change  Fixed number of iterations  Convergence for web graphs?
• 57. Local Aggregation  [Figure: one mapper processes several nodes (d1, d2, d3) that all have out-links to the same destination nodes (m1, m2, m3)]  Use combiners  BFS uses min, PageRank uses sum • both associative and commutative  In-mapper combining design pattern also applicable  Opportunity for aggregation when mapper sees multiple nodes with out-links to same destination node  How do we maximize opportunities for local aggregation?  Partition the dataset into clusters with many internal and few external links  Chicken-and-egg problem: don’t we need MapReduce to do this? • Use cheap heuristics • e.g. social network: zip code or school • e.g. for web: language or domain name • etc.
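A sketch of the in-mapper combining pattern for PageRank's sum: partial masses are buffered per destination node and flushed once at the end, instead of emitting one record per out-link. The emit callback and close() hook here are illustrative stand-ins for Hadoop's OutputCollector and Mapper lifecycle methods:

```python
class PageRankMapper:
    """In-mapper combining: because sum is associative and commutative,
    partial mass sums can be combined locally before anything is emitted."""
    def __init__(self, emit):
        self.emit = emit      # callback: emit(key, value)
        self.buffer = {}      # destination nid -> partial mass sum

    def map(self, nid, node):
        pr, adj = node
        self.emit(nid, adj)   # still pass the graph structure along
        share = pr / len(adj)
        for m in adj:
            self.buffer[m] = self.buffer.get(m, 0.0) + share

    def close(self):
        # Flush the combined partial sums once, at the end of the map task.
        for m, total in self.buffer.items():
            self.emit(m, total)
        self.buffer = {}
```

Compared to a separate combiner, this avoids materializing and re-reading the per-link records, at the cost of holding the buffer in the mapper's memory.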
  • 58. Limitations of MapReduce  Amount of intermediate data (to shuffle) is proportional to number of edges in graph  We have considered sparse graphs (i.e. with few edges), minimizing such intermediate data  For dense graphs with O(n^2) edges, runtime would be dominated by copying intermediate data  Consequently, MapReduce algorithms are often impractical on large, dense graphs  But isn’t data-intensive computing exactly what MapReduce is supposed to help us with??  See (Lin and Dyer, p. 101)
• 60. [Two-column slide: left = single-source parallel BFS; right = the same algorithm generalized to multiple sources, with records keyed by source id sid]

Left (single source):

class Mapper
  method Map( Node N )
    d = N.Distance
    Emit( N.id, N )
    for all nid m in N.AdjacencyList do
      Emit( m, d + 1 )

class Reducer
  method Reduce( nid m, [d1, d2, ...] )
    dmin = ∞
    Node M = null
    for all d in [d1, d2, ...] do
      if IsNode(d) then
        M = d
      else if d < dmin then
        dmin = d
    M.Distance = dmin
    Emit( M )

Right (multiple sources):

class Mapper
  method Map( sid s, Node N )
    d = N[s].Distance
    Emit( Pair(sid, N.id), N )
    for all nid m in N.AdjacencyList do
      Emit( Pair(sid, m), d + 1 )

class Reducer
  method Reduce( Pair(sid s, nid m), [d1, d2, ...] )
    dmin = ∞
    M = null
    for all d in [d1, d2, ...] do
      if IsNode(d) then
        M = d
      else if d < dmin then
        dmin = d
    M[s].Distance = dmin
    Emit( M )
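The single-source pseudocode on slide 60 can be made runnable in Python: the shuffle is simulated with a dict, the node structure rides along as a tuple, and an external driver iterates until distances stop changing, as slides 42–44 discuss (all names are illustrative, and it assumes every node appears as a key in the input):

```python
from collections import defaultdict

INF = float('inf')

def bfs_iteration(nodes):
    """One map + shuffle + reduce pass.
    nodes: dict nid -> (distance, adjacency_list)."""
    shuffle = defaultdict(list)
    for nid, (d, adj) in nodes.items():        # Mapper
        shuffle[nid].append((d, adj))          # emit the node structure itself
        for m in adj:
            shuffle[m].append(d + 1)           # candidate distance via this edge
    out = {}
    for nid, vals in shuffle.items():          # Reducer
        dmin, node = INF, None
        for v in vals:
            if isinstance(v, tuple):           # IsNode(d): recover the structure
                node = v
            elif v < dmin:
                dmin = v
        out[nid] = (min(node[0], dmin), node[1])  # keep the best distance seen
    return out

def shortest_paths(graph, source):
    """Driver: iterate until no distance changes (the stopping criterion)."""
    nodes = {n: (0 if n == source else INF, adj) for n, adj in graph.items()}
    while True:
        new = bfs_iteration(nodes)
        if all(new[n][0] == nodes[n][0] for n in nodes):
            return {n: d for n, (d, _) in new.items()}
        nodes = new
```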
• 61. [Two-column slide: left = the multi-source mapper/reducer from the previous slide; right = an optimization that emits the node structure only once (for sid = 0), relying on a custom partitioner and key comparator]

Left:

class Mapper
  method Map( sid s, Node N )
    d = N[s].Distance
    Emit( Pair(sid, N.id), N )
    for all nid m in N.AdjacencyList do
      Emit( Pair(sid, m), d + 1 )

class Reducer
  method Reduce( Pair(sid s, nid m), [d1, d2, ...] )
    dmin = ∞
    M = null
    for all d in [d1, d2, ...] do
      if IsNode(d) then
        M = d
      else if d < dmin then
        dmin = d
    M[s].Distance = dmin
    Emit( M )

Right:

class Mapper
  method Map( sid s, Node N )
    d = N[s].Distance
    if sid = 0 then
      Emit( Pair(sid, N.id), N )
    for all nid m in N.AdjacencyList do
      Emit( Pair(sid, m), d + 1 )

Partition: all pairs with the same nid (2nd element) go to the same reducer
KeyComp: order by nid, then sid, so the sid = 0 pair (carrying the node) sorts first

class Reducer
  M = null
  method Reduce( Pair(sid s, nid m), [d1, d2, ...] )
    dmin = ∞
    for all d in [d1, d2, ...] do
      if IsNode(d) then
        M = d
      else if d < dmin then
        dmin = d
    M[s].Distance = dmin
    Emit( M )