This talk gives an overview of Apache Nutch, its main components, how it fits with other Apache projects, and its latest developments.
Apache Nutch was started exactly 10 years ago and was the starting point for what later became Apache Hadoop and Apache Tika. Nutch is nowadays the reference tool for large-scale web crawling.
In this talk I will give an overview of Apache Nutch, describe its main components, and show how Nutch fits with other Apache projects such as Hadoop, SOLR or Tika.
The second part of the presentation focuses on the latest developments in Nutch and the changes introduced by the 2.x branch, which uses Apache GORA as a front end to various NoSQL datastores.
Large scale crawling with Apache Nutch
1. Large Scale Crawling with Apache Nutch
Julien Nioche
julien@digitalpebble.com
ApacheCon Europe 2012
2. About myself
• DigitalPebble Ltd, Bristol (UK)
• Specialised in Text Engineering
  – Web Crawling
  – Natural Language Processing
  – Information Retrieval
  – Data Mining
• Strong focus on Open Source & Apache ecosystem
• Apache Nutch VP
• Apache Tika committer
• User | Contributor
  – SOLR, Lucene
  – GATE, UIMA
  – Mahout
  – Behemoth
3. Objectives
• Overview of the project
• Nutch in a nutshell
• Nutch 2.x
• Future developments
4. Nutch?
• “Distributed framework for large scale web crawling”
  – but does not have to be large scale at all
  – or even on the web (file protocol)
• Apache TLP since May 2010
• Based on Apache Hadoop
• Indexing and Search
5. Short history
• 2002/2003 : Started by Doug Cutting & Mike Cafarella
• 2004 : sub-project of Lucene @Apache
• 2005 : MapReduce implementation in Nutch
  – 2006 : Hadoop sub-project of Lucene @Apache
• 2006/7 : Parser and MimeType in Tika
  – 2008 : Tika sub-project of Lucene @Apache
• May 2010 : TLP project at Apache
• June 2012 : Nutch 1.5.1
• Oct 2012 : Nutch 2.1
7. Community
• 6 active committers / PMC members
  – 4 within the last 18 months
• Constant stream of new contributions & bug reports
• Steady numbers of mailing list subscribers and traffic
• Nutch is a very healthy 10-year-old
8. Why use Nutch?
• Usual reasons
  – Mature, business-friendly license, community, ...
• Scalability
  – Tried and tested on very large scale
  – Hadoop cluster : installation and skills
• Features
  – e.g. Index with SOLR
  – PageRank implementation
  – Can be extended with plugins
9. Not the best option when ...
• Hadoop based == batch processing == high latency
  – No guarantee that a page will be fetched / parsed / indexed within X minutes|hours
• Javascript / Ajax not supported (yet)
10. Use cases
• Crawl for IR
  – Generic or vertical
  – Index and Search with SOLR
  – Single node to large clusters on Cloud
• … but also
  – Data Mining
  – NLP (e.g. Sentiment Analysis)
  – ML
  – MAHOUT / UIMA / GATE
  – Use Behemoth as glueware (https://github.com/DigitalPebble/behemoth)
11. Customer cases
• Use case : BetterJobs.com (high specificity / verticality)
  – Single server
  – Aggregates content from job portals
  – Extracts and normalizes structure (description, requirements, locations)
  – ~1M pages total
  – Feeds SOLR index
• Use case : SimilarPages.com (large scale)
  – Large cluster on Amazon EC2 (up to 400 nodes)
  – Fetched & parsed 3 billion pages
  – 10+ billion pages in crawlDB (~100TB data)
  – 200+ million lists of similarities
  – No indexing / search involved
12. Typical Nutch Steps
• Same in 1.x and 2.x
• Sequence of batch operations
  1) Inject → populates CrawlDB from seed list
  2) Generate → selects URLs to fetch in segment
  3) Fetch → fetches URLs from segment
  4) Parse → parses content (text + metadata)
  5) UpdateDB → updates CrawlDB (new URLs, new status, ...)
  6) InvertLinks → builds Webgraph
  7) SOLRIndex → sends docs to SOLR
  8) SOLRDedup → removes duplicate docs based on signature
• Repeat steps 2 to 8
• Or use the all-in-one crawl script
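The batch cycle above can be sketched as a toy model. This is an illustrative sketch in plain Python, not Nutch's actual API: the CrawlDB is reduced to a dict of URL → status, a segment to the list of URLs picked for one round, and `get_outlinks` stands in for the whole fetch/parse machinery.

```python
# Toy model of the Nutch batch cycle (illustrative, not Nutch's real API).

def inject(crawldb, seeds):
    """Step 1: populate the CrawlDB from a seed list."""
    for url in seeds:
        crawldb.setdefault(url, "unfetched")

def generate(crawldb, topn=10):
    """Step 2: select URLs to fetch in this round (the 'segment')."""
    return [u for u, s in sorted(crawldb.items()) if s == "unfetched"][:topn]

def fetch_and_parse(segment, get_outlinks):
    """Steps 3-4: fetch and parse each URL, collecting its outlinks."""
    return {url: get_outlinks(url) for url in segment}

def updatedb(crawldb, parsed):
    """Step 5: record new statuses and newly discovered URLs."""
    for url, outlinks in parsed.items():
        crawldb[url] = "fetched"
        for link in outlinks:
            crawldb.setdefault(link, "unfetched")

def crawl(seeds, get_outlinks, rounds=3):
    """Repeat steps 2..5, like the all-in-one crawl script."""
    crawldb = {}
    inject(crawldb, seeds)
    for _ in range(rounds):
        segment = generate(crawldb)
        if not segment:
            break
        updatedb(crawldb, fetch_and_parse(segment, get_outlinks))
    return crawldb
```

Indexing and deduplication (steps 6-8) are left out of the sketch; they read the segments and CrawlDB rather than mutating them.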
13. Main steps
[Diagram: Seed List → CrawlDB → Segment → LinkDB, with the segment containing /crawl_generate/, /crawl_fetch/, /content/, /crawl_parse/, /parse_data/, /parse_text/]
14. Frontier expansion
• Manual “discovery”
  – Adding new URLs by hand, “seeding”
• Automatic discovery of new resources (frontier expansion)
  – Not all outlinks are equally useful - control seed
  – Requires content parsing and link extraction
[Diagram: frontier expanding outwards from the seeds over successive iterations i=1, i=2, i=3]
[Slide courtesy of A. Bialecki]
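The iterative expansion can be illustrated in a few lines. This is a hedged sketch: `get_outlinks` is a stand-in for the fetch + parse + link-extraction pipeline described above.

```python
# Frontier expansion: each iteration (i=1, i=2, ...) parses the current
# frontier, extracts outlinks, and keeps only the unseen ones as the
# next frontier. Returns the frontier at each iteration.
def expand_frontier(seeds, get_outlinks, iterations=3):
    seen = set(seeds)
    frontier = list(seeds)
    growth = [list(frontier)]
    for _ in range(iterations):
        next_frontier = []
        for url in frontier:
            for link in get_outlinks(url):   # requires content parsing
                if link not in seen:
                    seen.add(link)
                    next_frontier.append(link)
        frontier = next_frontier
        growth.append(list(frontier))
    return growth
```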
15. An extensible framework
• Plugins
  – Activated with parameter 'plugin.includes'
  – Implement one or more endpoints
• Endpoints
  – Protocol
  – Parser
  – HtmlParseFilter (ParseFilter in Nutch 2.x)
  – ScoringFilter (used in various places)
  – URLFilter (ditto)
  – URLNormalizer (ditto)
  – IndexingFilter
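A URLFilter, for instance, receives a URL and returns it (possibly rewritten) to accept it, or null to reject it; the active filters run as a chain. A sketch of that contract in Python (the two filter functions here are hypothetical examples, not shipped plugins):

```python
import re

# Each filter returns the URL to accept it, or None to reject it.
# A chain of filters rejects the URL as soon as any filter returns None.

def regex_filter(url, pattern=re.compile(r"^https?://")):
    """Accept only http(s) URLs."""
    return url if pattern.match(url) else None

def suffix_filter(url, banned=(".jpg", ".png", ".zip")):
    """Reject URLs with unwanted file suffixes."""
    return None if url.lower().endswith(banned) else url

def apply_filters(url, filters):
    """Run the filter chain, short-circuiting on the first rejection."""
    for f in filters:
        url = f(url)
        if url is None:
            return None
    return url
```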
16. Features
• Fetcher
  – Multi-threaded fetcher
  – Follows robots.txt
  – Groups URLs per hostname / domain / IP
  – Limits the number of URLs per round of fetching
  – Default values are polite but can be made more aggressive
• Crawl Strategy
  – Breadth-first but can be depth-first
  – Configurable via custom scoring plugins
• Scoring
  – OPIC (On-line Page Importance Calculation) by default
  – LinkRank
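The per-host grouping and politeness behaviour can be sketched as follows. This is an illustrative model, not the fetcher's real code; `fetch` is a hypothetical callback and the delay plays the role of the configurable per-server wait.

```python
import time
from collections import defaultdict
from urllib.parse import urlparse

def group_by_host(urls):
    """Group URLs into per-host queues (Nutch can also group by domain or IP)."""
    queues = defaultdict(list)
    for url in urls:
        queues[urlparse(url).hostname].append(url)
    return queues

def polite_fetch(urls, fetch, delay=1.0):
    """Drain the queues while enforcing a minimum delay per host."""
    queues = group_by_host(urls)
    last_access = {}
    results = {}
    while any(queues.values()):
        for host, queue in queues.items():
            if not queue:
                continue
            wait = delay - (time.time() - last_access.get(host, 0.0))
            if wait > 0:
                time.sleep(wait)      # stay polite to this host
            last_access[host] = time.time()
            url = queue.pop(0)
            results[url] = fetch(url)
    return results
```

A real fetcher runs many threads over these queues; the key invariant is the minimum gap between two requests to the same host.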
18. Features (cont.)
• Parsing with Apache Tika
  – Hundreds of formats supported
  – But some legacy parsers as well
• Other plugins
  – CreativeCommons
  – Feeds
  – Language Identification
  – Rel tags
  – Arbitrary Metadata
• Indexing to SOLR
  – Bespoke schema
19. Data Structures in 1.x
• MapReduce jobs => I/O : Hadoop [Sequence|Map]Files
• CrawlDB => status of known pages
  MapFile : <Text, CrawlDatum>
  CrawlDatum :
    byte status;        // fetched? unfetched? failed? redir?
    long fetchTime;
    byte retries;
    int fetchInterval;
    float score = 1.0f;
    byte[] signature = null;
    long modifiedTime;
    org.apache.hadoop.io.MapWritable metaData;
• Input of : generate - index
• Output of : inject - update
20. Data Structures 1.x
• Segment => round of fetching
• Identified by a timestamp
  Segment :
    /crawl_generate/ → SequenceFile<Text,CrawlDatum>
    /crawl_fetch/ → MapFile<Text,CrawlDatum>
    /content/ → MapFile<Text,Content>
    /crawl_parse/ → SequenceFile<Text,CrawlDatum>
    /parse_data/ → MapFile<Text,ParseData>
    /parse_text/ → MapFile<Text,ParseText>
• Can have multiple versions of a page in different segments
21. Data Structures – 1.x
• LinkDB => storage for Web Graph
  MapFile : <Text, Inlinks>
  Inlinks : HashSet<Inlink>
  Inlink :
    String fromUrl
    String anchor
• Output of : invertlinks
• Input of : SOLRIndex
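InvertLinks essentially transposes the outlink graph collected at parse time into the inlink structure above. A minimal sketch (plain Python for illustration; Nutch does this as a MapReduce job):

```python
# Invert per-page outlinks into inlinks:
#   {fromUrl: [(toUrl, anchor), ...]}  ->  {toUrl: {(fromUrl, anchor), ...}}
def invert_links(outlinks):
    inlinks = {}
    for from_url, links in outlinks.items():
        for to_url, anchor in links:
            inlinks.setdefault(to_url, set()).add((from_url, anchor))
    return inlinks
```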
22. NUTCH 2.x
• 2.0 released in July 2012
• 2.1 in October 2012
• Same features as 1.x
  – delegation to SOLR, TIKA, MapReduce etc.
• Moved to table-based architecture
  – Wealth of NoSQL projects in last few years
• Abstraction over storage layer → Apache GORA
23. Apache GORA
• http://gora.apache.org/
• ORM for NoSQL databases
  – and limited SQL support + file based storage
• 0.2.1 released in August 2012
• DataStore implementations
  – Accumulo, Avro, Cassandra, DynamoDB (soon), HBase, SQL
• Serialization with Apache AVRO
• Object-to-datastore mappings (backend-specific)
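With GORA, a record type is defined once as an Avro schema and compiled to Java beans, then mapped onto whichever backend is configured. A simplified, illustrative schema in the spirit of Nutch 2.x's WebPage record (the field names and namespace here are an assumption for illustration, not the real schema):

```json
{
  "name": "WebPage",
  "type": "record",
  "namespace": "org.example.sketch",
  "fields": [
    {"name": "baseUrl",   "type": "string"},
    {"name": "status",    "type": "int"},
    {"name": "fetchTime", "type": "long"},
    {"name": "content",   "type": "bytes"},
    {"name": "text",      "type": "string"},
    {"name": "outlinks",  "type": {"type": "map", "values": "string"}}
  ]
}
```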
27. GORA in Nutch
• AVRO schema provided and Java code pre-generated
• Mapping files provided for backends
  – can be modified if necessary
• Need to rebuild to get dependencies for backend
  – No binary distribution of Nutch 2.x
• http://wiki.apache.org/nutch/Nutch2Tutorial
28. Benefits
• Storage still distributed and replicated
• but one big table
  – status, metadata, content, text in one place
• Simplified logic in Nutch
  – Simpler code for updating / merging information
• More efficient (?)
  – No need to read / write entire structure to update records
  – No comparison available yet + early days for GORA
• Easier interaction with other resources
  – Third-party code just needs to use GORA and the schema
29. Drawbacks
• More stuff to install and configure :-)
• Not as stable as Nutch 1.x
• Dependent on success of Gora
30. 2.x Work in progress
• Stabilise backend implementations
  – GORA-HBase most reliable
• Synchronize features with 1.x
  – e.g. has ElasticSearch but missing LinkRank equivalent
• Filter-enabled scans (GORA-119)
  – No need to de-serialize the whole dataset
31. Future
• Both 1.x and 2.x in parallel
  – but more frequent releases for 2.x
• New functionalities
  – Support for SOLRCloud
  – Sitemap (from Crawler Commons library)
  – Canonical tag
  – More indexers (e.g. ElasticSearch) + pluggable indexers?
32. More delegation
• Great deal done in recent years (SOLR, Tika)
• Share code with crawler-commons (http://code.google.com/p/crawler-commons/)
  – Fetcher / protocol handling
  – Robots.txt parsing
  – URL normalisation / filtering
• PageRank-like computations to graph library
  – e.g. Apache Giraph
  – Should be more efficient as well
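Robots.txt parsing is a good example of logic every crawler re-implements and that shared code like crawler-commons can provide. Python's standard library parser illustrates the basic contract such code has to offer:

```python
from urllib import robotparser

# Parse a robots.txt body and check whether a URL may be fetched.
rules = """User-agent: *
Disallow: /private/
"""
rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("anybot", "http://example.com/private/page"))  # False
print(rp.can_fetch("anybot", "http://example.com/index.html"))    # True
```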
33. Where to find out more?
• Project page : http://nutch.apache.org/
• Wiki : http://wiki.apache.org/nutch/
• Mailing lists :
  – user@nutch.apache.org
  – dev@nutch.apache.org
• Chapter in 'Hadoop the Definitive Guide' (T. White)
  – Understanding Hadoop is essential anyway...
• Support / consulting :
  – http://wiki.apache.org/nutch/Support
A few words about myself just before I start... What I mean by Text Engineering is a variety of activities ranging from .... What makes the identity of DigitalPebble is ... The main projects I am involved in are …
Note that I mention crawling and not web search: Nutch is used not only for search. It used to do indexing and search using Lucene, but now delegates this to SOLR.
Endpoints are called in various places: URL filters and normalisers in a lot of places, and the same goes for scoring filters.
Main steps in Nutch. More actions are available as shell wrappers around Hadoop commands.