Scaling search at Trovit with Solr and Hadoop
Marc Sturlese, Trovit
marc@trovit.com, 19 October 2011
My Background
• Marc Sturlese
• Trovit
• Software engineer focused on R&D
• Responsible for search and scalability
Agenda
• What is Trovit? Why Solr and Hadoop?
• The data workflow
• Distributed indexing strategy
• Moving indexing features out of Solr
  • Text analysis
  • Deduplication
• Performance
• Questions
What is Trovit?
• Search engine for classified ads
• Tech company located in Barcelona
• Started 5 years ago in a single country
• Now it’s in 33 countries and 4 business categories
• Main purpose is to serve good-quality results to the end user as fast as possible
What is Trovit?
Why Solr and Hadoop?
• Started as a custom Lucene search server
• Solr is very extensible and has a great community, so we made the move
• The datastore was MySQL, with a custom DataImportHandler for indexing
• Scaling up was not the way!
• Sharded MySQL strategies are hard to maintain
• Hadoop seemed a good fit
The data workflow
• Documents are crunched by a pipeline of MapReduce jobs
• Stats are saved for each pipeline phase to keep track of what happens at every moment
• Hive is used to generate those stats
The data workflow
• Pipeline overview (diagram): Incoming data → Ad processor → Diff (today’s data t against yesterday’s t-1) → Expiration → Deduplication → Indexing, with stats saved after every phase
The data workflow
• Deployment overview (diagram): Incoming data → Pipeline → Index repository on HDFS; indexes are copied HDFS-to-local and sent to the slaves by multicast, with rsync for index updates; fresh updates flow through a “minimal data processor”
The data workflow
       Index constantly built from scratch. Keep
        desired number of segments for good search
        performance
       “Minimal data processor” allows fresh data
        appear in the search results
       HDFS makes backups really convenient
       Multicast system allows to send indexes to all
        slaves at the same time. The only limit is your
        bandwidth



                                     10



Wednesday, October 19, 11
Distributed indexing strategy
• First looked at SOLR-1301
• It extends InputFormat, allowing just one index per reducer
• Good for generating huge indexes, building one shard per reducer
• To achieve that in minimal time, shards should have very similar sizes
• Reduce-side indexing seemed the way, but... index sizes differ a lot depending on the country and vertical
Distributed indexing strategy
• Single monolithic indexes or shards
• Another approach: two-phase indexing (2 sequential MapReduce jobs)
  • Partial indexing: generate lots of “micro indexes” for each monolithic or sharded index
  • Merge: group all the “micro indexes” and merge them to get the production data
Distributed indexing strategy
• Two-phase indexing overview (diagram): HDFS serialized data → Partial indexer (micro indexes) → Merger (production indexes)
Distributed indexing - Partial generation
• Map reads serialized data and emits it grouped by micro index
• Reduce receives the ads grouped as a “micro index” and builds it
• An embedded Solr server is used for indexing and optimizing
• The Solr cores’ configuration is stored in HDFS
• Indexes are built on local disk and then uploaded to HDFS
Distributed indexing - Partial generation

MAP     Input:  K: id; V: Ad
        Code:   Assign micro index to Ad
        Output: K: microIndexId; V: Ad

REDUCE  Input:  K: microIndexId; V: AdList<>
        Code:   Build index
        Output: K: Null; V: Message
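A minimal sketch of what this phase could look like (class and config names such as PartialIndexer and indexer.micro.indexes are hypothetical, and a plain Lucene 3.x IndexWriter stands in for the embedded Solr server to keep the example self-contained):

    import java.io.File;
    import java.io.IOException;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.util.Version;

    public class PartialIndexer {

        // Map: route each serialized ad to one of N micro indexes.
        public static class AssignMapper extends Mapper<Text, Text, Text, Text> {
            private int numMicroIndexes;

            @Override
            protected void setup(Context ctx) {
                numMicroIndexes = ctx.getConfiguration().getInt("indexer.micro.indexes", 64);
            }

            @Override
            protected void map(Text id, Text serializedAd, Context ctx)
                    throws IOException, InterruptedException {
                int microIndexId = (id.hashCode() & Integer.MAX_VALUE) % numMicroIndexes;
                ctx.write(new Text("micro-" + microIndexId), serializedAd);
            }
        }

        // Reduce: build one micro index on local disk; the upload to HDFS is omitted.
        public static class BuildReducer extends Reducer<Text, Text, Text, Text> {
            @Override
            protected void reduce(Text microIndexId, Iterable<Text> ads, Context ctx)
                    throws IOException, InterruptedException {
                File local = new File("/tmp/" + microIndexId.toString());
                IndexWriter writer = new IndexWriter(FSDirectory.open(local),
                    new IndexWriterConfig(Version.LUCENE_31,
                        new StandardAnalyzer(Version.LUCENE_31)));
                for (Text ad : ads) {
                    Document doc = new Document();
                    doc.add(new Field("body", ad.toString(), Field.Store.NO,
                        Field.Index.ANALYZED));
                    writer.addDocument(doc);
                }
                writer.optimize(); // each micro index is optimized, as the slide says
                writer.close();
                ctx.write(microIndexId, new Text("built"));
            }
        }
    }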
Distributed indexing - Merge phase
• Merging is done in plain Lucene
• Map reads a list of the micro-index paths and emits them grouped per shard or monolithic index
• Reduce receives the list and does the proper merge
• Partial indexes are downloaded to local disk, merged into a single index and uploaded back to HDFS
• Since Lucene 3.1, addIndexes(Directory) copies segments instead of re-merging them, so the merge can be very fast
Distributed indexing - Merge phase

MAP     Input:  K: lineNum; V: MicroIndexPath
        Code:   Get index name
        Output: K: indexName; V: MicroIndexPath

REDUCE  Input:  K: indexName; V: MicroIndexPathList<>
        Code:   Merge micro indexes
        Output: K: Null; V: Message
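A sketch of the reduce-side merge in plain Lucene 3.1+, as the slides describe (class and method names like IndexMerger.merge are illustrative; the HDFS download/upload around it is omitted):

    import java.io.File;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.util.Version;

    public class IndexMerger {
        // Merge already-downloaded micro indexes into one production index.
        // addIndexes(Directory...) copies the segments as-is, which is why
        // this step can be very fast since Lucene 3.1.
        public static void merge(File productionDir, List<File> microIndexDirs)
                throws IOException {
            IndexWriter writer = new IndexWriter(FSDirectory.open(productionDir),
                new IndexWriterConfig(Version.LUCENE_31,
                    new StandardAnalyzer(Version.LUCENE_31)));
            List<Directory> dirs = new ArrayList<Directory>();
            for (File microIndex : microIndexDirs) {
                dirs.add(FSDirectory.open(microIndex));
            }
            writer.addIndexes(dirs.toArray(new Directory[dirs.size()]));
            writer.close(); // the result is then uploaded back to HDFS
        }
    }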
Distributed indexing strategy
• Pros:
  • Highly scalable
  • Allows indexes of very different sizes while keeping good performance
  • Easy to manage
• Cons:
  • Time spent uploading to and downloading from HDFS before an index gets into production
Moving features out of Solr
• Useful when you have to deal with lots of data
• Text processing with Solr and Hadoop
• Distributing Solr deduplication
Text processing with Solr and Hadoop
• Solr has many powerful analyzers already implemented
• Mahout tokenizes text using plain Lucene and Hadoop
• The setup method of a Map instantiates the Analyzer
• A Map receives serialized data, which is processed using Solr analyzers
• The Analyzer can receive configuration parameters from a job-site.xml file
Text processing with Solr and
                      Hadoop
          //init Solr analyzer

          final List<TokenFilterFactory> filters = new ArrayList<TokenFilterFactory>();

               final TokenFilterFactory wordDelimiter = new WordDelimiterFilterFactory();

               Map<String, String> args = new HashMap<String, String>();

               args.put("generateWordParts", conf.get(WORD_PARTS));

               args.put("splitOnNumerics", conf.get(NUMERIC_SPLIT);

               wordDelimiter.init(args);

               final TokenFilterFactory accent = new ISOLatin1AccentFilterFactory();

               final TokenFilterFactory lowerCase = new LowerCaseFilterFactory();

               filters.add(wordDelimiter);

               filters.add(accent);

               filters.add(lowerCase);

               final TokenizerFactory tokenizer = new StandardTokenizerFactory();

              analyzer = new TokenizerChain(null, tokenizer, filters.toArray(new
          TokenFilterFactory[filters.size()]));



                                                   21



Wednesday, October 19, 11
Text processing with Solr and Hadoop

    // Tokenizing text
    ...
    HashSet<String> tokens = new HashSet<String>();
    TokenStream stream = analyzer.reusableTokenStream(fieldName,
        new StringReader(fieldValue));
    TermAttribute termAtt = (TermAttribute) stream.addAttribute(TermAttribute.class);
    stream.reset(); // TokenStream contract: reset() before the first incrementToken()
    while (stream.incrementToken()) {
        tokens.add(termAtt.term());
    }
    stream.close();
    return tokens;
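A hedged sketch of how the two snippets above could be wired into a Hadoop map: the analyzer is built once in setup() and reused per record. TokenizeMapper is illustrative, and AnalyzerBuilder is a hypothetical helper standing in for the analyzer-init code of the previous slide:

    import java.io.IOException;
    import java.io.StringReader;
    import java.util.HashSet;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.tokenattributes.TermAttribute;

    public class TokenizeMapper extends Mapper<Text, Text, Text, Text> {
        private Analyzer analyzer;

        @Override
        protected void setup(Context ctx) {
            // Hypothetical helper: builds the TokenizerChain of the previous slide,
            // reading generateWordParts / splitOnNumerics from job-site.xml.
            analyzer = AnalyzerBuilder.build(ctx.getConfiguration());
        }

        @Override
        protected void map(Text id, Text fieldValue, Context ctx)
                throws IOException, InterruptedException {
            HashSet<String> tokens = new HashSet<String>();
            TokenStream stream = analyzer.reusableTokenStream("body",
                new StringReader(fieldValue.toString()));
            TermAttribute termAtt = (TermAttribute) stream.addAttribute(TermAttribute.class);
            stream.reset();
            while (stream.incrementToken()) {
                tokens.add(termAtt.term());
            }
            stream.close();
            for (String token : tokens) {
                ctx.write(id, new Text(token));
            }
        }
    }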
Distributed deduplication
• Compute near duplicates in a distributed environment
• Map receives serialized ads and emits them, building the key with Solr’s TextProfileSignature
• Reduce receives the duplicate ads grouped together; there, you decide what to keep
• The field names used to compute the signature are received as configuration parameters from a job-site.xml file
Distributed deduplication

MAP     Input:  K: id; V: Ad
        Code:   Build signature
        Output: K: signature; V: Ad

REDUCE  Input:  K: signature; V: AdList<>
        Code:   Dups logic
        Output: K: id; V: Ad
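A minimal sketch of the map side, assuming Solr 3.x’s Signature API (one TextProfileSignature per ad; DedupMapper is illustrative, and in real code only the configured fields would be fed into the signature):

    import java.io.IOException;
    import java.util.HashMap;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.solr.common.params.MapSolrParams;
    import org.apache.solr.update.processor.TextProfileSignature;

    public class DedupMapper extends Mapper<Text, Text, Text, Text> {

        @Override
        protected void map(Text id, Text serializedAd, Context ctx)
                throws IOException, InterruptedException {
            // Build a fuzzy signature for this ad; defaults for quantRate etc.
            TextProfileSignature signature = new TextProfileSignature();
            signature.init(new MapSolrParams(new HashMap<String, String>()));
            signature.add(serializedAd.toString());

            StringBuilder hex = new StringBuilder();
            for (byte b : signature.getSignature()) {
                hex.append(String.format("%02x", b));
            }
            // Near duplicates get the same key, so they meet in a single reduce call.
            ctx.write(new Text(hex.toString()), serializedAd);
        }
    }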
Performance: Setting the merge factor
• Used in LogByteSizeMergePolicy and older merge policies
• When indexing, it tells Lucene how many segments can be created before a merge happens
• A very low value keeps the index almost optimized. Good for search performance, but indexing will be slower
• A high value generates lots of files. Indexing will be faster, but searches slower
• New versions of Solr default to TieredMergePolicy, which doesn’t use it
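For illustration, a sketch of what the merge factor controls at the Lucene 3.x level (the wrapper class and the value 10 are illustrative, not a recommendation):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.index.LogByteSizeMergePolicy;
    import org.apache.lucene.util.Version;

    public class MergeFactorExample {
        public static IndexWriterConfig config() {
            LogByteSizeMergePolicy policy = new LogByteSizeMergePolicy();
            // Low value: fewer segments, near-optimized index, faster search, slower indexing.
            // High value: many segments, faster indexing, slower searches.
            policy.setMergeFactor(10);
            return new IndexWriterConfig(Version.LUCENE_31,
                new StandardAnalyzer(Version.LUCENE_31)).setMergePolicy(policy);
        }
    }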
?
Contact
• Thank you for your attention
• Marc Sturlese
  • marc@trovit.com
  • www.trovit.com
