
Benchmarking the Effectiveness of Associating Chains of Links for Exploratory Semantic Search

Linked Data offers an entity-based infrastructure for resolving indirect relations between resources, expressed as chains of links. Benchmarking how effectively chains of links can be retrieved from these sources would show why they are a reliable addition to exploratory search interfaces. Many applications could benefit from insights in this field, in particular knowledge discovery tasks such as ad-hoc decision support and digital assistance systems. In this paper, we explain a benchmark model for evaluating the effectiveness of associating chains of links with keyword-based queries. We illustrate the benchmark model with an example case using academic library and conference metadata, in which we measured precision with targeted expert users and interpreted it as a measure of search effectiveness. Typical semantic search engine evaluations that focus on information retrieval metrics such as precision are biased towards the final result only. In an exploratory search scenario, however, the dynamics of the intermediary links that can lead to potentially relevant discoveries should not be neglected.



  1. 1. Benchmarking the Effectiveness of Associating Chains of Links for Exploratory Semantic Search Laurens De Vocht, Selver Softic, Ruben Verborgh, Erik Mannens, Martin Ebner, Rik Van de Walle
  2. 2. :Paris ? 2
  3. 3. :Paris :Anne_Hidalgo :mayor 3
  4. 4. :Paris :Anne_Hidalgo :mayor :Bethlehem,_PA ? 4
  5. 5. :Anne_Hidalgo ? 5
  6. 6. :Anne_Hidalgo ? :Bethlehem,_PA
  7. 7. :Anne_Hidalgo :Bethlehem,_PA Exploratory Semantic Search Engine 7
  8. 8. ? 8
  9. 9. :Paris → :mayor :Anne_Hidalgo ← :birthPlace :San_Fernando,_Cádiz → :country :Spain ← :birthPlace :Edward_Ferrero → :battle :Battle_of_Roanoke_Island ← :battle :Charles_Adam_Heckman → :birthPlace :Easton,_Pennsylvania → :mouthMountain :Lehigh_River → :city :Bethlehem,_Pennsylvania (chain A) 9
  10. 10. :Paris ← :capital :France ← :citizenship :Cyril_Bourlon_de_Rouvre → :education :Aerospace_engineering ← :occupation :Dick_Johnson_(glider_pilot) → :almaMater :Mississippi_State_University ← :almaMater :Clara_Southmayd_Ludlow → :birthPlace :Easton,_Pennsylvania ← :mouthMountain :Lehigh_River → :city :Bethlehem,_Pennsylvania (chain B) 10
  11. 11. Chain A or chain B? (A sketch of how such chains can be retrieved appears after the transcript.)
  12. 12. How effectively does an exploratory semantic search engine reveal initially hidden associations, as chains of links between interlinked resources?
  13. 13. Introduction Exploratory Search Benchmark Model Motivating Example Discussion and Conclusion
  14. 14. Introduction Exploratory Search Benchmark Model Motivating Example Discussion and Conclusion
  15. 15. [EXPLORATORY SEARCH: FROM FINDING TO UNDERSTANDING, Marchionini, 2006] Lookup Learn Investigate Exploratory Search 'Learning searches involve multiple iterations and return sets of objects that require cognitive processing and interpretation.' 'Searches that support investigation involve multiple iterations that take place over perhaps very long periods of time and may return results that are critically assessed before being integrated into personal and professional knowledge bases.' Definition 15
  16. 16. 1. Lookup 2. Relate/Expand Lookup and learn: interpretation 16
  17. 17. lookup expand relate An exploratory semantic search engine (these three primitives are sketched in code after the transcript) 17
  18. 18. lookup :Paris Paris 18
  19. 19. expand :Paris :Paris 19
  20. 20. relate 20
  21. 21. Introduction Exploratory Search Benchmark Model Motivating Example Discussion and Conclusion
  22. 22. Iterative Exploratory Queries Exploratory Semantic Search Engine Datasets Baseline Effectiveness 22
  23. 23. Effectiveness The effectiveness E indicates the overall perception of the results by the users, taking expert-user feedback into account: E = (# user-marked relevant objects) / (# retrieved objects). Note: E can be interpreted as precision in traditional IR; typical IR evaluations examine both precision and recall. A worked example appears after the transcript. 23 [TALKEXPLORER, Verbert et al., 2013]
  24. 24. Introduction Exploratory Search Benchmark Model Motivating Example Discussion and Conclusion
  25. 25. Motivating Example ResXplorer.org Everything Is Connected Engine Virtuoso User Study Extracted Queries 25
  26. 26. User Study Extracted Queries 1. lookup; 2. expand; 3. relate 26
  27. 27. Effectiveness: Interpretation [results chart: precision P(0)–P(3) for the LDOW query set] 27
  28. 28. Sample Results Everything Is Connected Engine 28
  29. 29. Sample Results Virtuoso 29
  30. 30. Introduction Exploratory Search Benchmark Model Motivating Example Discussion and Conclusion
  31. 31. Limitations  Comparisons against the baseline are only meaningful within the same use case.  The benchmark cannot be used as leverage to compare different approaches across use cases.  It could better demonstrate in which aspects an exploratory approach excels over traditional systems. 31
  32. 32. Future Work  Put the results in perspective by indicating the nuances among different expert-user ratings, especially when there is expert disagreement or inconsistency.  Facilitate generalization of the preliminary search context so that an engine's results are reusable across datasets, avoiding that a certain engine's results differ strongly when the data and queries change.  Make sure that the approach is generic and can be applied to other search contexts with different data and use cases. 32
  33. 33. Benefits  Compare exploratory search engines to a baseline:  show use cases where the baseline can be outperformed;  show for which queries the 'engine under test' is relatively more effective.  Sensitive to the initial query keywords as entered by the user: when there are inconsistencies, vague terms, or even mismatches in the query context, or when expert users disagree. 33
  34. 34. Contact @laurens_d_v laurens.devocht@ugent.be http://slideshare.net/laurensdv http://semweb.mmlab.be/
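Slides 9–11 show two chains of links (A and B) connecting :Paris to :Bethlehem,_Pennsylvania. As a minimal sketch of the idea, the query below retrieves fixed two-hop chains between two resources from the public DBpedia SPARQL endpoint, following links in either direction. The two-hop pattern and the endpoint choice are illustrative assumptions; the engines discussed in the deck (e.g. the Everything Is Connected Engine) search variable-length paths.

```python
# Sketch: find two-hop chains of links between two resources, following
# links in either direction. Real exploratory engines search variable-length
# paths; the fixed two-hop pattern is a simplification for illustration.
from SPARQLWrapper import SPARQLWrapper, JSON

QUERY = """
SELECT DISTINCT ?p1 ?mid ?p2 WHERE {
  { <http://dbpedia.org/resource/Anne_Hidalgo> ?p1 ?mid }
  UNION { ?mid ?p1 <http://dbpedia.org/resource/Anne_Hidalgo> }
  { ?mid ?p2 <http://dbpedia.org/resource/Bethlehem,_Pennsylvania> }
  UNION { <http://dbpedia.org/resource/Bethlehem,_Pennsylvania> ?p2 ?mid }
  FILTER(isIRI(?mid))  # keep resource intermediaries only
}
LIMIT 10
"""

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)

# Each row is one candidate chain: source -p1- intermediary -p2- target.
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["p1"]["value"], row["mid"]["value"], row["p2"]["value"])
```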
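Slides 17–20 name the engine's three primitives: lookup, expand, and relate. The sketch below illustrates, under assumed SPARQL-based definitions, what lookup and expand could do; relate corresponds to the chain query in the previous sketch. These function bodies are assumptions for illustration, not the implementation behind ResXplorer or the Everything Is Connected Engine.

```python
# Illustrative 'lookup' and 'expand' primitives against a SPARQL endpoint.
# The bodies are assumptions, not the deck's actual implementation.
from SPARQLWrapper import SPARQLWrapper, JSON

def run(query, endpoint="https://dbpedia.org/sparql"):
    client = SPARQLWrapper(endpoint)
    client.setQuery(query)
    client.setReturnFormat(JSON)
    return client.query().convert()["results"]["bindings"]

def lookup(keyword):
    """lookup: map a keyword such as 'Paris' to candidate resources."""
    return run("""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT DISTINCT ?r WHERE { ?r rdfs:label "%s"@en . } LIMIT 5""" % keyword)

def expand(resource_iri):
    """expand: retrieve the direct links around a resource (slide 19)."""
    return run("""
        SELECT DISTINCT ?p ?o WHERE {
          <%s> ?p ?o . FILTER(isIRI(?o))
        } LIMIT 25""" % resource_iri)

# Example corresponding to slides 18-19: look up 'Paris', expand a match.
for match in lookup("Paris"):
    for link in expand(match["r"]["value"])[:5]:
        print(link["p"]["value"], "->", link["o"]["value"])
```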
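Slide 23 defines the effectiveness E as the fraction of retrieved objects that expert users marked as relevant, i.e. precision in traditional IR terms. A worked sketch with made-up counts (not results from the paper):

```python
# Effectiveness E from slide 23: the share of retrieved objects that expert
# users marked relevant; interpretable as precision in traditional IR.
def effectiveness(marked_relevant, retrieved):
    if not retrieved:
        return 0.0
    return len(set(marked_relevant) & set(retrieved)) / len(set(retrieved))

# Made-up illustration values, not measurements from the paper:
retrieved = ["r1", "r2", "r3", "r4", "r5"]   # objects shown to the user
marked_relevant = ["r1", "r3", "r4"]         # objects the expert marked
print(effectiveness(marked_relevant, retrieved))  # 3 / 5 = 0.6
```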
