
Data legend dh_benelux_2017.key


datalegend.net presentation on linking historical datasets using Linked Open Data.


  1. dataLegend, a CLARIAH work package. DH BeNeLux: Data Integration Challenges. Utrecht University, Utrecht, the Netherlands, July 3, 2017. @rlzijdeman #dhb2017
  2. Data Preparation
     [Screenshot of the first page of: Daniel Garijo, Pinar Alper, Khalid Belhajjame, Oscar Corcho, Yolanda Gil, Carole Goble, “Common Motifs in Scientific Workflows: An Empirical Analysis” (Ontology Engineering Group, Universidad Politécnica de Madrid; School of Computer Science, University of Manchester; Information Sciences Institute, University of Southern California).
     Abstract: While workflow technology has gained momentum in the last decade as a means for specifying and enacting computational experiments in modern science, reusing and repurposing existing workflows to build new scientific experiments is still a daunting task. This is partly due to the difficulty that scientists experience when attempting to understand existing workflows, which contain several data preparation and adaptation steps in addition to the scientifically significant analysis steps. One way to tackle the understandability problem is through providing abstractions that give a high-level view of activities undertaken within workflows. As a first step towards abstractions, we report in this paper on the results of a manual analysis performed over a set of real-world scientific workflows from the Taverna and Wings systems. Our analysis has resulted in a set of scientific workflow motifs that outline i) the kinds of data-intensive activities that are observed in workflows (data-oriented motifs), and ii) the different manners in which activities are implemented within workflows (workflow-oriented motifs). These motifs can be useful to inform workflow designers of good and bad practices for workflow development, to inform the design of automated tools for the generation of workflow abstractions, etc.
     The paper reports an empirical analysis of 177 workflow descriptions from Taverna and Wings. Figure captions visible on the slide: Fig. 3, Distribution of Data-Oriented Motifs per domain; Fig. 5, Data Preparation Motifs in the Genomics Workflows.]
  3. We do this repeatedly for the same datasets!
  4. “Throwaway Science” Rinke Hoekstra (VU Amsterdam)
  5. the problem: “long tail of big data”
  6. Big datasets…
     • NAPP, Mosaic, IPUMS etc. solve this for large datasets
     • But this is very expensive
     • And the results are not mutually compatible
     • Or worse… the compatibility is contested
  7. The problem we’re trying to solve…
     • Many interesting datasets are messy, incomplete and incorrect
     • Data analysis requires clean data
     • Cleaning data involves careful interpretation and study
     • Values and variables in the data are replaced with (more) standard terms (coding)
     • Cross-dataset analyses require a further data harmonization step
     • This ‘data preparation’ step can take up to 60% of the total work
  8. Our aims
     • simplify linking datasets
     • broaden connectivity of datasets
     • simplify replication of research
  9. aye B see da be c di A B C D
  10. aye B see da be c di A B C D
  11. An example
     Variable name: • gender • sex • geslacht → Variable name: • sdmx-dimension:sex
     Values: • 1,0 • male, female • m, v → Values: • sdmx-code:sex-v • sdmx-code:sex-m
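The coding step on this slide can be sketched as a simple lookup: variant variable names and values from different datasets are mapped onto the shared SDMX-style identifiers shown above. A minimal Python sketch; the mapping tables and the `harmonize` helper are illustrative, not part of the dataLegend tools such as QBer, and the assumption that `1`/`m` code male and `0`/`v` (Dutch *vrouw*) code female is mine:

```python
# Illustrative sketch of the "coding" step: raw variable names and values
# from different datasets are replaced with shared SDMX-style identifiers
# so that the datasets become comparable. The identifiers follow the slide;
# the helper itself is hypothetical, not the dataLegend implementation.

RAW_TO_SDMX_DIMENSION = {
    # variable-name variants all map onto one shared dimension
    "gender": "sdmx-dimension:sex",
    "sex": "sdmx-dimension:sex",
    "geslacht": "sdmx-dimension:sex",
}

RAW_TO_SDMX_CODE = {
    # value variants map onto two shared codes
    # (assumption: 1/m = male, 0/v = female)
    "1": "sdmx-code:sex-m", "male": "sdmx-code:sex-m", "m": "sdmx-code:sex-m",
    "0": "sdmx-code:sex-v", "female": "sdmx-code:sex-v", "v": "sdmx-code:sex-v",
}

def harmonize(variable: str, value: str) -> tuple:
    """Return the shared (dimension, code) pair for one raw observation."""
    return (RAW_TO_SDMX_DIMENSION[variable.lower()],
            RAW_TO_SDMX_CODE[str(value).lower()])
```

With these tables, `harmonize("geslacht", "m")` and `harmonize("sex", "male")` land on the same pair, which is exactly what makes cross-dataset analysis possible.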
  12. aye B see da be c di A B C D
  13. aye B see da be c di A B C D ?
  14. source: http://www.smallpc.net/wp-content/uploads/2012/10/worldmap-net2.jpg A B C D
  15. create - view - query - store & serve: Linked Data tools
  16. creating Linked Data: QBer, CoW
  17. viewing Linked Data: brwsr, inspector
  18. querying Linked Data: grlc.io, SPARQL2git (sparql2git.com)
  19. storing & serving Linked Data: DRUID
  20. broadening Linked Data: vocabularies: HISCO, LICR
  21. broadening Linked Data: datasets: Catasto, CEDAR, CLIO-INFRA, CMGPD, Gemeentegeschiedenis.nl, HSN (demo), HISCO foreign languages, HISCAM, HISCLASS, Thombos
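The idea behind grlc.io is that a SPARQL query stored in a git repository becomes a web API endpoint, with request parameters spliced into placeholder variables before the query is sent to a SPARQL endpoint. A minimal Python sketch of that splicing step; the example query, predicate URIs, and `bind_parameters` helper are invented for illustration (grlc documents a `?_name` placeholder convention, but its actual implementation differs):

```python
# Sketch of parameterizing a stored SPARQL query, in the spirit of grlc.io:
# a placeholder variable of the form ?_name is replaced by a literal taken
# from an API request. Query and URIs are hypothetical examples.

QUERY = """
SELECT ?person ?occupation WHERE {
  ?person <http://example.org/hasOccupation> ?occupation ;
          <http://example.org/bornIn> ?_birthplace .
}
"""

def bind_parameters(query: str, **params: str) -> str:
    """Replace each ?_name placeholder with a quoted literal value."""
    for name, value in params.items():
        query = query.replace("?_" + name, '"%s"' % value)
    return query
```

Calling `bind_parameters(QUERY, birthplace="Utrecht")` yields a concrete query ready to POST to an endpoint; versioning the query file in git (as with SPARQL2git) then gives the API call itself a citable history.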
  22. So what can we do now? Get me the data to answer this question: What’s the influence of someone’s religion on social position, controlled for a country’s economic position and time?
  23. https://github.com/CLARIAH/wp4-queries/
  24. questions? @rlzijdeman #dhb2017
