Open Annotation Rollout, Manchester, 2013-06-25
See also PDF version: http://www.slideshare.net/soilandreyes/2013-0624annotatingr-osopenannotationmeeting-23289491
1. Wf4Ever: Annotating
research objects
Stian Soiland-Reyes, Sean Bechhofer
myGrid, University of Manchester
Open Annotation Rollout, Manchester, 2013-06-24
This work is licensed under a
Creative Commons Attribution 3.0 Unported License
2. Motivation: Scientific workflows
Coordinated execution of
services and linked resources
Dataflow between services
Web services (SOAP, REST)
Command line tools
Scripts
User interactions
Components (nested workflows)
Method becomes:
Documented visually
Shareable as single definition
Reusable with new inputs
Repurposable with other services
Reproducible?
http://www.myexperiment.org/workflows/3355
http://www.taverna.org.uk/
http://www.biovel.eu/
3. But workflows are complex machines
Outputs
Inputs
Configuration
Components
http://www.myexperiment.org/workflows/3355
• Will it still work after a year? 10 years?
• Expanding components, we see a workflow involves a
series of specific tools and services which
• Depend on datasets, software libraries, other tools
• Are often poorly described or understood
• Over time evolve, change, break or are replaced
• User interactions are not reproducible
• But can be tracked and replayed
4. Electronic Paper Not Enough
Hypothesis Experiment
Result
Analysis
Conclusions
Investigation
Data Data
Electronic
paper
Publish
http://www.force11.org/beyondthepdf2
http://figshare.com/
Open Research movement: Openly share the data of your experiments
http://datadryad.org/
6. What is in a research object?
A Research Object bundles and relates digital
resources of a scientific experiment or
investigation:
Data used and results produced in
experimental study
Methods employed to produce and analyse
that data
Provenance and settings for the experiments
People involved in the investigation
Annotations about these resources, that are
essential to the understanding and
interpretation of the scientific outcomes
captured by a research object
http://www.researchobject.org/
7. Gathering everything
Research Objects (RO) aggregate related resources, their
provenance and annotations
Conveys "everything you need to know" about a
study/experiment/analysis/dataset/workflow
Shareable, evolvable, contributable, citable
ROs have their own provenance and lifecycles
8. Why Research Objects?
i. To share your research materials
(RO as a social object)
ii. To facilitate reproducibility and reuse of methods
iii. To be recognized and cited
(even for constituent resources)
iv. To preserve results and prevent decay
(curation of workflow definition;
using provenance for partial rerun)
13. Annotations in research objects
Types: "This document contains a hypothesis"
Relations: "These datasets are consumed by that tool"
Provenance: "These results came from this workflow run"
Descriptions: "Purpose of this step is to filter out invalid data"
Comments: "This method looks useful, but how do I install it?"
Examples: "This is how you could use it"
15. What is provenance?
By Dr Stephen Dann
licensed under Creative Commons Attribution-ShareAlike 2.0 Generic
http://www.flickr.com/photos/stephendann/3375055368/
Attribution
who did it?
Derivation
how did it change?
Activity
what happens to it?
Licensing
can I use it?
Attributes
what is it?
Origin
where is it from?
Annotations
what do others say about it?
Aggregation
what is it part of?
Date and tool
when was it made?
using what?
16. Attribution
Who collected this sample? Who helped?
Which lab performed the sequencing?
Who did the data analysis?
Who curated the results?
Who produced the raw data this analysis is based on?
Who wrote the analysis workflow?
Why do I need this?
i. To be recognized for my work
ii. Who should I give credits to?
iii. Who should I complain to?
iv. Can I trust them?
v. Who should I make friends with?
prov:wasAttributedTo
prov:actedOnBehalfOf
dct:creator
dct:publisher
pav:authoredBy
pav:contributedBy
pav:curatedBy
pav:createdBy
pav:importedBy
pav:providedBy
...
Roles
Person
Organization
SoftwareAgent
Agent types
Alice
The
lab
Data
wasAttributedTo
actedOnBehalfOf
http://practicalprovenance.wordpress.com/
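The attribution chain on this slide (Data → Alice → The lab) can be sketched as plain triples. A minimal Python sketch; the identifiers (ex:data, ex:alice, ex:lab) are illustrative stand-ins, not from the slides:

```python
# Minimal sketch: PROV attribution as (subject, predicate, object) triples.
# All identifiers (ex:data, ex:alice, ex:lab) are illustrative.
triples = [
    ("ex:data", "prov:wasAttributedTo", "ex:alice"),
    ("ex:alice", "prov:actedOnBehalfOf", "ex:lab"),
    ("ex:alice", "rdf:type", "prov:Person"),
    ("ex:lab", "rdf:type", "prov:Organization"),
]

def credit_chain(resource, triples):
    """Follow wasAttributedTo, then actedOnBehalfOf, to list who to credit."""
    agents = [o for s, p, o in triples
              if s == resource and p == "prov:wasAttributedTo"]
    for agent in list(agents):
        agents += [o for s, p, o in triples
                   if s == agent and p == "prov:actedOnBehalfOf"]
    return agents

print(credit_chain("ex:data", triples))  # ['ex:alice', 'ex:lab']
```

This answers the "who should I give credits to?" question by following the delegation link one step, which is all the slide's diagram shows.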
17. Derivation
Which sample was this metagenome sequenced from?
Which metagenome was this sequence extracted from?
Which sequence was the basis for the results?
What is the previous revision of the new results?
Why do I need this?
i. To verify consistency (did I use
the correct sequence?)
ii. To find the latest revision
iii. To backtrack to where a divergence
appeared after a change
iv. To credit work I depend on
v. Auditing and defence for peer review
wasDerivedFrom
wasQuotedFrom
Sequence
New
results
wasDerivedFrom
Sample
Metagenome
Old
results
wasRevisionOf
wasInfluencedBy
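The derivation questions above amount to walking a prov:wasDerivedFrom chain back to its origin. A minimal Python sketch with illustrative identifiers (the chain mirrors the slide's sample → metagenome → sequence → results diagram):

```python
# Sketch: walking a prov:wasDerivedFrom chain back to the original sample.
# Identifiers are illustrative.
derived_from = {
    "ex:new-results": "ex:sequence",
    "ex:sequence": "ex:metagenome",   # the "quote" step, simplified
    "ex:metagenome": "ex:sample",
}

def lineage(entity):
    """Return the full derivation chain, origin last."""
    chain = [entity]
    while chain[-1] in derived_from:
        chain.append(derived_from[chain[-1]])
    return chain

print(lineage("ex:new-results"))
# ['ex:new-results', 'ex:sequence', 'ex:metagenome', 'ex:sample']
```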
18. Activities
What happened? When? Who?
What was used and generated?
Why was this workflow started?
Which workflow ran? Where?
Why do I need this?
i. To see which analysis was performed
ii. To find out who did what
iii. What was the metagenome
used for?
iv. To understand the whole process
"make me a Methods section"
v. To track down inconsistencies
used
wasGeneratedBy
wasStartedAt
"2012-06-21"
Metagenome
Sample
wasAssociatedWith
Workflow
server
wasInformedBy
wasStartedBy
Workflow
run
wasGeneratedBy
Results
Sequencing
wasAssociatedWith
Alice
hadPlan
Workflow
definition
hadRole
Lab
technician
Results
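The activity diagram on this slide can be sketched as a record linking what was used, what was generated, and who was responsible. A minimal Python sketch with illustrative identifiers:

```python
# Sketch of a workflow run as a PROV Activity record (illustrative names).
run = {
    "@id": "ex:workflow-run",
    "@type": "prov:Activity",
    "prov:startedAtTime": "2012-06-21",
    "prov:used": ["ex:sequence", "ex:workflow-definition"],
    "prov:wasAssociatedWith": "ex:workflow-server",
}
result = {
    "@id": "ex:results",
    "@type": "prov:Entity",
    "prov:wasGeneratedBy": "ex:workflow-run",
}

def generated_by(entity, activities):
    """'Make me a Methods section': find the run behind a result."""
    run_id = entity["prov:wasGeneratedBy"]
    return next(a for a in activities if a["@id"] == run_id)

print(generated_by(result, [run])["prov:startedAtTime"])  # 2012-06-21
```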
20. Provenance of what?
Who made the (content of) research object? Who maintains it?
Who wrote this document? Who uploaded it?
Which CSV was this Excel file imported from?
Who wrote this description? When? How did we get it?
What is the state of this RO? (Live or Published?)
What did the research object look like before? (Revisions) – are
there newer versions?
Which research objects are derived from this RO?
21. Research object model at a glance
Research
Object
Resource
Resource
Resource
Annotation
Annotation
Annotation
oa:hasTarget
Resource
Resource
Annotation graph
oa:hasBody
ore:aggregates
«ore:Aggregation»
«ro:ResearchObject»
«oa:Annotation»
«ro:AggregatedAnnotation»
«trig:Graph»
«ore:AggregatedResource»
«ro:Resource»
Manifest
«ore:ResourceMap»
«ro:Manifest»
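The model at a glance can be sketched as a manifest-shaped structure: the RO aggregates plain resources alongside annotations whose bodies are separate graphs. A minimal Python sketch; file names and URIs are illustrative:

```python
# Sketch of an RO manifest shaped like the model above (illustrative names).
manifest = {
    "@id": "ro:research-object/1/",
    "@type": ["ore:Aggregation", "ro:ResearchObject"],
    "ore:aggregates": [
        {"@id": "workflow.t2flow", "@type": ["ro:Resource"]},
        {"@id": "results.csv", "@type": ["ro:Resource"]},
        {
            "@type": ["oa:Annotation", "ro:AggregatedAnnotation"],
            "oa:hasTarget": "workflow.t2flow",
            "oa:hasBody": {"@id": "annotations/workflow.ttl",
                           "@type": ["trig:Graph"]},
        },
    ],
}

# Annotations sit in the same aggregation as ordinary resources:
annotations = [r for r in manifest["ore:aggregates"]
               if "oa:Annotation" in r.get("@type", [])]
print(len(annotations))  # 1
```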
22. Wf4Ever architecture
Blob store
Graph
store
Resource
Uploaded to
Manifest
Annotation
graph
Research
object
Annotation
ORE Proxy
External
reference
Redirects to
If RDF, import as named graph
SPARQL
REST resources
http://www.wf4ever-project.org/wiki/display/docs/RO+API+6
23. Where do RO annotations come from?
Imported from uploaded resources, e.g. embedded in
workflow-specific format (creator: unknown!)
Created by users filling in Title, Description etc. on website
By automatically invoked software agents, e.g.:
A workflow transformation service extracts the workflow
structure as RDF from the native workflow format
Provenance trace from a workflow run, which describes the
origin of aggregated output files in the research object
24. How we are using the OA model
Multiple oa:Annotation contained within the manifest RDF and
aggregated by the RO.
Provenance (PAV, PROV) on oa:Annotation (who made the link)
and body resource (who stated it)
Typically a single oa:hasTarget, either the RO or an aggregated
resource.
oa:hasBody to a trig:Graph resource (read: RDF file) with the
"actual" annotation as RDF:
<workflow1> dct:title "The wonderful workflow" .
Multiple oa:hasTarget for relationships, e.g. graph body:
<workflow1> roterms:inputSelected <input2.txt> .
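The two patterns on this slide can be sketched side by side: a single-target annotation whose body graph carries a dct:title statement, and a multi-target annotation for a relationship. A minimal Python sketch; the provenance agent and file paths are illustrative:

```python
# Sketch of the two annotation patterns above (illustrative names/paths).
# Pattern 1: single target; the body graph carries a dct:title statement.
title_annotation = {
    "@type": "oa:Annotation",
    "oa:hasTarget": "workflow1",
    "oa:hasBody": "annotations/title.ttl",
    "pav:createdBy": "ex:alice",   # who made the link (body has its own provenance)
}
title_body = '<workflow1> dct:title "The wonderful workflow" .'

# Pattern 2: multiple targets for a relationship between resources.
relation_annotation = {
    "@type": "oa:Annotation",
    "oa:hasTarget": ["workflow1", "input2.txt"],
    "oa:hasBody": "annotations/inputs.ttl",
}
relation_body = "<workflow1> roterms:inputSelected <input2.txt> ."

print(len(relation_annotation["oa:hasTarget"]))  # 2
```

Keeping pav:createdBy on the annotation, separate from the body's own provenance, matches the slide's point that the person making the link may differ from the person who stated the body's content.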
25. What should we also be using?
Motivations
myExperiment: commenting, describing, moderating, questioning,
replying, tagging – made our own vocabulary as OA did not exist
Selectors on compound resources
E.g. a description of a processor within a workflow
definition. How do you find this if you only know the workflow
definition file?
Currently: Annotations on separate URIs for each component,
described in workflow structure graph, which is body of annotation
targeting the workflow definition file
Importing/referring to annotations from other OA systems
(how to discover those?)
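The selector mechanism hinted at above could look like the following. oa:SpecificResource and oa:FragmentSelector are real OA terms, but the selector value here is purely illustrative: workflow formats would need their own fragment convention, which is exactly the open question on this slide.

```python
# Sketch: addressing a processor inside a workflow definition with an
# oa:SpecificResource. The selector value is purely illustrative;
# real workflow formats would need their own fragment convention.
annotation = {
    "@type": "oa:Annotation",
    "oa:hasBody": "annotations/processor-description.ttl",
    "oa:hasTarget": {
        "@type": "oa:SpecificResource",
        "oa:hasSource": "workflow.t2flow",
        "oa:hasSelector": {
            "@type": "oa:FragmentSelector",
            "rdf:value": "processor/filter_invalid",   # hypothetical fragment
        },
    },
}

target = annotation["oa:hasTarget"]
print(target["oa:hasSource"])  # workflow.t2flow
```

With this shape, anyone who only knows the workflow definition file can discover the deep annotation by matching oa:hasSource, instead of climbing through a separate workflow structure graph.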
26. What is the benefit of OA for us?
Existing vocabulary – no need for our project to try to
specify and agree on our own way of tracking
annotations.
Potential interoperability with third-party annotation
tools
E.g. we want to annotate a figure in a paper and
relate it to a dataset in a research object – don't
want to write another tool for that!
Existing annotations (pre research object) in Taverna
and myExperiment map easily to OA model
27. History lesson (AO/OAC/OA)
When forming the Wf4Ever Research Object model, we found:
Open Annotation Collaboration (OAC)
Annotation Ontology (AO)
What was the difference?
Technically, for Wf4Ever's purposes: they are equivalent
Political choice: AO – supported by Utopia (Manchester)
We encouraged the formation of W3C Open Annotation
Community Group and a joint model
Next: Research Object model v0.2 and RO Bundle will use the
OA model – since we only used 2 properties, mapping is 1:1
http://www.wf4ever-project.org/wiki/display/docs/2011-09-26+Annotation+model+considerations
28. Saving a research object:
RO bundle
Single, transferable research object
Self-contained snapshot
Which files in ZIP, which are URIs? (Up to user/application)
Regular ZIP file, explored and unpacked with standard tools
JSON manifest is programmatically accessible without RDF
understanding
Works offline and in desktop applications – no REST API
access required
Basis for RO-enabled file formats, e.g. Taverna run bundle
Exchanged with myExperiment and RO tools
30. RO Bundle
What is aggregated? File in ZIP or external URI
Who made the RO? When?
Who?
External URIs placed in folders
Embedded annotation
External annotation, e.g. blogpost
JSON-LD context → RDF
RO provenance
.ro/manifest.json
Format
Note: JSON "quotes" not shown above for brevity
http://json-ld.org/
http://orcid.org/
https://w3id.org/bundle
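The slide's layout can be sketched end to end: a ZIP with a .ro/manifest.json that records who made the bundle, what is aggregated (files in the ZIP vs external URIs), and annotations. A minimal Python sketch; the field names loosely follow the manifest shown on the slide, and all values are illustrative:

```python
import io
import json
import zipfile

# Sketch: saving a minimal RO Bundle as a ZIP with a .ro/manifest.json,
# following the layout described above. All values are illustrative.
manifest = {
    "@context": ["https://w3id.org/bundle/context"],
    "id": "/",
    "createdBy": {"name": "Alice"},
    "aggregates": [
        {"file": "/results/analysis.jpeg"},            # file in the ZIP
        {"uri": "http://example.com/blog-post"},        # external reference
    ],
    "annotations": [
        {"about": "/results/analysis.jpeg",
         "content": "/.ro/annotations/analysis.ttl"},
    ],
}

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr(".ro/manifest.json", json.dumps(manifest, indent=2))
    z.writestr("results/analysis.jpeg", b"")            # placeholder payload

# Any standard ZIP tool (or zipfile) can unpack and inspect the bundle:
with zipfile.ZipFile(buf) as z:
    loaded = json.loads(z.read(".ro/manifest.json"))
print(loaded["aggregates"][0]["file"])  # /results/analysis.jpeg
```

Because the manifest is plain JSON (JSON-LD when the context is applied), an application can read it without any RDF tooling, which is the accessibility point the slide makes.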
31. http://mayor2.dia.fi.upm.es/oeg-upm/files/dgarijo/motifAnalysisSite/
<h3 property="dc:title">Common Motifs in Scientific Workflows:
<br>An Empirical Analysis</h3>
<body resource="http://www.oeg-upm.net/files/dgarijo/motifAnalysisSite/"
typeOf="ore:Aggregation ro:ResearchObject">
Research Object as RDFa
http://www.oeg-upm.net/files/dgarijo/motifAnalysisSite/
<li><a property="ore:aggregates" href="t2_workflow_set_eSci2012.v.0.9_FGCS.xls"
typeOf="ro:Resource">Analytics for Taverna workflows</a></li>
<li><a property="ore:aggregates" href="WfCatalogue-AdditionalWingsDomains.xlsx"
typeOf="ro:Resource">Analytics for Wings workflows</a></li>
<span property="dc:creator prov:wasAttributedTo"
resource="http://delicias.dia.fi.upm.es/members/DGarijo/#me"></span>
Most of the user-contributed content in a research object is recorded as annotations
Typing of resources and relating them to each other are individual annotations
This shows the quality assessment of a research object – a Minimal Information Model forms a checklist which indicates how preservable a research object is. Here, many of the checklist items indicate the existence of particular annotations, while others check the availability of HTTP resources. Annotations are additionally picked up for the UI (title/description) – this highlights the importance of the annotation model. This was implemented before OA, and the format of the description was not recorded, hence the HTML is rendered as plain text.
Quality metrics are monitored over time, so as the research object evolves (say gaining additional annotations), the health of the annotations can be indicated graphically.
The annotation framework basically allows "any annotation", so we had to write guidelines on which annotation properties we are going to recommend and "natively" understand. We reused existing vocabularies like Dublin Core Terms, PROV and PAV, but also had to make our own, more specific vocabularies.
When I first heard about provenance, I thought it was something French, like Provence. Provenance is classically understood as where something is coming from (origin); like in this example – are the shallots from Holland or France? Was there some kind of derivation that changed their nationality? Obviously, if we are going to talk about something's provenance, we have to be clear about what that thing is: the shallots? The sign? The picture? Or this Flickr page? Provenance also covers other aspects, mainly attribution (who did it), dates (when?), and activities (what happened). There are attributes to describe the state of the thing. Perhaps not always considered provenance, but still relevant, are aggregations (one thing is part of another), licensing (can I use it?) and of course annotations – what do others say about it?
Let's take an example of a biomedical lab that sequences genome data. There would be lots of questions relating to attribution – different people play different roles, and even act on behalf of others. We can call these agents – things that can perform actions. People are obvious agents, as are organizations (like the lab), but software can also be an active agent.
When we talk about things, or entities, we might want to relate them to each other. An extracted genome can be said to be derived from the sample. The sequence we select from the genome is a kind of quote. The result we get from analysing this is derived from the sequence, and is a revision of the old result ā which again has its own chain of influences which might differ.
Activities are what is happening – typically using existing entities and generating new ones, somewhat under the control of one or more agents. Taken together, you can describe a whole lineage of activities that generate and consume each other's entities.
So these three classes are what is at the core of the W3C PROV model, which we have helped build. The Entity is derived from other entities, and attributed to an Agent. An Activity uses one entity and generates another, and is associated with an agent.
So for Wf4Ever, what kind of provenance are we talking about? It is a bit more down to earth as we are generally just talking about files and documents, but still, as you form a research object combining many different resources, the different dimensions of provenance become increasingly important to track.
So let's have a look at what a Research Object looks like. The core is the concept of the Research Object itself, which you may also know as an ORE aggregation. This is described by the manifest, which is simply an RDF file. The RO aggregates a series of resources – in Linked Data these could be anywhere in the world. Additionally it aggregates a set of annotations, each of which, as we know, is the link between a target resource (here aggregated in the RO) and a body resource. In Wf4Ever we typically provide the body as a separate RDF graph, so that we can use existing vocabularies to describe and relate the resources.
So how is this model exposed in the Wf4Ever architecture? In the back end there is a Blob store which can store any odd file, and a Graph store, where we import any RDF file as a named graph. This is exposed as a SPARQL endpoint. The manifest is one of those files, so you can query Research Object aggregations, their annotations and the annotation bodies in a single SPARQL query. Then each of the different types is exposed as a REST resource. Compound resources are simply PUT or GET almost directly into the Blob store. The ORE Proxy is a mechanism for "uploading" a URL; this is how we keep external references. The annotation graphs (the bodies) are also simply uploaded as files – and then related to the resource(s) they describe by making an Annotation within the manifest. So the Manifest is the key RDF document, as it contains both the aggregation and the annotations – but importantly not the bodies of those annotations (which are typically either one line or thousands of lines of RDF), as those are separate files.
So where do our annotations come from? For one, we are importing existing annotations from existing formats – like extracting annotations from within workflow definitions. Secondly, users annotate research objects by interacting with the website, filling in title, description, etc. Thirdly, we have software agents that are automatically invoked; for instance, we have a transformation service that extracts the workflow structure as an RDF graph by parsing the native workflow format. Provenance traces of workflow runs can be inspected to relate files in the RO with generated outputs in the workflow run.
So which aspects of the OA model are we using? Well, it actually comes down to just oa:hasTarget and oa:hasBody. We use the oa:Annotation as the anchor for provenance about who made the relation, and keep separate provenance on the body resource – as the person stating it could differ.
myExperiment, a social networking site for sharing workflows, already has the aspects of commenting, etc. It does have an RDF interface, but uses our own vocabulary. As we upgrade myExperiment to be based on research objects, we will be moving to use the OA model for motivations – the proposed OA motivations largely cover what we already do. We also have an issue in that we have many annotations on compound resources. For instance, within the Taverna workbench it is possible to annotate individual steps of a workflow with descriptions, examples and so on. Within a research object these are referred to by their full URI – but how do you find this URI, as the compound resources of the workflow definition are not aggregated? Currently we have to climb through the generated workflow structure graph to find the processors, which we find through a separate annotation on the workflow definition file. If we use the OA selector mechanism we should be able to also relate those deep annotations as a SpecificResource on the workflow definition.
Now I have to be fair: we started on this quite a bit before OA existed. In fact, when we started there seemed to be two competing models. But as we inspected OAC and AO and wondered about how we would use them, we came to the conclusion that, at least for our purposes, they are technically equivalent. So we just went with a political choice: AO was already supported by Utopia, which is made in Manchester, so we went with that. However, our enquiries about this also helped trigger the formation of the OA community group, which we have also participated in. We are looking forward to using the OA model as we now revise the RO model – the transition is not hard, as we only used the two properties for target and body, which map 1:1. In the Research Object Bundle, which is a way to save a whole research object as a single ZIP file, we have already adopted OA.
So not everyone has access to set up RESTful semantic web servers; in particular we've run into this with desktop applications – users just want to save files and then decide where they are stored. So we decided to write a serialization format for Research Objects, which we call the RO Bundle. We wanted this to be accessible for application developers, so we've adopted ZIP and JSON, and in a way this lets you create research objects and make annotations without ever seeing any RDF.
This is how we represent a workflow run as a Workflow Results RO Bundle. We aggregate the workflow outputs, the workflow definition, the inputs used for execution, a description of the execution environment, external URI references (such as the project homepage) and attribution to the scientists who contributed to the bundle. This effectively forms a Research Object, all tied together by the RO Bundle manifest, which is in JSON-LD format (normal JSON that is also valid RDF).
This shows how the JSON manifest focuses on the most common aspects of a research object: who made it? When? What is aggregated – files in the ZIP but also external URIs; it is up to the application or person making the bundle to decide what is included in the ZIP. Annotations are included at the bottom here: we see that there's an annotation "about" (target) the analysis JPEG, and the content (the body) is within the annotations/ folder. Similarly, the next annotation relates the external resource (a blog post) with our aggregation of a resource. This is processable as JSON-LD – so it is not just JSON, it is also RDF, and out come normal ORE aggregations and OA annotations.
Here's another example of lightweight usage of RDFa to turn a normal index.html into a research object. Here the author is given as a creator of the RO, and the Excel files that helped form this analysis are aggregated by the research object. This way of using the Research Object model requires no infrastructure or special packaging – and we have augmented this page to also have a downloadable RO Bundle so you can get all the aggregated resources in one go.
So we have recently formed a W3C Community Group for Research Objects, which has gathered significant interest: 75 participants. As you see, I am one of the chairs, and so is Rob, whom you already know from the OA group. We are just starting up, and our focus in the RO community group is how to practically use Research Objects as a concept, rather than to specify a new model – we'll refer to existing models where it's appropriate, but also explore other models which could be described as research objects.