Metadata standards and ontologies are important for digital humanities research. Key points from the document include:
- Standards help ensure consistency, reliability, and interoperability. They are developed through an open process involving interested parties.
- The standards landscape includes file formats, technical protocols, and descriptive standards for libraries, archives, and museums. Dublin Core is commonly used for discovery.
- Ontologies provide rules for describing context and relationships through semantic web technologies like RDF. They help link and integrate data.
- Standards and ontologies in digital cultural heritage include BIBFRAME, CIDOC-CRM, SKOS, and others to represent information for discovery, interpretation, and reuse.
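The RDF idea mentioned above can be made concrete with a toy sketch: a resource is described by subject/predicate/object triples whose predicates come from a shared vocabulary such as Dublin Core. This is an illustration only (a real project would use a library like rdflib); the item URI and values are invented.

```python
# Toy sketch: describing a digitised item with RDF triples that use
# Dublin Core terms, serialised by hand as N-Triples. The item URI and
# metadata values are hypothetical examples.

DC = "http://purl.org/dc/terms/"

triples = [
    ("http://example.org/item/42", DC + "title", '"Letters of Ada Lovelace"'),
    ("http://example.org/item/42", DC + "creator", '"Lovelace, Ada"'),
    ("http://example.org/item/42", DC + "type", '"Text"'),
]

def to_ntriples(triples):
    """Render (subject, predicate, object-literal) tuples as N-Triples lines."""
    return "\n".join(f"<{s}> <{p}> {o} ." for s, p, o in triples)

print(to_ntriples(triples))
```

Because every institution uses the same `dc:` term URIs, such records can be linked and merged across collections, which is the interoperability point made above.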
3.24.15 Slides, “New Possibilities: Developments with DSpace and ORCID” - DuraSpace
Hot Topics: The DuraSpace Community Webinar Series
Series 11: Integrating ORCID Persistent Identifiers with DSpace, Fedora and VIVO
Webinar 1: “New Possibilities: Developments with DSpace and ORCID”
Tuesday, March 24, 2015
Curated by Josh Brown, ORCID
Presented by: Bram Luyten, Co-Founder, @mire - Andrea Bollini, CRIS Solution Product Manager, CINECA - Michele Mennielli, International Relations Manager, CINECA - João Moreira, Head of Scientific Information, FCT-FCCN - Paulo Graça, RCAAP Team Member
The document discusses scaling web data at low cost. It begins by presenting Javier D. Fernández and providing context about his work in the semantic web, open data, big data management, and databases. It then discusses techniques for compressing and querying large RDF datasets at low cost using binary RDF formats like HDT. Examples of applications using these techniques include compressing and sharing datasets, fast SPARQL querying, and embedded systems. It also discusses efforts to enable web-scale querying through projects like LOD-a-lot that integrate billions of triples for federated querying.
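The core idea behind binary RDF formats like HDT can be sketched in a few lines: replace repeated term strings with small integer IDs through a shared dictionary, so the triples themselves become compact ID tuples. This is a sketch of the concept only, not the actual HDT layout; the example terms are invented.

```python
# Toy illustration of dictionary-based RDF compression (the principle
# underlying HDT): each distinct term is stored once and triples are
# kept as integer-ID tuples.

def encode(triples):
    """Map each distinct term to an integer ID; return (dictionary, id_triples)."""
    term_to_id = {}
    id_triples = []
    for triple in triples:
        ids = []
        for term in triple:
            if term not in term_to_id:
                term_to_id[term] = len(term_to_id)
            ids.append(term_to_id[term])
        id_triples.append(tuple(ids))
    return term_to_id, id_triples

triples = [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:alice", "foaf:name", '"Alice"'),
    ("ex:bob",   "foaf:name", '"Bob"'),
]
dictionary, compact = encode(triples)
# "ex:alice" and "foaf:name" are stored once each, however often they recur.
```

On real datasets with billions of triples, this separation of dictionary and triple structure is what makes compressed in-memory querying feasible.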
Presentation given* at the 13th International Semantic Web Conference (ISWC), in which we propose a compressed format to represent RDF data streams. See the original article at: http://dataweb.infor.uva.es/wp-content/uploads/2014/07/iswc14.pdf
* Presented by Alejandro Llaves (http://www.slideshare.net/allaves)
LDP4j: A framework for the development of interoperable read-write Linked Da... - Nandana Mihindukulasooriya
This presentation introduces LDP4j, an open source Java-based framework for the development of read-write Linked Data applications based on the W3C Linked Data Platform 1.0 (LDP) specification, available under the Apache 2.0 license. It was presented at the ISWC 2014 Developers Workshop.
http://www.ldp4j.org/
This document discusses the DICOM standard and how it relates to FHIR for medical imaging. It provides an overview of key DICOM concepts like the image hierarchy and metadata tags. It also demonstrates how to use common DICOM tools and requests like C-FIND queries. Finally, it shows how FHIR resources like ImagingStudy can be used to represent DICOM studies and link to images accessible via DICOMweb services like WADO-RS.
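To make the DICOM-to-FHIR link concrete, a minimal ImagingStudy resource can be sketched as a plain dict: the DICOM Study Instance UID travels as an identifier, the series carries the modality, and an endpoint reference points at a DICOMweb (WADO-RS) service. Field names follow the FHIR ImagingStudy resource, but this is a trimmed sketch (some fields FHIR requires are omitted) and all UIDs and references are invented.

```python
# Hedged sketch: a minimal FHIR ImagingStudy-shaped dict for a DICOM study.
# UIDs, patient reference and endpoint are hypothetical; required fields
# such as instance.sopClass are omitted for brevity.

import json

imaging_study = {
    "resourceType": "ImagingStudy",
    "status": "available",
    "subject": {"reference": "Patient/example"},
    # DICOM Study Instance UID carried as an identifier
    "identifier": [{
        "system": "urn:dicom:uid",
        "value": "urn:oid:1.2.840.113619.2.1",        # made-up UID
    }],
    "series": [{
        "uid": "1.2.840.113619.2.1.1",                # made-up series UID
        "modality": {"system": "http://dicom.nema.org/resources/ontology/DCM",
                     "code": "CT"},
        "instance": [{"uid": "1.2.840.113619.2.1.1.1"}],
    }],
    # where a client can fetch the actual pixel data via DICOMweb
    "endpoint": [{"reference": "Endpoint/dicomweb-wado-rs"}],
}

print(json.dumps(imaging_study, indent=2))
```

The resource itself stays small: it mirrors the DICOM study/series/instance hierarchy as metadata, while the images remain on the imaging server behind the referenced endpoint.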
The W3C Linked Data Platform (LDP) specification describes a set of best practices and a simple approach for a read-write Linked Data architecture, based on HTTP access to web resources that describe their state using the RDF data model. This presentation provides a set of simple examples that illustrate how an LDP client can interact with an LDP server in the context of a read-write Linked Data application, i.e. how to use the LDP protocol to retrieve, update, create and delete Linked Data resources.
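The create/update interactions described above can be sketched as plain HTTP request descriptions. The container URL is hypothetical; the headers (`Slug` for a suggested name, `Link rel="type"` for the interaction model, `If-Match` to avoid lost updates) follow the LDP specification, but the code builds request dicts rather than performing live calls.

```python
# Sketch of the HTTP exchanges an LDP client performs. No network I/O:
# each function returns the pieces of a request a real client would send.

CONTAINER = "http://example.org/container/"      # hypothetical LDP container
LDP_RESOURCE = "http://www.w3.org/ns/ldp#Resource"

def create_request(slug, turtle_body):
    """Build an LDP 'create': POST a Turtle document to a container."""
    return {
        "method": "POST",
        "url": CONTAINER,
        "headers": {
            "Content-Type": "text/turtle",
            "Slug": slug,                              # suggested resource name
            "Link": f'<{LDP_RESOURCE}>; rel="type"',   # interaction model
        },
        "body": turtle_body,
    }

def update_request(url, turtle_body, etag):
    """Build an LDP 'update': PUT with If-Match so concurrent edits fail fast."""
    return {
        "method": "PUT",
        "url": url,
        "headers": {"Content-Type": "text/turtle", "If-Match": etag},
        "body": turtle_body,
    }

req = create_request("alice", "<> a <http://xmlns.com/foaf/0.1/Person> .")
```

Retrieval and deletion are plain `GET` and `DELETE` on the resource URL the server minted, which is why LDP applications can be written with any ordinary HTTP client.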
The document discusses different types of metadata schemas used for digital collections, including Dublin Core (DC), Qualified Dublin Core (QDC), MARC, MARCXML, MODS, VRA Core, CDWA Lite, GEM, LOM, TEI, and EAD. It provides information on the purpose, content standards, limitations, and best uses of each schema. The document is intended as a workshop on metadata for digital collections.
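As a small companion to the schema survey above, a Dublin Core record can be emitted as namespaced XML with nothing but the standard library. The element names use the standard `dc:` namespace; the sample field values are invented.

```python
# Minimal sketch: serialise a Dublin Core record as XML using only the
# standard library. Sample metadata values are hypothetical.

import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)   # so output uses the familiar dc: prefix

def dc_record(fields):
    """fields: dict of Dublin Core element name -> value. Returns XML text."""
    root = ET.Element("record")
    for name, value in fields.items():
        el = ET.SubElement(root, f"{{{DC_NS}}}{name}")
        el.text = value
    return ET.tostring(root, encoding="unicode")

record_xml = dc_record({"title": "Glass plate negative, 1898",
                        "creator": "Unknown photographer",
                        "format": "image/tiff"})
print(record_xml)
```

Richer schemas in the list above (MODS, TEI, EAD) follow the same pattern with deeper element hierarchies, which is exactly the trade-off the workshop compares: expressiveness versus the simplicity of flat DC.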
DSpace-CRIS: a CRIS enhanced repository platform - Andrea Bollini
International Conference on Economics and Business Information 19 to 20 April 2016 in Berlin
This presentation introduces you to version 5.5.0 of the DSpace-CRIS extension. With this extension you can capture the full picture of the research activities conducted in your institution, together with their context. It enables you to showcase the experts, facilities, services and much more, to attract funding, facilitate collaborations and curate the scientific reputation of your institution.
HydraDAM2: Repository Challenges and Solutions for Large Media Files - Jon W. Dunn
Karen Cariani and Jon W. Dunn presentation, Open Repositories 2016, Dublin, June 2016. https://www.conftool.com/or2016/index.php?page=browseSessions&form_session=141#paperID104
Why do they call it Linked Data when they want to say...? - Oscar Corcho
The four Linked Data publishing principles established in 2006 seem to be quite clear and well understood by people inside and outside the core Linked Data and Semantic Web community. However, not only when discussing the goodness of Linked Data with outsiders but also when reviewing papers for the COLD workshop series, I find myself, on many occasions, going back to the principles to see whether some approach for Web data publication and consumption is actually Linked Data or not. In this talk we will review some of the current approaches for publishing data on the Web, and reflect on why it is sometimes so difficult to reach agreement on what we understand by Linked Data. Furthermore, we will take the opportunity to describe yet another approach that we have been working on recently at the Center for Open Middleware, a joint technology center between Banco Santander and Universidad Politécnica de Madrid, to facilitate Linked Data consumption.
The document discusses profiling with clinFHIR. It outlines the agenda, which includes reviewing key FHIR elements, clinical models, and resources. It then discusses the need to adapt FHIR to specific contexts through profiling. Profiling involves constraining resources, changing coded-element bindings, and adding extensions. It provides examples of how profiling can limit the number of allowed names to one, change value sets, and require identifiers. The talk covers structured and coded data, including value sets and extensions, and demonstrates creating extension definitions and profiles using clinFHIR.
The webinar discussed FAIRDOM services that can help applicants to the ERACoBioTech call with their data management plans and requirements. FAIRDOM offers webinars on developing data management plans, and their platform and tools can help with organizing, storing, sharing, and publishing research data and models in a FAIR manner by utilizing metadata standards. Different levels of support are available, from general community resources through their hub, to premium customized support for individual projects. Consortia can include FAIRDOM as a subcontractor within the guidelines of the ERACoBioTech call.
Detecting Good Practices and Pitfalls when Publishing Vocabularies on the Web - María Poveda Villalón
The uptake of Linked Data (LD) has promoted the proliferation of datasets and their associated ontologies, bringing their semantics to the data being published. These ontologies should be evaluated at different stages, both during their development and at publication. As important as correctly modelling the part of the world to be captured in an ontology is publishing, sharing and facilitating the (re)use of the resulting model. In this paper, 11 evaluation characteristics concerning publishing, sharing and facilitating reuse are proposed: 6 good practices and 5 pitfalls, together with their associated detection methods. In addition, a grid-based rating system is generated, showing the results of analysing the vocabularies gathered in the LOV repository. Both contributions, the set of evaluation characteristics and the grid system, can help ontologists reuse existing LD vocabularies or check the one being built.
FHIR can be represented in RDF format. Resources are serialized as directed graphs using URIs, properties, and values. FHIR defines a metadata vocabulary for use in RDF, and a FHIR resource catalog provides the URIs for standard FHIR resources and properties. Shape expressions (ShEx) schemas validate FHIR RDF according to resource definitions. Together, these components allow FHIR data to be queried and manipulated using RDF techniques while maintaining compatibility with the JSON format. Tools exist for converting between FHIR JSON and RDF formats.
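The graph serialisation described above can be illustrated with a toy flattener: nested JSON keys become `fhir:`-prefixed predicates and nested objects become intermediate nodes. The real mapping defined by the FHIR specification is considerably richer (ordered lists, typed literals, ShEx validation); this sketch only shows the shape of the idea, and the sample resource is invented.

```python
# Toy sketch of the FHIR-JSON-to-triples idea: recursively flatten a
# (simplified) resource dict into (subject, predicate, object) tuples.

def to_triples(obj, subject="_:r", prefix="fhir:"):
    """Flatten nested dicts into triples; nested objects get blank-ish nodes."""
    triples = []
    for key, value in obj.items():
        pred = prefix + key
        if isinstance(value, dict):
            node = f"{subject}.{key}"          # intermediate node for the sub-object
            triples.append((subject, pred, node))
            triples.extend(to_triples(value, node, prefix))
        else:
            triples.append((subject, pred, repr(value)))
    return triples

patient = {"resourceType": "Patient", "active": True,
           "maritalStatus": {"text": "Married"}}
triples = to_triples(patient)
```

Once the data is in triple form it can be loaded into any RDF store and queried with SPARQL, while the JSON form remains the wire format, which is the compatibility point the summary makes.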
DSpace-CRIS: An open source solution for Research (EDU15) - Michele Mennielli
The research area is a complex world to manage. It involves collecting data, supporting researchers and administrators, monitoring results, allocating resources efficiently, enhancing visibility, and strengthening national and international collaborations. Research information management (RIM) systems handle these activities, but they can be too expensive. This is why Cineca developed DSpace-CRIS and released it as open source.
Keepit Course 5: Tools for Assessing Trustworthy Repositories - JISC KeepIt project
This presentation provides a quick overview of two key, and complementary, tools used to measure trust of digital repositories. First it focusses on Trustworthy Repositories Audit and Certification (TRAC), leading towards another tool, DRAMBORA, that is applied more extensively in the next presentation. The presentation was given as part of the final module of a 5-module course on digital preservation tools for repository managers, presented by the JISC KeepIt project. For more on this and other presentations in this course look for the tag 'KeepIt course' in the project blog http://blogs.ecs.soton.ac.uk/keepit/
The document discusses the Organisation and Repository Identification (ORI) system and Repository Junction Broker (RJ Broker). ORI aggregates information about organisations and repositories to provide identification and RJ Broker aims to automate deposition of research outputs to multiple repositories using ORI identifiers. It seeks to increase open access deposits by minimizing effort for depositors and repository managers. Key challenges include supporting various stakeholders, file formats, and standards while providing an automated, scalable solution for processing and depositing research outputs across repositories.
FHIR Dev Days: Basic FHIR Terminology Services - DevDays
The document provides an overview of basic FHIR terminology services including:
- CodeSystem resources which define collections of terms and can be used for lookups and subsumption checks.
- ValueSet resources which define sets of codes drawn from CodeSystems and can be expanded or used to validate codes.
- ConceptMap resources which define mappings between ValueSets and can be used for translation.
- Common terminology operations like $expand, $lookup, $validate-code and $translate.
- Examples of integrating terminology in applications and some tips for using terminology services efficiently.
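Two of the operations listed above, $validate-code and $expand, can be sketched with in-memory stand-ins for a CodeSystem and a ValueSet. Real FHIR terminology servers operate over full CodeSystem/ValueSet resources and return Parameters resources; the codes and displays here are invented.

```python
# Toy analogues of two FHIR terminology operations. A CodeSystem is
# modelled as a code->display dict, a ValueSet as a subset of its codes.

CODE_SYSTEM = {  # hypothetical code system: code -> display
    "a1": "Aspirin", "p1": "Paracetamol", "i1": "Ibuprofen",
}
VALUE_SET = {"a1", "i1"}   # the codes allowed in this context

def validate_code(code):
    """Rough analogue of $validate-code: membership check plus display lookup."""
    ok = code in VALUE_SET and code in CODE_SYSTEM
    return {"result": ok, "display": CODE_SYSTEM.get(code)}

def expand():
    """Rough analogue of $expand: enumerate the (code, display) pairs."""
    return sorted((c, CODE_SYSTEM[c]) for c in VALUE_SET)
```

The split mirrors the resource model above: the CodeSystem owns the meanings, the ValueSet only selects from them, and operations like $translate would sit on a third structure (a ConceptMap) relating two such selections.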
Leverage DSpace for an enterprise, mission critical platform - Andrea Bollini
Conference: Open Repositories, Indianapolis, 8-12 June 2015
Presenters: Andrea Bollini, Michele Mennielli
Cineca, Italy
We would like to share with the DSpace Community some useful tips, starting from how to embed DSpace into a larger IT ecosystem that can provide additional value to the information managed. We will then show how publication data in DSpace - enriched with a proper use of the authority framework - can be combined with information coming from the HR system. Thanks to this, the system can provide rich and detailed reports and analysis through a business intelligence solution based on Pentaho's Mondrian OLAP engine and open source data integration tools.
We will also present other use cases related to the management of publication information for reporting purposes: the publication record has an extended lifecycle compared to the one in a basic IR; system load is much bigger, especially in writing, since researchers need to be able to make changes to enrich data when new requirements come from the government or the university research office; and data quality requires the ability to make distributed changes to the publication even after the conclusion of a validation workflow.
Finally we intend to present our direct experience and the challenges we faced to make DSpace easily and rapidly deployable to more than 60 sites.
Dev Days 2017: Questionnaires (Brian Postlethwaite) - DevDays
FHIR questionnaires can be used to define structured data capture forms and surveys. Questionnaires are defined using the Questionnaire resource and submitted data is contained in the QuestionnaireResponse resource. Questionnaires support validation rules, pre-population of data, mapping responses to other FHIR resources, and more advanced features like scoring. Questionnaires provide a standards-based way to define and capture structured data in healthcare and other domains.
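The Questionnaire/QuestionnaireResponse pairing can be sketched with plain dicts: `linkId` is the field FHIR uses to tie each answer back to its question, and the value key (`valueBoolean`, `valueInteger`, ...) depends on the question's type. The questions themselves are invented, and a real form would also carry validation rules.

```python
# Sketch: build a QuestionnaireResponse-shaped dict from raw answers,
# keyed by the linkIds of a (hypothetical) Questionnaire.

questionnaire_items = [
    {"linkId": "q1", "text": "Do you smoke?", "type": "boolean"},
    {"linkId": "q2", "text": "Cigarettes per day", "type": "integer"},
]

def build_response(answers):
    """answers: dict linkId -> raw value. Returns a QuestionnaireResponse dict."""
    items = []
    for q in questionnaire_items:
        if q["linkId"] in answers:
            value_key = "valueBoolean" if q["type"] == "boolean" else "valueInteger"
            items.append({"linkId": q["linkId"],
                          "answer": [{value_key: answers[q["linkId"]]}]})
    return {"resourceType": "QuestionnaireResponse",
            "status": "completed",
            "item": items}

resp = build_response({"q1": True, "q2": 5})
```

Because the response echoes the questionnaire's structure rather than inventing its own, a server can later map answers onto other resources (an Observation per item, say), which is the extraction feature mentioned above.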
DSpace-CRIS: an open source solution - Cineca euroCRIS membership meeting Por... - Andrea Bollini
The idea of DSpace-CRIS dates back to 2009, when Hong Kong University decided to extend the information exposed in their DSpace IR by adding information (people/projects) coming from other systems already in use, mainly for administrative purposes: a CRIS.
One year ago, in November 2012, DSpace-CRIS was released as an open source solution to enrich DSpace (1.8.2). After highlighting the important steps made by the DSpace Community in 2013, which will lead to the final release of DSpace 4.0 in December, Cineca focused its presentation on what DSpace-CRIS is today.
The most important announcement was that DSpace-CRIS is now compatible and compliant with the CERIF standard, and that an export feature in CERIF XML will be available in DSpace-CRIS 4.0. Indeed, the key components of the CERIF data model are supported natively: UUIDs, timestamped relations, and semantic characterization.
In addition, the dynamic, flexible and non-hardcoded approach of the DSpace-CRIS data model makes it very easy to create new entities (beyond the few predefined ones) and configure instances compliant with CERIF.
There are several advantages that DSpace-CRIS brings to Institutional Repositories and to the DSpace community overall:
- CRIS entities as authority for Item metadata values;
- DSpace Items can be linked and displayed in the detail page of any CRIS entities;
- Ability to display selected publications (or any other related entities) in the researcher profile;
- It is possible to create lists of selected publications (or any other related entities);
- Visit statistics for CRIS entity detail pages;
- Global & Top related CERIF Entity views & downloads referencing the CRIS entity (projects for researchers, researchers for OrgUnits, etc.);
- Global & Top item views & downloads referencing the CRIS entity;
- email and RSS alerts;
- Article-level metrics for PubMed (extensible):
  - Cited-by count on the item page
  - Number of articles per researcher
  - Total citations per researcher (only items in the local DSpace database are counted)
Clipper is a web annotation toolkit created by a consortium including City of Glasgow College and The Open University to enable annotation and analysis of audiovisual media without copying large files. It allows users to create "virtual clips" from media sources and annotate them using text and share the clips via URIs. Clipper aims to make audiovisual media as easy to work with as text. It demonstrates potential benefits for research including open data, reproducibility, collaboration, and impact. The toolkit is built using MEAN stack technologies and aligns with emerging W3C annotation standards.
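The "virtual clip" idea above can be sketched with W3C Media Fragment URIs: a clip is just the source URI plus a `#t=start,end` fragment, so no media is ever copied. The video URL and annotation text are invented; Clipper's own clip format may differ.

```python
# Sketch: addressing a time-bounded clip of a video with a Media
# Fragments URI (the #t=start,end syntax from the W3C spec), plus a
# minimal annotation record pointing at it.

def clip_uri(source, start, end):
    """Return a Media Fragments URI selecting seconds [start, end)."""
    if not (0 <= start < end):
        raise ValueError("need 0 <= start < end")
    return f"{source}#t={start},{end}"

clip = clip_uri("http://example.org/lecture.mp4", 90, 135)

# An annotation then needs only the clip URI and a body of text:
annotation = {"target": clip,
              "body": "Speaker introduces the main dataset"}
```

Since the clip is nothing but a URI, it can be shared, cited, and aggregated like any web resource, which is what makes audiovisual material "as easy to work with as text".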
A collaborative approach to "filling the digital preservation gap" for Resear... - Jenny Mitcham
A presentation given by Jenny Mitcham at the Northern Collaboration Conference on 10th September 2015 in Leeds. It describes work underway in the "Filling the Digital Preservation Gap" project, using Archivematica to preserve research data.
The role of annotation in reproducibility (Empirical 2014) - Oscar Corcho
This document discusses the role of annotation in increasing reproducibility in scientific experiments. It analyzes laboratory protocols from various life science journals and finds they often lack important details. The use of annotation and formal modeling can help address this by capturing minimal information about materials, methods, and results. The document also discusses scientific workflows and how motif detection and abstraction can help with understanding and reusing workflows. Formalizing workflow representations, templates, provenance, and linking them can support reproducibility.
The document discusses open data and challenges with publishing open data. It introduces Entryscape Catalog as a solution for easily, explicitly, and quickly publishing open data through intuitive interfaces with minimum manual work. Entryscape Catalog allows describing data through standard-based forms, publishing data one item at a time or all at once, uploading existing non-open data, and creating APIs from tabular data with a click.
NISO Two-Part Webinar: Sustainable Information
Part 2: Digital Preservation of Audio-Visual Content
December 17, 2014
AXF: Finally a Storage and Preservation Standard for the Ages
Brian Campanotti, Chief Technical Officer, Front Porch Digital
DataCite and its DOI infrastructure - IASSIST 2013 - Frauke Ziedorn
- DataCite is an international consortium that aims to make research data citable and accessible by establishing a system for minting DOIs (Digital Object Identifiers) for research data.
- DataCite has grown to include 17 member organizations from 12 countries that work with the Technical Information Library (TIB) to register over 1.5 million DOIs for research data.
- The DataCite metadata schema, based on Dublin Core, requires core metadata for DOI registration and encourages linking related publications, data, and other research objects to facilitate discovery and access.
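The core-metadata requirement above can be sketched as a completeness check over the DataCite kernel's mandatory properties (identifier, creators, titles, publisher, publication year, resource type; exact obligations vary between schema versions). The DOI and all field values are invented.

```python
# Sketch: a DataCite-style metadata record as a dict, with a check for
# the mandatory properties before DOI registration. Values are made up.

REQUIRED = ["identifier", "creators", "titles",
            "publisher", "publicationYear", "resourceType"]

record = {
    "identifier": {"identifierType": "DOI", "identifier": "10.1234/example"},
    "creators": [{"name": "Doe, Jane"}],
    "titles": [{"title": "Measurement series, station A"}],
    "publisher": "Example Data Centre",
    "publicationYear": 2013,
    "resourceType": {"resourceTypeGeneral": "Dataset"},
    # optional but encouraged: link the dataset to its paper
    "relatedIdentifiers": [{"relationType": "IsSupplementTo",
                            "relatedIdentifier": "10.1234/paper"}],
}

def missing_required(rec):
    """Return the mandatory properties absent from rec (empty list = OK)."""
    return [k for k in REQUIRED if k not in rec]
```

The `relatedIdentifiers` entry illustrates the linking the summary mentions: the DOI for the data points at the DOI for the publication it supplements, so discovery works in both directions.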
HydraDAM2: Repository Challenges and Solutions for Large Media FilesJon W. Dunn
Karen Cariani and Jon W. Dunn presentation, Open Repositories 2016, Dublin, June 2016. https://www.conftool.com/or2016/index.php?page=browseSessions&form_session=141#paperID104
Why do they call it Linked Data when they want to say...?Oscar Corcho
The four Linked Data publishing principles established in 2006 seem to be quite clear and well understood by people inside and outside the core Linked Data and Semantic Web community. However, not only when discussing with outsiders about the goodness of Linked Data but also when reviewing papers for the COLD workshop series, I find myself, in many occasions, going back again to the principles in order to see whether some approach for Web data publication and consumption is actually Linked Data or not. In this talk we will review some of the current approaches that we have for publishing data on the Web, and we will reflect on why it is sometimes so difficult to get into an agreement on what we understand by Linked Data. Furthermore, we will take the opportunity to describe yet another approach that we have been working on recently at the Center for Open Middleware, a joint technology center between Banco Santander and Universidad Politécnica de Madrid, in order to facilitate Linked Data consumption.
The document discusses profiling with clinFHIR. It outlines the agenda which includes reviewing key FHIR elements, clinical models, and resources. It then discusses the need to adapt FHIR to specific contexts through profiling. Profiling involves constraining resources, changing coded element bindings, and adding extensions. It provides examples of how profiling can limit names to one, change value sets, and require identifiers. The talk will cover structured and coded data including value sets and extensions. It will demonstrate creating extension definitions and profiles using clinFHIR.
The webinar discussed FAIRDOM services that can help applicants to the ERACoBioTech call with their data management plans and requirements. FAIRDOM offers webinars on developing data management plans, and their platform and tools can help with organizing, storing, sharing, and publishing research data and models in a FAIR manner by utilizing metadata standards. Different levels of support are available, from general community resources through their hub, to premium customized support for individual projects. Consortia can include FAIRDOM as a subcontractor within the guidelines of the ERACoBioTech call.
Detecting Good Practices and Pitfalls when Publishing Vocabularies on the Web María Poveda Villalón
The uptake of Linked Data (LD) has promoted the proliferation of datasets and their associated ontologies bringing their semantic to the data being published. These ontologies should be evaluated at different stages, both during their development and their publication. As important as correctly modelling the intended part of the world to be captured in an ontology, is publishing, sharing and facilitating the (re)use of the obtained model. In this paper, 11 evaluation characteristics, with respect to publish, share and facilitate the reuse, are proposed. In particular, 6 good practices and 5 pitfalls are presented, together with their associated detection methods. In addition, a grid-based rating system is generated showing the results of analysing the vocabularies gathered in LOV repository. Both contributions, the set of evaluation characteristics and the grid system, could be useful for ontologists in order to reuse existing LD vocabularies or to check the one being built.
FHIR can be represented in RDF format. Resources are serialized as directed graphs using URIs, properties, and values. FHIR defines a metadata vocabulary for use in RDF, and a FHIR resource catalog provides the URIs for standard FHIR resources and properties. Shape expressions (ShEx) schemas validate FHIR RDF according to resource definitions. Together, these components allow FHIR data to be queried and manipulated using RDF techniques while maintaining compatibility with the JSON format. Tools exist for converting between FHIR JSON and RDF formats.
DSpace-CRIS_An open source solution for Research_EDU15Michele Mennielli
The research area is a complex world to manage. It involves collecting data, supporting researchers and administrators, monitoring results, allocating resources efficiently, enhancing visibility, and strengthening national and international collaborations. RIMs manage these activities, but they might be too expensive. This is why Cineca developed DSpace-CRIS, and released it in open source.
Keepit Course 5: Tools for Assessing Trustworthy RepositoriesJISC KeepIt project
This presentation provides a quick overview of two key, and complementary, tools used to measure trust of digital repositories. First it focusses on Trustworthy Repositories Audit and Certification (TRAC), leading towards another tool, DRAMBORA, that is applied more extensively in the next presentation. The presentation was given as part of the final module of a 5-module course on digital preservation tools for repository managers, presented by the JISC KeepIt project. For more on this and other presentations in this course look for the tag ’KeepIt course’ in the project blog http://blogs.ecs.soton.ac.uk/keepit/
The document discusses the Organisation and Repository Identification (ORI) system and Repository Junction Broker (RJ Broker). ORI aggregates information about organisations and repositories to provide identification and RJ Broker aims to automate deposition of research outputs to multiple repositories using ORI identifiers. It seeks to increase open access deposits by minimizing effort for depositors and repository managers. Key challenges include supporting various stakeholders, file formats, and standards while providing an automated, scalable solution for processing and depositing research outputs across repositories.
Fhir dev days_basic_fhir_terminology_servicesDevDays
The document provides an overview of basic FHIR terminology services including:
- CodeSystem resources which define collections of terms and can be used for lookups and subsumption checks.
- ValueSet resources which define sets of codes drawn from CodeSystems and can be expanded or used to validate codes.
- ConceptMap resources which define mappings between ValueSets and can be used for translation.
- Common terminology operations like $expand, $lookup, $validate-code and $translate.
- Examples of integrating terminology in applications and some tips for using terminology services efficiently.
Leverage DSpace for an enterprise, mission critical platformAndrea Bollini
Conference: Open Repository, Indianapolis, 8-12 June 2015
Presenters: Andrea Bollini, Michele Mennielli
Cineca, Italy
We would like to share with the DSpace Community some useful tips, starting from how to embed DSpace into a larger IT ecosystem that can provide additional value to the information managed. We will then show how publication data in DSpace - enriched with a proper use of the authority framework - can be combined with information coming from the HR system. Thanks to this, the system can provide rich and detailed reports and analysis through a business intelligence solution based on the Pentaho’s Mondrian OLAP open source data integration tools.
We will also present other use cases related to the management of publication information for reporting purpose: publication record has an extended lifecycle compared to the one in a basic IR; system load is much bigger, especially in writing, since the researchers need to be able to make changes to enrich data when new requirements come from the government or the university researcher office; data quality requires the ability to make distributed changes to the publication also after the conclusion of a validation workflow.
Finally we intend to present our direct experience and the challenges we faced to make DSpace easily and rapidly deployable to more than 60 sites.
Dev days 2017 questionnaires (brian postlethwaite)DevDays
FHIR questionnaires can be used to define structured data capture forms and surveys. Questionnaires are defined using the Questionnaire resource and submitted data is contained in the QuestionnaireResponse resource. Questionnaires support validation rules, pre-population of data, mapping responses to other FHIR resources, and more advanced features like scoring. Questionnaires provide a standards-based way to define and capture structured data in healthcare and other domains.
DSpace-CRIS: an open source solution - Cineca euroCRIS membership meeting Por...Andrea Bollini
The idea of DSpace-CRIS has its origin in 2009 when the Hong Kong University decided to extend the information exposed in their DSpace IR adding information (people/projects) coming from other systems already in use (mainly) for administrative purpose: a CRIS.
One year ago, November 2012, DSpace-CRIS was released as an open source solution to enrich DSpace (1.8.2). After highlighting the important steps made by the DSpace Community in 2013, that will bring to the final release of DSpace 4.0 in December, Cineca focused its presentation on what DSpace-CRIS is today.
The most important announcement was that DSpace-CRIS is now compatible and compliant with the CERIF standard and that an export feature in CERIF XML will be available in the DSpace-CRIS 4.0 version. Indeed the key components of the CERIF data model are supported natively: UUID, timestamped relation, semantic characterization.
In addition to that, the dynamic, flexible and not hardcoded approach of DSpace-CRIS data model makes it very easy to create new entities (besides the few predefined ones) and configure instances compliant with CERIF.
There are several advantages that DSpace-CRIS brings to Institutional Repositories and to the DSpace community overall:
- CRIS entities as authority for Item metadata values;
- DSpace Items can be linked to and displayed on the detail page of any CRIS entity;
- Ability to display selected publications (or any other related entities) in the researcher profile;
- It is possible to create lists of selected publications (or any other related entities);
- CRIS entity detail page visits;
- Global & Top related CERIF Entity views & downloads referencing the CRIS entity (projects for researchers, researchers for OrgUnits, etc.);
- Global & Top item views & downloads referencing the CRIS entity;
- email and RSS alerts;
- Article-level metrics for PubMed (extensible):
- Cited-by count in the item page
- Number of articles for researcher
- Total citations for researcher (only items in local DSpace database will be counted)
Clipper is a web annotation toolkit created by a consortium including City of Glasgow College and The Open University to enable annotation and analysis of audiovisual media without copying large files. It allows users to create "virtual clips" from media sources and annotate them using text and share the clips via URIs. Clipper aims to make audiovisual media as easy to work with as text. It demonstrates potential benefits for research including open data, reproducibility, collaboration, and impact. The toolkit is built using MEAN stack technologies and aligns with emerging W3C annotation standards.
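The "virtual clip" idea above — addressing a segment of a media file by URI instead of copying it — is what the W3C Media Fragments recommendation standardises with the `#t=start,end` syntax. The sketch below illustrates that general technique; it is not Clipper's actual API, and the URL is a placeholder.

```python
# Sketch of a "virtual clip" addressed with a W3C Media Fragments URI
# (#t=start,end). Illustrates the general technique only; this is not
# Clipper's own API, and the source URL is a placeholder.

def clip_uri(source: str, start: float, end: float) -> str:
    """Return a temporal media-fragment URI for [start, end) in seconds."""
    if not (0 <= start < end):
        raise ValueError("need 0 <= start < end")
    return f"{source}#t={start:g},{end:g}"

uri = clip_uri("https://example.org/media/interview.mp4", 12.5, 47)
print(uri)  # https://example.org/media/interview.mp4#t=12.5,47
```

A fragment-aware player fetches only the addressed segment, which is why annotation can be shared as URIs without duplicating large audiovisual files.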
A collaborative approach to "filling the digital preservation gap" for Resear...Jenny Mitcham
A presentation given by Jenny Mitcham at the Northern Collaboration Conference on 10th September 2015 at Leeds. It describes work underway in the "Filling the Digital Preservation Gap" project using Archivematica to preserve research data.
The role of annotation in reproducibility (Empirical 2014)Oscar Corcho
This document discusses the role of annotation in increasing reproducibility in scientific experiments. It analyzes laboratory protocols from various life science journals and finds they often lack important details. The use of annotation and formal modeling can help address this by capturing minimal information about materials, methods, and results. The document also discusses scientific workflows and how motif detection and abstraction can help with understanding and reusing workflows. Formalizing workflow representations, templates, provenance, and linking them can support reproducibility.
The document discusses open data and challenges with publishing open data. It introduces Entryscape Catalog as a solution for easily, explicitly, and quickly publishing open data through intuitive interfaces with minimum manual work. Entryscape Catalog allows describing data through standard-based forms, publishing data one item at a time or all at once, uploading existing non-open data, and creating APIs from tabular data with a click.
NISO Two-Part Webinar: Sustainable Information
Part 2: Digital Preservation of Audio-Visual Content
December 17, 2014
AXF: Finally a Storage and Preservation Standard for the Ages
Brian Campanotti, Chief Technical Officer, Front Porch Digital
DataCite and its DOI infrastructure - IASSIST 2013Frauke Ziedorn
- DataCite is an international consortium that aims to make research data citable and accessible by establishing a system for minting DOIs (Digital Object Identifiers) for research data.
- DataCite has grown to include 17 member organizations from 12 countries that work with the Technical Information Library (TIB) to register over 1.5 million DOIs for research data.
- The DataCite metadata schema, based on Dublin Core, requires core metadata for DOI registration and encourages linking related publications, data, and other research objects to facilitate discovery and access.
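As a rough sketch of what "core metadata required for DOI registration" means, the mandatory properties of the DataCite schema can be laid out as a record; the DOI, names, and title below are invented placeholders, and the checker is an illustrative helper, not DataCite's validation.

```python
# Minimal sketch of the mandatory properties in the DataCite metadata schema.
# The DOI, names and title are invented placeholders; missing_mandatory is an
# illustrative helper, not DataCite's own validator.

record = {
    "identifier": {"identifierType": "DOI", "identifier": "10.1234/example"},
    "creators": [{"name": "Doe, Jane"}],
    "titles": [{"title": "Example research dataset"}],
    "publisher": "Example Data Centre",
    "publicationYear": "2015",
    "resourceType": {"resourceTypeGeneral": "Dataset"},
}

MANDATORY = ["identifier", "creators", "titles",
             "publisher", "publicationYear", "resourceType"]

def missing_mandatory(rec):
    """List mandatory DataCite properties absent from a record."""
    return [k for k in MANDATORY if k not in rec]

print(missing_mandatory(record))  # []
```

Everything beyond these six properties — related identifiers linking publications and data, subjects, contributors — is recommended or optional, which is how the schema encourages the linking described above without blocking registration.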
PIDs and DOI registration with DataCite - IATUL Workshop 2013Frauke Ziedorn
This document discusses DataCite, an international organization that provides persistent identifiers (PIDs) and metadata services to help researchers publish and cite research data. It describes DataCite's role in establishing standards and best practices for research data management. Key services mentioned include DOI registration, a metadata store containing registered metadata, an OAI-PMH data provider, and content negotiation to access different formats of registered objects.
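The content negotiation mentioned above works by sending different `Accept` headers to the same DOI URL. The sketch below builds (but does not send) such a request with the standard library; the DOI is a placeholder, while `application/vnd.datacite.datacite+json` is one of the media types DataCite's resolver understands.

```python
# Sketch of DOI content negotiation: one DOI URL, different representations
# depending on the Accept header. The DOI is a placeholder; the request is
# built but deliberately not sent.
from urllib.request import Request

def doi_request(doi: str, media_type: str) -> Request:
    """Build a content-negotiated request for a DOI via the doi.org proxy."""
    return Request(f"https://doi.org/{doi}",
                   headers={"Accept": media_type})

# Ask for the DataCite JSON representation of the registered metadata:
req = doi_request("10.1234/example", "application/vnd.datacite.datacite+json")
print(req.full_url, req.get_header("Accept"))
```

Swapping the media type (e.g. for a citation format) returns a different rendering of the same registered object, which is the point of the service.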
DataCite – Bridging the gap and helping to find, access and reuse data – Herb...OpenAIRE
OpenAIRE Interoperability Workshop (8 Feb. 2013).
DataCite – Bridging the gap and helping to find, access and reuse data – Herbert Gruttemeier, INIST-CNRS
A set of slides that provides a high-level overview of the W3C Linked Data Platform specification presented at the 4th Linked Data in Architecture and Construction Workshop.
For a more detailed and technical version of the presentation, please refer to
http://www.slideshare.net/nandana/learning-w3c-linked-data-platform-with-examples
LDAC 2016 programme
http://smartcity.linkeddata.es/LDAC2016/#programme
Slides accompanying a talk delivered by Dan Gillean at PASIG 2016, held at the Museum of Modern Art in New York, NY October 26-28, 2016.
These slides explore the roles that standards play in digital preservation, and introduce some of the key standards that Archivematica was designed with in mind, and which the system uses to help you capture technical, preservation, and administrative metadata when generating Archival Information Packages (AIPs) and Dissemination Information Packages (DIPs).
For more information about Archivematica, see: https://www.archivematica.org
OpenAIRE and Eudat services and tools to support FAIR DMP implementation Research Data Alliance
The document provides an overview of the Open Research Data Pilot, the data management plan, and OpenAIRE tools and services to support implementation of FAIR data management plans. It discusses the aims of the Open Research Data Pilot, which Horizon 2020 projects are required to participate in, and the types of data that must be deposited. It also covers topics like creating a data management plan, selecting a repository, making data FAIR, and OpenAIRE support resources like briefing papers, webinars, and the Zenodo repository.
OpenAIRE webinar: Principles of Research Data Management, with S. Venkatarama...OpenAIRE
The 2019 International Open Access Week will be held October 21-27, 2019. This year’s theme, “Open for Whom? Equity in Open Knowledge,” builds on the groundwork laid during last year’s focus of “Designing Equitable Foundations for Open Knowledge.”
As has become a tradition of sorts, OpenAIRE organises a series of webinars during this week, highlighting OpenAIRE activities, services and tools, and reaching out to the wider community with relevant talks on many aspects of Open Science.
Application of recently developed FAIR metrics to the ELIXIR Core Data ResourcesPistoia Alliance
The FAIR (Findable, Accessible, Interoperable and Reusable) principles aim to maximize the discovery and reuse of digital resources. Using recently developed software and metrics to assess FAIRness and supported through an ELIXIR Implementation Study, Michel worked with a subset of ELIXIR Core Data Resources to apply these technologies. In this webinar, he will discuss their approach, findings, and lessons learned towards the understanding and promotion of the FAIR principles.
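Automated FAIRness assessment of the kind described works by running small machine-testable checks against a resource description. The toy sketch below conveys only the shape of the idea: the record and the three checks are invented for the example, and real FAIR maturity indicators are considerably richer than this.

```python
# Toy illustration of automated FAIR checks: each check inspects one facet
# of a resource description. The record and checks are invented; real FAIR
# maturity indicators are far more thorough.

resource = {
    "identifier": "https://doi.org/10.1234/example",   # placeholder PID
    "license": "CC-BY-4.0",
    "metadata_format": "application/ld+json",
}

checks = {
    "findable: persistent identifier": lambda r: "doi.org" in r.get("identifier", ""),
    "interoperable: machine-readable metadata": lambda r: r.get("metadata_format", "").endswith("json"),
    "reusable: explicit license": lambda r: bool(r.get("license")),
}

results = {name: check(resource) for name, check in checks.items()}
print(sum(results.values()), "of", len(results), "checks passed")
```

The value of such metrics is that they are repeatable: the same tests can be run across many resources (as was done for the ELIXIR Core Data Resources) and compared.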
Researchers require infrastructures that ensure a maximum of accessibility, stability and reliability to facilitate working with and sharing of research data. Such infrastructures are being increasingly summarised under the term Research Data Repositories (RDR). The project re3data.org – Registry of Research Data Repositories – began to index research data repositories in 2012 and offers researchers, funding organisations, libraries and publishers an overview of the heterogeneous research data repository landscape. In December 2014 re3data.org listed more than 1,030 research data repositories, which are described in detail using the re3data.org schema (http://dx.doi.org/10.2312/re3.003). Information icons help researchers to easily identify an adequate repository for the storage and reuse of their data. This talk describes the heterogeneous RDR landscape and presents a typology of institutional, disciplinary, multidisciplinary and project-specific RDR. Further, it outlines the features of re3data.org and shows current developments for integration into data management planning tools and other services.
By the end of 2015 re3data.org and Databib (Purdue University, USA) will merge their services, which will then be managed under the auspices of DataCite. The aim of this merger is to reduce duplication of effort and to serve the research community better with a single, sustainable registry of research data repositories. The talk will present this organisational development as a best practice example for the development of international research information services.
Slides from our tutorial on Linked Data generation in the energy domain, presented at the Sustainable Places 2014 conference on October 2nd in Nice, France
2021 04 Introduction to FAIRsharing - cinecaAllyson Lister
Part of the The “How FAIR are you” webinar series and hackathon, which aim at increasing and facilitating the uptake of FAIR approaches into software, training materials and cohort data, to facilitate responsible and ethical data and resource sharing and implementation of federated applications for data analysis.
More information at
* the webinar page: https://www.cineca-project.eu/news-events-all/how-fair-are-you-hackathon
* the recording of the talk: https://www.youtube.com/watch?v=UdGZOynyuGo
"Data in Context" IG sessions @ RDA 3rd PlenaryBrigitte Jörg
The Data in Context Interest Group at the 3rd RDA Plenary in Dublin discussed developing standards and requirements for representing data context through the data lifecycle. They reviewed several existing data lifecycle and metadata models, as well as relevant standards organizations. Their work plan involves creating an overview of context-aware standards by month 6 and a prioritized requirements list by month 12. The long-term goal is to establish a Working Group to implement standardized profiles and enable automated transformation between standards to represent data context.
Data in Context Interest Group Sessions @ RDA 3rd Plenary, Dublin (March 26-2...Brigitte Jörg
The Data in Context Interest Group at the 3rd RDA Plenary in Dublin discussed developing a common understanding of data context and lifecycles. They reviewed several existing data lifecycle models and standards that address contextual metadata. Their goals are to provide an overview of relevant standardization work, prioritize requirements, and establish a working group to further develop standardized profiles and facilitate transformation between standards to represent data context. The group plans initial deliverables reviewing contextual standards work and prioritizing requirements, to inform establishing a working group.
This document provides an overview of digital libraries, including definitions, benefits, limitations, components, standards, and challenges. It defines a digital library as a collection of information stored and accessed electronically, extending the functions of a traditional library digitally. Benefits include improved access and searchability, easier information sharing and preservation. Emerging technologies discussed include metadata standards, XML, and protocols like OAI-PMH for metadata harvesting. Common digital library software includes DSpace, Greenstone, and EPrints. Challenges involve digitization, description, legal issues, presentation of heterogeneous resources, and economic sustainability.
This document provides an overview of digital libraries, including definitions, benefits, limitations, components, standards, and challenges. It defines a digital library as a collection of information stored and accessed electronically, extending the functions of a traditional library digitally. Benefits include improved access, information sharing, and preservation, while limitations include technological obsolescence and rights management. Key components discussed include digital objects, metadata, and tools like DSpace and Greenstone for developing digital libraries. Emerging standards around identifiers, encoding, and metadata are also summarized.
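OAI-PMH, the harvesting protocol named in the overviews above, is a plain HTTP/XML protocol: a harvester issues requests such as `ListRecords` with a `metadataPrefix` (commonly `oai_dc` for Dublin Core) and parses the XML response. A small standard-library sketch, with a placeholder endpoint:

```python
# Sketch of an OAI-PMH ListRecords request and response parsing using only
# the standard library. The endpoint URL is a placeholder; "verb" and
# "metadataPrefix" are the protocol's own parameter names.
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

DC_NS = "{http://purl.org/dc/elements/1.1/}"

def list_records_url(base_url: str) -> str:
    """Build an OAI-PMH ListRecords request asking for Dublin Core records."""
    return base_url + "?" + urlencode(
        {"verb": "ListRecords", "metadataPrefix": "oai_dc"})

def titles(oai_xml: str):
    """Pull Dublin Core titles out of an OAI-PMH response document."""
    root = ET.fromstring(oai_xml)
    return [el.text for el in root.iter(DC_NS + "title")]

print(list_records_url("https://repo.example.org/oai"))
# https://repo.example.org/oai?verb=ListRecords&metadataPrefix=oai_dc
```

Because every compliant repository (DSpace, Greenstone, EPrints, …) answers the same six verbs, one harvester can aggregate metadata from all of them.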
Similar to Introduction to Digital Humanities: Metadata standards and ontologies (20)
This presentation was given by Veerle Kerstens during the LIBIS user day of 7 June. In this session we gave an overview of where we stand in the transition to Primo VE.
In May, all institutions were able to test the new Limo version and give feedback. The results of these tests were discussed: which issues remain open and how we will proceed with the implementation.
The differences with the current Limo and some new possibilities were also covered. Finally, there was room for questions and answers, so that everyone left the session fully up to date on the ins and outs of Primo VE.
Presentation given by Sanne Van Poppel (KU Leuven Libraries - KULAK campus) during the LIBIS user day of 7 June 2022. To further improve and optimise the user experience of the back office, the Circulation Desk is currently under review, following on from the metadata editor. The first phase of the project, user studies, has been completed. This session explained the role that LIBIS, and more specifically LIBISnet, plays in this, and gave a first glimpse of the new design.
Presentation given by Gijs Noels during the LIBIS user day of 7 June 2022. Alma Digital and copyright? Every library at times struggles with the question of whether a digital copy may be made freely accessible within the institution. This session presented the principles of CDL: Controlled Digital Lending. Under CDL, libraries can treat digital documents as physical copies in order to make these resources accessible (lendable).
Presentation given by Vangelis Palaskas during the LIBIS user day of 7 June 2022. With the Alma Cloud Apps framework we can increase library productivity by extending the functionality of Alma services. Alma Cloud Apps are fast, user-friendly and easily accessible within Alma.
During a webinar in April we presented the Alma Cloud App "CSV user load", which makes it possible to manage Alma user accounts more efficiently. We want to continue in this vein and introduce "SpineOMatic", an Alma Cloud App for printing labels and barcodes!
Access to digital objects - viewers and resolver LIBIS
Presentation given by Dirk Kinnaes during the LIBIS user day of 7 June 2022. If you find a reference to a digital document in Teneo in Limo, scopeQuery, an Omeka website or elsewhere, you automatically end up in some "viewer" that displays the requested object. In this session we showed which viewers are available in Teneo, and which factors determine which viewer you see by default. When entering such references it is important to use a persistent, system-independent URL: the "resolver" URL. The session also showed how the LIBIS resolver can be used to point to related services that allow the digital objects to be used in all kinds of ways, such as using a different viewer, requesting a thumbnail or downloading a file in a particular quality.
Flemish Heritage Libraries: Collectiewijzer and MMFC LIBIS
Presentation given by Jeroen Cortvriendt at the LIBIS user day of 7 June 2022.
In collaboration with the Flemish Heritage Libraries, two platforms were created: MMFC and Collectiewijzer. MMFC is a unique platform bringing together medieval manuscripts from Flemish collections.
On Collectiewijzer you can find all library heritage collections preserved in institutions and organisations in Flanders and Brussels. Registration in both projects is done via CollectiveAccess, and Omeka S makes the collections accessible online. This session explained how a very rich, hierarchically structured data model can be published and made searchable on the visitor side.
Scientific Collections and Heritage & Art Patrimony - Blendeff LIBIS
Presentation given at the LIBIS user day by Nathalie Poot of the "Academic and Historical Heritage" service of KU Leuven. Through the Blendeff online platform it has recently become possible to browse the scientific heritage and art collections of KU Leuven. The scientific collections have been described in CollectiveAccess since August 2020. The full process, from collection registration in CollectiveAccess to online publication on Blendeff, is explained. The session also went deeper into the standard ontology used, the online thesauri, persistent URIs and IIIF.
Presentation given by Gijs Noels at the LIBIS user day of 7 June 2022: what we achieved over the past year, but above all what we are working on and what the (near) future looks like for the various applications.
Presentation given by Veerle Kerstens at the LIBIS user day of 7 June 2022: what we achieved over the past year, but above all what we are working on and what the (near) future looks like for the various applications.
Presentation given by Valérie Adriaens at the LIBIS user day of 7 June 2022: what we achieved over the past year, but above all what we are working on and what the (near) future looks like for the various applications.
Presentation given by Gijs Noels at the LIBIS user day of 7 June 2022: what we achieved over the past year, but above all what we are working on and what the (near) future looks like for the various applications.
Putting digital collections in the spotlight with Limo and Omeka LIBIS
Presentation given during the LIBIS user day of 7 June 2022 by Valérie Adriaens and Veerle Kerstens. Limo and Omeka both offer many possibilities for making digital collections accessible and visualising them online. Using clear practical examples, the session explored the similarities and, above all, the differences between Limo and Omeka.
LIBISnet user day 02062020 - Personalization of Alma and Limo LIBIS
English-language presentation given by Josh Weisman (VP Development - Resources Management - Ex Libris) during the LIBISnet user day. Due to circumstances, the user day took the form of a webinar. In this presentation on Alma and Limo, the personalisation features for Alma and Limo were covered, and the Ex Libris Cloud Apps were explained by means of use cases and a demo.
Presentation given by Veerle Kerstens (LIBIS @ KU Leuven) during the LIBISnet user day. Due to circumstances, the user day took the form of a webinar. This presentation on Limo (a product based on Primo from Ex Libris) showed the impact COVID-19 has had on the use of Limo, the change from Primo Central to CDI (Central Discovery Index), and introduced Primo VE.
LIBISnet user day 02062020 - Alma update LIBIS
Presentation given by Gijs Noels (LIBIS @ KU Leuven) during the LIBISnet user day. Due to circumstances, the user day took the form of a webinar. This presentation on Alma (an integrated collection management system for diverse material types) focused on the COVID-19-related changes we have seen within the libraries of our network.
Book a-book , facilitating access to learning materials for students with dis...LIBIS
Presentation given by Bart Peeters (LIBIS) and Roel Vuegen (KU Leuven Libraries) on the International SIHO (Support Centre for Inclusive Higher Education) Conference of September 12th 2019. The conference presented a program full of inspiring speakers, new innovative support tools, opportunities to share experiences and good practices. As well as a debate with management, professionals, politicians and students.
20190920 The needs of researchers regarding discovery iaz2019(rw)LIBIS
Presentation given by Roxanne Wyns (KU Leuven Libraries - LIBIS) at Informatie aan Zee, on the topic "The needs of researchers regarding the discovery of research sources".
Towards integrated access to the KADOC heritage collections with Limo LIBIS
During the LIBISnet user day of 6 June 2019, Luc Schokkaert (KADOC) gave a plenary session "Towards integrated access to the KADOC heritage collections with Limo". Content: KADOC data are spread across various databases: books, journals and audiovisual documents in Alma, archives in scopeArchiv, reference data in ODIS, … For the user it was difficult to get a general overview of all the heritage available at KADOC. The talk covered how Limo was used to realise integrated access and how the various systems are connected to one another.
Copyright and digital documents - LIBISnet user day 6 June 2019 LIBIS
The workshop "Copyright and digital files - Alma-D and copyright" was given during the LIBISnet user day of 6 June by Diederik Lanoye and Mark Verbrugge (KU Leuven Libraries). Content: Alma-D offers a practical solution for the storage and preservation of digital and digitised documents in many formats and for the management of digital collections. But may you digitise every document? Which files may or may not be stored in Alma-D? And how can you restrict access to files in Alma-D to comply with the applicable rules? In this session we surveyed how you (will) use Alma-D and discussed how wishes and laws can be reconciled.
United Nations World Oceans Day 2024; June 8th "Awaken New Depths".Christina Parmionova
The program will expand our perspectives and appreciation for our blue planet, build new foundations for our relationship to the ocean, and ignite a wave of action toward necessary change.
Contributions of PD members of parliament - Contributions under Law 3/2019 Partito democratico
Published below, pursuant to Art. 11 of Law No. 3/2019, are the amounts received since the entry into force of that provision (31/01/2019) up to the calendar month preceding publication on this site.
RFP for Reno's Community Assistance CenterThis Is Reno
Property appraisals completed in May for downtown Reno’s Community Assistance and Triage Centers (CAC) reveal that repairing the buildings to bring them back into service would cost an estimated $10.1 million—nearly four times the amount previously reported by city staff.
UN WOD 2024 will take us on a journey of discovery through the ocean's vastness, tapping into the wisdom and expertise of global policy-makers, scientists, managers, thought leaders, and artists to awaken new depths of understanding, compassion, collaboration and commitment for the ocean and all it sustains. The program will expand our perspectives and appreciation for our blue planet, build new foundations for our relationship to the ocean, and ignite a wave of action toward necessary change.
Karnataka Housing Board schemes. All schemes narinav14
The Karnataka government, along with the central government’s Pradhan Mantri Awas Yojana (PMAY), offers various housing schemes to cater to the diverse needs of citizens across the state. This article provides a comprehensive overview of the major housing schemes available in the Karnataka housing board for both urban and rural areas in 2024.
How To Cultivate Community Affinity Throughout The Generosity JourneyAggregage
This session will dive into how to create rich generosity experiences that foster long-lasting relationships. You’ll walk away with actionable insights to redefine how you engage with your supporters — emphasizing trust, engagement, and community!
Combined Illegal, Unregulated and Unreported (IUU) Vessel List.Christina Parmionova
The best available, up-to-date information on all fishing and related vessels that appear on the illegal, unregulated, and unreported (IUU) fishing vessel lists published by Regional Fisheries Management Organisations (RFMOs) and related organisations. The aim of the site is to improve the effectiveness of the original IUU lists as a tool for a wide variety of stakeholders to better understand and combat illegal fishing and broader fisheries crime.
To date, the following regional organisations maintain or share lists of vessels that have been found to carry out or support IUU fishing within their own or adjacent convention areas and/or species of competence:
Commission for the Conservation of Antarctic Marine Living Resources (CCAMLR)
Commission for the Conservation of Southern Bluefin Tuna (CCSBT)
General Fisheries Commission for the Mediterranean (GFCM)
Inter-American Tropical Tuna Commission (IATTC)
International Commission for the Conservation of Atlantic Tunas (ICCAT)
Indian Ocean Tuna Commission (IOTC)
Northwest Atlantic Fisheries Organisation (NAFO)
North East Atlantic Fisheries Commission (NEAFC)
North Pacific Fisheries Commission (NPFC)
South East Atlantic Fisheries Organisation (SEAFO)
South Pacific Regional Fisheries Management Organisation (SPRFMO)
Southern Indian Ocean Fisheries Agreement (SIOFA)
Western and Central Pacific Fisheries Commission (WCPFC)
The Combined IUU Fishing Vessel List merges all these sources into one list that provides a single reference point to identify whether a vessel is currently IUU listed. Vessels that have been IUU listed in the past and subsequently delisted (for example because of a change in ownership, or because the vessel is no longer in service) are also retained on the site, so that the site contains a full historic record of IUU listed fishing vessels.
Unlike the IUU lists published on individual RFMO websites, which may update vessel details infrequently or not at all, the Combined IUU Fishing Vessel List is kept up to date with the best available information regarding changes to vessel identity, flag state, ownership, location, and operations.
This report explores the significance of border towns and spaces for strengthening responses to young people on the move. In particular it explores the linkages of young people to local service centres with the aim of further developing service, protection, and support strategies for migrant children in border areas across the region. The report is based on a small-scale fieldwork study in the border towns of Chipata and Katete in Zambia conducted in July 2023. Border towns and spaces provide a rich source of information about issues related to the informal or irregular movement of young people across borders, including smuggling and trafficking. They can help build a picture of the nature and scope of the type of movement young migrants undertake and also the forms of protection available to them. Border towns and spaces also provide a lens through which we can better understand the vulnerabilities of young people on the move and, critically, the strategies they use to navigate challenges and access support.
The findings in this report highlight some of the key factors shaping the experiences and vulnerabilities of young people on the move – particularly their proximity to border spaces and how this affects the risks that they face. The report describes strategies that young people on the move employ to remain below the radar of visibility to state and non-state actors due to fear of arrest, detention, and deportation while also trying to keep themselves safe and access support in border towns. These strategies of (in)visibility provide a way to protect themselves yet at the same time also heighten some of the risks young people face as their vulnerabilities are not always recognised by those who could offer support.
In this report we show that the realities and challenges of life and migration in this region and in Zambia need to be better understood for support to be strengthened and tuned to meet the specific needs of young people on the move. This includes understanding the role of state and non-state stakeholders, the impact of laws and policies and, critically, the experiences of the young people themselves. We provide recommendations for immediate action, recommendations for programming to support young people on the move in the two towns that would reduce risk for young people in this area, and recommendations for longer term policy advocacy.
3. 3
On the agenda
I. Introduction to standards
II. The standards landscape in DCH
III. Ontologies
IV. Standards in research
V. Best practices
Introduction to DH - Metadata standards and ontologies
Source: dilbert.com
5. 5
What is a standard?
“Put at its simplest, a standard is an agreed, repeatable way of doing something. It is a published document that contains a technical specification or other precise criteria designed to be used consistently as a rule, guideline, or definition. Standards help to make life simpler and to increase the reliability and the effectiveness of many goods and services we use. Standards are created by bringing together the experience and expertise of all interested parties such as the producers, sellers, buyers, users and regulators of a particular material, product, process or service.”
Definition according to ‘The British Standards Institution’ (BSI), the world’s oldest standards-setting organisation (1901)
6. 6
What is a standard? & What does it do?
WHAT
• Agreed
• Rule, guideline, definition
• Documented
• Repeatable
• Material, product, process, service
• Involved parties
• Knowledge
• …
WHY
• Simplicity
• Consistency
• Reliability
• Interpretability
• Interoperability
• Effectiveness
• Innovation
• Productivity
• …
10. 10
Types of standards
• Standards not formally recognized by a standards-setting body, but widely used and recognized by the sector
• Standards formally recognized by a standards-setting body (e.g. ISO)
• In-house
• Community
• National
• International, nearly always approved by an international standards-setting body (e.g. ISO 8601)
• Open standards
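ISO 8601, the example of an international standard cited above, defines the familiar `YYYY-MM-DD` interchange notation for dates and times. Python's standard library reads and writes it directly, which makes a quick illustration of why an agreed notation matters:

```python
# ISO 8601 defines an unambiguous interchange format for dates (YYYY-MM-DD).
# Python's datetime module parses and emits it out of the box.
from datetime import date

d = date.fromisoformat("2015-03-24")   # parse an ISO 8601 date
print(d.isoformat())                   # 2015-03-24
print(d.strftime("%d/%m/%Y"))          # 24/03/2015: a local convention
```

The local form `24/03/2015` is ambiguous across regions (day-first vs month-first); the ISO 8601 form is not, which is exactly the consistency argument the slides make for standards in general.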
11. 11
Standards development process
1. Open meeting
2. Consensus
3. Due process
4. Open IPR
5. One World
6. Open change
7. Open documents
8. Open interface
9. Open access
10.Ongoing support
A transparent and democratic process
Introduction to DH - Metadata standards and ontologies
Kenneth Krechmer, Open Standards Requirements, “The international
Journal of IT Standards and Standardisation Research”
12. 12
Selecting a standard
• Open IPR
• Open access
• Ongoing support
14. 14
GLAM infrastructure overview
[Diagram: a typical GLAM setup links three systems — a collection management system (Coll.MS), a DAM/long-term preservation system (DAM/LTP) and a public access portal (CMS) — each with its own admin UI and, where public-facing, a front end.]
• Collection management system: creation and management of descriptive and administrative metadata (master files) for physical and digital-born items; a closed system governed by access rights; metadata for publication is pushed to the public portal.
• DAM/LTP: ingest of files plus metadata; persistent identifiers (PIDs); a viewer for visualisation.
• Public access portal / CMS: publishes the metadata alongside additional content such as news, stories, …
Heron infrastructure: a standard approach
[Diagram: the LIAS ingester, LIAS uploader and Pawtucket front end, connected through FTP and APIs; standards and formats in play include SPECTRUM records, TIFF/JPEG masters, PNX, DC, IIIF and, via XSLT mappings, LIDO and EDM.]
Metadata @ libraries, archives and museums
• Domain specific description and export standards
• Often with (small) adaptations to fit specific needs
• Different standards depending on the environment of use
• Exports in different formats
Collection management system
[Diagram: GLAM infrastructure overview repeated from above]
LIBIS Collection management layer
• Alma for libraries
• MARC standard
• CollectiveAccess
• Flexible data model
• Standard of choice: the museum standard SPECTRUM
• Scope for archives
• Archival standards such as ISAD(G), ISAAR, …
• …
Use of standards
• Most organizations use in-house standards or adapt existing standards to fit their specific needs
• Risk of losing interoperability
• Appoint an information manager
• Document changes and provide a mapping to the standard
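Documenting a mapping back to the standard can be as simple as a machine-readable lookup table. A minimal sketch, in which the in-house field names ("obj_title", "maker", etc.) are invented for illustration:

```python
# A documented field mapping from a hypothetical in-house schema to
# Dublin Core element names. Keeping the mapping explicit is what
# preserves interoperability once local adaptations creep in.
INHOUSE_TO_DC = {
    "obj_title": "title",
    "maker": "creator",
    "date_made": "date",
    "materials": "format",
}

def to_dublin_core(record: dict) -> dict:
    """Translate an in-house record to Dublin Core keys, dropping
    fields that have no documented mapping."""
    return {INHOUSE_TO_DC[k]: v for k, v in record.items() if k in INHOUSE_TO_DC}

local = {"obj_title": "Portrait of a Lady", "maker": "Unknown", "internal_id": "X-42"}
print(to_dublin_core(local))  # {'title': 'Portrait of a Lady', 'creator': 'Unknown'}
```

Fields without a documented equivalent (like "internal_id" here) are silently dropped on export; the information manager's job is to keep that table current.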
Content management system
[Diagram: GLAM infrastructure overview repeated from above]
Service environment
• Meaningful access for users
• Metadata describing an object, usually including a digital surrogate
• Subset of the metadata
• Cross-domain, usable quality for specific service, IPR information
Discovery environment
[Diagram: GLAM infrastructure overview repeated from above]
Resource discovery standards
Moving from the local to the global level
• For data interoperability: harvesting, aggregation, publishing,
indexing
• Access to metadata of many objects, often from many domains
• Result set with reference to digital representation
• Maximum relevance of results
• Limited metadata
• Dublin Core (DC) (or a variant of it) is the most commonly used metadata schema in the service and discovery environment
Formats, schemas, mappings, standards, protocols, semantics, …
Data aggregation in DCH
The Europeana example
• Using aggregators
• Source > [Intermediate] > Target
• Protocols, tools and formats
• XML (or CSV)
• HTTP or FTP upload
• OAI-PMH, API
• Ingestion and mapping tools
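Aggregation pipelines like the one above typically move XML payloads, for instance simple Dublin Core records harvested over OAI-PMH. A minimal parsing sketch using only the standard library; the record content is invented for illustration:

```python
import xml.etree.ElementTree as ET

# A minimal oai_dc payload such as an aggregator might receive from a
# source repository (the record itself is a made-up example).
OAI_DC = """\
<oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
           xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Sample object</dc:title>
  <dc:creator>Anonymous</dc:creator>
  <dc:date>1900</dc:date>
</oai_dc:dc>
"""

DC_NS = "http://purl.org/dc/elements/1.1/"
root = ET.fromstring(OAI_DC)
# Collect every element in the Dublin Core namespace, keyed by local name.
record = {el.tag.split("}")[1]: el.text for el in root.findall(f"{{{DC_NS}}}*")}
print(record)  # {'title': 'Sample object', 'creator': 'Anonymous', 'date': '1900'}
```

In a real ingestion tool this dictionary would then be mapped onto the target model (e.g. EDM) before indexing.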
Dublin Core (DC)
http://dublincore.org/
Dublin Core Metadata Element Set:
1. Title
2. Creator
3. Subject
4. Description
5. Publisher
6. Contributor
7. Date
8. Type
9. Format
10. Identifier
11. Source
12. Language
13. Relation
14. Coverage
15. Rights
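The 15 elements are commonly serialized as simple XML. A minimal sketch with Python's standard library; the field values are illustrative only:

```python
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

# A handful of the 15 Dublin Core elements; any subset is valid, and
# every element is repeatable and optional.
fields = {"title": "Introduction to DH", "creator": "Roxanne Wyns",
          "type": "Text", "language": "en"}

root = ET.Element("metadata")
for name, value in fields.items():
    el = ET.SubElement(root, f"{{{DC_NS}}}{name}")
    el.text = value

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

This deliberately flat structure is why DC works so well for cross-domain discovery: any system can produce and consume it.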
Dublin Core Metadata Initiative (DCMI)
http://dublincore.org/ ; http://dublincore.org/documents/dcmi-terms/
Dublin Core Metadata Initiative (DCMI)
http://dublincore.org/ ; http://dublincore.org/schemas/xmls/qdc/dcterms.xsd
MARC XML
Example of a MARC XML record
LIDO
• Validated as an official standard by ICOM/CIDOC Data Harvesting
and Interchange Working Group
• XML harvesting schema
• Documentation available on: www.lido-schema.org
Lightweight Information Describing Objects
LIDO
• Events based on CIDOC-CRM (ISO 21127)
• An object is described according to a series of events that took place in its lifetime
Event based structure
EDM
• Cross-domain model for bringing together diverse collections in Europeana
• Distinguish “provided objects” (painting, book, movie, etc.) from their
digital representations
• Distinguish object from its metadata record
• Allow multiple records for the same object, containing potentially
contradictory statements about it
• Support for objects that are composed of other objects
• Support for contextual resources, including concepts from controlled
vocabularies
Europeana Data Model
https://pro.europeana.eu/resources/standardization-tools/edm-documentation
Simply put
• Ontologies are rules to describe:
• Context
• Relationships
• Ex. What is an author? How does it relate to an editor/work ...
But what are they used for?
Context - The Semantic Web
• Gives structure/meaning to documents
• Extension to the current web
• URI
• HTTP
• RDF
https://open.hpi.de/courses/semanticweb2016
Moving from documents to data
DOCUMENTS
• Human readable
• Connected through links
• Links (relations) in 1 direction
• Findable through search engine (indexing)
• Difficult to interpret by software (no context)
DATA
• Machine readable
• Decentralized
• Consists of links
• Easy for machines to follow
• Relations in 2 directions (graph)
• Findable with its own query language (SPARQL)
[Diagram: documents consumed by browsers and search engines vs. data consumed by software agents and semantic search engines]
(hyper)LINKS
• Highlighted text on web page
• One directional
• Accessed by clicking it
• References documents
• Opens a new window, download
• Jumps to a location in the document
• Stored in an HTML document and rendered by a web browser
Linked Data
The link in linked data is a statement. It states a fact. We can tell a story by putting these statements together. Computers can be made to understand these stories/graphs.
Describing data on the Web
• RDF forms the basis of Semantic web technologies
• Universal language to describe the characteristics of a resource on
the web
• Using XML for syntax and URIs for naming
• Uses a directed graph to describe resources
• Makes statements about resources in the form of subject-predicate-object triples
• RDF triples provide labelled connections using URIs, making it possible to link data with one another
• In this way a machine is able to find the semantic relations between
data
RDF ~ Resource Description Framework
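The subject-predicate-object model above can be sketched without any RDF library at all: a graph is just a set of triples, and a query is a pattern with wildcards, much like a basic graph pattern in SPARQL. All URIs below use example.org and are invented for illustration:

```python
# A tiny RDF-style graph as (subject, predicate, object) triples.
EX = "http://example.org/"
graph = {
    (EX + "book1", EX + "title", "Semantic Web Primer"),
    (EX + "book1", EX + "creator", EX + "person1"),
    (EX + "person1", EX + "name", "Jane Doe"),
}

def match(graph, s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard,
    much like a variable in a SPARQL triple pattern."""
    return [(ts, tp, to) for ts, tp, to in graph
            if (s is None or ts == s) and (p is None or tp == p) and (o is None or to == o)]

# Who created book1, and what is that resource's name? Two hops in the graph.
creator = match(graph, s=EX + "book1", p=EX + "creator")[0][2]
name = match(graph, s=creator, p=EX + "name")[0][2]
print(name)  # Jane Doe
```

The second hop is the point: because the object of one triple is the subject of another, a machine can follow the labelled connections and find semantic relations between data.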
Triples identified by a URI
Uniform Resource Identifier = globally unique identifier
Ontologies
• Ontologies are rules to describe:
• Context (classes)
• Relationships (properties)
• Ex. What is an author? How does it relate to an editor/work ...
Ontologies ≠ Taxonomies/Thesauri/…
• Taxonomies, Vocabularies, Thesauri, Classifications…
• Function on a different level than ontologies
• Important role in the Semantic Web and Linked Data world
• Help with the interpretation and integration of data between different datasets
• May lead to the discovery of new relationships between information expressed in different natural languages
Ontology = Formal knowledge
representation scheme
Another example ~ GeoNames authority file
Another example ~ GeoNames ontology
http://www.geonames.org/ontology/ontology_v3.1.rdf
SKOS format
• Solution for converting a “classic” thesaurus or vocabulary into a
semantically interoperable format
• Based on the RDF specification, enables migration to OWL
• Not a formal knowledge representation / ontology
• Structured according to ISO 25964, the standard dedicated to thesauri and interoperability with other vocabularies
• Components: concepts and concept schemes, identified by URIs, labelled, documented, and semantically related (BT, NT, RT)
Simple Knowledge Organisation System
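SKOS expresses those thesaurus components as RDF triples: skos:prefLabel for labels, skos:broader / skos:narrower for the BT / NT links. A minimal sketch in plain Python tuples; the concept URIs are invented examples:

```python
# SKOS models thesaurus structure as RDF triples. The skos: namespace
# is real; the example.org concept URIs are made up for illustration.
SKOS = "http://www.w3.org/2004/02/skos/core#"
EX = "http://example.org/thesaurus/"

triples = [
    (EX + "painting", SKOS + "prefLabel", "Painting"),
    (EX + "portrait", SKOS + "prefLabel", "Portrait"),
    (EX + "portrait", SKOS + "broader", EX + "painting"),  # BT link
]

# skos:broader and skos:narrower are inverses, so the NT links can be
# derived rather than stored twice.
derived = [(o, SKOS + "narrower", s) for s, p, o in triples if p == SKOS + "broader"]
print(derived)
```

This is why converting a "classic" thesaurus to SKOS makes it semantically interoperable: the BT/NT/RT structure becomes ordinary linked data that any RDF tool can traverse.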
BIBFRAME
• Library of Congress
• To replace MARC
• Ongoing process
• From one record to data statements
• Integration with other standards
• Findability on the web
• New types of materials and metadata
Bibliographic Framework Initiative
CIDOC-CRM
• CIDOC object-oriented Conceptual Reference Model (CRM)
• Domain ontology for concepts and information in cultural heritage
and museum documentation
• International standard (ISO 21127:2014) for the controlled exchange
of cultural heritage information
• Extensible semantic framework
• Object oriented model
Source: http://www.cidoc-crm.org/Resources/rdf-file-for-crm-core
FRBRoo
• Joint effort of the CIDOC-CRM and Functional Requirements for
Bibliographic Records (FRBR) international working groups
• FRBR-object oriented
• Formal ontology intended to capture and represent the underlying
semantics of bibliographic information
• Facilitate the integration, mediation, and interchange of bibliographic
and museum information
Functional Requirements for Bibliographic Records - object oriented
FRBRoo
Functional Requirements for Bibliographic Records - object oriented
Source: http://83.212.168.219/FRBR_Tutorial/graph-exercises
Schema.org
• A vocabulary (ontology) for structured data on the Internet, on web
pages, in email messages, ...
• By Google, Microsoft, Yahoo and Yandex
• Embeddable in html by:
• Microdata
• RDFa
Source: http://schema.org/docs/full.html
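Besides Microdata and RDFa, schema.org descriptions are commonly embedded as JSON-LD in a script tag. A minimal sketch using only the json module; the described object is invented for illustration:

```python
import json

# A schema.org CreativeWork description as JSON-LD, ready to embed in a
# page inside <script type="application/ld+json">. The names here are
# illustrative, not from a real page.
doc = {
    "@context": "https://schema.org",
    "@type": "CreativeWork",
    "name": "Introduction to DH - Metadata standards and ontologies",
    "author": {"@type": "Person", "name": "Roxanne Wyns"},
}
snippet = ('<script type="application/ld+json">\n'
           + json.dumps(doc, indent=2)
           + "\n</script>")
print(snippet)
```

Search engines parse such blocks to build rich results, which is precisely the "context for search engines" role the next slide describes.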
Linked Data provides context to search engines
Why important for you?
Research Data Lifecycle
UK Data Archive
JISC
University of Virginia Library
Why important for you?
Creating data
• Design research
• Plan data management (formats, storage
etc.)
• Plan consent for sharing
• Locate existing data
• Collect data (experiment, observe, measure,
simulate)
• Capture and create metadata
UK Data Archive
Why important for you?
Processing data
• Enter data, digitize, transcribe, translate
• Check, validate, clean data
• Anonymize data where necessary
• Describe data
• Manage and store data
UK Data Archive
Why important for you?
Analyzing data
• Interpret data
• Derive data
• Produce research outputs
• Author publications
• Prepare data for preservation
UK Data Archive
Why important for you?
Preserving data
• Migrate data to best format
• Migrate data to suitable medium
• Back-up and store data
• Create metadata and documentation
• Archive data
UK Data Archive
Why important for you?
Giving access to data
• Distribute data
• Share data
• Control access
• Establish copyright
• Promote data
UK Data Archive
Why important for you?
Re-using data
• Follow-up research
• New research
• Undertake research reviews
• Scrutinize findings
• Teach and learn
UK Data Archive
Sharing data to advance Science
Metadata @ Arts, Social Sciences and Humanities
• No tradition in using descriptive standards
• Data models according to researchers' needs
• Can be based on common principles within the research domain
• Increase of data sharing thanks to VRIs (DARIAH, CLARIN, …)
Source: Digital Curation Centre, metadata standards in Social Science & Humanities
EOSC
European Open Science Cloud
Sources: EOSC infographic; EOSC Declaration
Identify standards & common practices
• Identify and use relevant metadata standards in your field
• If no metadata standards exist, explore common practices in the field
and collaborate if possible
Data modeling
• Create a data model
• Choose and use standard terminology to enable discovery
• Maintain consistent data typing
Data documentation
• Create a data dictionary
• Update data documentation on a regular basis during every step of
the lifecycle
• Document the integration of multiple datasets
• Describe the temporal extent of your dataset
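A data dictionary can itself be machine-readable, which also supports the consistent data typing recommended under data modeling. A minimal sketch; the field names and types are invented for illustration:

```python
# A minimal machine-readable data dictionary: each entry documents a
# field's name, expected type, and meaning. The fields are hypothetical.
data_dictionary = {
    "record_id": {"type": str, "description": "Unique identifier of the item"},
    "year": {"type": int, "description": "Year of creation (Gregorian)"},
}

def validate(row: dict) -> list:
    """Return the names of fields whose values do not match the
    documented type, enforcing consistent data typing."""
    return [name for name, spec in data_dictionary.items()
            if name in row and not isinstance(row[name], spec["type"])]

print(validate({"record_id": "X-42", "year": 1900}))   # []
print(validate({"record_id": "X-43", "year": "1900"})) # ['year']
```

Keeping the dictionary in version control and updating it at every lifecycle step makes the documentation checkable rather than purely descriptive.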
Data storage
• Create, manage, and document your data storage system
• Document and store data using stable file formats
• Describe the overall organization of your dataset
Data access and reuse
• Describe the dataset, incl. rights, versions, processing tools,…
• Provide the organization structure of your dataset
• Include a metadata specifications document in each dataset
• Deposit your dataset in a domain-specific or institutional repository
Discover more
• Digitisation: Standards Landscape for European Museums, Archives, Libraries
• Digital Curation Centre Resources on Social Sciences and Humanities
• European Open Science Cloud Declaration
• FAIR data principles
• Data one, Best practices in RDM
• Research Data management at KU Leuven
• LIBIS Services for researchers
• Ontologies for Cultural Heritage
Thank you! Questions?
Roxanne Wyns - Roxanne.Wyns@kuleuven.be
Source: http://jennriley.com/metadatamap/