This document discusses automatic publication of library data under the linked data paradigm. It provides an overview of key concepts like open data, linked open data, and the semantic web. It also describes the ALIADA project which aims to develop an open source application to help libraries and museums automatically publish metadata as linked open data. This will allow their collections to be more accessible and interoperable online.
The European ALIADA project: introduction - aliada project
The document discusses the European ALIADA project and linked open data. It defines semantic web and linked open data as structured data expressed as statements using URIs. It outlines the advantages of linked open data such as being machine readable and enabling connections between data. Finally, it discusses publishing linked open data, including using appropriate vocabularies and registering datasets.
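The idea of statements built from URIs can be illustrated with a small sketch in Python; the resource URIs below are invented for the example and do not resolve to real datasets:

```python
# A linked-data statement is a triple: (subject, predicate, object),
# where each element is ideally a dereferenceable URI.
# The example.org URIs are illustrative, not real dataset identifiers.
triples = [
    ("http://example.org/book/123",
     "http://purl.org/dc/terms/creator",
     "http://example.org/person/jane-doe"),
    ("http://example.org/book/123",
     "http://purl.org/dc/terms/title",
     "Collected Essays"),  # literal values are plain strings, not URIs
]

# Machine readability in practice: any agent can follow the URIs to
# discover further statements about the same resources elsewhere.
for s, p, o in triples:
    print(f"<{s}> <{p}> {o!r}")
```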
In 2018 the ‘Strategy for culture in the digital age’ was published by the Flemish minister of culture. The culture sector is exploring open data to improve access to its collections for diverse groups of users. PACKED has researched, developed and published data, tools and strategies using open source and open data as a lever for building a sustainable digital memory. Aside from sharing our projects and results and looking at the new challenges that lie ahead, we provide a platform for two of our partners to showcase projects set up in collaboration with PACKED:
- The King Baudouin Foundation collaborated with PACKED to open up its collections on Wikimedia platforms.
- The Flemish Art Collection presents the Datahub and Arthub projects, which give the public access to the visual arts in Flanders and facilitate (re-)use.
- PACKED advocates for open data in the cultural heritage sector through various projects and training. They developed CultURIze, a tool to help small museums assign persistent URIs to collection items.
- The King Baudouin Foundation shares collection data on Wikimedia platforms like Wikidata to make it more accessible. Challenges include normalizing data from different sources and systems.
- The Flemish Art Collection's Arthub and Datahub projects aim to publish collection data as open data through APIs and formats. An ETL pipeline extracts, transforms and loads data from various museum databases into a central repository for reuse.
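The extract-transform-load step described above can be sketched as follows; the source record shapes and field names are hypothetical, standing in for real museum databases:

```python
# Minimal ETL sketch: pull records from heterogeneous museum sources,
# normalize them to one schema, and load them into a central repository.
# All record shapes here are invented examples.

def extract():
    # Two sources with different field conventions (one Dutch, one English).
    source_a = [{"titel": "Portrait", "jaar": 1890}]
    source_b = [{"name": "Landscape", "date": "1905"}]
    return source_a, source_b

def transform(source_a, source_b):
    # Map each source's fields onto one shared schema.
    normalized = []
    for rec in source_a:
        normalized.append({"title": rec["titel"], "year": int(rec["jaar"])})
    for rec in source_b:
        normalized.append({"title": rec["name"], "year": int(rec["date"])})
    return normalized

def load(records, repository):
    repository.extend(records)

repository = []  # stands in for the central store
load(transform(*extract()), repository)
print(repository)
```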
Magnus Bognerud - Current digital collection management projects at nasjonalm... - lab_SNG
The National Museum of Art, Architecture and Design is undertaking two main digital collection management projects. The first is implementing a new collection management system between 2015-2019 to prepare for their move to a new building. The second is a machine learning project from 2015-2016 to explore using computer vision and AI to classify artworks and generate metadata to help users explore the collection. They are working with an outside company to test algorithms on sample images from the collection. The goal is to automatically recognize styles, techniques and other attributes to create reusable data about the artworks.
Dag Hensten - Nasjonalmuseet collections online - lab_SNG
The National Museum of Norway has consolidated four museums into one institution since 2003. It is working to digitize its collection of over 400,000 objects and make them available online. It has developed an online collection using various APIs and technologies to display objects, associated biographies and other contextual information. There are ongoing efforts to improve the digital experience through additional languages, user feedback features, and new technologies like 360 degree images and 3D models. The museum is also committed to open sharing of its digital work to help other cultural institutions.
Developing a web archiving strategy for national movements in Flanders - Tom Cobbaert
The document discusses the development of a web archiving strategy for the Archives and Documentation Centre for the Flemish Movement (ADVN). ADVN aims to selectively harvest websites of Flemish nationalist organizations and politicians on a quarterly basis using tools like Web Curator Tool or Wget to archive websites, blogs, Twitter, and Facebook pages. Collaboration may include the Internet Archive and ArchiveTeam since there is no national web archiving in Belgium currently. Issues to address include permissions, authenticity, access within privacy and copyright laws, and long-term digital preservation challenges.
The OAIS reference model and archaeological data - ariadnenetwork
Presentation by Ulf Jakobsson,
Swedish National Data Service (SND)
Full-day session on archaeological infrastructures and services at the 18th Cultural Heritage and New Technologies (CHNT) conference
Vienna, Austria
11-13 November 2013
Clare Lanigan - Presentation to IES Students - dri_ireland
Presentation given by Clare Lanigan, DRI Education and Outreach Manager, to students of the School of Information and Library Science, University of North Carolina, at the Institute for the International Education of Students (IES) Abroad centre in Rathmines, Dublin, on 1 June 2017.
This document discusses developing a distributed network of digital heritage information in the Netherlands. It proposes taking a resource-centric linked data approach, implementing linked data principles in data sources, building a knowledge graph, and creating a registry to link organizations, datasets, and resources. This would allow for federated querying across distributed data sources and improved discovery of digital heritage information.
The Meertens Institute, part of the Royal Netherlands Academy of Arts and Sciences, is also a memory institution, where records are digitally preserved and curated. This talk will give an overview of the different types of records currently digitally curated at the Meertens Institute. We highlight our recent projects, such as the Sailing Letters project, where we use crowdsourcing to transcribe centuries-old handwritten letters, or the Radical Political Representation project, where we crowdsource the analysis of political cartoons. These are all exemplary Digital Humanities cases, and we show our approach to the digital archiving of these materials, from creation to (re-)use.
New approaches for data acquisition at Europeana: IIIF, sitemaps and schema.o... - Nuno Freire
Presentation on experiments at Europeana regarding new methods of aggregating metadata.
Presented at the Seminar Linked Data in Research and Cultural Heritage, on 1st of May 2017.
State of Image Annotations - I Annotate 2016 - r0bcas7
- The document discusses image annotation standards and software. It proposes using resolution-independent coordinates and GeoJSON/WKT to annotate image areas with points, lines, and polygons.
- Major annotation standards mentioned are Open Annotation, Web Annotation, SharedCanvas, and IIIF. Software discussed includes Annotorious, HyperImage, digilib, SemToNotes, Mirador, and Diva.js.
- The document advocates for standards that allow annotating specific image areas, sharing annotations, and connecting annotations to form a "semantic network" of annotations linked to sources.
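Annotating a specific image area with resolution-independent coordinates and a GeoJSON-style polygon, as the document proposes, might look like the following sketch; the annotation schema shown is illustrative, not a published standard:

```python
import json

# Sketch of an image-region annotation using resolution-independent
# (fractional 0..1) coordinates and a GeoJSON-style polygon, so the
# region survives any rescaling of the image.

def make_region_annotation(body_text, fractional_points):
    """fractional_points: [(x, y), ...] with 0 <= x, y <= 1."""
    return {
        "body": body_text,
        "target": {
            "type": "Polygon",
            # GeoJSON polygons are lists of linear rings; close the
            # ring by repeating the first point at the end.
            "coordinates": [list(map(list, fractional_points))
                            + [list(fractional_points[0])]],
        },
    }

anno = make_region_annotation(
    "detail of the signature",
    [(0.10, 0.80), (0.25, 0.80), (0.25, 0.95), (0.10, 0.95)],
)
print(json.dumps(anno, indent=2))
```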
This document discusses several open community projects including Wikipedia, Wikimedia Commons, Wikidata, OpenStreetMap, Wheelmap, Telraam, Weather Observations Website, GitHub curated lists, Common Voice, and others. It provides brief descriptions of each project, what opportunities they present for public administrations and organizations, and encourages contributions to help document government data and services.
Overview of the ITS department's projects, services, and staff. A look at our areas, including IT infrastructure, e-resources management, digital library services, and admin & communication.
This document discusses the benefits of applying linked data principles to library, archive, and museum metadata. It highlights how cultural heritage institutions already have rich, structured metadata that can be transformed into linked open data to power new discovery experiences and research applications. By publishing controlled vocabularies and authority files as linked data, these institutions leverage the reputation and maturity of their existing metadata while gaining the advantages of decentralized data on the open web.
Digital Cultural Heritage and the new EU Framework Programme - locloud
2nd LoCloud CY Awareness Event at the Ministry of Education and Culture.
Presentation delivered by Marinos Ioannides, Cyprus University of Technology
Cyprus
5 March 2014
EAA 2014: Opportunities and Challenges with Open Access and Open Data in the UK - ariadnenetwork
Presentation by Julian Richards, Archaeology Data Service (ADS)
EAA 2014 session: Open Access and Open Data in Archaeology
Istanbul, Turkey
13 September 2014
Fasti Online at the International Association of Classical Archaeology (AIAC) - ariadnenetwork
FASTI Online is a database that has been online for 10 years, providing open access to excavation data from 14 countries. It contains records of over 5,100 excavation seasons and 3,300 archaeological sites. The database is built on open source technologies and allows users to search via maps, time periods, site types, and bibliographic references. In the future, the group hopes to expand the geographic and language coverage of site data, improve search functions, and increase connectivity to other datasets and resources.
Europeana is an online collection of over 24 million digitized items from European cultural heritage institutions. It was launched in 2008 to provide access to Europe's cultural works and make them more accessible online. Europeana is funded through the eContentPlus project and other contributors. It aims to provide a searchable database of metadata for items from various domains like libraries, museums, and archives. Challenges include gaining participation from all 27 EU countries, balancing operational realities with project goals, dealing with copyright issues, and ensuring sustainability beyond individual funding projects.
An overview of the online archaeological data services that will be available through ARIADNE. These include several services provided by ADS, University of York, FASTI Online and ARACHNE.
The LoCloud lightweight digital library and alternative content sources, Adam... - locloud
The document discusses user stories and requirements for a proposed lightweight digital library system called LoCloud L3D. It provides examples of how smaller libraries and archives could use such a system to digitize and share their collections without specialized IT expertise. Key requirements identified include easy creation of metadata, support for multiple content types, customizable interfaces, and the ability to migrate from other digital library systems. Open issues discussed include prioritizing content types and features.
IIIF at Europeana - IIIF Conference, Vatican, 2017 - Nuno Freire
This document summarizes Europeana's work to aggregate metadata from cultural heritage institutions using the International Image Interoperability Framework (IIIF). It describes Europeana's goals of making over 54 million digitized objects discoverable. Case studies were conducted with partners to test crawling IIIF services and aggregating metadata. Ongoing work involves representing metadata in Schema.org and using linked data notifications. Future collaboration opportunities are discussed to further test IIIF for metadata aggregation across Europeana's network.
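Crawling a IIIF service for aggregation essentially means walking a manifest's sequences and canvases to collect image resources; a minimal sketch under that assumption, with a trimmed hypothetical manifest standing in for JSON fetched over HTTP:

```python
# Sketch of harvesting image URLs from a IIIF Presentation (v2)-style
# manifest. The manifest below is a trimmed, hypothetical example of
# the structure; a real aggregator would fetch it over HTTP first.
manifest = {
    "@type": "sc:Manifest",
    "label": "Example object",
    "sequences": [{
        "canvases": [{
            "images": [{
                "resource": {
                    "@id": "https://example.org/iiif/img1/full/full/0/default.jpg"
                }
            }]
        }]
    }],
}

def image_urls(manifest):
    # Walk sequences -> canvases -> images and collect resource IDs.
    urls = []
    for seq in manifest.get("sequences", []):
        for canvas in seq.get("canvases", []):
            for image in canvas.get("images", []):
                urls.append(image["resource"]["@id"])
    return urls

print(image_urls(manifest))
```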
Automatic publication of library and museum data into the LOD cloud - aliada project
General description of the ALIADA open source tool, which automatically publishes library and museum data into the LOD cloud. This presentation was a lightning talk at SWIB2014.
PACKED advocates for open data in the cultural heritage sector. They discuss infrastructure for publishing open data, including the use of persistent URIs and platforms like Wikidata. They provide training on topics like data cleaning and enrichment. Their goal is to help cultural institutions share their collections as open data by developing tools like CultURIze and advocating for more open infrastructures.
The King Baudouin Foundation collection is dispersed across many institutions. They worked with Wikimedia to publish the collection on Wikimedia platforms by normalizing the data to make it linkable using identifiers.
The Flemish Art Collection's Arthub and Datahub platforms aim to automate sharing collection data between applications using pipelines. The Datahub centrally stores aggregated collection data, which the Arthub then exposes for public access and reuse.
PACKED advocates for open data in the cultural heritage sector. They discuss infrastructure for publishing open data, including the use of persistent URIs and platforms like Wikidata. PACKED provides training on open data topics and helps cultural institutions publish collections. Their goal is to make more data available and reusable while addressing challenges like inconsistent data formats across institutions. The Flemish Art Collection discusses their work to aggregate collection data from different museums into a central Datahub and publish it through their Arthub portal. They aim to improve data quality and automate sharing to open up more collections.
From Catalogue 2.0 to the digital humanities: exploring the future of librari... - Sally Chambers
This document discusses the evolving role of libraries and librarians in supporting digital scholarship and the digital humanities. It describes how traditional cataloguing tools like MARC are changing to incorporate new metadata standards and linked data. Research libraries' engagement with research infrastructures has been low but is increasing as opportunities arise in areas like research data management, digital repositories, and scholarly communication. The document argues libraries have important roles to play in discovery, data management, and as embedded partners supporting digital humanities researchers and their evolving needs. Collaboration between libraries and digital humanities centers is highlighted as a way to advance both fields.
This talk showcases PACKED vzw's linked open data projects on persistent identification, opening up data and data enrichment, and the potential of the Wikimedia ecosystem, but also the areas where the Wikimedia platforms and their present tools could be improved. We argue for attracting more people with an IT background to the cultural sector, and for better open infrastructures and tools that make linked open data publishing and reuse possible: resolvers, datahubs and API tools; tools for publishing data and images, including specific tools for Mix'n'match; and tools that can deal with what heritage professionals have already produced (Excel files). Lastly, we encourage the public to solicit the heritage sector and create demand for LOD services 'as if' you already live in a society where citizens can take access to digital cultural resources for granted, and as if you have no idea about the contradictions that cause institutions to delay opening up their collections.
This document discusses challenges and opportunities for linked open data in cultural heritage institutions. It notes that while many institutions have digitized collections and data, their "digital mindset" and outdated systems have limited data sharing and reuse. The document outlines recent and ongoing projects by PACKED to address this, such as developing tools to publish structured data on Wikimedia platforms, and a "datahub" and "resolver tool" to facilitate internal data management and linking to external references. Next steps include expanding these projects and conducting a survey to understand demand for cultural heritage data. The overall aim is to make institutions' data and collections more accessible and reusable on the web.
Presentation by Alina Saenko and Sam Donvil at Open Belgium 2018 -
http://2018.openbelgium.be/session/linked-open-data-limbo-co-creation-catalyst-cultural-heritage-resources
Methodological Proposals for Designing Federative Platforms in Cultural Linke... - Antoine Courtin
As part of the ongoing Labex project "Past in the present", our proposal aims to highlight the organizational issues of Linked Data projects that have to deal with multi-institutional contexts, including libraries. First, we discuss what is at stake. Second, we present a methodology based on building several diagrams that highlight technical, conceptual, and organizational obstacles. We also address the issues of designing and producing an information system intended to ensure the transmission of scientific skills, the use of major vocabularies, associated with specific vocabularies, by foreign institutions, and the harmonizing of, or building of bridges between, heterogeneous descriptions.
This document discusses making data and software reusable according to FAIR principles. It provides examples of documenting cultural heritage projects in ways that could enable computer and human reuse of the data. These include publishing technical reports and data online with metadata, attributions and licenses. The document advocates planning for potential reuse when initially collecting and structuring data. This would help share data and knowledge with other researchers and systems in the future.
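Planning for reuse starts with a machine-readable description carrying a license and attribution, so both humans and harvesters can check reuse conditions up front; a minimal sketch with illustrative field names and values:

```python
import json

# Sketch of a dataset description supporting FAIR-style reuse: explicit
# license, attribution, and machine-readable metadata. All field names
# and values here are invented for illustration.
dataset_record = {
    "title": "Excavation finds, site X (example)",
    "creator": "Example Heritage Project",
    "license": "https://creativecommons.org/publicdomain/zero/1.0/",
    "format": "text/csv",
    "description": "Tabular register of finds; columns documented in README.",
}

# A harvester can verify reusability conditions before touching the data:
reusable = "license" in dataset_record and "creator" in dataset_record
print(json.dumps(dataset_record, indent=2))
print("reusable:", reusable)
```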
Reusing historical newspapers of KB in e-humanities - Case studies and exampl... - Olaf Janssen
This slide deck gives an overview of Dutch e-humanities projects that build upon the datasets of the Koninklijke Bibliotheek, the national library of the Netherlands.
It focuses on 8 projects that reuse the digitized historical newspapers (1618-1995) of the KB.
It was presented on 7 January 2014 at the Huygens Institute for the History of the Netherlands (Huygens ING for short), an institute of the Royal Netherlands Academy of Arts and Sciences (KNAW) and, with around 100 scholars, the largest humanities institute in the Netherlands.
Keywords: biland, delpher, e-humanities, elite network shifts, hirods, historical newspapers, isher, koninklijke bibliotheek, national library of the netherlands, open data, polimedia, political mashup, reuse, sealincmedia, translantis, washp
Facilitating digital research in the humanities: from local services to Europ... - Sally Chambers
This presentation was given as part of the 'Séminaire Européen de l’Ecole doctorale' on 'Les Infrastructures de la recherche, quels enjeux pour les humanités numériques ?' (research infrastructures: what is at stake for the digital humanities?), held at the University of Lille on 3 March 2016, see:
http://geriico.recherche.univ-lille3.fr/index.php?page=annee-2015---2016
The document provides an overview of the Europeana Fashion project. It discusses (1) the context of digitizing fashion content in European cultural heritage, (2) the objectives of aggregating 700,000 digital fashion items and improving access, and (3) the diverse consortium of 22 partners from 12 countries representing leading fashion institutions. It also previews plans to develop a fashion thesaurus, data model, and collaboration with Wikipedia, as well as tools for topic analysis of fashion trends on social media.
In order for museums to truly reap the benefits of publishing their collections online in a sustainable way, PACKED vzw presents the results of its Linked open data project as a best practice guide for the Flemish heritage sector.
Similar to Automatic publication of library and museum data into the semantic web: the description of the ALIADA project (20)
The semantic web and libraries, with special regard to the BIBFRAME formathorvadam
An introduction to the semantic web. The presence of the semantic web in libraries. An introduction to the BIBFRAME format. BIBFRAME at the Central Library of the Hungarian National Museum. The future of copy cataloguing. In the future we will catalogue directly for the web.
This document describes an NBN:URN generator and resolver system. It discusses the preparation, protocol, design principles, and functions of the system. The system generates and resolves Uniform Resource Names (URNs) using a three-step process for both generation and resolution. It also allows for changing and deleting URN assignments. The system has a web interface and is implemented using PHP, Java servlets, and PostgreSQL for maximum simplicity, reliability and accessibility.
This document discusses ZING, which is presented as the next generation of the Z39.50 protocol. It describes some problems with Z39.50, such as its complexity and lack of popularity with the web community. ZING aims to simplify Z39.50 while keeping its strengths, and consists of protocols like SRW, SRU, CQL and others. SRW is described as an XML-oriented search protocol that retains concepts from Z39.50 like result sets and abstract access points, but is simplified. CQL is presented as a common query language that can range from simple to complex, and includes features like context sets and relations.
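The SRU/CQL pairing described above is easy to illustrate: unlike binary Z39.50, an SRU searchRetrieve request is just a URL with well-known parameters carrying a CQL query. A minimal sketch, using a hypothetical base URL and illustrative parameter values:

```python
# Sketch of an SRU searchRetrieve request carrying a CQL query.
# The base URL is a placeholder; parameter names follow the SRU 1.1
# convention (operation, version, query, startRecord, maximumRecords).
from urllib.parse import urlencode

def sru_search_url(base, cql_query, start=1, maximum=10):
    """Build a searchRetrieve URL for the given CQL query."""
    params = {
        "operation": "searchRetrieve",
        "version": "1.1",
        "query": cql_query,
        "startRecord": start,
        "maximumRecords": maximum,
        "recordSchema": "dc",
    }
    return base + "?" + urlencode(params)

# CQL can range from a bare term to indexed boolean expressions:
simple = sru_search_url("http://example.org/sru", "brecht")
complex_q = sru_search_url("http://example.org/sru",
                           'dc.title = "linked data" and dc.date > 2009')
```

Both forms travel in the same URL parameter, which is what keeps CQL usable across the "simple to complex" range the summary mentions.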
Automation at the National Széchényi Libraryhorvadam
The document summarizes the history of automation at the National Széchényi Library in Budapest, Hungary. It describes the library migrating from the DOBIS system to Amicus in 1997-2002, upgrading to newer versions of Amicus and LibriVision, loading additional records, and performing system tuning. It also provides details on the library's infrastructure, including servers, storage, networking, and public computers.
First steps towards publishing library data on the semantic webhorvadam
First steps towards publishing library data on the semantic web. Implementing:
CoolUri
RDFDC
SKOS
RDF database and SPARQL interface
Content negotiation
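The last item in the list, content negotiation, is the mechanism that lets one cool URI serve both machine-readable RDF and human-readable HTML. A minimal sketch of the idea, with illustrative media types and file names rather than NSZL's actual implementation:

```python
# Minimal content-negotiation sketch: pick the best representation for
# a resource based on the HTTP Accept header. SUPPORTED and the file
# names are illustrative, not the library's actual configuration.

def parse_accept(header):
    """Parse an Accept header into (media_type, q) pairs, best first."""
    entries = []
    for part in header.split(","):
        fields = part.strip().split(";")
        mtype = fields[0].strip()
        q = 1.0
        for param in fields[1:]:
            name, _, value = param.strip().partition("=")
            if name == "q":
                try:
                    q = float(value)
                except ValueError:
                    q = 0.0
        entries.append((mtype, q))
    return sorted(entries, key=lambda e: e[1], reverse=True)

SUPPORTED = {"application/rdf+xml": "record.rdf", "text/html": "record.html"}

def choose_representation(accept_header):
    """Return the representation to serve for the given Accept header."""
    for mtype, q in parse_accept(accept_header):
        if q <= 0:
            continue
        if mtype in SUPPORTED:
            return SUPPORTED[mtype]
        if mtype in ("*/*", "text/*"):
            return SUPPORTED["text/html"]
    return SUPPORTED["text/html"]  # sensible default for browsers
```

A semantic web client sending `Accept: application/rdf+xml` gets the RDF serialization; a browser gets HTML from the same URI.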
This document discusses FRBRization, or applying the FRBR conceptual model to bibliographic data. It summarizes the National Széchényi Library's efforts to implement FRBRization, including translating FRBR to Hungarian, matching FRBR entities to their cataloging standard, and adopting an algorithm to identify work relationships. Their initial implementation was able to show other editions of monographs in the OPAC with minimal changes. However, storing all relationships slowed down the OPAC. Future plans could involve a separate FRBR service accessed through the OPAC to fully represent work trees.
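The core of any such FRBRization algorithm is grouping manifestation-level records under a common work. A toy sketch of that idea, with illustrative field names and a deliberately crude key (real work-set algorithms are far more elaborate):

```python
# Toy FRBRization sketch: group bibliographic records (manifestations)
# under a common work by a normalized author/title key. Field names and
# the normalization are illustrative only.
from collections import defaultdict

def work_key(record):
    """Crude work key: normalized author + normalized title."""
    author = record.get("author", "").strip().lower()
    title = record.get("title", "").strip().lower().rstrip(" ./:;")
    return (author, title)

def frbrize(records):
    """Return a mapping from work key to its manifestations."""
    works = defaultdict(list)
    for rec in records:
        works[work_key(rec)].append(rec)
    return works

records = [
    {"author": "Madach, Imre", "title": "Az ember tragediaja", "year": 1861},
    {"author": "Madach, Imre", "title": "Az ember tragediaja.", "year": 1999},
]
```

Grouping like this is what lets an OPAC show "other editions" of a monograph, while also explaining the performance cost the summary mentions: the full relationship graph grows quickly.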
WEB2 developments at the National Széchényi Libraryhorvadam
This document discusses developments in WEB2 and integrating library services at the National Széchényi Library. It describes adding link, bookmark, permalink, and map services to the library catalog (LibriVision). It also covers integrating LibriVision with other services like Zotero and COinS for citations and OpenSearch for search syndication. The goal is to better connect library resources on the web through common standards.
Introduction to the semantic web. The first results of publishing library data on the semantic web at the National Széchényi Library (National Library of Hungary)
Ádám Horváth presented on the development of LibriVision widgets using the OpenSocial protocol. The OpenSocial protocol allows applications called widgets to be embedded into various social networking sites and personalized start pages. NSZL developed widgets for their digital library and LibriVision that search the collections via SRU/Z39.50 and display search results as links in the widget. Horváth demonstrated the LibriVision widget working in iGoogle, showing how OpenSocial defines APIs for activities, messaging, and other functions to integrate applications into supported social media platforms.
Catalogue enrichment in LibriVision
Link service based on OpenUrl
Bookmark service
Permalink
Google Cover Page
Map integration
Cover pages produced by NSZL
Permalink is now a Cool URI
Linked Data at the National Széchényi Library : road to the publicationhorvadam
This document discusses the National Széchényi Library's process of publishing its data as linked open data. It began by developing SRU and SKOS interfaces, then realized it had the components needed for linked data - SKOS thesauri, URL-based record access via LibriUrl, and SRU search of records. It focused on developing cool URIs, identifiers, content negotiation, the RDFDC vocabulary, and an RDF database. XSLT was used to convert MARCXML to RDFDC, and a FOAF file was generated from authority records. The OPAC was modified to support HTML link auto-discovery to the RDF. The library's data is now available as linked open data via S
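The summary notes that XSLT was used to convert MARCXML to RDFDC. The mapping idea can be sketched in Python for a single field (245 $a, the title); the record, resource URI, and one-field scope are illustrative, not the library's full stylesheet:

```python
# Sketch of the MARCXML -> RDF/DC mapping idea. NSZL used XSLT; this
# shows the same transformation for one field (245 $a) in Python.
import xml.etree.ElementTree as ET

MARC = "{http://www.loc.gov/MARC21/slim}"

def marc_title_to_dc(marcxml, resource_uri):
    """Extract 245 $a from a MARCXML record and wrap it as RDF/DC."""
    root = ET.fromstring(marcxml)
    title = None
    for df in root.iter(MARC + "datafield"):
        if df.get("tag") == "245":
            for sf in df.iter(MARC + "subfield"):
                if sf.get("code") == "a":
                    title = sf.text
    return (
        '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"\n'
        '         xmlns:dc="http://purl.org/dc/elements/1.1/">\n'
        f'  <rdf:Description rdf:about="{resource_uri}">\n'
        f'    <dc:title>{title}</dc:title>\n'
        '  </rdf:Description>\n'
        '</rdf:RDF>'
    )

record = """<record xmlns="http://www.loc.gov/MARC21/slim">
  <datafield tag="245" ind1="1" ind2="0">
    <subfield code="a">Az ember tragediaja</subfield>
  </datafield>
</record>"""
```

A real conversion maps many MARC fields to Dublin Core elements the same way; XSLT is a natural fit because both input and output are XML.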
RDF and Open Linked Data, a first approachhorvadam
This document discusses the potential benefits of libraries publishing their data as linked open data using semantic web technologies. It describes how linked data allows for standardized access to data across the web as a single API. Libraries can make their data more discoverable on the web and searchable by services like Google by publishing it as linked open data. Semantic web technologies like RDF and SPARQL allow for more powerful search capabilities. Several large libraries are already publishing portions of their data as linked open data, including authority files and entire catalogs. The document outlines some semantic web applications libraries could use to enhance discovery and provides examples of vocabularies for describing different types of metadata.
Automatic publication of library and museum data into the semantic web: the description of the ALIADA project
Automatic publication of library and
museum data into the semantic web: the
description of the ALIADA project
Ádám Horváth
Hungarian National Museum
Museum of Fine Arts, Budapest
Art Libraries Facing the Challenges of a Digital Age
Budapest, 2nd June, 2014
Dear Mr. Horvath,
Thanks, the links do work now. Nice work!
We matched our records to the LCSH in a different project (called "MACS") a while ago. The
result of the project was a database of corresponding entries in our SWD, the LCSH and
RAMEAU. So we did not solve that problem in our linked data project but simply used the
information already available. Unfortunately, the URIs are not included, but it was
possible to derive them from the IDs used (easy for LCSH, hard for RAMEAU).
I am not sure if these mappings are freely available, but if you are interested, I could
try to find out. A typical mapping looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<!-- Contents from database Work -->
...
<add>
<doc>
<field name="id">MACS0000001</field>
<field name="SWD">Schauspielkunst</field>
<field name="SWD_number">4129090-2</field>
<field name="RAMEAU">Art dramatique</field>
<field name="RAMEAU_number">FRBNF11930966X</field>
<field name="LCSH">Acting</field>
<field name="LCSH_number">sh 85000691</field>
</doc>
...
</add>
Would that be at all useful to you?
Best regards
Jan Hannemann
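The MACS mapping above stores only local identifiers; as the email notes, the LCSH URIs can be derived from them. A sketch for the LCSH case (the "easy" one), assuming the id.loc.gov URI pattern; RAMEAU, which the email calls hard, is not attempted here:

```python
# Derive an id.loc.gov URI from an LCSH number as stored in the MACS
# mapping (e.g. "sh 85000691"). Assumes the id.loc.gov pattern of
# concatenating the prefix and number without the internal space.
def lcsh_uri(lcsh_number):
    """Turn an LCSH number like 'sh 85000691' into an id.loc.gov URI."""
    return ("http://id.loc.gov/authorities/subjects/"
            + lcsh_number.replace(" ", ""))

# Applied to the mapping entry above ("Acting", sh 85000691):
uri = lcsh_uri("sh 85000691")
```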
>-----Original Message-----
>From: HORVATH Adam [mailto:adam@oszk.hu]
>Sent: Monday, 10 May 2010 15:59
>To: Hannemann, Jan
>Cc: Hauser, Julia; Haffner, Alexander
>Subject: Re: NSZL on the semantic web / DNB on the semantic web
>
>Dear Mr Hannemann,
>
>Thank you for your reaction. The link should work now (I tried it a
>minute ago).
>
>I studied your examples. You have very nice, detailed authority
>records. I like that you show the html representation in your
>catalogue.
>
>How did you manage to match your subject headings with the LCSH
>subject headings? What was the input for the matching? Can one get a
>file containing all the SKOSified LCSH subject headings, which is
>freely available?
>
>We plan to improve our bibliographic data with the bibo schema, to
>create a sitemap for our linked data and to do some FRBRization based
>on our semantic data stored in Jena.
>
>If you can see any possible cooperation in the future please let us
>know.
>
>Best regards
>Adam Horvath
>
>
>
>
>
>Subject:NSZL on the semantic web / DNB on the semantic web
>Date sent:Mon, 10 May 2010 11:04:44 +0200
>From:"Hannemann, Jan" <[email_address]>
>To:<[email_address]>
>Copies to:"Hauser, Julia" <[email_address]>,
>"Haffner, Alexander" <[email_address]>
>
>> Dear Mr. Horvath,
>>
>> I am from the German National Library and have heard about your recent
>> publication of your bibliographic information as linked open data.
>> Please let me congratulate you on this important accomplishment!
>>
>> Incidentally, we have also just completed our first linked data
>> project. In this first step we have published large parts of our
>> authority files as linked data; a follow-up project is currently being
>> planned. Perhaps it will be possible to benefit from each other's
>> efforts and experiences.
>>
>> At first glance, it seems that the projects are quite similar;
>> apparently we have chosen different naming schemes for our URIs, but
>> that difference is cosmetic at best. Currently, our service only
>> offers the usual XML/RDF representation; MARC and other formats might
>> be added in the future. We also have content negotiation for the
>> RDF/XML and HTML representations of the data. The main difference is
>> apparently in the data modelling and ontologies used.
>>
>> By the way, some of the URLs in your slides don't seem to work, such
>> as http://nektar.oszk.hu/resource/auth/33589 (slide 17). The default
>> SPARQL queries return errors (HTTP Status 500 -
>> com.hp.hpl.jena.shared.JenaException: Exception during database
>> access).
>>
>> We would like to invite you to take a look at our service.
>> Unfortunately, the documentation (https://wiki.d-nb.de/display/LDS) is
>> available only in German at the moment.
>>
>> Here's a brief English overview of the features:
>>
>> - The service comprises authority file data on about 1.8 million persons
>> (from the Name Authority File PND), 160,000 subject headings (from the Subject
>> Headings Authority File SWD) and about 1.3 million corporate bodies
>> (from the Corporate Body Authority File GKD). - The data modelling has
>> been refined compared to an earlier prototype (March 2010). - We've
>> added additional links to external data sources, in particular from
>> our SWD to appropriate data at LCSH and RAMEAU. - The service is now
>> integrated into our web presence; a special test environment that we
>> used before is no longer needed.
>>
>> These examples illustrate our work:
>>
>> - The German author Bertolt Brecht (http://d-nb.info/gnd/118514768)
>> has the following XML/RDF representation:
>> http://d-nb.info/gnd/118514768/about - The Subject Heading for
>> "Führungskraft" ("Executive" or "Cadres (personnel)") is found here:
>> http://d-nb.info/gnd/4071497-4 - The associated XML/RDF representation
>> can be found here: http://d-nb.info/gnd/4071497-4/about
>>
>> The next step for us is planning a follow-up project that will improve
>> the technical infrastructure, develop automatic update mechanisms and
>> expand the data we represent.
>>
>> We are looking forward to your feedback.
>>
>> Best regards,
>>
>> Jan Hannemann
>>
>>
>> _______________________________________
>>
>> Dr. Jan Hannemann
>> Deutsche Nationalbibliothek
>> Informationstechnik
>> Adickesallee 1
>> D-60322 Frankfurt am Main
>> Telefon: +49-69-1525-1769
>> Telefax: +49-69-1525-1799
>>
>>
>>
>