TripleWave: Spreading RDF Streams on the Web
Andrea Mauri
My presentation at the International Semantic Web Conference 2016.
Recommended
The slides of my talk at INSIGHT Centre for Data Analytics (in NUI Galway), where I presented TripleWave (http://streamreasoning.github.io/TripleWave/), an open-source framework to create and publish streams of RDF data.
Triplewave: a step towards RDF Stream Processing on the Web
Daniele Dell'Aglio
Talk at the Stream Reasoning workshop, Berlin 2016.
Connecting Stream Reasoners on the Web
Jean-Paul Calbimonte
Keynote given at the OrdRing 2013 workshop on the need for a W3C community group on RDF Stream Processing.
OrdRing 2013 keynote - On the need for a W3C community group on RDF Stream Processing
Oscar Corcho
RDF Stream Processing Tutorial: RSP implementations
Jean-Paul Calbimonte
RDF stream processing and reactive systems.
RDF Stream Processing: Let's React
Jean-Paul Calbimonte
Presentation of the talk "RSP4J: An API for RDF Stream Processing" at the 18th Extended Semantic Web Conference.
RSP4J: An API for RDF Stream Processing
Riccardo Tommasini
Query rewriting in RDF Stream Processing (RSP).
Query Rewriting in RDF Stream Processing
Jean-Paul Calbimonte
Stream Reasoning Workshop.
RDF Stream Processing and the role of Semantics
Jean-Paul Calbimonte
Brief report about the contents of the Stream Reasoning workshop at ISWC 2016. Additional info about the event is available at: http://streamreasoning.org/events/sr2016
Summary of the Stream Reasoning workshop at ISWC 2016
Daniele Dell'Aglio
Benchmarks like LSBench, SRBench, CSRBench and, more recently, CityBench satisfy the growing need of shared datasets, ontologies and queries to evaluate window-based RDF Stream Processing (RSP) engines. However, no clear winner emerges out of the evaluation. In this paper, we claim that the RSP community needs to adopt a Systematic Comparative Research Approach (SCRA) if it wants to move a step forward. To this end, we propose a framework that enables SCRA for window-based RSP engines. The contributions of this paper are: (i) the requirements to satisfy for tools that aim at enabling SCRA; (ii) the architecture of a facility to design and execute experiments guaranteeing repeatability, reproducibility and comparability; (iii) Heaven, a proof-of-concept implementation of such an architecture that we released as open source; (iv) two RSP engine implementations, also open source, that we propose as baselines for the comparative research (i.e., they can serve as terms of comparison in future works). We prove Heaven's effectiveness using the baselines by: (i) showing that top-down hypothesis verification is not straightforward even in controlled conditions and (ii) providing examples of bottom-up comparative analysis.
Heaven: A Framework for Systematic Comparative Research Approach for RSP Engines
Riccardo Tommasini
Abstract. Many approaches have been proposed for Stream Reasoning (SR). Some of them combine information flow processing (IFP) techniques and semantic technologies to make sense in real time of noisy, vast and heterogeneous data streams that come from complex domains. More recent works have shown the presence of a trade-off between throughput and reasoning expressiveness. Indeed, systems with IFP-like performance are not really expressive (e.g. up to an RDFS subset) and vice versa. For static data, Information Integration (II) systems have already approached the problem. The idea consists in spreading the reasoning complexity over different layers of a hierarchical architecture and treating it where it is easier to do. Is it possible to realize expressive and efficient stream reasoning (E2SR) by defining a hierarchical approach that adapts II techniques to the streaming scenario? In this paper, I discuss my plan towards E2SR, the intuition of adapting Information Integration techniques to the streaming scenario, and the need of Stream Reasoning for comparative analysis to support its technological progress.
A Hierarchical approach towards Efficient and Expressive Stream Reasoning
Riccardo Tommasini
Linked Data Notifications for RDF streams, presented at WSP (Web Stream Processing Workshop) - ISWC 2017.
Linked Data Notifications for RDF Streams
Jean-Paul Calbimonte
By Oscar Corcho @ ISWC2013 Workshop on Ordering and Reasoning, Sydney, 22/10/2013.
On the need for a W3C community group on RDF Stream Processing
PlanetData Network of Excellence
Presentation at the Fourth Stream Reasoning Workshop 2019, 16-17 April, Linköping, Sweden.
RSP-QL*: Querying Data-Level Annotations in RDF Streams
keski
Presentation of a short paper submitted to the OrdRing workshop, held at ISWC 2014 - http://streamreasoning.org/events/ordring2014. In recent years, there has been an increase in the amount of real-time data generated. Sensors attached to things are transforming how we interact with our environment. Extracting meaningful information from these streams of data is essential for some application areas and requires processing systems that scale to varying conditions in data sources, complex queries, and system failures. This paper describes ongoing research on the development of a scalable RDF streaming engine.
Towards efficient processing of RDF data streams
Alejandro Llaves
Streaming Day: an overview of Stream Reasoning. "Logical reasoning in real time on multiple, heterogeneous, gigantic and inevitably noisy data streams in order to support the decision process of extremely large numbers of concurrent users." -- S. Ceri, E. Della Valle, F. van Harmelen and H. Stuckenschmidt, 2010
Streaming Day - an overview of Stream Reasoning
Riccardo Tommasini
Presentation of RDF-Gen, implemented in the datAcron project (http://datAcron-project.eu), for converting archival and streaming data to RDF triples.
RDF-Gen: Generating RDF from streaming and archival data
Giorgos Santipantakis
Knowledge Discovery tools using Linked Data techniques - presentation for the Linked Data 4 Knowledge Discovery Workshop at the ECML/PKDD 2015 conference - http://events.kmi.open.ac.uk/ld4kd2015/
LD4KD 2015 - Demos and tools
Vrije Universiteit Amsterdam
This presentation will describe how to go beyond a "Hello world" stream application and build a real-time data-driven product. We will present architectural patterns, go through tradeoffs and considerations when deciding on technology and implementation strategy, and describe how to put the pieces together. We will also cover necessary practical pieces for building real products: testing streaming applications, and how to evolve products over time. Presented at highloadstrategy.com 2016 by Øyvind Løkling (Schibsted Products & Technology), joint work with Lars Albertsson (independent, www.mapflat.com).
Building real time data-driven products
Lars Albertsson
Slides for the presentation on Triple Pattern Fragments in the Modeling, Generating and Publishing Knowledge as Linked Data tutorial at EKAW 2016.
EKAW - Triple Pattern Fragments
Ruben Taelman
The aim of the EU FP7 Large-Scale Integrating Project LarKC is to develop the Large Knowledge Collider (LarKC, for short, pronounced "lark"), a platform for massive distributed incomplete reasoning that will remove the scalability barriers of currently existing reasoning systems for the Semantic Web. The LarKC platform is available at larkc.sourceforge.net. This talk is part of a tutorial for early users of the LarKC platform, and describes the data model used within LarKC.
LarKC Tutorial at ISWC 2009 - Data Model
LarKC
Completeness metadata about RDF data sources has been proposed to provide a partial closed-world assumption over generally incomplete RDF. Wikidata, as one of the major RDF data sources, contains complete information on a range of topics, from the cantons of Switzerland to the crew of Apollo 11. We develop COOL-WD as a tool to manage and consume completeness information on Wikidata. Get more information at http://ceur-ws.org/Vol-1666/paper-02.pdf Citation: Radityo Eko Prasojo, Fariz Darari, Simon Razniewski, Werner Nutt: Managing and Consuming Completeness Information for Wikidata Using COOL-WD. COLD@ISWC 2016
Managing and Consuming Completeness Information for Wikidata Using COOL-WD
Fariz Darari
Graph relationships are everywhere. In fact, more often than not, analyzing relationships between points in your datasets lets you extract more business value from your data. Consider social graphs, or relationships of customers to each other and the products they purchase, as two of the most common examples. Now, if you think you have a scalability issue just analyzing points in your datasets, imagine what would happen if you wanted to start analyzing the arbitrary relationships between those data points: the amount of potential processing will increase dramatically, and the kind of algorithms you would typically want to run would change as well. While your Hadoop batch-oriented approach with MapReduce works reasonably well, for scalable graph processing you have to embrace an in-memory, explorative, and iterative approach. One of the best ways to tame this complexity is known as the Bulk Synchronous Parallel approach. Its two most widely used implementations are available as Hadoop ecosystem projects: Apache Giraph (used at Facebook) and Apache GraphX (part of the Spark project). In this talk we will focus on practical advice on how to get up and running with Apache Giraph and GraphX; start analyzing simple datasets with built-in algorithms; and finally how to implement your own graph processing applications using the APIs provided by the projects. We will finally compare and contrast the two, and try to lay out some principles of when to use one vs. the other.
Introduction into scalable graph analysis with Apache Giraph and Spark GraphX
rhatr
This talk will present recommended patterns and corresponding anti-patterns for testing data processing pipelines. We will suggest technology and architecture to improve testability, both for batch and streaming processing pipelines. We will primarily focus on testing for the purpose of development productivity and product iteration speed, but briefly also cover data quality testing. Presented at highloadstrategy.com 2016 by Lars Albertsson (independent, www.mapflat.com), joint work with Øyvind Løkling (Schibsted Products & Technology).
Test strategies for data processing pipelines
Lars Albertsson
I used this slideset to present our research paper at the 14th Int. Semantic Web Conference (ISWC 2015). Find a preprint of the paper here: http://olafhartig.de/files/HartigPerez_ISWC2015_Preprint.pdf
LDQL: A Query Language for the Web of Linked Data
Olaf Hartig
Slides from my BigDataCon / Jax London talk earlier today.
Big Data, Mob Scale.
darach
An architectural overview of how to build stream data processing applications.
A primer on building real time data-driven products
Lars Albertsson
The presentation I gave at Linköping University about web stream processing. I discuss two problems: (i) exchanging data streams on the web, and (ii) combining streams and contextual quasi-static data on the web.
On web stream processing
Daniele Dell'Aglio
The presentation I gave at DeSemWeb about stream exchange and processing on the Web. Article available at: http://w3id.org/wesp
On a web of data streams
Daniele Dell'Aglio
Overview of the RDF graph database-as-a-service (GraphDB-based) on the Self-Service Semantic Suite (S4), http://s4.ontotext.com - presentation for the AKSW Group of the University of Leipzig.
RDF Database-as-a-Service with S4
Marin Dimitrov
Similar to TripleWave: Spreading RDF Streams on the Web
My presentation on RDFauthor at EKAW2010, Lisbon. For more information on RDFauthor visit http://aksw.org/Projects/RDFauthor; for the code visit http://code.google.com/p/rdfauthor/.
RDFauthor (EKAW)
RDFauthor (EKAW)
Norman Heino
ISWC 2017 In-Use paper. Despite the advantages of Linked Data as a data integration paradigm, accessing and consuming Linked Data is still a cumbersome task. Linked Data applications need to use technologies such as RDF and SPARQL that, despite their expressive power, belong to the data integration stack. As a result, applications and data cannot be cleanly separated: SPARQL queries, endpoint addresses, namespaces, and URIs end up as part of the application code. Many publishers address these problems by building RESTful APIs around their Linked Data. However, this solution has two pitfalls: these APIs are costly to maintain; and they blackbox functionality by hiding the queries they use. In this paper we describe grlc, a gateway between Linked Data applications and the LOD cloud that offers a RESTful, reusable and uniform means to routinely access any Linked Data. It generates an OpenAPI compatible API by using parametrized queries shared on the Web. The resulting APIs require no coding, rely on low-cost external query storage and versioning services, contain abundant provenance information, and integrate access to different publishing paradigms into a single API. We evaluate grlc qualitatively, by describing its reported value by current users; and quantitatively, by measuring the added overhead at generating API specifications and answering to calls.
Automatic Query-Centric API for Routine Access to Linked Data
Automatic Query-Centric API for Routine Access to Linked Data
Albert Meroño-Peñuela
Presentation on RDF Stream Processing models given at the SR4LD tutorial (ISWC 2013) -- updated version at: http://www.slideshare.net/dellaglio/rsp2014-01rspmodelsss
RDF Stream Processing Models (SR4LD2013)
RDF Stream Processing Models (SR4LD2013)
Daniele Dell'Aglio
The integration of multimedia assets on the web with structured (linked) data promises further opportunities for digital market places regarding findability and recommendations. The new W3C standards for Media Annotation, Media Fragment UIRs and Linked Data Platforms build a stable base for this purpose. Thomas Kurz shows how to use the Linked Data Platform Apache Marmotta as a backend for the storage and retrieval of Linked Media. In his talk he is going to show extensions for a seamless integration of media streaming for Non-RDF resources and spatio-regional media fragment retrieval with SPARQL.
Linked Media Management with Apache Marmotta
Linked Media Management with Apache Marmotta
Thomas Kurz
As of Drupal 7 we'll have RDFa markup in core, in this session I will: -explain what the implications are of this and why this matters -give a short introduction to the Semantic web, RDF, RDFa and SPARQL in human language -give a short overview of the RDF modules that are available in contrib -talk about some of the potential use cases of all these magical technologies
Semantic web and Drupal: an introduction
Semantic web and Drupal: an introduction
Kristof Van Tomme
Archive integration with RDF
Archive integration with RDF
Lars Marius Garshol
Semantic Web Servers
Semantic Web Servers
webhostingguy
APNIC Infrastructure and Development Director Che-Hoo Cheng gives an overview of the RPKI, why it is important, and how to create ROAs and ROVs to secure routing announcements.
APAN 50: RPKI industry trends and initiatives
APAN 50: RPKI industry trends and initiatives
APNIC
Leveraging Wikipedia as a Hub for Data Integration: the Remixing Archival Metadata Project (RAMP) Timothy A. Thompson, Metadata Librarian (Spanish/Portuguese Specialty), Princeton University Library
November 19, 2014 NISO Virtual Conference: Can't We All Work Together?: Inter...
National Information Standards Organization (NISO)
This presentation tells the story, and the FME solutions, of a Dutch utility company for the automatic exchange of data containers holding RDF Linked Data, BIM, and documents. The presentation focuses on the non-traditional representation of RDF Linked Data and how it integrates with FME through SPARQL, Apache Jena, and a few custom-built transformers in FME. This FME solution also uses my Excel switch-based method of directing the data flow (see my presentation from the FME World Fair).
RDF Linked Data - Automatic Exchange of BIM Containers
Safe Software
Web Services
Katrien Verbert
This is part 4 of the ISWC 2009 tutorial on the GoodRelations ontology and RDFa for e-commerce on the Web of Linked Data. See also http://www.ebusiness-unibw.org/wiki/Web_of_Data_for_E-Commerce_Tutorial_ISWC2009
ISWC GoodRelations Tutorial Part 4
Martin Hepp
This is part 4 of the ISWC 2009 tutorial on the GoodRelations ontology and RDFa for e-commerce on the Web of Linked Data. See also http://www.ebusiness-unibw.org/wiki/Web_of_Data_for_E-Commerce_Tutorial_ISWC2009
GoodRelations Tutorial Part 4
guestecacad2
Developing CouchApps
westhoff
Presentation on RDF Stream Processing models given at the RSP2014 tutorial (ESWC 2014)
RDF Stream Processing Models (RSP2014)
Daniele Dell'Aglio
Introduction to Apache Any23. Any23 is a library, a web service and a command-line tool written in Java that extracts structured RDF data from a variety of web documents and markup formats. Any23 is an Apache Software Foundation top-level project.
Apache Any23 - Anything to Triples
Michele Mostarda
http://flink-forward.org/kb_sessions/flink-and-beam-current-state-roadmap/ It is no secret that the Dataflow model, which evolved from Google’s MapReduce, Flume, and MillWheel, has been a major influence on Apache Flink’s streaming API. The essentials of this model are captured in Apache Beam. Beam provides the Dataflow API with the option to deploy to various backends (e.g. Flink, Spark). In this talk we will examine the current state of the Flink Runner. Beam’s Runners manage the translation of the Beam API into the backend API. The Beam project itself has made an effort to summarize the capabilities of each Runner to provide an overview of the supported API concepts. Of all the open-source backends, Flink is currently the Runner that supports the most features. We will look at the supported Beam features and their counterparts in Flink. Further, we will look at potential improvements and upcoming features of the Flink Runner.
Maximilian Michels - Flink and Beam
Flink Forward
Similar to TripleWave: Spreading RDF Streams on the Web
On web stream processing
On a web of data streams
RDF Database-as-a-Service with S4
RDFauthor (EKAW)
Automatic Query-Centric API for Routine Access to Linked Data
More from Andrea Mauri
While basic Web analytics tools are widespread and provide statistics about website navigation, no approaches exist for merging such statistics with information about the Web application's structure, content and semantics. Current analytics tools analyze user interaction only at the page level, in terms of page views, entry and landing pages, page views per visit, and so on. We show the advantages of combining Web application models with runtime navigation logs for the purpose of deepening the understanding of users' behaviour. We propose a model-driven approach that combines user interaction modeling (based on the IFML standard), full code generation of the designed application, user tracking at runtime through logging of runtime component execution and user activities, integration with page content details, generation of integrated schema-less data streams, and application of large-scale analytics and visualization tools for big data, applying both traditional data visualization techniques and direct representation of statistics on visual models of the Web application.
A Big Data Analysis Framework for Model-Based Web User Behavior Analytics
Andrea Mauri
Slides presented at "1st International Workshop on the Social Web for Environmental and Ecological Monitoring"
Model Driven Development of Social Media Environmental Monitoring Applications
Andrea Mauri
Slides of my PhD defense
Methodologies for the Development of Crowd and Social-based applications
Andrea Mauri
Slides presented at the Third International Workshop on the Theory and Practice of Social Machines (at WWW2015 conference in Florence)
An explorative approach for Crowdsourcing tasks design