Georg Güntner: Smart Enterprises – Erfolgreicher Einsatz semantischer Technol... (Semantic Web Company)
The technologies of the "Web of Data" have now reached a level of maturity and acceptance that enables their productive use in enterprises to support business processes. While approaches such as Open (Linked) Data and Open Source are widespread, the technologies can also be applied to internal, closed data sources and proprietary structures.
The talk sketches the foundations and applications of a "semantic toolbox" that is connected modularly to a company's information systems in order to open up data silos, lay adaptive, integrated views over content and data, and support decision-making processes.
The toolbox for Enterprise Content Integration comprises Apache Stanbol (knowledge extraction), the Linked Media Framework (networked knowledge), and VIE (interactive knowledge).
Zukunft von Linked Media: Trends, Entwicklungen und Visionen (Salzburg NewMediaLab)
Sandra Schön and Georg Güntner (2013). Zukunft von Linked Media: Trends, Entwicklungen und Visionen. Volume 6 of the "Linked Media Lab Reports" series, edited by Christoph Bauer, Georg Güntner and Sebastian Schaffert.
Macht mit im Web! Anreizsysteme zur Unterstützung von Aktivitäten bei Communi... (Salzburg NewMediaLab)
Sandra Schön, Martin Ebner, Hannes Rothe, Renate Steinmann and Florian Wenger (2013). Macht mit im Web! Anreizsysteme zur Unterstützung von Aktivitäten bei Community- und Content-Plattformen. Volume 6 of the "Social Media" series, edited by Georg Güntner and Sebastian Schaffert, Salzburg: Salzburg Research. (ISBN 978-3-902448-38-5)
Linked Media. Ein White-Paper zu den Potentialen von Linked People, Linked C... (Salzburg NewMediaLab)
Salzburg NewMediaLab – The Next Generation (2011). Linked Media. Ein White-Paper zu den Potentialen von Linked People, Linked Content und Linked Data in Unternehmen.
With contributions by Christoph Bauer, Andreas Blumauer, Tobias Bürger, Manuel Fernandez, Wolfgang Gewald, Dietmar Glachs, Georg Güntner, Gerhard Haberl, Thomas Kurz, Siegfried Reich, Sebastian Schaffert, Marius Schebella, Sandra Schön, Katharina Siorpaes, Rupert Westenthaler, Markus Winkler and Edgar Zwischenbrugger (project lead: Sandra Schön and Georg Güntner).
Volume 1 of the "Linked Media Lab Reports" series, edited by Christoph Bauer, Georg Güntner and Sebastian Schaffert.
Qualitätssicherung bei Annotationen. Soziale und technologische Verfahren in ... (Salzburg NewMediaLab)
Volume 5 of the "Linked Media Lab Reports" series
Annotations are a means of enriching texts, documents and audiovisual materials on the web and in corporate information systems with supplementary keywords that concisely characterise their content. Traditionally, annotations have been created by specialists such as archivists, or by the authors themselves. In more recent approaches, annotations (keywords, for example) are also added automatically or by a community. In both the classical and, especially, the more recent approaches, quality assurance for annotations plays an increasingly important role, as it is a prerequisite for high-quality search results, among other things.
Providers and operators of information systems and media archives employ a variety of social and technical procedures to effectively assure the quality of annotations. This fifth volume of the Linked Media Lab Reports of "Salzburg NewMediaLab – The Next Generation" compiles and presents traditional as well as innovative approaches from the literature and from the practice of media archives.
Authors: Sandra Schön, Georg Güntner, Jean-Christoph Börner, Sven Leitinger, Marius Schebella, Andreas Strasser, Stefan Thaler, Michael Vielhaber and Andrea Wolfinger.
Volume 5 of the "Linked Media Lab Reports" series, edited by Christoph Bauer, Georg Güntner and Sebastian Schaffert
Smarte Annotationen. Ein Beitrag zur Evaluation von Empfehlungen für Annotati... (Salzburg NewMediaLab)
Smarte Annotationen. Ein Beitrag zur Evaluation von Empfehlungen für Annotationen.
Sandra Schön and Thomas Kurz, with contributions by Christoph Bauer, Jean-Christoph Börner, Peter M. Hofer, Katalin Lejtovicz, Marius Schebella, Michael Springer, Andrea Wolfinger and Edgar Zwischenbrugger.
Press coverage of the ICT provider Tieto in the Austrian media, first half of 2011. PR agency: results&relations, Vienna.
Press outlets:
* Telekommunikations + IT Report
* It&t business
ITB Präsentation: Vom Extranet zum Social Web - Social Media im B2B Einsatz (Realizing Progress)
Talk given on 7 March 2013 by Florian Bauhuber under the title "Vom Extranet zum Social Web: Social Media im B2B-Einsatz – Das Beispiel Tourismusnetzwerk.info" as part of the congress programme of the ITB international tourism trade fair in Berlin.
(In German) Talk given at the "Lernen-Arbeiten-Wissen" event of Delan vor Ort on 28 August 2007 in Wiesbaden at the Hessen Agentur. Further information: http://tinyurl.com/38vktj
How Enterprise Architecture & Knowledge Graph Technologies Can Scale Business... (Semantic Web Company)
Organising data, for most of us, means Excel spreadsheets and folders upon folders. Knowledge graph technology, however, organises data in ways similar to the brain – through context and relations. By connecting your data, you (and also machines) are able to gain context within your knowledge, helping you to make informed decisions based on all of the information you already have.
So, how can enterprises benefit from this and scale?
Alan Morrison, PwC Sr. Research Fellow for Emerging Tech, and Sebastian Gabler, Head of Sales at Semantic Web Company, tackle the importance of Enterprise Knowledge Graphs and how these technologies scale business efficiency.
Learn about:
• The shift from application-centric development to data-centric approaches
• How enterprise architects can benefit from knowledge graphs: use cases
• Which use cases fit which type of graph, and which technologies are involved
• How RDF helps with data integration
• What AI-assisted entity linking is
• Data virtualisation vs. materialisation
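The RDF point above rests on a simple idea: when independent sources describe an entity under the same identifier, their statements merge into one graph with no schema mapping. A minimal sketch of that idea in plain Python (all identifiers here are invented examples, not from any real dataset):

```python
# Two hypothetical sources, each holding subject-predicate-object triples
# about the same entity "ex:acme". Shared identifiers are what make
# RDF-style integration work: the union of the sets is already the merge.

crm = {("ex:acme", "ex:hasContact", "alice@example.org")}
erp = {("ex:acme", "ex:revenue", "12M")}

merged = crm | erp  # same subject URI -> facts combine automatically

facts_about_acme = {(p, o) for s, p, o in merged if s == "ex:acme"}
print(sorted(facts_about_acme))
# [('ex:hasContact', 'alice@example.org'), ('ex:revenue', '12M')]
```

In a real deployment the triples would live in an RDF store and carry full IRIs, but the merge-by-identifier behaviour is the same.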
- Understand what knowledge graphs are for
- Understand the structure of knowledge graphs (and how it relates to taxonomies and ontologies)
- Understand how knowledge graphs can be created using manual, semi-automatic, and fully automatic methods.
- Understand knowledge graphs as a basis for data integration in companies
- Understand knowledge graphs as tools for data governance and data quality management
- Implement and further develop knowledge graphs in companies
- Query and visualize knowledge graphs (including SPARQL and SHACL crash course)
- Use knowledge graphs and machine learning to enable information retrieval, text mining and document classification with the highest precision
- Develop digital assistants and question and answer systems based on semantic knowledge graphs
- Understand how knowledge graphs can be combined with text mining and machine learning techniques
- Apply knowledge graphs in practice: Case studies and demo applications
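The querying topics listed above (SPARQL in particular) boil down to matching patterns with variables against a set of triples. A toy sketch of that mechanism, with invented entity names and a `None` wildcard standing in for a SPARQL variable:

```python
# A tiny triple store and pattern matcher illustrating the SPARQL idea.
# All names are illustrative; a real system would use rdflib or a triple store.

triples = {
    ("Sara", "hasSkill", "DataScience"),
    ("Sara", "worksFor", "ACME"),
    ("Tom", "hasSkill", "Taxonomies"),
    ("Tom", "worksFor", "ACME"),
    ("DataScience", "broader", "ComputerScience"),
}

def match(pattern, graph):
    """Return all triples matching a pattern; None acts as a wildcard."""
    s, p, o = pattern
    return [t for t in graph
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Analogous to: SELECT ?s WHERE { ?s :worksFor :ACME }
employees = sorted(t[0] for t in match((None, "worksFor", "ACME"), triples))
print(employees)  # ['Sara', 'Tom']
```

Real SPARQL adds joins over multiple patterns, filters, and property paths, but each basic graph pattern is evaluated exactly like this.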
Deep Text Analytics - How to extract hidden information and aboutness from text (Semantic Web Company)
- Deep Text Analytics (DTA) is an application of Semantic AI
- DTA fuses methods and algorithms from language modeling, corpus linguistics, machine learning, knowledge representation, and the Semantic Web
- The main use-case areas for DTA are information retrieval, NLU, question answering, and recommender systems
Leveraging Knowledge Graphs in your Enterprise Knowledge Management System (Semantic Web Company)
Knowledge graphs and graph-based data in general are becoming increasingly important for addressing various data management challenges in industries such as financial services, life sciences, healthcare or energy.
At the core of this challenge is the comprehensive management of graph-based data, ranging from taxonomy and ontology management to the administration of comprehensive data graphs along with a defined governance framework. Various data sources are integrated and linked (semi-)automatically using NLP and machine learning algorithms. Tools for securing high data quality and consistency are an integral part of such a platform.
PoolParty 7.0 can now handle a full range of enterprise data management tasks. Based on agile data integration, machine learning and text mining, or ontology-based data analysis, applications are developed that allow knowledge workers, marketers, analysts or researchers a comprehensive and in-depth view of previously unlinked data assets.
At the heart of the new release is the PoolParty GraphEditor, which complements the Taxonomy, Thesaurus, and Ontology Manager components that have been around for some time. All in all, data engineers and subject matter experts can now administrate and analyze enterprise-wide and heterogeneous data stocks with comfortable means, or link them with the help of artificial intelligence.
Unified views of business-critical information across all customer-facing processes and HR-related tasks are most relevant for decision makers.
In this talk we present a SharePoint extension that supports the automatic linking of unstructured content like Word documents with structured information from other databases, such as statistical data. As a result, decision makers have knowledge portals based on linked data at their fingertips.
While the importance of managed metadata and Term Store is clear to most SharePoint architects, the significance of a semantic layer outside of the content silos has not yet been explored systematically.
We will present a four-layered content architecture and take a close look at some aspects of the semantic layer and its integration with SharePoint:
- Keeping Term Store and the semantic layer in sync
- Automatic tagging of SharePoint content
- Use of graph databases to store tags
- Entity-centric search & analytics applications
Metadata is most often stored per data source, and therefore it is meaningless outside of the silo. In this presentation, we will give a live demo of a SharePoint extension that makes use of an explicit semantic layer based on standards. This approach builds the basis to start linking data across the silos in a most agile way.
The resulting knowledge graph can start small, develop continuously, and grow with the requirements. In this presentation we give an example to illustrate how initially disconnected HR-related data (CVs in SharePoint; statistical data from the labour market; skills and competency taxonomies; salary spreadsheets) is linked automatically and then made available through an extensive search & analytics application.
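The automatic linking step described above can be sketched as concept tagging: labels from a skills taxonomy are matched in unstructured CV text, and each match becomes an edge connecting the document silo to the taxonomy. This is a simplified illustration, not the presented product's actual pipeline; all names and identifiers are invented:

```python
import re

# Hypothetical skills taxonomy: preferred label -> concept identifier
skills_taxonomy = {
    "python": "skill:Python",
    "data analysis": "skill:DataAnalysis",
    "project management": "skill:ProjectManagement",
}

def link_skills(doc_id, text):
    """Tag a document with taxonomy concepts found in its text,
    returning graph edges that connect the content and semantic layers."""
    edges = []
    lowered = text.lower()
    for label, concept in skills_taxonomy.items():
        if re.search(r"\b" + re.escape(label) + r"\b", lowered):
            edges.append((doc_id, "mentionsSkill", concept))
    return edges

cv = "Five years of Python and data analysis experience."
print(link_skills("doc:cv-042", cv))
# [('doc:cv-042', 'mentionsSkill', 'skill:Python'),
#  ('doc:cv-042', 'mentionsSkill', 'skill:DataAnalysis')]
```

Production systems replace the exact-label match with NLP-based extraction (lemmatisation, disambiguation, synonym handling), but the output shape, document-to-concept edges, is the same.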
Slides based on a workshop held at SEMANTiCS 2018 in Vienna. Introduces a methodology for knowledge graph management based on Semantic Web standards, ranging from taxonomies and ontologies to mappings, graph management, and entity linking. Further topics covered: Semantic AI and machine learning, text mining, and semantic search.
Semantic Artificial Intelligence is the fusion of various types of AI, including symbolic AI, reasoning, and machine learning techniques such as deep learning. At the same time, Semantic AI has a strong focus on data management and data governance. This "wedding" of various AI techniques brings new promises, but also puts a sharper focus on fundamental approaches such as Explainable AI (XAI), knowledge graphs, and Linked Data.
Bringing Machine Learning and Knowledge Graphs Together
Six Core Aspects of Semantic AI:
- Hybrid Approach
- Data Quality
- Data as a Service
- Structured Data Meets Text
- No Black-box
- Towards Self-optimizing Machines
The PoolParty Semantic Classifier is a component of the Semantic Suite, which makes use of machine learning in combination with Knowledge Graphs.
We discuss the potential of the fusion of machine learning, neural networks, and knowledge graphs based on use cases and this concrete technology offering.
We introduce the term 'Semantic AI' that refers to the combined usage of various AI methods.
Machines learn better with Semantics!
See how taxonomy management and the maintenance of knowledge graphs benefit from machine learning and corpus analysis, and how, in return, machine learning gets improved when using semantic knowledge models for further enrichment.
A quick introduction to taxonomies, and how they relate to ontologies and knowledge graphs. See how they can serve as part of a semantic layer in your information architecture. Learn which use cases can be developed based on this.
PoolParty GraphSearch - The Fusion of Search, Recommendation and Analytics (Semantic Web Company)
See how Cognitive Search works when based on Semantic Knowledge Graphs.
We showcase the latest developments and new features of PoolParty GraphSearch:
- Navigate a semantic knowledge graph
- Ontology-based data access (OBDA)
- Search over various search spaces: Ontology-driven facets including hierarchies
- Sophisticated autocomplete including context information
- Custom views on entity-centric and document-centric search results
- Linked data: put various tagging services such as TRIT or PoolParty Extractor in series and benefit from comprehensive semantic enrichment
- Statistical charts to explain results from unified data repositories quickly
- Plug-in system for various recommendation and matchmaking algorithms
This talk discusses how companies can apply semantic technologies to build cognitive applications. It examines the role of semantic technologies within the larger Artificial Intelligence (AI) technology ecosystem, with the aim of raising awareness of different solution approaches.
To succeed in a digital and increasingly self-service-oriented business environment, companies can no longer rely solely on IT professionals. Solutions like the PoolParty Semantic Suite utilize domain experts and business users to shape the cognitive intelligence of knowledge-driven applications.
Cognitive solutions essentially mimic how the human brain works. The search for cognitive solutions has challenged computer scientists for more than six decades. The research has matured to the extent that it has moved out of the laboratory and is now being applied in a range of knowledge-intensive industries.
There is no such thing as a single, all-encompassing "AI technology." Rather, the large global professional technology community and software vendors are continuously developing a broad set of methods and tools for natural language processing and advanced data analytics. They are creating a growing library of machine learning algorithms to enhance the automated learning capabilities of computer systems. These emerging technologies need to be customized or combined with complementary solutions such as semantic knowledge graphs, depending on the use case.
A hybrid approach to cognitive computing, employing both the statistical and the knowledge-based models, will have a critical influence on the development of applications. Highly automated data processing based on sophisticated machine-learning algorithms must give end users the option to independently modify the functioning of smart applications in order to overcome the disadvantages associated with "black-box" approaches.
This talk will give an overview over state-of-the-art smart applications, which are becoming a fusion of search, recommendation, and question-answer machines. We will cover specific use cases in focused knowledge domains, and we will discuss how this approach allows for AI-enabled use cases and application scenarios that are currently highly prioritized by corporate and digital business players.
In this engaging, 1-hour webinar (hosted by http://www.poolparty.biz and http://www.mekon.com), you will learn how to tailor information chunks to readers’ unique needs. We will talk about:
- Benefits and principles of granular structured content, and how to start preparing your own content for this new architecture.
- Best practices for linking structured content to standards-based taxonomies, and some pitfalls to avoid
- The underlying semantic architecture that you can work toward for a truly mature and scalable approach to linking content and data
- Key use cases that you can apply to your own organization
See how you can configure your linked data ecosystem based on PoolParty's semantic middleware configurator. Benefit from Shadow Concept Extraction by making implicit knowledge visible. Combine knowledge graphs with machine learning and integrate semantics into your enterprise information systems.
Technical Deep Dive: Learn more about the most complete Semantic Middleware on the market. See how to integrate semantic services into your Enterprise Information Systems.
Taxonomies and Ontologies – The Yin and Yang of Knowledge Modelling (Semantic Web Company)
See how ontologies and taxonomies can play together to reach the ultimate goal, which is the cost-efficient creation and maintenance of an enterprise knowledge graph. The knowledge modelling methodology is supported by approaches taken from NLP, data science, and machine learning.
This talk addresses two questions: “How can the quality of taxonomies be defined?” and “How can it be measured?” See how quality criteria vary depending on how a taxonomy is applied, such as automatic content classification in ecommerce or a knowledge graph for data integration in enterprises. Distinguish between formal quality, structural properties, content coverage, and network topology. Investigate the advantages of standards-based and machine-processable SKOS taxonomies to be able to measure the quality of taxonomies automatically, as well as several tools and techniques for quality assessment.
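The structural-property criteria mentioned above become measurable once a taxonomy is machine-processable. A toy sketch of two such automatic checks, orphan concepts and hierarchy depth, on an invented SKOS-like taxonomy (this illustrates the idea, not the tooling the talk describes):

```python
# Hypothetical taxonomy: concept -> list of narrower concepts.
taxonomy = {
    "Science": ["Physics", "Biology"],
    "Physics": ["Optics"],
    "Biology": [],
    "Optics": [],
    "Lonely": [],  # no broader and no narrower concept: an orphan
}

# Concepts that appear as someone's narrower concept have a broader concept.
children = {c for kids in taxonomy.values() for c in kids}
roots = [c for c in taxonomy if c not in children]
orphans = [c for c in roots if not taxonomy[c]]  # isolated concepts

def depth(concept):
    """Length of the longest narrower-chain starting at a concept."""
    return 1 + max((depth(k) for k in taxonomy[concept]), default=0)

max_depth = max(depth(r) for r in roots)
print(orphans, max_depth)  # ['Lonely'] 3
```

With SKOS data the same checks run as SPARQL queries over `skos:broader`/`skos:narrower`, alongside further criteria such as label completeness or cycle detection.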
Consistency is crucial to a good user experience. Designers go to great lengths to create and test consistent visual designs. The structural design of an information environment, which is of equal importance to a good user experience, is too often ignored. Blumauer presents a “four-layered content architecture” for making sense of any information environment by clearly distinguishing between the content, metadata, and semantic layers and the navigation logic. He discusses several use cases for a taxonomy-driven user experience such as personalization or dynamically created topic pages.
Konzepte, Services, Geschäftsmodelle des Zukunftsweb in Österreich
1. Konzepte, Services, Geschäftsmodelle des Zukunftsweb in Österreich
Position Statement
Webinar as part of the "Zukunftsweb" project
28 January 2010, 16:30 to 18:00
www.zukunftsweb.at
Georg Güntner
Salzburg Research Forschungsgesellschaft m.b.H.
Jakob Haringer Straße 5/3 | 5020 Salzburg, Austria
T +43.662.2288-DW | F +43.662.2288-222
georg.guentner@salzburgresearch.at
www.salzburgresearch.at