This presentation is supplementary material for the paper "Putting FAIR Principles in the Context of Research Information: FAIRness for CRIS and CRIS for FAIRness" (Otmane Azeroual, Joachim Schöpfel, Janne Pölönen and Anastasija Nikiforova), presented at the 14th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management (IC3K), where it received the Best Paper Award. In this presentation we argue that improving FAIRness is a dual, bidirectional process: CRIS promote and contribute to the FAIRness of data and infrastructures, while the FAIR principles push for further improvement of the underlying CRIS data model and format, positively affecting the sustainability of these systems and their underlying artifacts. CRIS are beneficial for FAIR, and FAIR is beneficial for CRIS.
See the text here -> https://www.scitepress.org/Link.aspx?doi=10.5220/0011548700003335
Cite as -> Azeroual, O.; Schöpfel, J.; Pölönen, J. and Nikiforova, A. (2022). Putting FAIR Principles in the Context of Research Information: FAIRness for CRIS and CRIS for FAIRness. In Proceedings of the 14th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management - KMIS, ISBN 978-989-758-614-9; ISSN 2184-3228, pages 63-71. DOI: 10.5220/0011548700003335
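As a toy illustration of the "CRIS for FAIRness" direction discussed above, the sketch below checks a single research-output record against a few simplified FAIR indicators. The field names and indicator choices are invented for illustration; they are not the assessment framework of the paper.

```python
# Illustrative sketch only: a record is assumed to be a flat dict of
# metadata fields; the indicators below are simplified stand-ins for
# the much richer FAIR principles.

def fairness_indicators(record: dict) -> dict:
    """Return a few binary FAIR indicators for one research-output record."""
    return {
        # Findable: a globally unique, persistent identifier is present.
        "findable": bool(record.get("doi")),
        # Accessible: the record points to a resolvable access URL.
        "accessible": bool(record.get("access_url")),
        # Interoperable: metadata declare a recognised standard/vocabulary.
        "interoperable": record.get("metadata_standard")
        in {"CERIF", "Dublin Core", "DataCite"},
        # Reusable: a machine-readable licence is attached.
        "reusable": bool(record.get("license")),
    }

record = {
    "doi": "10.5220/0011548700003335",
    "access_url": "https://www.scitepress.org/Link.aspx?doi=10.5220/0011548700003335",
    "metadata_standard": "CERIF",
    "license": "CC-BY-4.0",
}
print(fairness_indicators(record))  # all four indicators hold for this record
```

A CRIS that stores persistent identifiers, access URLs, a standard metadata format and licence information by design makes such checks trivially satisfiable, which is exactly the dual benefit the paper describes.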
This document discusses FAIR data principles and open data. It provides an overview of the FAIR data principles of Findable, Accessible, Interoperable and Reusable data. It outlines the benefits of open data in a big data world but also acknowledges the challenges of implementing open data practices. The document proposes establishing an African Open Data Forum and developing research data infrastructure, skills training, policies and strategies to support open science and FAIR data practices in Africa.
Susanna Sansone's talk at the "Beyond Open" Knowledge Dialogues/Open Data Hong Kong event on research data, hosted at the Hong Kong Innocentre on Monday 20 November 2017.
Presentation during the 14th Association of African Universities (AAU) Conference and African Open Science Platform (AOSP)/Research Data Alliance (RDA) Workshop in Accra, Ghana, 7-8 June 2017.
This document summarizes Simon Hodson's presentation on open science and FAIR data developments globally. Some key points:
1) There is a growing policy push for open research data, with funders and organizations adopting data sharing policies based on FAIR data principles of findability, accessibility, interoperability, and reusability.
2) Initiatives are working to build the international ecosystem of open science, including components for reporting research outputs, persistent identifiers, data standards, data repositories, and criteria for trustworthy data.
3) The African Open Science Platform aims to lay the foundations for open science in Africa through frameworks for policy, incentives, training, and technical infrastructure development.
4) International
Research data support: a growth area for academic libraries? (Robin Rice)
This document summarizes a presentation given by Robin Rice from the University of Edinburgh on research data management and the role of academic libraries. The presentation covered open science and the FAIR data principles, drivers for research data management policy changes, examples of research data management services, and the changing skills needed in academic libraries to support research data. It provided an overview of the University of Edinburgh's research data services, which include tools and support across the data lifecycle from writing data management plans to long-term data preservation. The presentation also discussed the skills important for data librarians and ways for librarians to develop skills in open science and research data management.
Open Data Strategies and Research Data Realities (Martin Donnelly)
The document summarizes a presentation about facilitating open science training in Europe. It discusses the benefits of open data and research, including increased impact, accessibility, efficiency and transparency. However, it also notes challenges like privacy, recognition issues, and technical limitations. Emerging consensus supports the "FAIR" principles of findable, accessible, interoperable and reusable data. The presentation provides guidance on open data strategies, including having a data management plan, describing and archiving data appropriately, and using standards. It emphasizes communication and seeking help from research support organizations.
Stewardship data guidelines - Research Information Network, Jan 2008 (Eldad Sotnick-Yogev)
Although dated (January 2008), this document serves as an excellent introduction to the questions any organisation needs to ask as it brings in a Data Management Platform (DMP). From page 6 onwards, the questions highlighted are effective in helping think through the roles, rights, responsibilities and relationships that need to be accounted for.
Supporting Research Data Management in UK Universities: the Jisc Managing Res... (L Molloy)
Research data management in the UK: interventions by the Jisc Managing Research Data programme and the Digital Curation Centre. Specifies the importance of academic librarians for RDM. Includes links to openly available training resources. Presentation by L Molloy to ExLibris event, 'Excellence in Academic Knowledge Management', Utrecht, 29 October 2013.
Big Data Europe: SC6 Workshop 3: The European Research Data Landscape: Opport... (BigData_Europe)
Slides of the keynote at the 3rd Big Data Europe SC6 Workshop, co-located with SEMANTiCS2018 in Amsterdam (NL), on "The European Research Data Landscape: Opportunities for CESSDA" by Peter Doorn, Director of DANS; Chair, Science Europe Working Group on Research Data; Chair, CESSDA ERIC General Assembly.
The document summarizes the Jisc Managing Research Data Programme which aims to support universities in improving research data management. It discusses why managing research data is important, highlighting funder policies and the benefits of open data. It provides an overview of Jisc's activities including training projects, guidance resources, and funding for institutional infrastructure services and repositories. The presentation emphasizes the importance of institutional policies, support services, skills development and cultural change to effectively manage research data in line with funder expectations.
056 - Science Europe Draft Proposal for a Science Europe position statement on ... (innovationoecd)
The document proposes core principles for research information systems to adopt in order to support the constant evolution of research. The principles are flexibility, openness, adherence to FAIR data principles, and minimizing data entry. It also outlines four follow-up actions organizations can take to work towards implementing the principles: combining data from different sources, improving funder and grant identification, adopting researcher identifiers like ORCID, and documenting subject classification systems.
Slides presented at the Spanish Agency of Science and Technology (FECYT) and the network of Spanish repositories (RECOLECTA) Research Data Management Webinar Series - see url:
http://www.recolecta.net/buscador/webminars.jsp
SCONUL Summer Conference 2018 - Paul Feldman (sconul)
The document outlines Jisc's strategic priorities for 2020 related to learning and teaching which include better student outcomes through personalized learning, improved planning and management of technology enhanced learning, and the delivery of high-quality and cost-effective blended learning. It also discusses using learning analytics and other data to create more efficient campuses and improve teaching, curricula, retention and attainment. The priorities are aimed at responding to changes in the external landscape and expanding provision through new digital models.
Ingrid Dillo from DANS (Data Archiving and Networked Services, the Netherlands) discusses data sharing and the FAIR principles. She explains that data sharing is important for research validation, reuse, and building on prior work. However, ensuring data quality and trust is key. The FAIR principles provide guidelines for findable, accessible, interoperable and reusable data. Certification mechanisms like CoreTrustSeal help create trust in digital repositories. While open data is important, responsible data management practices are also needed. Guidelines have been developed to help researchers and institutions in the arts and humanities domain apply FAIR principles to their work.
Paul Jeffreys - Research Integrity: Institutional Responsibility (Jisc)
This document summarizes a presentation given at a research integrity conference about the actions the University of Oxford is taking to meet its responsibilities regarding research data management. The university recognizes data management as important for ensuring research integrity and is coordinating various digital services through developing policies, overseeing data management, addressing funding, and creating a university-wide research data catalogue and repository. While still in early stages, the university aims to provide sustainable data services and ensure long-term access to and integrity of research data.
This document discusses several studies on user engagement in research data curation. It finds that institutional repositories for data were developed without input from researchers, leading to systems that did not meet researchers' needs. Barriers to open data sharing included concerns over commercial use and maintaining ownership. Successful data curation requires understanding disciplinary differences and developing trusted relationships with researchers through dialogue early in projects.
Rachel Bruce - UK research and data management: where are we now (Jisc)
The document discusses the state of research data management in UK universities. It finds that while areas like data cataloguing and access/storage systems are progressing, governance of data access/reuse and digital preservation/planning are lagging. Barriers to progress include low researcher priority, funding availability, and lack of staff/infrastructure. Gaps include defining responsibilities, standards, costs, and tools. Coordination and sharing resources across institutions is needed to help universities advance research data management.
Ross Wilkinson - Data Publication: Australian and Global Policy Developments (Wiley)
Australia invests AUD 1-2 billion per annum in research data. Like most countries, it wants to get the best possible return on this data. Europe is spending €1.4B on its open data "pilot". This means the data should be FAIR: findable, accessible, interoperable, and reusable. Part of this is that data should be routinely "published" and available in a "data repository". But what does this mean?
Ross Wilkinson
CEO, Australian National Data Service
Presented at the 2015 Wiley Publishing Seminar, 5 November, Melbourne, Australia.
The Spanish Open Research Data Network. Lessons learned (maredata)
This document summarizes a presentation about Maredata, a Spanish network focused on open research data management. The network brings together Spanish research teams working on topics like interoperability, access, preservation, and open data policies. It aims to coordinate these groups, avoid duplications in research, and promote transparency. The benefits of open research data discussed include increased collaboration, validation of results, and transparency. Future areas of focus for the network include identifying discipline-specific research data management needs, exploring open health data, and addressing issues like data protection, quality, and ethics.
This document discusses managing research data for open science based on the UK experience. It outlines key aspects of open science such as making research more open, global, collaborative and closer to society. The document discusses mandates for open research data from funding bodies in the UK and EU, including stipulations in Horizon 2020 and requirements from EPSRC. It defines what constitutes research data and examines challenges around research data management, including technology issues, people issues, policy issues and resources. The importance of data skills training for researchers and data professionals is also covered.
Biesenbender - The research core dataset as a standard for research information (innovationoecd)
The document discusses the development of the Research Core Dataset (RCD) as a voluntary standard for research information in Germany. It notes the need for research information standards to reduce reporting burdens and ensure comparability. The RCD was developed through a multi-stakeholder process to define standard definitions for key research output and process data in a way that can be customized for different reporting needs while imposing minimal requirements on institutions. Next steps include voluntary implementation of the RCD standard and establishment of support for institutions to assist with adoption.
NordForsk Open Access Reykjavik 14-15/8-2014: Finnish data initiative (NordForsk)
This document summarizes Finland's Research Data Initiative from 2009-2017. The initiative aimed to develop a sustainable research data infrastructure in Finland by providing services like data storage, metadata, and long-term preservation. It also sought to encourage open data sharing and reuse. The initiative progressed from early planning projects to establishing core services. Lessons learned include the importance of flexible governance, permanent preservation, embracing change through openness, and addressing cultural shifts around data sharing. The initiative aims to enhance research through improved access, collaboration and reuse of scientific data.
The document summarizes Susanna-Assunta Sansone's presentation on enabling FAIR (Findable, Accessible, Interoperable, Reusable) digital resources. It discusses the driving forces behind FAIR including reproducibility crises, new data types, and changing publishing. It then outlines community efforts to develop standards, policies, and tools to improve metadata and data sharing according to FAIR principles. These include domain-specific standards, the FAIRsharing registry, metrics to assess FAIRness, and ongoing work to provide FAIR guidance and services.
Data Quality for AI or AI for Data quality: advances in Data Quality Manageme... (Anastasija Nikiforova)
"Data is the new oil" is only partly true: according to Forbes, data is more than oil, and according to Ataccama, "Manual Data Quality Doesn't Cut It in 2023". This was the main driver behind my guest lecture entitled "Data Quality for AI or AI for Data Quality: advances in Data Quality Management for the success and sustainability of emerging technologies, business and society", in which we discussed the role of artificial intelligence in data quality management and the role of data quality for AI, concluding that it is not about "data quality for AI" OR "AI for data quality" but rather about AND.
We also looked at the current market offer of AI-driven data quality management solutions, their pros and cons, the prerequisites to take into account when using them (e.g., metadata and their quality, for solutions that derive DQ rules from metadata analysis), and how a potentially more promising solution could be built.
We also looked at the data quality specificities to consider depending on the artifact - a data object (dataset) whose owner is known or unknown (open data), an Information System, a Data Warehouse, a Data Lake, a Data Lakehouse, or a Data Mesh: where, when and how does DQ take place in them? What are the current trends? And are these indeed trends, or rather hype?
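The metadata-driven idea mentioned above (deriving DQ rules from metadata analysis) can be sketched roughly as follows: profile a reference dataset, turn the profile into validation rules, and apply those rules to new values. Field names and thresholds here are invented for illustration; real AI-driven DQ tools do considerably more.

```python
# Hedged sketch: derive a simple range-check rule from a column profile.
# "age" and the 10% tolerance are illustrative assumptions, not part of
# any specific product's rule-derivation logic.

def profile(rows, column):
    """Collect minimal metadata (null rate, value range) for one column."""
    values = [r[column] for r in rows if r.get(column) is not None]
    return {
        "null_rate": 1 - len(values) / len(rows),
        "min": min(values),
        "max": max(values),
    }

def derive_rule(meta, tolerance=0.1):
    """Turn a column profile into a validation predicate."""
    span = (meta["max"] - meta["min"]) * tolerance
    lo, hi = meta["min"] - span, meta["max"] + span
    def rule(value):
        # Reject nulls and values far outside the observed range.
        return value is not None and lo <= value <= hi
    return rule

reference = [{"age": 25}, {"age": 40}, {"age": 33}, {"age": 58}]
age_rule = derive_rule(profile(reference, "age"))
print(age_rule(30), age_rule(500))  # in-range value passes, outlier fails
```

The point of the sketch is the dependency it makes visible: the derived rule is only as good as the metadata (here, the profile) it was derived from, which is exactly the prerequisite discussed in the lecture.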
Towards High-Value Datasets determination for data-driven development: a syst... (Anastasija Nikiforova)
Slides for the talk delivered at the EGOV-CeDEM-ePart 2023 (EGOV2023) conference, examining how HVD determination has been reflected in the literature over the years and what these studies have found to date, incl. the indicators used, the stakeholders involved, data-related aspects, and frameworks, based on a Systematic Literature Review.
Read the paper here -> https://link.springer.com/chapter/10.1007/978-3-031-41138-0_14
Public data ecosystems in and for smart cities: how to make open / Big / smar...Anastasija Nikiforova
This is a set of slides used as part of my keynote "Public data ecosystems in and for smart cities: how to make open / Big / smart / geo data ecosystems value-adding for SDG-compliant Smart Living and Society 5.0" delivered at the 5th International Conference on Advanced Research Methods and Analytics (CARMA 2023) -> https://carmaconf2023.wordpress.com/keynote-speakers/. Read more here -> https://anastasijanikiforova.com/2023/06/30/keynote-at-the-5th-international-conference-on-advanced-research-methods-and-analytics-carma-2023/
Artificial Intelligence for open data or open data for artificial intelligence?Anastasija Nikiforova
This is a presentation used to deliver an invited talk for Babu Banarasi Das University (BBDU, Department of Computer Science and Engineering) Development Program «Artificial Intelligence for Sustainable Development» organized by AI Research Centre, Department of Computer Science & Engineering, ShodhGuru Research Labs, Soft Computing Research Society, IEEE UP Section, Computational Intelligence Society Chapter in 2022. Read more here -> https://anastasijanikiforova.com/2022/09/24/ai-for-open-data-or-open-data-for-ai-an-invited-talk-for-bbdu-development-program-artificial-intelligence-for-sustainable-development%f0%9f%8e%a4/
Overlooked aspects of data governance: workflow framework for enterprise data...Anastasija Nikiforova
This presentation is a supplementary material for the article "Overlooked aspects of data governance: workflow framework for enterprise data deduplication" (Azeroual, Nikiforova, Shei) presented at The International Conference on Intelligent Computing, Communication, Networking and Services (ICCNS2023).
Abstract of the paper: Data quality in companies is decisive and critical to the benefits their products and services can provide. However, in heterogeneous IT infrastructures, where different applications for Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), product management, manufacturing, and marketing are used, duplicates occur, e.g., multiple entries for the same customer or product in a database or information system. There can be several reasons for this, but the result of non-unique or duplicate records is degraded data quality, which ultimately leads to poorer, inefficient, and inaccurate data-driven decisions. For this reason, in this paper we develop a conceptual data governance framework for effective and efficient management of duplicate data and for improving data accuracy and consistency in large data ecosystems. We present methods and recommendations for companies to deal with duplicate data in a meaningful way.
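The paper's actual framework is not reproduced here; as a minimal sketch of the kind of duplicate detection such a framework has to support, consider matching customer records by shared email or by similarity of normalised names (toy records and threshold below are hypothetical, for illustration only):

```python
from difflib import SequenceMatcher

# Toy customer records as they might appear in two systems (hypothetical data)
records = [
    {"id": 1, "name": "Anna Berg",  "email": "anna.berg@example.com"},
    {"id": 2, "name": "Berg, Anna", "email": "anna.berg@example.com"},
    {"id": 3, "name": "John Smith", "email": "j.smith@example.com"},
]

def normalize(name: str) -> str:
    """Canonicalise a name: lower-case, strip commas, sort tokens."""
    tokens = name.replace(",", " ").lower().split()
    return " ".join(sorted(tokens))

def likely_duplicates(recs, threshold=0.85):
    """Pairwise comparison: same email, or highly similar normalised names."""
    pairs = []
    for i in range(len(recs)):
        for j in range(i + 1, len(recs)):
            a, b = recs[i], recs[j]
            same_email = a["email"] == b["email"]
            name_sim = SequenceMatcher(
                None, normalize(a["name"]), normalize(b["name"])
            ).ratio()
            if same_email or name_sim >= threshold:
                pairs.append((a["id"], b["id"]))
    return pairs

print(likely_duplicates(records))  # [(1, 2)]
```

In a real deduplication workflow this pairwise step would be preceded by blocking (to avoid comparing every pair) and followed by survivorship rules deciding which record to keep.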
Data Quality as a prerequisite for your business success: when should I start ...Anastasija Nikiforova
These are slides for my talk "Data Quality as a prerequisite for your business success: when should I start taking care of it?", which I delivered as an invited keynote for the HackCodeX Forum that gathered international experts to share their experience and knowledge of emerging technologies and areas such as Artificial Intelligence, Security, Data Quality, Quantum Computing, Sustainability, Open Data, Privacy, etc.
Framework for understanding quantum computing use cases from a multidisciplin...Anastasija Nikiforova
This presentation is a supplementary material for the article "Framework for understanding quantum computing use cases from a multidisciplinary perspective and future research directions" (Ukpabi, D.C., Karjaluoto, H., Botticher, A., Nikiforova, A., Petrescu, D.I., Schindler, P., Valtenbergs, V., Lehmann, L., & Yakaryılmaz, A) available at https://arxiv.org/ftp/arxiv/papers/2212/2212.13909.pdf. THe presentation, however, was delivered for QWorld Quantum Science Days 2023 | May 29-31.
Data Lake or Data Warehouse? Data Cleaning or Data Wrangling? How to Ensure t...Anastasija Nikiforova
This presentation was delivered as part of the Data Science Seminar titled “When, Why and How? The Importance of Business Intelligence“ organized by the Institute of Computer Science (University of Tartu) in cooperation with Swedbank.
In this presentation I talked about:
*“Data warehouse vs. data lake – what are they and what is the difference between them?” (structured vs unstructured, static vs dynamic (real-time) data, schema-on-write vs schema-on-read, ETL vs ELT), with further elaboration on their goals and purposes, their target audience, and their pros and cons.
*“Is the data warehouse the only data repository suitable for BI?” – no, (today) data lakes can also be suitable. Even more, both are considered the key to “a single version of the truth”. Although, if descriptive BI is the only purpose, it might still be better to stay within a data warehouse. But if you want to have predictive BI or use your data for ML (or do not yet have a specific idea of how you want to use the data, but want to be able to explore it effectively and efficiently), then a data warehouse might not be the best option.
*“So, the data lake will save me a lot of resources, because I do not have to worry about how to store/allocate the data – just put it all in one storage and voilà?!” – no, in this case your data lake will turn into a data swamp! And you are forgetting about the data quality you should (must!) be thinking of!
*“But how do you prevent the data lake from becoming a data swamp?” – in short and simple terms, proper data governance & metadata management is the answer (but it is not as easy as it sounds – do not forget about your data engineer and be friendly with them [always… literally always :D]), and also think about the culture in your organization.
*“So, the use of a data warehouse is the key to high-quality data?” – no, it is not! Having ETL does not guarantee the quality of your data (transform & load is not data quality management). Think about data quality regardless of the repository!
*“Are data warehouses and data lakes the only options to consider, or are we missing something?” – we are missing something! The data lakehouse!
*“If a data lakehouse is a combination of the benefits of a data warehouse and a data lake, is it a silver bullet?” – no, it is not! It is another, relatively immature, option to consider that may be the best fit for you, but not a panacea. Dealing with data is not easy (still)…
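The schema-on-write vs schema-on-read contrast from the first point above can be sketched in a few lines of Python (an illustrative toy only; the record shape and field names are invented for the example):

```python
import json

# Schema-on-write (warehouse-style): validate/shape the record BEFORE storing.
def store_on_write(store, record):
    shaped = {"id": int(record["id"]), "amount": float(record.get("amount", 0))}
    store.append(shaped)          # only clean, typed rows ever land in the store

# Schema-on-read (lake-style): store the raw payload as-is.
def store_on_read(store, raw):
    store.append(raw)             # anything goes in, no validation at ingest

# ...and apply the schema only when the data are queried.
def read_with_schema(store):
    for raw in store:
        rec = json.loads(raw)
        yield {"id": int(rec["id"]), "amount": float(rec.get("amount", 0))}

warehouse, lake = [], []
store_on_write(warehouse, {"id": "1", "amount": "9.5"})
store_on_read(lake, '{"id": "1", "amount": "9.5"}')

print(warehouse)                      # [{'id': 1, 'amount': 9.5}]
print(list(read_with_schema(lake)))   # same rows, but shaped only at read time
```

The trade-off the talk describes falls out directly: the warehouse pays the shaping cost once at ingest, while the lake defers it to every read, and if nobody ever defines `read_with_schema`, the lake is on its way to becoming a swamp.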
In addition, in this talk I also briefly introduced the ongoing research into the integration of the data lake as a data repository with data wrangling, seeking increased data quality in IS. In short, this is somewhat like an improved data lakehouse, where we emphasize the need for data governance and data wrangling to be integrated to really get the benefits that data lakehouses promise (although we still call it a data lake, since the data lakehouse is not a sufficiently mature concept and has differing definitions).
Open data hackathon as a tool for increased engagement of Generation Z: to h...Anastasija Nikiforova
This is presentation for the paper "Open data hackathon as a tool for increased engagement of Generation Z: to hack or not to hack?" presented at EGETC2022.
A hackathon is known as a form of civic innovation in which participants representing citizens can point out existing problems or social needs and propose a solution. Given the high social, technical, and economic potential of open government data (OGD), the concept of open data hackathons is becoming popular around the world. In Latvia, it has become popular through annual hackathons organised for a specific cluster of citizens – Generation Z. This study presents the latest findings on the role of open data hackathons and the benefits that they can bring to society, participants, and government. First, a systematic literature review is carried out to establish a knowledge base. Then, empirical research on 4 case studies of open data hackathons for Generation Z participants held between 2018 and 2021 in Latvia is conducted to understand which ideas dominated and what the main results of these events were for the OGD initiative. It demonstrates that, despite the widespread belief that young people are indifferent to current societal and natural problems, the ideas developed correspond to the current situation and are aimed at solving these problems, revealing aspects for improvement in the provision of data, infrastructure, culture, and government-related areas.
Barriers to Openly Sharing Government Data: Towards an Open Data-adapted Inno...Anastasija Nikiforova
This is the presentation for our ongoing study "Barriers to Openly Sharing Government Data: Towards an Open Data-adapted Innovation Resistance Theory" (Anastasija Nikiforova, Anneke Zuiderwijk), presented at the ICEGOV2022 conference – the 15th International Conference on Theory and Practice of Electronic Governance (nominated for the Best Paper Award).
In short, the study aims to develop an Open Government Data-adapted Innovation Resistance Theory model to empirically identify predictors affecting public agencies’ resistance to openly sharing government data. Here we want to understand:
💡what are the functional and behavioural factors that facilitate or hamper the opening of government data by public organizations?
💡does IRT provide a new and more complete insight compared to the more traditional UTAUT and TAM? IRT has not yet been applied in this domain, so we are checking whether it should be considered, or whether the models we are already so familiar with are the best ones.
💡and additionally, did the COVID-19 pandemic have an [obvious/significant] effect on public agencies in terms of their readiness or resistance to openly share government data?
Based on a review of the literature on both IRT research and barriers associated with open data sharing by public agencies, we developed an initial version of the model. Once the model is refined in a qualitative study (interviews with public agencies), we will validate it to study the resistance of public authorities to openly sharing government data in a quantitative study.
Read the paper and cite as -> Nikiforova A., Zuiderwijk A. (2022) Barriers to openly sharing government data: towards an open data-adapted innovation resistance theory, In 15th International Conference on Theory and Practice of Electronic Governance (ICEGOV 2022). Association for Computing Machinery, New York, NY, USA, 215–220, https://doi.org/10.1145/3560107.3560143 – best paper award nominee
Combining Data Lake and Data Wrangling for Ensuring Data Quality in CRISAnastasija Nikiforova
This presentation is a supplementary material for the "Combining Data Lake and Data Wrangling for Ensuring Data Quality in CRIS" presented at 15th International Conference on Current Research Information Systems (CRIS2022) - Linking Research Information across data spaces. It provides an insight on the ongoing study of combining data lake as a data repository and data wrangling seeking for an increased data quality in CRIS systems, although the proposed approach is domain-agnostic and can be used not only within CRIS.
Read the article here -> Azeroual, O., Schöpfel, J., Ivanovic, D., & Nikiforova, A. (2022, May). Combining Data Lake and Data Wrangling for Ensuring Data Quality in CRIS. In CRIS2022: 15th International Conference on Current Research Information Systems --> https://hal.archives-ouvertes.fr/hal-03694519/
The role of open data in the development of sustainable smart cities and smar...Anastasija Nikiforova
This presentation is a supplementary material for the guest lecture "The role of open data in the development of sustainable smart cities and smart society" I delivered for the Federal University of Technology – Paraná (Universidade Tecnológica Federal do Paraná (UTFPR)) (Brazil, May 2022).
Data security as a top priority in the digital world: preserve data value by ...Anastasija Nikiforova
Today, in the age of information and Industry 4.0, billions of data sources, including but not limited to interconnected devices (sensors, monitoring devices) forming Cyber-Physical Systems (CPS) and the Internet of Things (IoT) ecosystem, continuously generate, collect, process, and exchange data. With the rapid increase in the number of devices and information systems in use, the amount of data is increasing. Moreover, due to digitization and the variety of data being continuously produced and processed with reference to Big Data, the value of these data is also growing; as a result, so is the risk of security breaches and data leaks. The value of data, however, depends on several factors, of which data quality and data security (which can affect data quality if the data are accessed and corrupted) are the most vital. Data serve as the basis for decision-making and as input for models, forecasts, simulations etc., which can be of high strategic and commercial / business value. This has become even more relevant during the COVID-19 pandemic, which, in addition to affecting the health, lives, and lifestyle of billions of citizens globally and making life even more digitized, has had a significant impact on business, especially given the challenges companies have faced in maintaining business continuity in this so-called “new normal”. However, in addition to the cybersecurity threats caused by changes directly related to the pandemic and its consequences, many previously known threats have become even more attractive targets for intruders and hackers. Every year millions of personal records become available online. Moreover, the popularity of IoT search engines (IoTSE) has lowered the complexity of searching for connected devices on the internet, providing easy access even for novices thanks to widespread step-by-step guides on how to use an IoT search engine to find, and gain access to, insufficiently protected webcams, routers, databases and other artifacts.
Recent research has demonstrated that weak data and database protection in particular is one of the key security threats. Various measures can be taken to address the issue. The aim of the study to which this presentation refers is to examine whether “traditional” vulnerability registries provide a sufficiently comprehensive view of DBMS security, or whether DBMSs should be intensively and dynamically inspected by their holders using Internet of Things Search Engines, moving towards a sustainable and resilient digitized environment. The paper brings attention to this problem and makes the reader think about data security before looking for and introducing more advanced security and protection mechanisms, which, in the absence of the above, may bring no value.
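The study's own tooling relies on IoT search engines rather than direct scanning, and is not reproduced here; but the underlying risk (databases listening on well-known default ports, reachable from outside) can be sketched with a local-only port check. The port list is illustrative, not exhaustive, and such checks should only ever be run against hosts you own or are authorised to test:

```python
import socket

# Default ports of common database systems (illustrative selection)
DB_PORTS = {"MySQL": 3306, "PostgreSQL": 5432, "MongoDB": 27017, "Redis": 6379}

def open_db_ports(host: str, timeout: float = 1.0):
    """Return the names of common database ports accepting TCP connections."""
    exposed = []
    for name, port in DB_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the connect succeeded
                exposed.append(name)
    return exposed

# Check only your own machine:
print(open_db_ports("127.0.0.1"))
```

A reachable port is of course only the first indicator; the paper's point is that whether such an endpoint then allows unauthenticated access is exactly what "traditional" vulnerability registries do not show.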
IoTSE-based Open Database Vulnerability inspection in three Baltic Countries:...Anastasija Nikiforova
This presentation is devoted to the "IoTSE-based Open Database Vulnerability inspection in three Baltic Countries: ShoBEVODSDT sees you" research paper developed by Artjoms Daskevics and Anastasija Nikiforova and presented during the The International conference on Internet of Things, Systems, Management and Security (IOTSMS2021) co-located with The 8th International Conference on Social Networks Analysis, Management and Security (SNAMS2021), December 6-9, 2021, Valencia, Spain (online)
Read paper here -> Daskevics, A., & Nikiforova, A. (2021, December). IoTSE-based open database vulnerability inspection in three Baltic countries: ShoBEVODSDT sees you. In 2021 8th International Conference on Internet of Things: Systems, Management and Security (IOTSMS) (pp. 1-8). IEEE -> https://ieeexplore.ieee.org/abstract/document/9704952?casa_token=NfEjYuud0wEAAAAA:6QxucVPuY762I3qzD6D_oWqa0B9eMUFRNMG-E7dyHKohSYIzI0bH1V9bLaAcly_Lp-Ll52ghO5Y
Stakeholder-centred Identification of Data Quality Issues: Knowledge that Can...Anastasija Nikiforova
This presentation is a supplementary material for the research paper "Stakeholder-centred Identification of Data Quality Issues: Knowledge that Can Save Your Business" (authored by Anastasija Nikiforova and Natalija Kozmina), presented at the International Conference on Intelligent Data Science Technologies and Applications (IDSTA2021), November 15-16, 2021, Tartu, Estonia (web-based)
Read paper here -> Nikiforova, A., & Kozmina, N. (2021, November). Stakeholder-centred Identification of Data Quality Issues: Knowledge that Can Save Your Business. In 2021 Second International Conference on Intelligent Data Science Technologies and Applications (IDSTA) (pp. 66-73). IEEE -> https://ieeexplore.ieee.org/abstract/document/9660802?casa_token=LFJa20LrXAwAAAAA:wVwhTcCPWqxdloAvDQ3-l98KkkLx70xzG3zNvIIkJbC6wvJ4VxwX_VGc3mmW_7c1T-QJlOtTiao
ShoBeVODSDT: Shodan and Binary Edge based vulnerable open data sources detect...Anastasija Nikiforova
This presentation is devoted to the "ShoBeVODSDT: Shodan and Binary Edge based vulnerable open data sources detection tool or what Internet of Things Search Engines know about you" research paper developed by Artjoms Daskevics and Anastasija Nikiforova and presented during the The International Conference on Intelligent Data Science Technologies and Applications (IDSTA2021), November 15-16, 2021. Tartu, Estonia (web-based).
Read paper here -> Daskevics, A., & Nikiforova, A. (2021, November). ShoBeVODSDT: Shodan and Binary Edge based vulnerable open data sources detection tool or what Internet of Things Search Engines know about you. In 2021 Second International Conference on Intelligent Data Science Technologies and Applications (IDSTA) (pp. 38-45). IEEE.
OPEN DATA: ECOSYSTEM, CURRENT AND FUTURE TRENDS, SUCCESS STORIES AND BARRIERSAnastasija Nikiforova
"OPEN DATA: ECOSYSTEM, CURRENT AND FUTURE TRENDS, SUCCESS STORIES AND BARRIERS" set of slides was prepared for the Guest Lecture, which I has delivered to the students of the University of South-Eastern Norway (USN), October 2021
Invited talk "Open Data as a driver of Society 5.0: how you and your scientif...Anastasija Nikiforova
This presentation was prepared as part of my talk on openness (open data and open science) in the context of Society 5.0 during the International Conference and Expo on Nanotechnology and Nanomaterials. It was very pleasant to receive an invitation to deliver a talk on my recently published article "Smarter Open Government Data for Society 5.0: Are Your Open Data Smart Enough?" (Sensors 2021, 21(15), 5204), which I entitled “Open Data as a driver of Society 5.0: how you and your scientific outputs can contribute to the development of the Super Smart Society and transformation into Smart Living?“. The paper has been briefly discussed in my previous post, thus just a few words on this talk and the overall experience.
Towards enrichment of the open government data: a stakeholder-centered determ...Anastasija Nikiforova
This set of slides is a part of the presentation prepared and delivered in the scope of the 14th International Conference on Theory and Practice of Electronic Governance (ICEGOV 2021), 6-8 October, 2021, Smart Digital Governance for Global Sustainability
It is based on the paper -> Nikiforova, A. (2021, October). Towards enrichment of the open government data: a stakeholder-centered determination of High-Value Data sets for Latvia. In 14th International Conference on Theory and Practice of Electronic Governance (pp. 367-372) -> https://dl.acm.org/doi/abs/10.1145/3494193.3494243?casa_token=bPeuwmFWwQwAAAAA:ls-xXIPK5uXDHyxtBxqsMJOCuV6ud_ip59BX8n78uJnqvql6e8H9urlDG9zzeNklRmGFwI4sCXU06w
The open lecture "The Potential of Open Data" took place within the University of Latvia (LU) Faculty of Social Sciences master's course “Data Society Management” and was delivered by Dr.sc.comp. Anastasija Ņikiforova, assistant professor and researcher at the LU Faculty of Computing.
Open data are considered a valuable resource whose use can potentially deliver significant economic, technological and social benefits. Achieving this, however, requires a number of preconditions concerning the data, the infrastructure and the users; i.e., the success factor of an open data initiative is the creation and maintenance of a sustainable open government data ecosystem. The aim of the lecture is to provide insight into the popularity of open data and their potential for the development of technological and economic processes, paying attention to their practical applications both in Latvia and abroad, transforming data into (innovative) solutions and services. It is also planned to provide insight into the most important aspects that can potentially foster the creation of a sustainable open data ecosystem, enabling anyone interested to transform open data into value.
PhD, Dr.sc.comp. Anastasija Ņikiforova is an assistant professor at the University of Latvia Faculty of Computing and a researcher at the Innovative Information Technologies Laboratory. Dr. Ņikiforova's research interests relate to data management, particularly data quality, and open data. At the LU Faculty of Computing, in addition to other courses she teaches, she has developed the special seminar “Open Data and Data Quality” and the master's programme course “Open Government Data in a Data-Driven World”. Dr. Ņikiforova is an expert of the Latvian Council of Science in Engineering and Technology (Electrical Engineering, Electronics, Information and Communication Technologies) and Natural Sciences (Computer Science and Informatics), as well as an associate member of LATA (Latvian Open Technology Association). She is a (co-)author of more than 25 scientific papers, 4 of which are published in top-ranked Q1 journals.
Project Management Semester Long Project - Acuityjpupo2018
Acuity is an innovative learning app designed to transform the way you engage with knowledge. Powered by AI technology, Acuity takes complex topics and distills them into concise, interactive summaries that are easy to read & understand. Whether you're exploring the depths of quantum mechanics or seeking insight into historical events, Acuity provides the key information you need without the burden of lengthy texts.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and the licenses under the CCB and CCX model have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new type of licensing works and what benefits it brings you. Above all, you surely want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some practices that can lead to unnecessary expense, e.g., when a person document is used instead of a mail-in for shared mailboxes. We show you such cases and their solutions. And of course we explain the new licensing model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and know-how to keep an overview. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
These topics will be covered
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of the presentation accompanying the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on communications and signalling systems on railways, which was held in the Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). It was attended by around 500 participants and 200 online followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
5. Ideas and approaches to help build your organization's AI strategy.
Webinar: Designing a schema for a Data WarehouseFederico Razzoli
Are you new to data warehouses (DWH)? Do you need to check whether your data warehouse follows the best practices for a good design? In both cases, this webinar is for you.
A data warehouse is a central relational database that contains all measurements about a business or an organisation. This data comes from a variety of heterogeneous data sources, which includes databases of any type that back the applications used by the company, data files exported by some applications, or APIs provided by internal or external services.
But designing a data warehouse correctly is a hard task, which requires gathering information about the business processes that need to be analysed in the first place. These processes must be translated into so-called star schemas, i.e., denormalised databases where each table represents either a dimension or facts.
We will discuss these topics:
- How to gather information about a business;
- Understanding dictionaries and how to identify business entities;
- Dimensions and facts;
- Setting a table granularity;
- Types of facts;
- Types of dimensions;
- Snowflakes and how to avoid them;
- Expanding existing dimensions and facts.
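The star-schema idea the webinar covers can be sketched with an in-memory SQLite database: one fact table at a stated granularity, referencing two dimension tables (table and column names here are invented for the example, not taken from the webinar):

```python
import sqlite3

# In-memory sketch of a minimal star schema: one fact table, two dimensions.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, day TEXT, month TEXT);
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
-- Fact table at 'one row per sale' granularity, referencing the dimensions.
CREATE TABLE fact_sales (
    date_id    INTEGER REFERENCES dim_date(date_id),
    product_id INTEGER REFERENCES dim_product(product_id),
    amount     REAL
);
INSERT INTO dim_date    VALUES (1, '2024-06-01', '2024-06');
INSERT INTO dim_product VALUES (1, 'Widget');
INSERT INTO fact_sales  VALUES (1, 1, 9.50), (1, 1, 2.50);
""")

# A typical analytical query: join the fact to its dimensions and aggregate.
cur.execute("""
    SELECT p.name, d.month, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    JOIN dim_date d    ON d.date_id = f.date_id
    GROUP BY p.name, d.month
""")
rows = cur.fetchall()
print(rows)  # [('Widget', '2024-06', 12.0)]
```

Every analytical question then follows the same join-fact-to-dimensions-and-aggregate pattern, which is exactly what makes the denormalised star shape attractive despite the redundancy it introduces.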
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
How to Get CNIC Information System with Paksim Ga.pptx
Putting FAIR Principles in the Context of Research Information: FAIRness for CRIS and CRIS for FAIRness
1. Otmane Azeroual, German Centre for Higher Education Research and Science Studies (DZHW), Germany
Joachim Schöpfel, GERiiCO Laboratory, University of Lille, France
Janne Pölönen, Federation of Finnish Learned Societies, Finland
Anastasija Nikiforova, Institute of Computer Science, University of Tartu, Estonia & European Open Science Cloud TF “FAIR metrics and data quality”
14th International Conference on Knowledge Management and Information Systems (KMIS), held in conjunction with the 14th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management (IC3K), Valletta, Malta, 24-26 October 2022
2. BACKGROUND AND PROBLEM STATEMENT
✔ Today, more and more data – both newly produced and collected, and legacy data available in paper form – are being digitized, and the research domain is no exception.
✔ Digitization means making data available (and integrated) in an electronic, machine-readable format for further use, which makes that use significantly more efficient.
✔ But even if data are digitized, it is not clear whether they are actually used and, more importantly, whether they are of sufficient quality and whether value and knowledge can be extracted from them.
✔ In other words, digitization does not necessarily imply data quality.
✔ The FAIR principles represent a promising asset to achieve this!
4. FAIR PRINCIPLES
✔ The FAIR principles were originally developed as guidelines or recommendations for the effective and efficient management and stewardship of research data as part of a new open science policy framework, with a “specific emphasis on enhancing the ability of machines to automatically find and use the data” in data repositories (Wilkinson et al., 2016)
✔ The FAIR principles have become a central element in the debate on and implementation of open science policies
✔ Today they are increasingly applied to metadata, identifiers, catalogs, software, and larger infrastructures such as the European Open Science Cloud
5. FAIR PRINCIPLES
✔ Since their publication, the FAIR principles have rapidly proliferated and have become part of (inter)national research funding programs
✔ A special feature of the FAIR principles is the emphasis on the legibility, readability, and understandability of data
✔ At the same time, they pose prerequisites for data regarding their reliability, trustworthiness, and quality
They are considered important for research information and respective systems such as CRIS, BUT this topic remains an underrepresented subject of research
Source: https://blog.orvium.io/fair-principles-in-scientific-data/
6. Supporting the call for a “one-stop-shop and register-once-use-many approach”, we argue that
CRIS is a key component of the research infrastructure landscape, directly targeted and enabled by the operational application and promotion of FAIR principles
We hypothesize that
the improvement of FAIRness is a bidirectional process, where CRIS promotes the FAIRness of data and infrastructures, and FAIR principles push further improvements to the underlying CRIS
Source: https://stock.adobe.com/fi/search?k=hypothesis
7. CURRENT RESEARCH INFORMATION SYSTEMS
✔ Current Research Information Systems (CRIS, also RIS and RIMS – Research Information Management Systems) are seen as:
“core elements of the technological solution since they provide rich additional meta-data on datasets and put the datasets and their meta-data into their proper context, and so significantly enhance the FAIRness of datasets” (Terheggen and Simons, 2016)
https://www.elsevier.com/research-intelligence/rims-and-cris-systems
✔ CRIS are not just data repositories, but a key component of the research infrastructure landscape/ecosystem, which is directly targeted and involved by the operational application and promotion of the FAIR principles.
8. [Figure: CRIS data flow, from internal & external sources to the (C)RIS front-end]
Based on Azeroual, O., Saake, G., & Wastl, J. (2018). Data measurement in research information systems: metrics for the evaluation of data quality. Scientometrics, 115(3), 1271-1290.
9. CURRENT RESEARCH INFORMATION SYSTEMS
https://www.elsevier.com/research-intelligence/rims-and-cris-systems
These three levels constitute a set of propositions:
✔ Research information management systems are helpful to assess the FAIRness of research data and data repositories
✔ Research information management systems contribute to the FAIRness of other research infrastructures
✔ Research information management systems can be improved through the application of the FAIR principles
Top 10 RIMS/RIS/CRIS benefits: Visibility; Strategic Decision Making; Collaboration; Societal Impact; Research Management; Reporting; Performance; Funding; Open Science; Assessments
10. WHY DOES IT MATTER?
✔ The annual cost of not having FAIR data is estimated at a minimum of €10.2 billion per year, plus €16.9 billion in lost innovation opportunities (PwC EU Services, 2018)
✔ The actual costs are likely to be significantly higher due to unquantifiable elements such as the value of improved research quality & other indirect effects of FAIR data
✔ The impact on innovation would account for over 60% of the likely costs, while the minimum cost – encompassing indicators such as time spent, license costs, research duplication, cost of storage and research retraction – accounts for the remaining 40%
✔ These indicators represent three areas applicable to all sectors (i.e., academia, private, public, non-profit) and can be described as
(1) impact on activities
(2) impact on collaboration
(3) impact on innovation
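The 60/40 split quoted above can be checked with a quick back-of-the-envelope calculation from the two figures on this slide (the PwC report itself gives a more detailed breakdown; this is only an illustration):

```python
# Figures from the slide (PwC EU Services, 2018), in EUR billion per year.
minimum_cost = 10.2     # quantifiable indicators: time spent, licenses, duplication, storage, retraction
innovation_cost = 16.9  # lost innovation opportunities

total = minimum_cost + innovation_cost      # ~27.1 billion EUR/year overall
innovation_share = innovation_cost / total  # ~0.62 -> "over 60% of the likely costs"
minimum_share = minimum_cost / total        # ~0.38 -> roughly "the remaining 40%"

print(f"total: {total:.1f} bn, innovation: {innovation_share:.0%}, minimum: {minimum_share:.0%}")
```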
11. DOES IT MATTER?
✔ In the European Commission Open Science policy, FAIR and open data sharing is one of the eight pillars of Open Science
✔ Similarly, in the UNESCO Recommendation on Open Science, FAIRness has become an essential feature of what has been called “open science culture” (a multi-dimensional/complex concept)
✔ In EOSC, the FAIRsFAIR project addresses “the development and concrete realization of an overall knowledge infrastructure on academic quality data management, procedures, standards, metrics and related matters, based on the FAIR principles”, as a kind of general reference/guide to best practices in Higher Education and Research.
EUROPEAN COMMISSION: OPEN SCIENCE POLICY PLATFORM – 8 PILLARS OF OPEN SCIENCE: Future of Scholarly Communication; European Open Science Cloud (EOSC); FAIR data; Skills; Research Integrity; Rewards; Altmetrics; Citizen Science
***At the moment, EOSC has begun its own research to define and introduce guidelines for the application of the FAIR principles to digital objects [not necessarily limited to the research domain], thereby expanding the scope to the entire digital environment and all data-based objects, including research objects such as scientific articles and software, available on the Internet.
12. METHOD
Exploratory study:
(1) an overview of the challenges associated with research data and research information management,
(2) analysis of each level based on a review of relevant studies,
(3) definition of perspectives for further research, thereby raising awareness of this topic and making a call for other researchers to refer to it.
Systematic Literature Review of the euroCRIS repository (!) – all relevant studies available in the euroCRIS repository were screened; 3 studies were found to be relevant to illustrate the approach.
The scope was extended by referring to projects and initiatives around the world, identified using a snowballing approach, i.e., referring to the projects covered in the selected studies, OR based on our own experience (at both regional & (inter)national levels, representing different communities).
euroCRIS was founded in 2002 as an international not-for-profit association that brings together experts on research information in general & RIS in particular.
One of the things that euroCRIS does is maintain a metadata standard known as the Common European Research Information Format (CERIF).
https://www.elsevier.com/research-intelligence/rims-and-cris-systems
!!!The low number of relevant studies points to the limited body of knowledge on this topic, thereby making this study unique and constituting a call for action!!!
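To make the CERIF mention above a little more concrete, the sketch below builds a heavily simplified CERIF-like XML record in Python. The element names (`Publication`, `Title`, `DOI`, ...) are illustrative assumptions only – the real CERIF model is far richer and uses its own namespaces and entity/link structure:

```python
import xml.etree.ElementTree as ET

def cerif_like_record(pub_id: str, title: str, year: str, doi: str) -> str:
    """Build a toy CERIF-style record; element names are simplified, not the real CERIF schema."""
    root = ET.Element("CERIF")
    pub = ET.SubElement(root, "Publication", attrib={"id": pub_id})
    ET.SubElement(pub, "Title").text = title
    ET.SubElement(pub, "PublicationYear").text = year
    ET.SubElement(pub, "DOI").text = doi  # persistent identifier: key to findability
    return ET.tostring(root, encoding="unicode")

xml_record = cerif_like_record(
    "pub-001",
    "Putting FAIR Principles in the Context of Research Information",
    "2022",
    "10.5220/0011548700003335",
)
print(xml_record)
```

The point of a standard format like CERIF is that any compliant CRIS can parse such a record and interpret each element the same way.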
13. Sweden: FAIRness of research outputs
✔ In 2017, the government of Sweden gave the Swedish Research Council and the National Library of Sweden parallel assignments to propose criteria and a method for assessing how well research data and scholarly publications produced by Swedish organizations comply with the FAIR principles.
✔ The suggested criteria aim at providing an “overall picture of FAIRness” of national research results, through the collected metadata:
(1) metadata quality (richness)
(2) licensing and persistent identifiers
(3) openness
(4) accessibility
(5) standard vocabularies.
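The five criteria above can be imagined as a simple metadata-driven check. The sketch below is a hypothetical illustration only – the field names, pass/fail logic, and vocabulary list are our assumptions, not the actual Swedish assessment method:

```python
def fairness_overview(metadata: dict) -> dict:
    """Toy check of one metadata record against the five suggested criteria.
    Field names and thresholds are hypothetical, for illustration only."""
    known_vocabularies = {"OECD FOS", "MeSH", "LCSH"}  # assumed examples of standard vocabularies
    return {
        "metadata_quality": sum(1 for v in metadata.values() if v) >= 5,  # richness: enough filled fields
        "licensing_and_pids": bool(metadata.get("license")) and bool(metadata.get("pid")),
        "openness": metadata.get("access_rights") == "open",
        "accessibility": bool(metadata.get("landing_page")),
        "standard_vocabularies": metadata.get("subject_scheme") in known_vocabularies,
    }

record = {
    "title": "Example dataset",
    "creator": "A. Researcher",
    "pid": "doi:10.1234/abcd",
    "license": "CC-BY-4.0",
    "access_rights": "open",
    "landing_page": "https://example.org/dataset/1",
    "subject_scheme": "OECD FOS",
}
print(fairness_overview(record))  # all five criteria pass for this record
```

Aggregating such per-record checks over all collected metadata is what yields an “overall picture of FAIRness” at the national level.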
14. Austria: FAIRness of research outputs
✔ Reflection on how the implementation of a commercial CRIS at the University of Vienna and the creation of a national network of CRIS managers from all Austrian universities (FIS/CRIS Austria) contributed to the visibility, findability, accessibility and interoperability of research information, through the development of standards (including identifiers and data models) and shared strategies.
✔ More recently, this network developed a tool that enables tracking and monitoring of the transition to open access, based on data stored in local CRIS, which are interoperable and connected with OpenAIRE.
15. Belgium: metadata-driven assessment of FAIRness and compliance with OS policy
✔ An “application profile for research data” based on experience with the Flemish research information system (FRIS), including various aspects of metadata such as description, discovery, contextualization, coupling of users, software and computing resources to data, research proposals, funding, project information, research outputs, outcomes, impact, etc., to assess FAIRness and compliance with open science policy
✔ A common metadata model and interoperability across multiple metadata models
16. To sum up…
✔ What these initiatives have in common is that CRIS are used to assess different levels of FAIRness of the object under assessment (as part of the open science assessment). This assessment is primarily based on the collection and processing of metadata
✔ They confirm that CRIS have the potential and capacity at the institutional, regional or national level to contribute to the monitoring of open science policies and, in particular, to the follow-up of projects aimed at improving the FAIRness of research data, research repositories and other related research infrastructures
✔ In addition, it has been clearly recognized that CRIS have the potential to support and facilitate more responsible research assessment systems that reward and incentivize researchers for open science practices, including open and FAIR data***
***NB: FAIR data are not necessarily open, although open FAIR data are the target to aim for!
17. CONTRIBUTING TO THE FAIRNESS OF RESEARCH INFRASTRUCTURES
The FAIR data principles provide a comprehensive framework and guidance on the criteria that well-preserved data must meet & on the standardization of data schemes
Several tools have been developed to assess the FAIRness of research data and/or data repositories: the Australian Research Data Commons (ARDC) FAIR Data self-assessment tool, the Dutch DANS FAIRdat tool, or the EUDAT FAIR Data Checklist***
***The results obtained using different tools tend to differ significantly, with a very vague understanding of what should be done to improve the result when another tool assessed the level of FAIRness as sufficient. These tools create new information silos and in most cases are not linked to professional assessment systems such as CRIS.
18. IMPROVING THE FAIRNESS OF CRIS
!!!We suggest applying the FAIR principles not only to data and information, BUT also to the upper level of the data or information management systems, i.e., CRIS
CRIS themselves can be improved by following and/or applying the FAIR principles
CRIS are not only improving the FAIRness of research data management, BUT the FAIR principles are also beneficial for the further development of sustainable and FAIR CRIS*
FAIR is typically discussed at three levels:
(1) the digital object (e.g., dataset, video, journal, book, etc.)
(2) metadata about this object at the elementary level, including title, creator, identifier, date, etc.
(3) metadata records with the reference to the body of metadata elements on the object in a specific database
*also in line with the Science Europe Position Statement on Research Information Systems, which suggests that “research information systems should foster the findability, accessibility, interoperability, and reusability of the data that they store by implementing the FAIR Guiding Principles for research activity data”
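The three levels above can be sketched as a minimal data model (the class and field names are our illustrative assumptions, not a standard schema):

```python
from dataclasses import dataclass

@dataclass
class DigitalObject:
    """Level 1: the digital object itself (dataset, video, journal, book, ...)."""
    location: str          # e.g., a repository URL

@dataclass
class ElementaryMetadata:
    """Level 2: elementary metadata describing the object."""
    title: str
    creator: str
    identifier: str        # ideally a persistent identifier (DOI, handle, ...)
    date: str

@dataclass
class MetadataRecord:
    """Level 3: a record in a specific database referencing the metadata body."""
    database: str          # e.g., the CRIS instance holding the record
    metadata: ElementaryMetadata
    obj: DigitalObject

record = MetadataRecord(
    database="institutional CRIS",
    metadata=ElementaryMetadata(
        title="Example dataset",
        creator="A. Researcher",
        identifier="doi:10.1234/abcd",
        date="2022-10-24",
    ),
    obj=DigitalObject(location="https://repository.example.org/dataset/1"),
)
```

Applying FAIR to CRIS means assessing not just level 1 (the object) but also levels 2 and 3, where the CRIS itself produces and manages the records.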
19. FAIRNESS OF CRIS
Two levels can be distinguished:
✔ the need for standard data and metadata, especially persistent identifiers, requires tools capable of producing, processing and handling them, and this is a strong argument in favor of CRIS as the central system (middleware) in the research infrastructure ecosystem
✔ this need also calls for more standardization of CRIS, improved data models and formats, especially in the long tail of less standardized research information systems (cf. the large diversity and heterogeneity of CRIS)
✔ A standard format can improve CRIS interoperability, BUT it is not enough – CRIS should (also) prefer open identifier systems “to make things findable” and link information on source data and rights information to support access and facilitate reuse
✔ The interconnection of infrastructures based on the FAIR principles is another example of an improvement in CRIS, which must fulfill certain technical requirements based on the FAIR principles.
Example: the European OpenAIRE community accepts only CRIS meeting their FAIR requirements
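As a small illustration of what “open identifier systems to make things findable” means in practice, the sketch below validates two common open identifiers: a DOI (with a deliberately permissive pattern, an approximation rather than the full DOI syntax) and an ORCID iD (using the ISO 7064 MOD 11-2 check digit that the ORCID specification prescribes):

```python
import re

def looks_like_doi(doi: str) -> bool:
    """Permissive DOI shape check: '10.<registrant>/<suffix>' (approximation only)."""
    return re.fullmatch(r"10\.\d{4,9}/\S+", doi) is not None

def is_valid_orcid(orcid: str) -> bool:
    """Validate an ORCID iD (e.g. '0000-0002-1825-0097') via ISO 7064 MOD 11-2."""
    digits = orcid.replace("-", "")
    if not re.fullmatch(r"\d{15}[\dX]", digits):
        return False
    total = 0
    for ch in digits[:-1]:              # the first 15 digits feed the checksum
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    check = "X" if result == 10 else str(result)
    return digits[-1] == check

print(looks_like_doi("10.5220/0011548700003335"))  # True: the DOI of this paper
print(is_valid_orcid("0000-0002-1825-0097"))       # True: ORCID's documented sample iD
```

A CRIS that stores and exposes such identifiers (rather than local, opaque keys) directly supports the F and I of FAIR.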
!!!However, the FAIRness of research information management infrastructures has its specific “limitations” due to the nature of the data and the potential impact of their reuse – some of the data can be personal data protected by privacy laws such as GDPR, while other data may be confidential, financially sensitive, and of interest to competitors, etc.
For ethical and legal reasons, the accessibility of CRIS data must be controlled and respect the above, i.e., a guideline cannot require openness of all data; rather, it should follow the H2020 Programme Guidelines on FAIR Data:
“as open as possible, as closed as necessary”
20. TO SUM UP…
It is crucial that research information is available in such a way that it can be found, accessed, linked and reused as easily as possible (for authorized users), thereby being as FAIR as possible
Research information is not just research data & research information management systems such as CRIS are not just repositories for research data – they are much more complex, alive, dynamic, interactive and multi-stakeholder objects
CRIS are part of the research infrastructure ecosystem and are linked to data repositories, where the idea of CRIS partly overlaps with the main goal of the FAIR principles
CRIS can (and already do) improve the FAIRness of research infrastructures and data through the evaluation (monitoring) & standardization of data & metadata
The improvement of FAIRness is a dual or bidirectional process ⇒ CRIS are beneficial for FAIR, and FAIR is beneficial for CRIS:
CRIS promote and contribute to the FAIRness of data and infrastructures, and FAIR principles push for further improvement of the underlying CRIS data model and format
21. TO SUM UP…
➢ Nevertheless, the impact of CRIS on FAIRness is mainly focused on:
(1) findability, through the use of persistent identifiers,
(2) interoperability, through standard metadata,
while the impact on the other two principles, namely accessibility and reusability, seems to be more indirect, related to and conditioned by metadata on licensing and access
➢ Paraphrasing “FAIRness is necessary, but not sufficient for ‘open’”: “CRIS are necessary but not sufficient for FAIRness”
➢ Rewards and incentives, as recommended by Science Europe, are critical to ensure the “independence and transparency of the data, infrastructure and criteria necessary for research assessment and for determining research impacts”
22. CALL FOR ACTION!
More case studies are needed to explore the potential of research information management to monitor FAIR projects and infrastructures at the local, regional, national and international levels
More empirical evidence needs to be presented on the real and specific impact of CRIS on the development of FAIR data repositories and other research infrastructures, with a particular focus on standardization
Further development of CRIS data models and formats should focus on the FAIR principles, especially findability and interoperability, in an explicit way
Ethical and legal aspects of the accessibility of CRIS data require further investigation to get a full picture of what it really means to apply the FAIR principles to research information management***
***such research is currently being conducted by the EOSC Task Force by means of surveys, interviews, case studies and other activities; it can and should be supplemented with other independent, use-case-based studies
23. THANK YOU FOR YOUR ATTENTION!
QUESTIONS?
For more information, see ResearchGate, anastasijanikiforova.com
For questions or any other queries, contact us via email: Nikiforova.Anastasija@gmail.com, azeroual@dzhw.eu