The Effect of Inter-Assessor Disagreement on IR System Evaluation:
A Case Study with Lancers and Students
Tetsuya Sakai
tetsuyasakai@acm.org
Waseda University, Japan
December 5, 2017 @ EVIA 2017, NII
TALK OUTLINE
1. Motivation: NTCIR‐13 WWW and Lancers
2. Prior art
3. Collecting relevance assessments
4. Inter‐assessor agreement
5. Using high‐agreement topics only
6. EVIA reviewers’ comments
7. Conclusions and future work
We ran the NTCIR-13 We Want Web task because…
NTCIR-13 WWW data [Luo+17]
• SogouT-16 (Chinese) and ClueWeb12-B13 (English) document collections
• 2 “Lancers” for 100 topics, plus a Student assessor for odd-numbered ones
Japanese crowdsourcing site 
https://www.lancers.jp/
"Lancers claims to be Japan's largest freelancing site, 
producing over $200 million worth of accumulated 
freelancing gigs to date“ (Jan 28, 2014)
https://www.techinasia.com/lancers‐produces‐200‐million‐freelancing‐gigs‐growing
++ I can easily find workers with a high reputation and the necessary skills online
-- I need to pay about 20% on top of the workers’ fees to the company
-- Lancers.jp does not accept payment directly from the university (I pay Lancers.jp, then the university reimburses me)
Research questions (and answers)
RQ1: Is hiring lancers worthwhile?
‐ Maybe.
RQ2: How do lancers compare to students as 
relevance assessors? (lancers and students get paid 
equally per work hour.)
‐ See below.
Main findings:
- Lancer-lancer agreements are statistically significantly higher than lancer-student agreements.
- Lancers’ qrels are more discriminative than students’ qrels.
TALK OUTLINE
1. Motivation: NTCIR‐13 WWW and Lancers
2. Prior art
3. Collecting relevance assessments
4. Inter‐assessor agreement
5. Using high‐agreement topics only
6. EVIA reviewers’ comments
7. Conclusions and future work
Prior art: inter‐assessor agreement 
(not exhaustive)
[Voorhees00]
[Bailey+08] contains a concise survey covering 1969‐2008
[Carterette+10]
[Webber+12]
[Demeester+14]
[Megorskaya+15]
[Wang+15]
[Ferrante+17]
[Maddalena+17]
[Voorhees00]: “Variations in Relevance Judgments and the Measurement of Retrieval Effectiveness”
[Maddalena+17]: “Considering Assessor Agreement in IR Evaluation”
Comparison with [Voorhees00]
• TREC news docs vs ClueWeb12-B13
• (topic originator + 2 additional judges) vs
(2 lancers + 1 student)
• binary vs graded relevance
• overlap, recall, precision vs linear weighted kappa 
with 95%CIs + statistical significance
“The actual value of the effectiveness measure was affected by 
the different conditions, but in each case the relative 
performance of the retrieved runs was almost always the same. 
These results validate the use of the TREC test collections for 
comparative retrieval experiments.”
Different qrels, 
very similar rankings
Comparison with [Maddalena+17]
• Five assessments vs three (lancer, lancer, student)
• Krippendorff’s α (disregards which assessments come from which assessors [Sakai19CLEF@20]) vs
linear weighted kappa for every assessor pair (since 
we’re interested in lancer‐lancer vs lancer‐student 
agreements)
• absolute measure values and system rankings vs
absolute measure values, system rankings, and 
statistical significance 
• nDCG vs nDCG, Q‐measure, nERR (official WWW 
measures)
TALK OUTLINE
1. Motivation: NTCIR‐13 WWW and Lancers
2. Prior art
3. Collecting relevance assessments
4. Inter‐assessor agreement
5. Using high‐agreement topics only
6. EVIA reviewers’ comments
7. Conclusions and future work
Topic assignment
• Nine English‐speaking lancers
• Five English‐speaking students from Waseda U
• For each (odd‐numbered) topic, 2 lancers and 1 
student were assigned at random
• The official NTCIR‐13 WWW results are based on 
100 topics with 2 lancers per topic
• The present study focusses on the 50 odd‐
numbered topics, each judged by 3 assessors
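To make the setup concrete, here is a minimal Python sketch of this random assignment; the assessor IDs and the seed are hypothetical, not the actual assignment used for the task.

```python
import random

lancers = [f"lancer{i}" for i in range(1, 10)]    # 9 English-speaking lancers
students = [f"student{i}" for i in range(1, 6)]   # 5 Waseda students
rng = random.Random(0)

# Each odd-numbered topic (1, 3, ..., 99) gets 2 lancers and 1 student.
assignment = {topic: rng.sample(lancers, 2) + rng.sample(students, 1)
              for topic in range(1, 100, 2)}
print(assignment[1])  # e.g. ['lancer7', 'lancer4', 'student2']
```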
The PLY relevance assessment tool (Peng/Lingtao/Yimeng)
Instructions for assessors
Full document available at
http://waseda.box.com/lancers4www
ERROR: the right panel does not show any contents at all, even after waiting for a few seconds for the content to load.
H.REL (2): highly relevant - it is *likely* that the user who entered this search query will find this page relevant.
REL (1): relevant - it is *possible* that the user who entered this search query will find this page relevant.
NONREL (0): nonrelevant - it is *unlikely* that the user who entered this search query will find this page relevant.
Qrels versions
The final relevance levels were obtained by
simply summing up the individual assessments (0‐2).
See [Maddalena+17][Sakai17EVIA‐unanimity] for alternatives.
Note: “lancer1” and “lancer2” are based on assessments from
nine different lancers;
“student” is based on assessments from five different students.
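As a minimal sketch of this summing step (the per-assessor label dicts and document IDs below are hypothetical):

```python
# Each individual assessment is on the 0-2 scale described above.
lancer1 = {("0001", "docA"): 2, ("0001", "docB"): 0}
lancer2 = {("0001", "docA"): 1, ("0001", "docB"): 0}
student = {("0001", "docA"): 2, ("0001", "docB"): 1}

# Summing all three assessors gives the L0-L6 "all3" levels;
# summing the two lancers alone would give L0-L4.
all3 = {k: lancer1[k] + lancer2[k] + student[k] for k in lancer1}
print(all3)  # {('0001', 'docA'): 5, ('0001', 'docB'): 1}
```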
TALK OUTLINE
1. Motivation: NTCIR‐13 WWW and Lancers
2. Prior art
3. Collecting relevance assessments
4. Inter‐assessor agreement
5. Using high‐agreement topics only
6. EVIA reviewers’ comments
7. Conclusions and future work
Linear weighted kappa with 95%CI
Lancer-lancer agreement is statistically significantly higher than lancer-student agreement.
Mean (min/max) per‐topic linear 
weighted kappa
‐ lancer‐lancer still higher than lancer‐student
- For some topics, no agreement beyond chance, but this is not uncommon; see [Bailey+08]
#topics whose 95%CI lower limit for kappa was not positive
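For concreteness, one way to compute linear weighted kappa with a 95%CI in Python is scikit-learn's cohen_kappa_score with weights="linear", plus a nonparametric bootstrap over the judged documents. This is a sketch under those assumptions; the paper's exact CI construction may differ.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def linear_kappa_with_ci(y1, y2, n_boot=10000, seed=0):
    """Linear weighted kappa between two assessors' 0-2 labels, plus a
    bootstrap 95% CI obtained by resampling the judged documents."""
    y1, y2 = np.asarray(y1), np.asarray(y2)
    kappa = cohen_kappa_score(y1, y2, weights="linear")
    rng = np.random.default_rng(seed)
    n = len(y1)
    boot = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # sample documents with replacement
        boot.append(cohen_kappa_score(y1[idx], y2[idx], weights="linear"))
    lo, hi = np.nanpercentile(boot, [2.5, 97.5])  # skip degenerate resamples
    return kappa, (lo, hi)

# Hypothetical labels from two assessors on the same pooled documents:
print(linear_kappa_with_ci([0, 1, 2, 2, 0, 1, 0, 2],
                           [0, 1, 1, 2, 0, 2, 0, 2]))
```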
Binary kappa with 95%CI 
(1,2 mapped to 1)
Lancer-lancer agreement is statistically significantly higher than lancer-student agreement.
Binary raw agreement (does not take 
chance agreement into account)
Lancer-lancer agreement is still the highest.
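A small sketch of the binarisation and the raw agreement statistic (labels below are hypothetical): 1 and 2 are both mapped to relevant, and raw agreement is simply the fraction of documents with identical binary labels, with no chance correction; the binary kappa above corrects for chance instead.

```python
from sklearn.metrics import cohen_kappa_score

def raw_binary_agreement(y1, y2):
    """Fraction of documents with identical binary labels (no chance correction)."""
    b1 = [1 if y >= 1 else 0 for y in y1]
    b2 = [1 if y >= 1 else 0 for y in y2]
    return sum(a == b for a, b in zip(b1, b2)) / len(b1)

y1, y2 = [0, 1, 2, 0], [0, 2, 1, 1]   # hypothetical 0-2 labels
print(raw_binary_agreement(y1, y2))   # 0.75

# The binary kappa on the previous slide corrects for chance instead:
b1, b2 = [min(y, 1) for y in y1], [min(y, 1) for y in y2]
print(cohen_kappa_score(b1, b2))
```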
System rankings by different qrels versions
nDCG@10 (see paper for Q and nERR)
τ with 95%CIs
Rankings not statistically significantly
different, but not identical
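As an illustration, the following sketch computes Kendall's τ between two system rankings derived from hypothetical mean nDCG@10 scores for the 13 runs. The 95% interval shown uses a rough normal approximation based on τ's null standard deviation; the paper's exact CI construction may differ.

```python
import numpy as np
from scipy.stats import kendalltau

# Hypothetical mean nDCG@10 scores for the 13 runs under two qrels versions.
rng = np.random.default_rng(1)
base = rng.uniform(0.2, 0.6, 13)
ndcg_lancer1 = base + rng.normal(0.0, 0.02, 13)
ndcg_student = base + rng.normal(0.0, 0.02, 13)

tau, p = kendalltau(ndcg_lancer1, ndcg_student)

# Rough 95% interval from the normal approximation to tau's null
# standard deviation (n = number of runs being ranked).
n = 13
sd = np.sqrt(2 * (2 * n + 5) / (9 * n * (n - 1)))
print(f"tau = {tau:.3f}, "
      f"approx 95% CI = [{tau - 1.96 * sd:.3f}, {tau + 1.96 * sd:.3f}]")
```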
Statistical significance overlap at α=5% 
(Randomised Tukey HSD test, 13*12/2=78 run pairs) 
nDCG@10 (see paper for Q and nERR)
Different qrels can lead to different conclusions:
lancer1 and lancer2 yield quite similar significance patterns, but lancer1 vs. student do not.
[Venn diagram: run pairs found significant with the lancer1 qrels vs. those found significant with the lancer2 qrels]
Discriminative power at α=5%
(Randomised Tukey HSD test, 13*12/2=78 run pairs)
Relevance levels per qrels version:
all3: L0-L6; two lancers summed: L0-L4; lancer1 / lancer2 / student: L0-L2 each
• Fine‐grained relevance levels give high discriminative power 
(more conclusions at α=5%)
• student qrels least discriminative
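A sketch of the randomised Tukey HSD test behind these counts (the score matrix and trial count below are hypothetical): each trial permutes every topic's scores across runs and records the largest difference between run means; an observed run-pair difference is significant if it exceeds the (1-α) quantile of that null distribution. Discriminative power is then the fraction of the 78 run pairs declared significant, and the significance overlap shown two slides back is simply the intersection of two such sets.

```python
import numpy as np
from itertools import combinations

def randomised_tukey_hsd(scores, n_trials=10000, alpha=0.05, seed=0):
    """Return the set of run pairs whose mean-score difference is significant.

    scores: topics x runs matrix of per-topic scores (e.g. nDCG@10).
    Each trial permutes every topic's row across runs and records the
    largest difference between run means; an observed difference is
    significant if it exceeds the (1 - alpha) quantile of this null
    distribution of the maximum.
    """
    rng = np.random.default_rng(seed)
    null_max = np.empty(n_trials)
    for b in range(n_trials):
        perm = np.array([rng.permutation(row) for row in scores])
        means = perm.mean(axis=0)
        null_max[b] = means.max() - means.min()
    threshold = np.quantile(null_max, 1.0 - alpha)
    means = scores.mean(axis=0)
    return {(i, j) for i, j in combinations(range(scores.shape[1]), 2)
            if abs(means[i] - means[j]) > threshold}

scores = np.random.default_rng(3).uniform(0, 1, (50, 13))  # hypothetical
sig = randomised_tukey_hsd(scores, n_trials=2000)
print(f"discriminative power: {len(sig)}/78")
# The significance overlap is the intersection of two such sets,
# computed from the same runs under two different qrels versions.
```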
TALK OUTLINE
1. Motivation: NTCIR‐13 WWW and Lancers
2. Prior art
3. Collecting relevance assessments
4. Inter‐assessor agreement
5. Using high‐agreement topics only
6. EVIA reviewers’ comments
7. Conclusions and future work
High‐ and low‐agreement topics
mean (min/max) per‐topic linear weighted kappa
• 27 “high‐agreement” topics:
for every assessor pair, the 95%CI for kappa was above zero
• 23 “low‐agreement” topics:
for at least one assessor pair, the 95%CI for kappa included zero
Maybe test collection builders can consider removing some 
low‐agreement (i.e., unreliable) topics beforehand?
Let’s compare the evaluation outcomes of 
full/high‐agreement/low‐agreement sets (using “all3”).
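The split itself is easy to implement; a minimal sketch with hypothetical per-topic kappa 95%CIs for the three assessor pairs:

```python
# One CI per assessor pair: lancer1-lancer2, lancer1-student, lancer2-student.
topic_cis = {
    "0001": [(0.31, 0.62), (0.15, 0.48), (0.22, 0.55)],   # high agreement
    "0003": [(-0.05, 0.30), (0.10, 0.42), (0.01, 0.38)],  # low agreement
}
# A topic is "high-agreement" iff every pair's CI lower limit is above zero.
high = [t for t, cis in topic_cis.items() if all(lo > 0 for lo, _ in cis)]
low = [t for t in topic_cis if t not in high]
print(high, low)  # ['0001'] ['0003']
```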
Similar topic set sizes (27 vs. 23 topics)
System rankings by 
full/high‐agreement/low‐agreement 
topic sets
Statistically significantly different rankings in blue
• Ranking by the low‐agreement set statistically significantly different from
that by the high‐agreement set
• Ranking by the high‐agreement set similar to that of the full set ranking
(= high‐agreement topics dominate the results)
τ with 95%CIs
Statistical significance overlap at α=5%
[Venn diagram: run pairs found significant with the high-agreement topic set vs. those found significant with the low-agreement topic set]
Discriminative power at α=5%
• Low‐agreement topics ⇒ low discriminative power
(high‐agreement vs low‐agreement substantially different 
discriminative power despite similar topic set sizes)
• Indeed, it may be useful to consider removing some low-agreement topics during test collection construction
TALK OUTLINE
1. Motivation: NTCIR‐13 WWW and Lancers
2. Prior art
3. Collecting relevance assessments
4. Inter‐assessor agreement
5. Using high‐agreement topics only
6. EVIA reviewers’ comments
7. Conclusions and future work
Reviewer 4
“This paper presents a very interesting idea: some 
topics are more "stable" than other topics in that 
they are the topics that seem to determine the 
ranking of systems in a test collection. These 
stable/good topics happen to be topics that human 
assessors better agree on what is and is not 
relevant. Now, why is this? The paper does not tell 
us, but I think it is a really neat finding and I think 
other people will be able to build on this work. Thus, 
this paper should be accepted.”
⇒ Thank you.
Reviewer 1
“Of particular interest was the idea of using only the 27 
topics with significant agreement between all pairs of 
assessors [...] The authors note this is a small number of 
groups and therefore the results are somewhat 
tentative. A different point that I would like to 
emphasize is that the difference across topic sets is more 
than just agreements; variability across topics has always 
been more than variation across systems so this needs to 
be considered in terms of recommendations of using this 
type of set reduction.”
⇒ I agree that maintaining variability across topics is 
important. See also Topic Set Size Design [Sakai16IRJ].
Reviewer 3
“[Maddalena+17] proposed a similar analysis of
agreement between assessors and they also studied 
the effect of removing high or low agreement topics 
on system rankings. Can the authors point out the 
differences between their analysis and the one 
proposed in [Maddalena+17]?”
⇒ Sorry for missing this in the submitted version. 
Please refer to an earlier slide.
Reviewer 2
“<Weakness> The framework is not technically sound from an IR point of 
view, which is due to that the authors did not envision the IR evaluation 
from a holistic point of view. The evaluation as in this paper is a trial of 
batch execution, followed by an error analysis to fix problems in an IR 
system. This repeats until the system satisfies certain conditions. However, 
in this paper, the result of each run cannot be compared across the trials, 
due to more than one reason, such as, a different set of topics may be 
used depending on the trial. It is not possible to use this method for 
continuously improving a system.
Besides this, a lot of important concepts in this paper lack the formal 
definition and real (or even hypothetical) working examples, while the 
authors were extremely concerned with mathematical trifles. Whereas 
most of the key concepts are derivatives of the “reliability”, this concept 
itself has not been well defined to begin with. Those drawbacks make this 
evaluation unreliable. The novelty of this paper is not clear. It is also not 
clear to what extent this evaluation method has increased the reliability of 
the evaluation while maintaining its accuracy, compared with existing 
evaluation methods.”
⇒ …
TALK OUTLINE
1. Motivation: NTCIR‐13 WWW and Lancers
2. Prior art
3. Collecting relevance assessments
4. Inter‐assessor agreement
5. Using high‐agreement topics only
6. EVIA reviewers’ comments
7. Conclusions and future work
Conclusions
Main findings from a case study with the English NTCIR‐
13 WWW test collection (with only 13 runs):
‐ lancer‐lancer agreements statistically significantly 
higher than lancer‐student agreements. Lancers’ qrels
more discriminative than Students’ qrels.
‐ Combining multiple assessors’ labels to form fine‐
grained relevance levels gives us high discriminative 
power.
- The high-agreement topic set is more useful for obtaining concrete research conclusions than the low-agreement set. Consider removing low-agreement topics while maintaining topic variability?
Future work: in‐depth analysis of the assessors’ activity log data
I thank
• PLY = Peng Xiao, Lingtao Li, Yimeng Fan of the Sakai 
Lab who developed the PLY relevance assessment 
tool
• NTCIR‐13 WWW organisers
• NTCIR‐13 WWW participants
References
[Bailey+08] Relevance Assessment: Are Judges Exchangeable and Does It Matter? SIGIR 2008.
[Carterette+10] The Effect of Assessor Errors on IR System Evaluation, SIGIR 2010.
[Demeester+14] Exploiting User Disagreement for Web Search Evaluation: an Experimental Approach, 
WSDM 2014.
[Ferrante+17] AWARE: Exploiting Evaluation Measures to Combine Multiple Assessors, TOIS 36(2), 2017.
[Luo+17] Overview of the NTCIR‐13 We Want Web Task, NTCIR‐13, 2017.
[Maddalena+17] Considering Assessor Agreement in IR Evaluation, ICTIR 2017.
[Megorskaya+15] On the Relation between Assessors’ Agreement and Accuracy in Gamified Relevance 
Assessment, SIGIR 2015.
[Sakai16IRJ] Topic Set Size Design, Information Retrieval Journal 19(3), 2016. 
http://link.springer.com/content/pdf/10.1007%2Fs10791‐015‐9273‐z.pdf
[Sakai17EVIA‐unanimity] Unanimity‐Aware Gain for Highly Subjective Assessments, EVIA 2017.
[Sakai19CLEF@20] How to Run an Evaluation Task, a book chapter to appear in 2019 from Springer.
[Voorhees00] Variations in Relevance Judgments and the Measurement of Retrieval Effectiveness, 
Information Processing and Management, 2000.
[Wang+15] Assessor Differences and User Preferences in Tweet Timeline Generation, SIGIR 2015.
[Webber+12] Alternative Assessor Disagreement and Retrieval Depth, CIKM 2012.