science 2.0
an illustration of good research
practices in a real study
wolf vanpaemel kortrijk, march 9 2015
1. the crisis in psychology
Why can we definitively say that? Because psychology often does not meet the
five basic requirements for a field to be considered scientifically rigorous:
clearly defined terminology, quantifiability, highly controlled experimental
conditions, reproducibility and, finally, predictability and testability.
2% admitted to fabricating data
- mundane 'regular' misbehaviours present greater threats to the scientific
enterprise than those caused by high-profile misconduct cases such as
fraud.
- first assessment of questionable research practices (QRP)
- 2002 assessment: NIH funded research
1768 mid-career (52% response rate)
1479 early-career (43% response rate)
- first assessment of QRP in psychology
- 2155 respondents (36% response rate)
the problems of QRP are widespread and have severe
consequences
why is that the case?
“never attribute to malice what can be adequately explained
by incompetence”
the main reasons are lack of guidelines, and the high
publication pressure
i’m not interested in fraud (e.g., diederik stapel who made
up his own data)
preventing fraud requires a different approach
2. science 2.0
a new way of doing science that aims to increase the
confidence in research results
not one, single, coherent whole
a demonstration of science 2.0 with a real study
reference:
Steegen, S., Dewitte, L., Tuerlinckx, F., & Vanpaemel, W.
(2014). Measuring the crowd within again: A pre-registered
replication study. Frontiers in Psychology, 5, 786, 1-8.
doi:10.3389/fpsyg.2014.00786
paper:
http://ppw.kuleuven.be/okp/_pdf/Steegen2014MTCWA.pdf
OSF page:
https://osf.io/ivfu6/
based on some recommendations on good research practices made in the
literature
• not exhaustive
• non-directive examples
• for inspiration
most recommendations can be implemented separately from each other
• not an all or none package deal
crowd within effect (vul & pashler, 2008)
• averaging multiple guesses from one
person provides a better estimate than
either guess alone
experiment
• 8 general knowledge questions
e.g., what percent of the world's roads
are in India?
• guess 1
guess 2
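The crowd-within logic is easy to check with a toy simulation (hypothetical numbers, not the study's data): model each guess as the true value plus a personal bias shared by both guesses plus independent noise. Averaging the two guesses cancels part of the independent noise, so the average should have a lower error than either guess alone.

```python
import random

random.seed(1)
TRUTH = 30.0  # hypothetical true answer to one knowledge question

def rmse(errors):
    """Root mean squared error of a list of errors."""
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

e1, e2, eavg = [], [], []
for _ in range(20000):
    bias = random.gauss(0, 5)               # systematic error, shared by both guesses
    g1 = TRUTH + bias + random.gauss(0, 8)  # independent noise per guess
    g2 = TRUTH + bias + random.gauss(0, 8)
    e1.append(g1 - TRUTH)
    e2.append(g2 - TRUTH)
    eavg.append((g1 + g2) / 2 - TRUTH)

# averaging halves the independent-noise variance,
# so the average beats either single guess
print(rmse(e1), rmse(e2), rmse(eavg))
```

The shared bias is what keeps the average from being perfect: only the independent part of the error averages away, which is exactly the crowd-within effect.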
1. replication
2. registration
3. high power
4. bayesian statistics
5. alpha level
6. estimations
7. co-pilot multi-software approach
8. distinction between confirmatory and exploratory analyses
9. open science
what? how? why?
features of science 2.0
before data collection
after data collection/during data analysis
after data analysis
2.1 replicate!
replication
what?
do the same, following the experimental
and analytical procedure as closely as
possible
→ direct replication study
replication
what?
things can never be exactly the same
→ indicate the known differences
replication
how?
communicate with the original authors; ask for information and
feedback
ideal for a master's thesis
less focus on creativity, more on skill building
replication
why?
- lots of variability between studied phenomena
- lots of variability between labs/replications
- what can we learn from a single study?
2.2 register!
registration
what?
we specified all research details before data
collection
data collection
• sample size planning (stopping rule; see
below)
• recruitment: how to recruit participants
(e.g., pool)
data analysis
• data cleaning plan (when to delete data)
• analysis plan
- which exact hypotheses to test
- which variables to use
- analyses for testing the hypotheses
• code for the analyses
registration
what?
we specified all research details before data
collection
experimental details (optional)
• experimental materials
- stimuli (questions)
- exact instructions
• experimental procedure
- randomization, etc.
registration
how?
• Registered Report
- new format of publishing
- review prior to data collection
- accepted papers are then (almost)
guaranteed publication if the authors
follow through with the registered
methodology
→ AIMS Neuroscience; Attention,
Perception & Psychophysics; Cortex;
Drug and Alcohol Dependence;
Experimental Psychology, Frontiers in
Cognition; Perspectives on Psychological
Science; Social Psychology; …
registration
how?
• Registered Report
• “independent” pre-registration
e.g., Open Science Framework (OSF)
- open source software project
- free
registration
why?
prevent readers from thinking you might have exploited your
researcher degrees of freedom
extreme flexibility in
• data collection
- e.g., data peeking
• data analysis
- what is an outlier?
- when to add covariates?
- when to transform the data?
• reporting
- did you report all variables, conditions, experiments, analyses?
registration
why?
prevent readers from thinking you might have exploited your
researcher degrees of freedom
exploiting researcher degrees of freedom can lead to an increase in
false positives
-- without adjustment, a true null hypothesis will eventually be
rejected if sampling continues long enough
if you can convince readers that you didn't exploit the researcher
degrees of freedom, they will put more confidence in your result; it
will be seen as more trustworthy
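The data-peeking problem can be demonstrated with a small sketch (hypothetical numbers; a z-test on data with known variance is used for simplicity instead of a t-test): testing after every batch of observations and stopping at the first p < .05 pushes the false positive rate well above the nominal 5%, even though the null is true in every simulation.

```python
import random
from statistics import NormalDist

random.seed(2)
Z = NormalDist()

def p_value(xs):
    # two-sided z-test of mean 0, known sd = 1
    z = (sum(xs) / len(xs)) * len(xs) ** 0.5
    return 2 * (1 - Z.cdf(abs(z)))

sims, false_pos = 2000, 0
for _ in range(sims):
    xs = []
    for _ in range(10):                  # peek after every batch of 10 observations
        xs.extend(random.gauss(0, 1) for _ in range(10))
        if p_value(xs) < 0.05:           # stop as soon as the test is "significant"
            false_pos += 1
            break

print(false_pos / sims)  # far above the nominal .05
```

Registering a stopping rule in advance is precisely what removes this degree of freedom.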
2.3 power up!
high power
what?
among the decisions you have to make and
register in advance is when you’ll stop
collecting data
our stopping rule was based on fixing the
sample size
fixing the sample size was based on a
power calculation
power = P(reject null hypothesis | null
hypothesis is false)
high power
what?
as far as constraining the researcher
degrees of freedom is concerned, low power
is as good as high power
we aimed for high power (95%)
high power
how?
compute sample size needed to achieve
desired power level
- given the statistical test
- given the significance level
- given the effect size (e.g., based on previous
studies)
G*Power, R packages (pwr), …
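For a simple two-sided one-sample (or paired) t-test, the required sample size can be approximated with the usual normal-approximation formula n ≈ ((z₁₋α/₂ + z_power) / d)². The numbers below are illustrative, not the study's actual planning, and G*Power or pwr will give slightly larger (exact t-based) answers.

```python
from math import ceil
from statistics import NormalDist

def sample_size(effect_size, alpha=0.05, power=0.95):
    """Normal-approximation n for a two-sided one-sample / paired t-test."""
    z = NormalDist().inv_cdf
    n = ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2
    return ceil(n)

# e.g., a medium effect (Cohen's d = 0.5) at alpha = .05 and 95% power
print(sample_size(0.5))  # -> 52
```

Note how fast n grows as the expected effect shrinks: halving d roughly quadruples the required sample size.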
high power
why?
• low power reduces the probability of discovering effects that are
there
• low power reduces the probability that a significant result reflects a
true effect (button et al., 2013)
• low power leads to an inflation of estimated effect sizes
• only overestimates will be significant
there are other stopping rules!
sources for how to decide when to stop collecting data
-when I have a participant with the name of my mother
-availability
---when the day/testweek is over
-when I have a fixed number of participants
---100
--- based on power calculations
--- based on accuracy in parameter estimation
in general, what matters most is that you fix a stopping rule in
advance, more than which rule you pick
all these stopping rules are equally valid for constraining the
researcher degrees of freedom
but some will lead to better research than others
---more informative
---more precise and less biased estimates of e.g. the
effect size
2.4 go bayes
NHST & Bayesian testing
what?
we did not just use Null Hypothesis
Significance Testing (NHST, i.e., p-values) but
also Bayes factors (the Bayesian counterpart
of the p-value)
the core of bayesian statistics is bayes’ rule
p(a|b) = p(b|a) p(a) / p(b)
bayes treats probabilities as degrees of
belief
NHST & Bayesian testing
what?
we can use bayes to compute the belief in
our hypothesis H, given the data d
p(H|d) = p(d|H) p(H) / p(d)
bayes rule tells us how we should update
our belief about H after observing data
NHST & Bayesian testing
how?
• several online tools (e.g., Rouder’s
website)
• BayesFactor package in R (Morey &
Rouder, 2014)
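For intuition, here is a tiny hand-rolled Bayes factor (a textbook example, not the BayesFactor package's computation): binomial data, with a point null θ = .5 tested against a uniform prior on θ under H1. Both marginal likelihoods have closed forms, so no numerical integration is needed.

```python
from math import comb

def bf01(k, n):
    """Bayes factor for H0: theta = .5 vs H1: theta ~ Uniform(0, 1),
    for k successes in n binomial trials (closed-form marginal likelihoods)."""
    m0 = comb(n, k) * 0.5 ** n  # p(data | H0)
    m1 = 1 / (n + 1)            # p(data | H1): binomial integrated over the uniform prior
    return m0 / m1

print(bf01(14, 20))  # BF01 < 1: weak evidence against the null
print(bf01(10, 20))  # data right at the null: BF01 > 1, evidence favours H0
```

The second call illustrates a point NHST cannot make: a Bayes factor can quantify evidence for the null hypothesis.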
NHST & Bayesian testing
why?
• p(H|d) seems exactly what science
needs
• evidence for null hypothesis
• intuitive to interpret
• consistent: correct answer in large
sample limit
• exact for small sample size
• clear interpretation of evidence
• based on the observed data, not on
hypothetical replications of experiments
2.5 lower alpha
[figure: probability of H1, axis values 1, .99, .97, .90, .75, .50]
2.6 test and estimate
NHST & estimation
what?
we did not just use p-values and Bayes
factors but also effect size estimates and
their confidence intervals
how?
Matlab, R, SPSS, ESCI (Cumming, 2013), …
why?
diverts focus from the presence of an effect
to the more informative size of an effect
and its precision
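As a sketch (made-up numbers; a normal-approximation interval rather than the exact noncentral-t interval a tool like ESCI would give), Cohen's d for paired differences with an approximate 95% CI:

```python
from statistics import NormalDist, mean, stdev

def cohens_d_ci(diffs, conf=0.95):
    """Cohen's d for paired differences, with an approximate normal CI.
    se(d) ~ sqrt(1/n + d^2/(2n)) is a common large-sample approximation."""
    n = len(diffs)
    d = mean(diffs) / stdev(diffs)
    se = (1 / n + d * d / (2 * n)) ** 0.5
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return d, (d - z * se, d + z * se)

# hypothetical per-participant error reductions (guess vs average), n = 10
diffs = [1.2, 0.8, -0.3, 2.1, 0.9, 1.5, 0.2, 1.1, -0.1, 0.7]
d, (lo, hi) = cohens_d_ci(diffs)
print(round(d, 2), (round(lo, 2), round(hi, 2)))
```

The width of the interval is the point: it reports the precision of the estimate, which a bare p-value does not.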
2.7 co-pilot
co-pilot multi-software approach
what/how?
• two people independently processed and
analyzed the same data …
• … using different software (MATLAB,
SPSS)
why?
decreases the likelihood of errors
errors are easily made:
50% of published papers in psychology
contain reporting errors (bakker &
wicherts, 2011)
e.g., an error in the sample size planning (G*Power)
2.8 distinguish between confirmatory and
exploratory
clear distinction between confirmatory and
exploratory (post hoc) analyses
what?
we indicated whether the analyses were
specified before seeing the data, or based
on the data (see registration)
how?
be transparent
easy if you have registered
why?
you still want to report analyses you
thought about too late! they can be useful
for generating hypotheses
2.9 go open
open science
what?
we made our full research output
publicly available to everybody
- experimental materials (stimuli,
questionnaire items, instructions, and so
on)
- raw data
- processed data
- code for data processing
- code for confirmatory analyses
- code for post-hoc analyses
- paper
open science
how?
Open Science Framework (public)
-online repository
-free
-under development
goal: share and find research materials
make study materials (experimental
material, data, code, …) public so that
other researchers can find, use and cite
them
several other sharing possibilities
open science
how?
Open Science Framework (public)
make sure OSF is not the only place
where your stuff is!
who knows what will happen with these
servers in 20 years?
unclear what the best data format is
open science
why?
• the current standards of what is
considered research output (paper with
summary statistics and conclusion) are
not inspired by desiderata for good
science, but rather by arbitrary and
outdated technical constraints (paper +
publishing costs)
• if we were to start doing science from
scratch now, in the computer and internet
age, we would probably set a completely
different standard
open science
why?
• facilitates
- replication studies
- follow up studies (e.g., use same
stimuli)
- new or re-analyses
- meta-analyses
- accumulation of scientific
knowledge
- detection of errors or fraud
• yields useful teaching material
open science
why?
• increases visibility
• increases citability
• decreases number of emails about
experiments, data or analyses, …
• is a moral obligation to the taxpayer
(publicly funded research is a public
good)
3 discussion
3.1 why not?
1. replication
2. registration
3. high power
4. bayesian statistics
5. alpha level
6. estimations
7. co-pilot multi-software approach
8. distinction between confirmatory and exploratory analyses
9. open science
what? how? why?
why not?
features of science 2.0
before data collection
after data collection/during data analysis
after data analysis
replication
why not
-it is impossible!
---things can never be exactly the same (e.g., population)
---the details of the original study are lost (e.g., which questions
used in a post experimental interview)
-it is a waste of time and resources!
---should we value novelty more than truth?
-it is not good for my career
---can I publish this?
registration
why not?
• it takes time, thought and effort
• it is harder than it seems!
• writing the analysis code helps a lot
• exploration might be the only possibility
• domain specific (qualitative studies? complex studies?)
high power
why not?
• can be hard to guess expected effect size or trust published effect
size
• often requires large sample size
• collaborate!
• restricted to NHST framework
Bayes it
why not?
• priors
• education?
• Bayes factors are hard to compute
Open up
why not?
sharing data takes time
sharing data might jeopardize a potential future publication
but: embargo period
Other
(co-pilot, alpha, confirmation vs exploration, estimation)
why not?
lack of education
old habits
takes time and is not rewarded
3.2 feasibility
this illustration used a very simple study
• replication study
• easily administered 8-item questionnaire
• basic t test
this made pre-registration, sample size planning, high power,
estimation, bayesian statistics, sharing the protocol, code and data,
the co-pilot multi-software approach, etc. probably much easier than
in most other studies
but everything is also possible (though harder) for non-
replication studies!
feasibility will depend on the type and scope of your research
science 2.0 is no package deal
---you can register, but not share
---you can share, but not use bayes
some practices are graded
--- you can register without code
--- you can estimate without reporting CI
3.3 what should i take home?
• the (psychological) literature is littered with spurious
findings
• which results can you trust?
– has this result been replicated?
– did the researchers exploit their researcher degrees of freedom?
– is the evidence based on NHST with a liberal alpha level?
– was the analysis correct? (e.g., at least check the dfs; better,
redo the analysis yourself with the shared data and code)
– ???
3.4 is there a crowd within effect?
Is there a crowd within effect?
successful replication
• error guess 1 > error average
• error guess 2 > error average
the end
(or the beginning!)
Locating and isolating a gene, FISH, GISH, Chromosome walking and jumping, te...
 
Site Acceptance Test .
Site Acceptance Test                    .Site Acceptance Test                    .
Site Acceptance Test .
 
Sector 62, Noida Call girls :8448380779 Model Escorts | 100% verified
Sector 62, Noida Call girls :8448380779 Model Escorts | 100% verifiedSector 62, Noida Call girls :8448380779 Model Escorts | 100% verified
Sector 62, Noida Call girls :8448380779 Model Escorts | 100% verified
 
Justdial Call Girls In Indirapuram, Ghaziabad, 8800357707 Escorts Service
Justdial Call Girls In Indirapuram, Ghaziabad, 8800357707 Escorts ServiceJustdial Call Girls In Indirapuram, Ghaziabad, 8800357707 Escorts Service
Justdial Call Girls In Indirapuram, Ghaziabad, 8800357707 Escorts Service
 
chemical bonding Essentials of Physical Chemistry2.pdf
chemical bonding Essentials of Physical Chemistry2.pdfchemical bonding Essentials of Physical Chemistry2.pdf
chemical bonding Essentials of Physical Chemistry2.pdf
 
Call Girls Ahmedabad +917728919243 call me Independent Escort Service
Call Girls Ahmedabad +917728919243 call me Independent Escort ServiceCall Girls Ahmedabad +917728919243 call me Independent Escort Service
Call Girls Ahmedabad +917728919243 call me Independent Escort Service
 
Molecular markers- RFLP, RAPD, AFLP, SNP etc.
Molecular markers- RFLP, RAPD, AFLP, SNP etc.Molecular markers- RFLP, RAPD, AFLP, SNP etc.
Molecular markers- RFLP, RAPD, AFLP, SNP etc.
 
Dubai Call Girls Beauty Face Teen O525547819 Call Girls Dubai Young
Dubai Call Girls Beauty Face Teen O525547819 Call Girls Dubai YoungDubai Call Girls Beauty Face Teen O525547819 Call Girls Dubai Young
Dubai Call Girls Beauty Face Teen O525547819 Call Girls Dubai Young
 
Clean In Place(CIP).pptx .
Clean In Place(CIP).pptx                 .Clean In Place(CIP).pptx                 .
Clean In Place(CIP).pptx .
 
COST ESTIMATION FOR A RESEARCH PROJECT.pptx
COST ESTIMATION FOR A RESEARCH PROJECT.pptxCOST ESTIMATION FOR A RESEARCH PROJECT.pptx
COST ESTIMATION FOR A RESEARCH PROJECT.pptx
 
PSYCHOSOCIAL NEEDS. in nursing II sem pptx
PSYCHOSOCIAL NEEDS. in nursing II sem pptxPSYCHOSOCIAL NEEDS. in nursing II sem pptx
PSYCHOSOCIAL NEEDS. in nursing II sem pptx
 
Connaught Place, Delhi Call girls :8448380779 Model Escorts | 100% verified
Connaught Place, Delhi Call girls :8448380779 Model Escorts | 100% verifiedConnaught Place, Delhi Call girls :8448380779 Model Escorts | 100% verified
Connaught Place, Delhi Call girls :8448380779 Model Escorts | 100% verified
 
GBSN - Microbiology (Unit 1)
GBSN - Microbiology (Unit 1)GBSN - Microbiology (Unit 1)
GBSN - Microbiology (Unit 1)
 
FAIRSpectra - Enabling the FAIRification of Spectroscopy and Spectrometry
FAIRSpectra - Enabling the FAIRification of Spectroscopy and SpectrometryFAIRSpectra - Enabling the FAIRification of Spectroscopy and Spectrometry
FAIRSpectra - Enabling the FAIRification of Spectroscopy and Spectrometry
 

science 2.0 : an illustration of good research practices in a real study

  • 1. science 2.0 an illustration of good research practices in a real study wolf vanpaemel kortrijk, march 9 2015
  • 2. 1. the crisis in psychology
  • 3. Why can we definitively say that? Because psychology often does not meet the five basic requirements for a field to be considered scientifically rigorous: clearly defined terminology, quantifiability, highly controlled experimental conditions, reproducibility and, finally, predictability and testability.
  • 4.
  • 5.
  • 6.
  • 8. - mundane 'regular' misbehaviours present greater threats to the scientific enterprise than those caused by high-profile misconduct cases such as fraud. - first assessment of questionable research practices (QRP) - 2002 assessment: NIH funded research 1768 mid-career (52% response rate) 1479 early-career(43% response rate)
  • 9.
  • 10. - first assessment of QRP in psychology - 2155 respondents (36% response rate)
  • 11.
  • 12. the problems of QRP are widespread, and have very severe consequences why is that the case? “never attribute to malice what can be adequately explained by incompetence” the main reasons are lack of guidelines, and the high publication pressure
  • 13. i’m not interested in fraud (e.g., diederik stapel who made up his own data) preventing fraud requires a different approach
  • 15. a new way of doing science that aims to increase the confidence in research results not one, single, coherent whole
  • 16.
  • 17. a demonstration of science 2.0 with a real study reference: Steegen, S., Dewitte, L., Tuerlinckx, F., & Vanpaemel, W. (2014). Measuring the crowd within again: A pre-registered replication study. Frontiers in Psychology, 5, 786, 1-8. doi:10.3389/fpsyg.2014.00786 paper: http://ppw.kuleuven.be/okp/_pdf/Steegen2014MTCWA.pdf OSF page: https://osf.io/ivfu6/
  • 18. based on some recommendations on good research practices made in the literature
  • 19. based on some recommendations on good research practices made in the literature • not exhaustive • non-directive examples • for inspiration most recommendations can be implemented separately from each other • not an all-or-none package deal
  • 20.
  • 21. crowd within effect (vul & pashler, 2008) • averaging multiple guesses from one person provides a better estimate than either guess alone
  • 22. crowd within effect (vul & pashler, 2008) • averaging multiple guesses from one person provides a better estimate than either guess alone experiment • 8 general knowledge questions, e.g., what percent of the world's roads are in India? • each participant provides guess 1 and guess 2
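The crowd-within claim can be sketched with a toy simulation (not the authors' analysis; the noise model and all numbers below are made up for illustration): if two guesses carry independent noise, their average has a smaller expected error than either guess alone.

```python
import random
import statistics

random.seed(1)  # reproducible

true_value = 50.0   # hypothetical true answer to one quiz question
n_people = 10_000

err_g1, err_g2, err_avg = [], [], []
for _ in range(n_people):
    # simplifying assumption: each guess = truth + independent gaussian noise
    g1 = true_value + random.gauss(0, 10)
    g2 = true_value + random.gauss(0, 10)
    err_g1.append(abs(g1 - true_value))
    err_g2.append(abs(g2 - true_value))
    err_avg.append(abs((g1 + g2) / 2 - true_value))

# with independent noise, the average's error shrinks by roughly 1/sqrt(2)
print(statistics.mean(err_g1), statistics.mean(err_g2), statistics.mean(err_avg))
```

In real data the two guesses from one person are positively correlated, so the gain is smaller than this idealized factor, which is exactly why the effect is worth testing empirically.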
  • 23. 1. replication 2. registration 3. high power 4. bayesian statistics 5. alpha level 6. estimations 7. co-pilot multi-software approach 8. distinction between confirmatory and exploratory analyses 9. open science what? how? why? features of science 2.0 before data collection after data collection/during data analysis after data analysis
  • 25. replication what? do the same, following the experimental and analytical procedure as closely as possible → direct replication study
  • 26. replication what? things can never be exactly the same → indicate the known differences
  • 27. replication how? communicate with the original authors; ask for information and feedback. ideal for a master's thesis: not much focus on creativity but more on skill building
  • 28. replication why? - lots of variability between studied phenomena - lots of variability between labs/replications - what can we learn from a single study?
  • 29.
  • 31. registration what? we specified all research details before data collection
  • 32. registration what? we specified all research details before data collection data collection • sample size planning (stopping rule; see below)
  • 33. registration what? we specified all research details before data collection data collection • sample size planning (stopping rule; see below) • recruitment: how to recruit participants (e.g., pool)
  • 34. registration what? we specified all research details before data collection data collection • sample size planning (stopping rule; see below) • recruitment: how to recruit participants (e.g., pool) data analysis • data cleaning plan (when to delete data)
  • 35. registration what? we specified all research details before data collection data collection • sample size planning (stopping rule; see below) • recruitment: how to recruit participants (e.g., pool) data analysis • data cleaning plan (when to delete data) • analysis plan
  • 36. registration what? we specified all research details before data collection data collection • sample size planning (stopping rule; see below) • recruitment: how to recruit participants (e.g., pool) data analysis • data cleaning plan (when to delete data) • analysis plan - which exact hypotheses to test
  • 37. registration what? we specified all research details before data collection data collection • sample size planning (stopping rule; see below) • recruitment: how to recruit participants (e.g., pool) data analysis • data cleaning plan (when to delete data) • analysis plan - which exact hypotheses to test - which variables to use
  • 38. registration what? we specified all research details before data collection data collection • sample size planning (stopping rule; see below) • recruitment: how to recruit participants (e.g., pool) data analysis • data cleaning plan (when to delete data) • analysis plan - which exact hypotheses to test - which variables to use - analyses for testing the hypotheses
  • 39. registration what? we specified all research details before data collection data collection • sample size planning (stopping rule; see below) • recruitment: how to recruit participants (e.g., pool) data analysis • data cleaning plan (when to delete data) • analysis plan - which exact hypotheses to test - which variables to use - analyses for testing the hypotheses • code for the analyses
  • 40. registration what? we specified all research details before data collection experimental details (optional) • experimental materials - stimuli (questions) - exact instructions
  • 41. registration what? we specified all research details before data collection experimental details (optional) • experimental materials - stimuli (questions) - exact instructions • experimental procedure - randomization etc
  • 42. registration how? • Registered Report - new format of publishing - review prior to data collection - accepted papers then are (almost) guaranteed publication if the authors follow through with the registered methodology → AIMS Neuroscience; Attention, Perception & Psychophysics; Cortex; Drug and Alcohol Dependence; Experimental Psychology; Frontiers in Cognition; Perspectives on Psychological Science; Social Psychology; …
  • 43. registration how? • Registered Report • “independent” pre-registration e.g., Open Science Framework (OSF) - open source software project - free
  • 44.
  • 45.
  • 46.
  • 47.
  • 48.
  • 49.
  • 50. registration why? prevent readers from thinking you might have exploited your researcher degrees of freedom extreme flexibility in • data collection • e.g., data peeking • data analysis • what is an outlier? • when to add covariates? • when to transform the data? • reporting • did you report all variables, conditions, experiments, analyses?
  • 51.
  • 52. registration why? prevent readers from thinking you might have exploited your researcher degrees of freedom exploiting researcher degrees of freedom can lead to an increase in false positives -- without adjustment, a true null hypothesis will always be rejected if sampling continues long enough if you can convince readers that you didn't exploit the researcher degrees of freedom, they will put more confidence in your result; it will be seen as more trustworthy
  • 54. high power what? among the decisions you have to make and register in advance is when you’ll stop collecting data our stopping rule was based on fixing the sample size fixing the sample size was based on a power calculation power = P(reject null hypothesis | null hypothesis is false)
  • 55. high power what? as far as constraining the researcher degrees of freedom is concerned, low power is as good as high power we aimed for high power (95%)
  • 56. high power how? compute sample size needed to achieve desired power level - given the statistical test - given the significance level - given the effect size (e.g., based on previous studies)
  • 57. high power how? compute sample size needed to achieve desired power level - given the statistical test - given the significance level - given the effect size (e.g., based on previous studies) G*Power, R packages (pwr), …
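As a sketch of what tools like G*Power or the pwr package compute, here is the normal-approximation sample-size formula for a two-sided, two-sample t test (the effect size of 0.5 and the defaults below are illustrative, not from the study; the exact t-based answer is slightly larger):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.95) -> int:
    """Approximate n per group for a two-sided, two-sample t test
    (normal approximation; exact t-based calculations give slightly more)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for the significance level
    z_power = z.inv_cdf(power)           # quantile corresponding to the desired power
    return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

print(n_per_group(0.5))   # medium effect, 95% power → 104 per group
```

The formula makes the trade-offs visible: halving the effect size quadruples the required sample, which is why guessing the effect size (slide 88) is the hard part.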
  • 58. high power why? • low power reduces the probability of discovering effects that are there • low power reduces the probability that a significant result reflects a true effect (button et al., 2013) • low power leads to an inflation of estimated effect sizes • only overestimates will be significant
  • 59. there are other stopping rules! ways to decide when to stop collecting data: - when I have a participant with the name of my mother - availability: when the day/test week is over - when I have a fixed number of participants: 100; based on power calculations; based on accuracy in parameter estimation
  • 60. in general, the most important thing is that you do it, more than how you do it all these stopping rules are equally valid for constraining the researcher degrees of freedom but some will lead to better research than others: more informative; more precise and less biased estimates of e.g. effect size
  • 62. NHST & Bayesian testing what? we did not just use Null Hypothesis Significance Testing (NHST, i.e., p-values) but also Bayes factors (the p-value of Bayesian statistics) the core of bayesian statistics is bayes' rule: p(a|b) = p(b|a) p(a) / p(b) bayes treats probabilities as degrees of belief
  • 63. NHST & Bayesian testing what? we can use bayes to compute the belief in our hypothesis H, given the data d: p(H|d) = p(d|H) p(H) / p(d) bayes' rule tells us how we should update our belief about H after observing data
  • 64. NHST & Bayesian testing how? • several online tools (e.g., Rouder’s website) • BayesFactor package in R (Morey & Rouder, 2014)
  • 65. NHST & Bayesian testing why? • p(H|d) seems exactly what science needs • evidence for null hypothesis • intuitive to interpret • consistent: correct answer in large sample limit • exact for small sample size • clear interpretation of evidence • based on the observed data, not on hypothetical replications of experiments
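Bayes' rule can be worked through on a toy problem (entirely made up, unrelated to the study's actual Bayes-factor analysis): deciding between a fair coin and a heads-biased coin after observing 6 heads in 8 flips.

```python
from math import comb

heads, flips = 6, 8                  # hypothetical data
prior_fair = prior_biased = 0.5      # equal prior belief in each hypothesis

def likelihood(p_heads: float) -> float:
    """p(d | H): binomial probability of the observed data under a hypothesis."""
    return comb(flips, heads) * p_heads**heads * (1 - p_heads)**(flips - heads)

# bayes' rule: p(H | d) = p(d | H) p(H) / p(d)
p_d = likelihood(0.5) * prior_fair + likelihood(0.75) * prior_biased
posterior_biased = likelihood(0.75) * prior_biased / p_d

# the bayes factor is the likelihood ratio: evidence for "biased" over "fair"
bayes_factor = likelihood(0.75) / likelihood(0.5)
print(posterior_biased, bayes_factor)   # ≈ 0.74 and ≈ 2.85: only mild evidence
```

Real Bayes-factor analyses (e.g., the BayesFactor package) integrate the likelihood over a prior on the effect size rather than comparing two point hypotheses, but the updating logic is the same.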
  • 68. 2.6 test and estimate
  • 69. NHST & estimation what? we did not just use p-values and Bayes factors but also effect size estimates and their confidence intervals how? Matlab, R, SPSS, ESCI (Cumming, 2013), … why? diverts focus from the presence of an effect to the more informative size of an effect and its precision
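A minimal sketch of the "estimate, don't just test" idea, with made-up numbers (the real study used Matlab/R/ESCI): compute a standardized effect size and an interval around the mean, rather than only a yes/no significance verdict.

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

# hypothetical per-person improvements (e.g., error of one guess minus error of the average)
diffs = [1.2, 0.8, -0.3, 2.1, 0.9, 1.5, 0.2, 1.1, -0.5, 1.7]

m, s, n = mean(diffs), stdev(diffs), len(diffs)
cohens_d = m / s                                     # standardized effect size
z = NormalDist().inv_cdf(0.975)                      # normal approximation; a t interval is wider
ci_95 = (m - z * s / sqrt(n), m + z * s / sqrt(n))   # 95% CI for the mean improvement

print(cohens_d, ci_95)   # how big the effect is, and how precisely it is estimated
```

The interval carries the test along for free (does it exclude zero?) while also showing the plausible range of effect sizes, which a bare p-value hides.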
  • 71. co-pilot multi-software approach what/how? • two people independently processed and analyzed the same data … • … using different software (MATLAB, SPSS) why? decreases the likelihood of errors errors are easily made: 50% of published papers in psychology contain reporting errors (bakker & wicherts, 2011) e.g., errors in sample size planning (G*Power)
  • 72. 2.8 distinguish between confirmatory and exploratory
  • 73. clear distinction between confirmatory and exploratory (post hoc) analyses what? we indicated whether the analyses were specified before seeing the data, or based on the data (see registration) how? be transparent easy when having registered why? you still want to report analyses you thought about too late! they can be useful for generating hypotheses
  • 75. open science what? we made our full research output publicly available to everybody - experimental materials (stimuli, questionnaire items, instructions, and so on) - raw data - processed data - code for data processing - code for confirmatory analyses - code for post-hoc analyses - paper
  • 76. open science how? Open Science Framework (public) -online repository -free -under development goal: share and find research materials make study materials (experimental material, data, code, …) public so that other researchers can find, use and cite them several other sharing possibilities
  • 77.
  • 78. open science how? Open Science Framework (public) make sure OSF is not the only place where your stuff is! who knows what will happen with these servers in 20 years? unclear what the best data format is
  • 79. open science why? • the current standards of what is considered research output (paper with summary statistics and conclusion) are not inspired by desiderata for good science, but rather by arbitrary and outdated technical constraints (paper + publishing costs) • if we were to start doing science now, in the computer and internet age, we would probably set a completely different standard
  • 80. open science why? • facilitates - replication studies - follow up studies (e.g., use same stimuli) - new or re-analyses - meta-analyses - accumulation of scientific knowledge - detection of errors or fraud • yields useful teaching material
  • 81. open science why? • increases visibility • increases citability • decreases number of emails about experiments, data or analyses, … • is a moral obligation to tax payer (publicly funded research is a public good)
  • 82.
  • 85. 1. replication 2. registration 3. high power 4. bayesian statistics 5. alpha level 6. estimations 7. co-pilot multi-software approach 8. distinction between confirmatory and exploratory analyses 9. open science what? how? why? why not? features of science 2.0 before data collection after data collection/during data analysis after data analysis
  • 86. replication why not? - it is impossible! -- things can never be exactly the same (e.g. population) -- the details of the original study are lost (e.g., which questions were used in a post-experimental interview) - it is a waste of time and resources! -- should we value novelty more than truth? - it is not good for my career -- can I publish this?
  • 87. registration why not? • it takes time, thought and effort • it is harder than it seems! • writing the code help a lot • exploration might be the only possibility • domain specific (qualitative studies? complex studies?)
  • 88. high power why not? • can be hard to guess expected effect size or trust published effect size • often requires large sample size • collaborate! • restricted to NHST framework
  • 89. Bayes it why not? • priors • education? • Bayes factors are hard to compute
  • 90. Bayes it why not? • priors • education? • Bayes factors were (no longer are) hard to compute
  • 91. Open up why not? sharing data takes time sharing data might jeopardize a potential future publication but: embargo period
  • 92. Other (co-pilot, alpha, confirmation vs exploration, estimation) why not? lack of education old habits takes time and is not rewarded
  • 94. this illustration used a very simple study • replication study • easily administered 8-item questionnaire • basic t test this made pre-registration, sample size planning, high power, estimation, bayesian statistics, sharing protocol, code and data, co-pilot multi-software, etc probably much easier than in most other studies but everything is also possible (though harder) for non-replication studies! feasibility will depend on the type and scope of your research
  • 95. science 2.0 is no package deal ---you can register, but not share ---you can share, but not use bayes some practices are graded --- you can register without code --- you can estimate without reporting CI
  • 96. 3.3 what should i take home?
  • 97. • the (psychological) literature is littered with spurious findings • which results can you trust? – has this result been replicated? – did the researchers exploit their researcher degrees of freedom? – is the evidence based on NHST with a liberal alpha level? – was the analysis correct (e.g., at least, check dfs; better do the analysis yourself with the shared data and code) – ???
  • 98. 3.4 is there a crowd within effect?
  • 99. Is there a crowd within effect? successful replication • error guess 1 > error average • error guess 2 > error average
  • 100. the end (or the beginning!)

Editor's Notes

  1. some of these slides are based on slides made by francis tuerlinckx, sara steegen, gert storms
  2. many labs project
  3. wetzels et al 2011
  4. agenda for open research