Written and presented by Tom Ingraham (F1000) at the Reproducible and Citable Data and Model Workshop, Warnemünde, Germany, September 14–16, 2015.
4. Enabling early access to results
Pre-print server: early access to results & data; precedence for authors; multiple versions; no Editor-in-Chief
OA journal: editorial input; peer review & indexing; publication ethics; perfected paper; OA
12. Credit for data and software
Data availability:
Repository: Data entry title, PID [Ref]
[Ref] Author Names. Data entry title [v]. Repository. Year. PID
Cite the data or the Data Article?
Clear accessibility info and actively encourage data/software citation
Software availability:
Software access (opt): URL
Latest source code (opt): VCS URL
Archived entry: Repository: Software title, PID [Ref]
Licence: OSI
[Ref] Author Names. Software title [v]. Repository. Year. PID
17. SUMMARY: F1000RESEARCH’S APPROACH TO OPEN SCIENCE
• Near-immediate publication
Open, invited peer review conducted after publication
• Transparent peer review
Signed referee reports and author responses published alongside each article
• No editorial bias
Publication of all findings, including null results, software tools and data notes
• Data included
All research articles accompanied by the data on which they are based
TL;DR: F1000Research is the love child of arXiv, GitHub, and an open access journal.
All articles
‘New research’ is rarely new by the time it’s published.
Peer review is the rate-limiting step.
Journals reject for silly reasons, like scope or a paper not interesting that particular editor. For a discipline that aims for objectivity, the degree of subjectivity in science publishing is incredible. Alternative: pre-print servers.
F1000Research combines the advantages of a pre-print server with the advantages of an OA journal on a single, open platform
No splintering of discussions etc.
Referees are invited experts
Referee reports are visible and named
Referee reports are citable: credit
We have had reviews longer than the articles themselves.
Peer review should be a productive discussion, not a ruthless take-down.
Reviews put paper in context, strengths and limitations
Reduces bias among reviewers (social pressure for objectivity)
More constructive, quality reviews
Published reports can help teach young researchers
Proves peer review has been done! (Bohannon sting)
Benefits for reviewers
Display informed opinion of the work
Demonstrate experience as a reviewer
Credit for their report (reports citable)
Average rate of programming errors: 15–50 per 1,000 lines of delivered code (professional developers)
Papers and peer reviews are licensed CC BY.
Prevent attribution stacking, but still encourage dataset citation (including within article)
Reject if not open
Data usage to date on Figshare: 421 uploads, 43,000 views (through Figshare alone, not counting views through F1000Research), and 1,300 downloads; 61% from the US, 11% from the UK.
This will appear everywhere: all the colleagues on your project will see it. Colleagues don’t have to be subscribers to participate in projects (they just can’t create their own).