8. Bad news for researchers?
• You’re under pressure to justify
– Yourself
– Your research
• Both internally and externally
9. Good news for researchers?
Funders and institutions are increasingly
looking for or considering other types of:
• Impact
• Research output
• Contribution
12. Example: social & mainstream media
Blogs, reviews, comments
Including Faculty of 1000, PubPeer, MathOverflow and the world’s largest curated index of academic blogs.
Newspapers & magazines
International titles, both mainstream and niche.
Social media
13. Example: policy documents
World Health Organization (WHO)
“WHO policy on collaborative TB/HIV activities:
guidelines for national programmes and other
stakeholders”
National Institute for Health and
Care Excellence (NICE)
“Delivering Accident Prevention at local level in the
new public health system: Road safety policy and
links to wider objectives”
Intergovernmental Panel on
Climate Change (IPCC)
“Managing the Risks of Extreme Events and
Disasters to Advance Climate Change Adaptation”
14. Example: popular non-fiction
Gulp
“‘America’s funniest science writer’ (Washington Post)
takes us down the hatch on an unforgettable tour.”
The Black Swan
“Since being published in 2007, it had sold close to
3 million copies as of February 2011. It spent 36 weeks
on the New York Times Bestseller list: 17 weeks as
hardcover and 19 weeks as paperback. It was
published in 32 languages.”
Thinking Fast and Slow
“The basis for his Nobel prize, Kahneman developed
prospect theory to account for experimental errors he
noticed in Daniel Bernoulli's traditional utility theory.”
17. • To gauge the overall popularity of the article
• 87% of respondents strongly agreed or agreed
• To discover and network with researchers who are interested in
the same area of work
• 77% strongly agreed or agreed
• To understand a paper’s influence on the scientific community
• 66% strongly agreed or agreed
• To determine what journal to submit their next paper to
• 60% of respondents strongly agreed or agreed
• To determine areas of research to explore
• Only 37% of respondents strongly agreed or agreed
21. People are very keen to relate it to citations!
Scholarly altmetrics correlate with citations.
Public engagement / policy & practice
altmetrics don’t.
25. "The more any
quantitative social
indicator (or even
some qualitative
indicator) is used for
social decision-
making […] the more
apt it will be to distort
and corrupt the social
processes it is
intended to monitor.”
Donald Campbell,
1976
38. What can be done?
• Make underlying data available, visible
• Only track sources that can be audited
– Some interesting sources fail this test e.g.
downloads and private Facebook activity
• Automatically flag up suspicious activity, then
manually curate
• Have a standard process in place to deal with
gamed articles, notify the journal
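The “automatically flag, then manually curate” step could be sketched as a simple heuristic like the one below. This is purely an illustration, not Altmetric’s actual process: the `flag_suspicious` function, the thresholds, and the `author_total_mentions` field are all invented for this example.

```python
# Hypothetical heuristic: flag a paper for manual review when most of
# its mentions come from "single-purpose" accounts that have never
# mentioned anything else. Threshold values are arbitrary examples.

def flag_suspicious(mentions, min_mentions=20, max_single_purpose=0.5):
    """Return True if the paper's mention pattern looks gamed.

    Each mention carries 'author_total_mentions': how many mentions
    that account has ever made across the whole platform.
    """
    if len(mentions) < min_mentions:
        return False  # too little activity to judge either way
    single_purpose = sum(
        1 for m in mentions if m["author_total_mentions"] == 1
    )
    return single_purpose / len(mentions) > max_single_purpose

# A burst of 30 mentions, 24 of them from accounts with no other
# activity, would be queued for a human to look at.
suspect = ([{"author_total_mentions": 1}] * 24
           + [{"author_total_mentions": 50}] * 6)
```

The point of the design is that the automatic part only shortlists; the judgment call (intention, who was responsible) stays with a human curator.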
So altmetrics means different things to different people
As an author service, the data is used in one way when the paper first comes out, and in a different way when it comes to prepping for a grant or when a department head is looking at it
Here’s one way altmetrics is used… in an ego driven way. You’re an author, you wrote something, you want to see a reaction. It’s natural.
But there is another, more powerful force driving altmetrics adoption.
I first saw this named by Paul Wouters, head of CWTS in Leiden, on his blog
As a quick aside – you see altmetrics crop up all over the place nowadays in relation to this. And the driver has been publishers – it’s publishers that have been pushing this forward and exposing people to the data: PLoS, the people we work with… Elsevier are doing an awful lot of interesting work, as Mike will say.
And article level metrics are definitely not ‘alternative’ any more in STM publishing, which is crazy when you think about the fact that Jason only coined the phrase in 2010
At an article level, but also at a group or institutional level
While we might aggregate by journal or DOI prefix
You may imagine public scientific discourse to be relatively polite
But actually… it can be pretty brutal
Donald Campbell was a social psychologist in the 70s. Sometimes people quote Goodhart’s law, but I like this version better.
Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.
We only count 1 mention from each person per source, so if you tweet about the same paper more than once, we will ignore everything but the first.
A newspaper article contributes more to the score than a blog post, which contributes more than a tweet.
We also look at whether or not there's any bias towards a particular journal or publisher, and at who the audience is.
e.g., A doctor sharing a link with other doctors counts for far more than a journal account pushing the same link out automatically.
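The dedup-and-weight rules above can be sketched roughly like this. It is a minimal illustration only: the weight values, field names, and `altmetric_style_score` function are assumptions for the example, not Altmetric’s actual proprietary scoring.

```python
# Illustrative sketch only: weights and field names are assumed,
# not Altmetric's real algorithm.

def altmetric_style_score(mentions):
    """Score a paper from a list of mentions.

    Each mention is a dict with 'source' (e.g. 'news', 'blog',
    'twitter') and 'author' (who posted it). Repeat mentions by the
    same person in the same source are counted only once.
    """
    weights = {"news": 8, "blog": 5, "twitter": 1}  # assumed weights
    seen = set()
    score = 0
    for m in mentions:
        key = (m["source"], m["author"])
        if key in seen:          # one mention per person per source
            continue
        seen.add(key)
        score += weights.get(m["source"], 1)
    return score

mentions = [
    {"source": "twitter", "author": "@alice"},
    {"source": "twitter", "author": "@alice"},  # duplicate, ignored
    {"source": "blog", "author": "bob"},
    {"source": "news", "author": "Daily Planet"},
]
```

With these assumed weights the sample above scores 1 + 5 + 8 = 14; the second tweet from the same account adds nothing. Audience weighting (the doctor vs. automated journal account example) would be a further per-mention modifier on top of this.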
Cited 15 times, but by far the most popular paper on the internet in terms of mentions. It actually broke our db, because the maximum record size was 64MB.
And some are more important than others to different audiences, news is a big one across the board
… But authors can take that to an extreme
Here’s an example of ‘incidental’ gaming.
We don’t see that much gaming – but it does occasionally happen. We do notify journals when we see dodgy activity, but I don’t think any of them have imposed sanctions. It’s difficult: what was the intention? Was it the author or somebody else?