3 like 0 dislike
in Open Science by (65 points)

In this question, Gavin Simpson links to the SF Declaration. While the document is clear in its desire to de-emphasize journal impact factors, it also recommends exploring new article-level metrics.

What are some examples of article-level metrics or methods that could serve a quality-control function similar to that of the journal impact factor?



by (515 points)
This is an important question. It's fun to entertain ideas about how we could implement the next generation of tools for aggregating and consuming scientific literature: the "Amazon" or "StackExchange" of science, complete with rating and review systems. But as much as JIF is flawed and the current academic system can be gamed, any new system has the potential to be gamed as well, in ways that we may not initially anticipate. In my opinion, a real revolution in open science requires an in-depth dialogue on this and related questions.

by (820 points)
If you are reducing this to metrics, then I believe you are doing it wrong. As David Colquhoun (UCL) is quick to mention (in reference to academic appointments), *the best places read papers, ignore journals*. That pithy remark sums up my feelings here too. Even altmetrics, for which Colquhoun has little to no time at all, fail at the first step of evaluating the merits of the work; instead they quantify those who shout loudly and/or have a good social media presence.

by (65 points)
@GavinSimpson Not restricted to actual metrics; edited for clarity.

by (820 points)
@PatW. Right-o. Thanks for the clarification. I'll try to convert my comment here into an answer.

by (155 points)
If you think that this thread should be migrated to Academia or another SE site because the OpenScience beta is closing, please edit the list of questions shortlisted for the migration [here](http://meta.openscience.stackexchange.com/questions/73/).


2 Answers

4 like 0 dislike
by (820 points)

The most appropriate method is the one that almost surely will not be widely accepted or used: if you want to understand the quality and impact of a piece or body of work, you need to read that work and evaluate it within the context of the field in which it was published.

The main argument against this is the workload involved, but having impartial experts judge contributions to a field is likely the least gameable of the potential options.

The problem with automated metrics, such as the one @Jure Triglav mentions, is that links or citations in and of themselves neither constitute agreement with nor ascribe merit to a work. You only need to look at the top result of a Google search for "what happened to the dinosaurs", which is this piece of tripe. At one point, Google even gave it special prominence, quoting from that piece of tripe in a card in the search results: see this comment piece for how it used to look.

Further problems relate to the vagaries of citations:

  • scientists are often lazy when it comes to citing past work;
  • the number of citations is often limited by journals;
  • scientists often forget literature older than a few years;
  • citations to data or software are often not allowed.

Links still need to be made to publications, and that is done through citations, with all the difficulties they bring.

Whilst altmetrics can provide some support for contributions beyond the traditional scientific literature, such as software, slide decks, etc., at best they are supplementary to a proper evaluation of the unique contribution that a researcher has made to a field. At the moment that requires considerable human intervention.



by (140 points)
I agree that having expert reviews is the least gameable option, but it needs to be said that it's still not ungameable (ugh), as the definition of "expert" itself relies on individually gameable criteria. That aside, ideally these reviews would be publicly searchable, and collected and displayed in an accessible fashion, to prevent duplication of work (PubPeer?).

by (820 points)
@JureTriglav Agreed; all kinds of bias, subconscious or otherwise, can and do creep into these evaluations, so mechanisms need to be in place to disincentivize such behaviour or biases. Open review is one step in this direction.

2 like 0 dislike
by (140 points)

Any system can be gamed, but some are harder to manipulate, or require collusion within a large, and therefore brittle, group.

A good article-level metric would be a PageRank-style score computed over the citation graph. A paper that is linked not just by a large number of papers, but by a large number of well-linked papers, is almost certainly an important paper. While it's possible that it's important in a negative sense, that outcome is far less likely, and becomes less likely still as the score grows.

It would be very interesting to compare such PageRank-style scores with classic metrics such as the raw number of citations, and also with newer metrics such as the number of views, downloads, tweets, likes, etc. (a rough sketch of such a comparison is given below).

In other words, the world already relies fairly successfully on PageRank for the vast majority of importance- or merit-based sorting in information lookups, and the scientific literature is merely a specific instance of this.
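To make the idea concrete, here is a minimal sketch of a PageRank-style score computed over a toy citation graph, shown alongside the classic raw citation count for comparison. The paper IDs, the citation links, and the `pagerank` helper are all made up for illustration; a real implementation would run over an actual citation database.

```python
# Minimal sketch: PageRank-style scoring of a toy citation graph.
# The paper IDs (P1..P5) and their citation links are hypothetical.

# citations[p] lists the papers that p cites (outgoing links).
citations = {
    "P1": ["P2", "P3"],
    "P2": ["P3"],
    "P3": [],
    "P4": ["P1", "P3"],
    "P5": ["P3", "P4"],
}

def pagerank(graph, damping=0.85, iterations=50):
    """Plain power-iteration PageRank. A paper with no outgoing
    citations ("dangling") spreads its score evenly over all papers."""
    papers = list(graph)
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in papers}
        for p, cited in graph.items():
            if cited:
                share = damping * rank[p] / len(cited)
                for q in cited:
                    new_rank[q] += share
            else:
                share = damping * rank[p] / n
                for q in papers:
                    new_rank[q] += share
        rank = new_rank
    return rank

scores = pagerank(citations)

# Classic metric for comparison: raw citation counts (in-degree).
counts = {p: sum(p in cited for cited in citations.values()) for p in citations}

for p in sorted(scores, key=scores.get, reverse=True):
    print(f"{p}: PageRank-style score {scores[p]:.3f}, cited by {counts[p]} papers")
```

On real data the two rankings can diverge: a paper cited a few times by highly ranked papers can outrank one cited many times by poorly connected ones, which is exactly the property argued for above.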



