Public Contribution Rating

A BETTER WAY TO JUDGE SCIENTISTS?


 * __Posted 1 May 2008 by Noam Y. Harel__
 * **Fairly apportioning credit without creating incentives for secrecy and other counterproductive practices is the most difficult challenge facing SCIEnCE. Objective ratings should be the basis for job hiring, promotion, tenure, and of course grant funding. SCIEnCE hopes soon to distribute its own funding to scientists and labs based on their proposals and Public Contribution Ratings (PCR).**

Just as with baseball statistics, there is no perfect way to objectively rate a scientist's overall contributions. One thing is for sure - the number of first-author publications and grant dollars won is not enough to go by. We need a metric based on a more comprehensive set of criteria by which to grade scientists (and their institutions).

To come up with the best formula for PCR, we need a communal effort. We need consensus on the types of criteria to be included and their relative importance. We need algorithm-minded scientists (the equivalent of sabermetricians in baseball) to convert these qualitative criteria into a quantitative figure that can be updated automatically in real time.

**Again, the importance of establishing an objective, comprehensive, automatically updated rating factor for scientists cannot be overstated. The PCR aims to take much of the mystery and subjectivity out of faculty hiring, promotion, and grant funding.**

**Below are SCIEnCE's initial thoughts on PCR criteria. Anything, including the PCR name/acronym itself, is up for modification. Please contribute your thoughts here!**

SCIEnCE proposes PCR as a cumulative points total rather than some sort of 'average'. Below are preliminary proposals for methods of assigning points for various types of contributions to science:

**Peer-reviewed publication criteria:**

 * Each peer-reviewed, primary research paper is worth 100 total points.
 * For any paper with //n// authors, each author will get (1/(n+2))*100 points, with an extra (1/(n+2))*100 points for both the first and last authors. Authors that contribute equally to a paper split points accordingly.
 * **Example: ** Authors A, B, C, D, and E. Each author gets (1/(5+2))*100 or 14.3 points, with an extra (1/(5+2))*100 or 14.3 points (28.6 total) for Authors A and E. If Authors A and B are co-first authors, then each would get 21.4 points.
 * **Issues/Responses:**
 * Different scientific disciplines feature papers with different numbers of authors.
 * Within individual disciplines, scientists will be on relatively equal ground in terms of number of authors per paper.
 * Authors on papers produced by larger/collaborative groups will receive less credit, defeating the purpose of SCIEnCE to inspire more group collaboration.
 * A minimum of, say, 10 points per author (20 for first or last author) could be instituted. Yes, this would add up to more than 100 points for papers with 9 or more authors, but it would encourage the type of large-group collaboration that SCIEnCE envisions.
 * However, a 10-point minimum and >100 points per paper might motivate senior authors to gratuitously add labmates to the paper to help the lab's overall PCR.
 * This could magnify the importance of first-authorship even more than the current system.
 * True. On the other hand, co-first-authors listed second in the actual publication will at least get a tangible equal share of credit even though the paper becomes known as "1st co-1st-author et al."
 * Assigning points by author order does not define the nature of the contribution made by each author.
 * SCIEnCE couldn't agree more strongly. Ideally, journals could get together and agree on a standardized set of categories by which to break down author contributions on each paper. For example (inspired by requirements stipulated by journals such as Archives of Neurology):
 * Research Design
 * Data Collection
 * Data Interpretation
 * Writing
 * Editing
 * Obtaining Funds
 * Each of these categories would be graded exactly as above for the overall paper (with a sub-score of 100 points per category). In this way, PIs would generally have high scores in the Design, Writing, and Funds categories, whereas technicians would generally score high in the Data Collection category, etc.
 * What about book chapters? Reviews?
 * Same as above, but 75 total points per publication.
 * Should the impact factor of the journal be used to weight the paper's score, e.g., (100 points)*(journal Impact Factor)?
 * Probably not, as this would tend to exacerbate 'CNS Disease' (Harold Varmus' apt terminology for the obsession with publishing in Cell, Nature, or Science).
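The allocation rules above (equal base shares of 1/(n+2), bonus shares for first and last authors split among co-firsts and co-lasts, and the proposed 10-point minimum) can be sketched in Python. The function name and `floor` parameter are illustrative, not part of the proposal:

```python
def paper_points(authors, first, last, total=100, floor=None):
    """Divide a paper's points among its authors.

    Each of n authors gets total/(n+2); the first-author and
    last-author bonus shares (each also total/(n+2)) are split
    among co-first and co-last authors. `floor` models the
    proposed 10-point minimum (doubled for first/last authors).
    """
    n = len(authors)
    share = total / (n + 2)
    points = {a: share for a in authors}
    for a in first:
        points[a] += share / len(first)  # split the first-author bonus
    for a in last:
        points[a] += share / len(last)   # split the last-author bonus
    if floor is not None:
        for a in authors:
            minimum = 2 * floor if (a in first or a in last) else floor
            points[a] = max(points[a], minimum)
    return points
```

With five authors this reproduces the worked example: 14.3 points for a middle author, 28.6 for the sole first and last authors, and 21.4 each for two co-first authors.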

**Citation criteria:**

 * Rather than values such as the h-index, perhaps a cumulative point total should be assigned for citations of an author's work:
 * 25 points for each time a paper is cited, divided among authors using the above formula (1/(n+2))*25.
 * 5 points for each time a paper is cited to the second degree (the paper citing the paper is itself cited by a subsequent paper); again divided among authors using the above formula.
 * **Issues/Responses:**
 * Citations of a paper by any of its own authors (self-citations) do not earn points.
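The citation rules above reuse the same per-author division. A minimal sketch, with an illustrative function name, assuming self-citations are removed from the counts before scoring:

```python
def citation_points(n_authors, direct=0, second_degree=0):
    """Per-author citation credit under the draft rules: 25 points
    per direct citation and 5 per second-degree citation, each
    divided by the same 1/(n+2) formula. Self-citations are assumed
    to have been filtered out of the counts beforehand."""
    share = 1 / (n_authors + 2)
    return (direct * 25 + second_degree * 5) * share
```

For a five-author paper, each author earns 25/7 (about 3.57) points per direct citation and 5/7 (about 0.71) points per second-degree citation.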

**Replication/Refutation criteria:**

 * The reproducibility of a paper's findings is critical. How to turn this into a number is difficult...**any opinions?**

**Collaborative research criteria:**
Wiki-style sites such as ShareScienceIdeas will allow users to post new research proposals as well as make suggestions and collaborate on others' proposals. Contributions will be judged by the scientific community in the style of Digg or Amazon. Here is a preliminary breakdown for each type of contribution:
 * First of all, no negative points will be applied for negative ratings. This should avoid vengeful flamewars. However, the text of negative comments will still be posted publicly.
 * Each new proposal that meets minimum requirements (completing required fields and avoiding obvious plagiarism or pseudoscience) will receive a baseline score of, say, 100 points (the same as a peer-reviewed publication, to encourage public proposal development).
 * Each new comment/suggestion/revision on an existing proposal will earn anywhere from 5-50 points, scaled by the type of contribution. For example, adding a sentence to a Background section should earn less than adding a research design section for an entire Specific Aim.
 * How should contributions be weighted differently (if at all) according to whether the author is contributing to a field outside his or her own area of expertise?
 * Other collaborative factors that should earn credit:
 * Peer-reviewed publications in Open Access journals.
 * Conducting Open Notebook Science.
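A minimal sketch of the collaborative scoring rules above. The contribution types and their baseline weights are hypothetical (only the 100-point proposal baseline and the 5-50 range come from the text), and treating community up-votes as added points is an illustrative assumption, not something the proposal specifies:

```python
# Hypothetical contribution types and weights; only the 100-point
# proposal baseline and the 5-50 range come from the proposal text.
BASELINE = {
    "new_proposal": 100,  # meets minimum requirements
    "sentence_edit": 5,   # e.g., a sentence added to a Background section
    "section_added": 25,
    "aim_design": 50,     # research design for an entire Specific Aim
}

def contribution_score(kind, ratings):
    """Each contribution earns its baseline weight; community up-votes
    add points (an illustrative assumption); negative ratings never
    subtract points -- their text is simply posted publicly."""
    score = BASELINE[kind]
    score += sum(r for r in ratings if r > 0)  # negative ratings ignored
    return score
```

Ignoring negative ratings in the tally, while still publishing their text, implements the no-negative-points rule intended to avoid vengeful flamewars.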

**Other criteria that need scoring:**

 * Grant reviews
 * Paper reviews
 * Journal editors
 * Teaching
 * Patents

**Group PCR:**
PCRs will be tabulated for groups as well. Group ratings will pyramid from labs to divisions, to departments, to entire institutions. The higher the cumulative score, the more likely that lab, department, or institution is to receive grant funding for equipment, reagents, and personnel. Of course, the goal of group PCR scores is to motivate PIs, Chairpersons, and Deans to induce more members of the scientific community to join the Open Science movement.