Dear Editor,

The strongest predictor of a journal’s F1000 score is simply the number of article evaluations submitted by F1000 faculty reviewers. Irrespective of the scores those reviewers assign, the number of article evaluations alone explains more than 91% of the variation in F1000 Journal Factors (FFJs) (R² = 0.91; R = 0.96). In contrast, the Impact Factor of the journal explains only 32% of FFJ variation (R² = 0.32; R = 0.57).

The rankings of journals based on F1000 scores also reveal a strong bias against larger journals, as well as a bias against journals that have only marginal disciplinary overlap with the biosciences.

Larger journals, represented by bigger circles in Figure 1, consistently rank lower than smaller journals receiving the same number of article evaluations. This is most apparent in the “inverted ice-cream cone” shapes in the lower left quadrant of the graph. As I argued previously [1], the method of calculating the F1000 Journal Factor makes it sensitive to enthusiastic reviewers of small journals. This method placed the Journal of Sex and Marital Therapy, which received 12 reviews for its 24 articles in 2010, far above Physical Review Letters, which received just 3 reviews for 3,099 articles.
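The arithmetic behind this size bias can be sketched quickly. The exact FFJ formula is not reproduced here; the sketch below simply assumes, for illustration, that any score normalized by article count will reward a small journal with a handful of enthusiastic reviews over a large journal with sparse coverage, using the two journals named above:

```python
# Hedged illustration only: the real F1000 Journal Factor weights reviewer
# ratings; this stand-in uses plain reviews-per-article density to show how
# per-article normalization favors small journals.

journals = {
    # name: (reviews received, articles published in 2010) -- figures from the letter
    "Journal of Sex and Marital Therapy": (12, 24),
    "Physical Review Letters": (3, 3099),
}

for name, (reviews, articles) in journals.items():
    density = reviews / articles  # reviews per published article
    print(f"{name}: {reviews}/{articles} = {density:.4f} reviews per article")
```

Under this toy normalization the small journal scores roughly 0.5 reviews per article, about 500 times the density of Physical Review Letters, even though it attracted far less total reviewer attention.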

Phil Davis.
Ithaca, NY.


1. PM Davis (2011). F1000 Journal Rankings — The Map Is Not the Territory. The Scholarly Kitchen (Oct 5).
