It seems the mathematicians don’t believe the numbers, at least when it comes to bibliometrics.
That’s according to a report released in June by the International Mathematical Union, which warns that the increasingly prevalent belief that statistics, often derived from citation data, are superior to more “complex judgments” in assessing scientific research is quite simply “unfounded.”
The IMU’s committee on quantitative assessment of research, chaired by John Ewing, executive director of the American Mathematical Society, was charged with commenting on the trend of using algorithmic evaluation as a way to measure quality.
The report, Citation Statistics, looked at measures like journal impact factors, which rate research according to the standing of the journal in which it is published, and citation counts, which purport to measure an academic’s output by how often his or her publications are cited by peers.
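For context, and not as part of the report itself: the standard two-year impact factor divides the citations a journal receives in a given year to articles it published in the previous two years by the number of citable items it published in those two years, while a citation count simply totals the citations to an author’s papers. The sketch below illustrates both calculations; the function names and figures are invented for illustration only.

```python
# Illustrative sketch of the two citation-based metrics discussed in the report.
# The example numbers are hypothetical and do not come from the report.

def impact_factor(citations_to_prev_two_years: int, items_published_prev_two_years: int) -> float:
    """Two-year impact factor: citations received this year to articles
    published in the previous two years, divided by the number of citable
    items published in those two years."""
    return citations_to_prev_two_years / items_published_prev_two_years

def citation_count(citations_per_paper: list[int]) -> int:
    """Raw citation count: the total citations across an author's papers,
    often treated as a proxy for the quality of that author's research."""
    return sum(citations_per_paper)

if __name__ == "__main__":
    # Hypothetical journal: 210 citations this year to the 140 items it published in the prior two years.
    print(round(impact_factor(210, 140), 2))  # 1.5
    # Hypothetical author whose five papers have been cited 12, 3, 0, 45, and 7 times.
    print(citation_count([12, 3, 0, 45, 7]))  # 67
```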
The use of “simple and objective” methods like bibliometrics has grown with the belief that hard numbers are inherently more accurate than any opinion derived from the more complex and possibly subjective judgments flowing from peer review.
The committee found that relying on statistics is not necessarily more accurate, since citation data can be misapplied and produce misleading information; that the objectivity of numbers can be “illusory,” and citation statistics can be even more subjective than peer review; and that sole reliance on citation data provides at best an incomplete and often shallow understanding of research.
Although not entirely dismissive of citation statistics as an appropriate tool to assess research, the committee noted that “numbers are not inherently superior to sound judgments,” and that using statistics alone because they are “readily available” is not justifiable.
“Citation data provide only a limited and incomplete view of research quality, and the statistics derived from citation data are sometimes poorly understood and misused,” the report concluded.