A review of whether academic journal rankings are being manipulated (essay)


Are publishers manipulating citation scores in order to inflate the status of their publications? Do they corrupt the ranking of scholarly journals?

While any allegations of cheating or other academic gamesmanship are cause for concern, journal rankings to date continue to provide a rough but useful source of information to a wide range of audiences.

Journal rankings help authors answer the ubiquitous “Where should I publish?” question. Tenure review boards also use rankings as evidence of visibility, recognition and even quality in the academic review process, especially for junior candidates. For them, journal ranking becomes an indicator when other, more direct measures of recognition and quality are not available. Since many tenure candidates have recent publications, journal ranking becomes a proxy measure for the eventual visibility of that research.

Yet it is easy to place undue reliance on quantitative ratings. The problem arises when journal rankings become a benchmark for research quality. In many fields, research quality is a multifaceted concept that cannot be reduced to a single quantitative metric. Imposing a single rule, for example that first quartile journals count as “high quality” journals while others do not, assigns more weight to journal rankings than they deserve and generates the temptation to inflate journal scores.

In an editorial in the journal Research Policy, Editor-in-Chief Ben R. Martin expressed concern that manipulation of journal impact factors undermines the validity of the Thomson Reuters Journal Citation Reports (JCR). He concludes that “… in light of the ever more devious tricks of the editors, the JIF [journal impact factor] indicator has lost most of its credibility.” The impact factor of a journal represents the average number of citations per article. The standard one-year impact factor is calculated by adding up the citations received by articles a journal published in the previous year and dividing by the number of articles it published.
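To make that arithmetic concrete, here is a minimal sketch of the calculation in Python. The function name and the example figures are my own illustration, not data from the JCR or Thomson Reuters’ actual procedure.

```python
# Minimal sketch of the one-year impact factor described above:
# citations received this year to articles the journal published last year,
# divided by the number of articles it published last year.
# (Illustration only; not the JCR's actual calculation pipeline.)

def one_year_impact_factor(citations_to_last_years_articles: int,
                           articles_published_last_year: int) -> float:
    """Average citations per article for the previous year's output."""
    if articles_published_last_year == 0:
        raise ValueError("The journal published no articles last year.")
    return citations_to_last_years_articles / articles_published_last_year

# Hypothetical journal: 150 articles published last year, cited 450 times this year.
print(one_year_impact_factor(450, 150))  # 3.0
```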

I share the distrust and unease that many academics feel about overreliance on journal impact ratings for purposes of academic assessment and tenure decisions. Yet while I’m not a fan of impact scores calculated over a one-year period, my research on journal rankings leads me to conclude that Martin’s concerns are overstated.

The two main sources of manipulation Martin discusses are coercive citations (where editors ask authors to add citations to the journal in question) and the creation of an online queue of articles, which artificially inflates the number of citations per published article. While any intentional manipulation of journal rankings is reprehensible, to date the overall effect of this type of behavior in practice is quite limited. I come to this optimistic conclusion after exploring a variety of indexes and data sources for an upcoming review of journals in my own field, sociology.

A clear hierarchy of sociology journals is evident regardless of the data source (Web of Science or Google Scholar) used. There is a large degree of similarity between the measures in describing this gradient, although many lower-ranked journals are clustered together with quite similar scores. Manipulating the one-year data has not changed the overall picture much (at least not yet), as the five-year measures yield very similar rankings. And even to manipulate the one-year impact factor, editors would have to insist that new authors cite the most recently published articles in their journal.

Basically, I doubt there is much manipulation of sociology journal rankings because, first, the raw scores have not swelled over time and, second, the relative ranking of over 100 journals has been reasonably stable. Individual journals here and there have moved slightly, but these changes are much more easily attributable to shifts in researchers’ interest in particular subfields and to editorial choices than to the efforts of any individual editor to game the system.

The main reason I dismiss concerns about manipulation is that different approaches to journal ranking produce a broadly similar picture of inequality. In my study, I use data from Google Scholar to calculate the h index of journals. This metric focuses on the most cited articles over an extended period rather than the average number of citations over a short period. It would not be easy for journal publishers to manipulate this measure, even if they tried.
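For readers unfamiliar with the h index, the short Python sketch below shows how it is computed from a list of citation counts. The function and the sample numbers are illustrative assumptions; the figures reported below come from Publish or Perish and Google Scholar, not from this code.

```python
# A journal has h index h if h of its articles have each been cited
# at least h times. Straightforward reference implementation for illustration.

def h_index(citation_counts: list[int]) -> int:
    """Return the largest h such that h articles have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for five articles.
print(h_index([10, 8, 5, 4, 1]))  # 4 -> four articles cited at least 4 times each
```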

Take citations to Martin’s own journal, Research Policy, for example. I obtain an h index of 246 over the period 2000 to 2015. This means that 246 articles published in this journal during this period were each cited at least 246 times. This is an impressive score, exceeding the visibility of the American Economic Review (h = 227 over the same period) and the American Journal of Sociology (h = 162). (I calculated all figures with A. W. Harzing’s Publish or Perish software, 2015, using data from Google Scholar.)

The statistics just cited reflect the remarkable visibility of these leading journals. It would be quite difficult to devise strategies to artificially generate enough citations to significantly alter these scores. I prefer h as a measure because it attempts to capture the skewed nature of scientific scholarship. However, the fact remains that the overall hierarchy of journals is broadly similar whether one uses the h index or the conventional impact factor.

In their 2012 study, Allen W. Wilhite and Eric A. Fong present worrying data regarding the prevalence of coercive citations. The coercive citation pattern was particularly pronounced in lower-ranked journals, and particularly in the business and management fields. Once again, I doubt that the overall hierarchy of journals will be appreciably altered by questionable editorial gamesmanship. Wilhite and Fong identify eight journals in which this practice might be common enough to matter (more than 10 reports of coercive citation), but none of these journals ranked at the top of its field (as measured in the JCR ranking). In other words, with a relentless, long-term commitment to manipulation, some third quartile journals might be able to work their way into the second quartile by inflating their scores, but this is unlikely to change the overall contours of the field.

If a large group of low-visibility journals made a major effort to increase their citations, it would make them, as a group, harder to distinguish from the best journals. In the field of sociology, there is no indication that mid-level and lower-level journals are closing the gap with the most frequently cited journals. Indeed, this persistent gap is interesting in itself, insofar as it suggests that search engines are not raising the visibility of journals to which few people subscribe.

At the same time, we must remember that journal rankings are only a rough approximation of the visibility and recognition of individual articles. In other words, articles published in the same journal vary in how frequently they are cited. In my analysis of 140 sociology journals over the period 2010-2014, most of the 10 most frequently cited articles were not published in the top-ranked journals. Thus, substantial variability in visibility (citations) within journals coexists with broadly stable patterns of inequality between journals.

In addition, the list of most cited articles is largely impervious to self-citation. It is simply too difficult to cite yourself often enough to lift your research to this level of visibility. For example, the 10 most cited journal articles in sociology from 2010 to 2014 each had 400 or more citations. To catapult your own article into this citation stratosphere, you would have to publish hundreds of articles in just a few years. No one could publish frequently enough, and self-cite consistently enough, to affect inclusion in the most-cited-articles list. And anyone prolific enough to implement such a strategy would not need to game the system.

Authors have a natural desire to seek out opportunities that will increase the visibility of their research. In the field of sociology, this typically means pursuing the most selective generalist journals, then the best journals in each specialty area of the field, the second-tier generalist journals, the other specialized publications and, finally, the remaining interdisciplinary journals. Journal ranking data can be marginally useful in informing such choices. Other important factors include the particular focus of each journal, its selectivity, its turnaround time, its policies regarding second and third rounds of review, and so on.

Journal rankings are likely to remain with us because they are of interest to so many parties, as research by Wendy Nelson Espeland and Michael Sauder suggests, although their value is likely to remain disputed. Perhaps a clearer recognition of the inherent imprecision of journal rankings will mean that they are used judiciously, as a complement to rather than a substitute for the important and difficult work of academic review. And perhaps drawing on a variety of different journal indexes will reduce the temptation to game the system and redirect effort toward selecting high-quality research for review by the scientific community.

