Thomson Reuters Speaks with David Tempest, Elsevier



As Associate Director of Research and Academic Relations for Elsevier, one of the world’s foremost scholarly publishers, bibliometrician David Tempest and his team provide the analysis that advises Elsevier’s 2,000 journals on maximizing their quality. He is a frequent lecturer on the uses (and misuses) of Impact Factor. Today, David talks with Thomson Reuters about the relationship between Impact Factor and journal quality, and about Elsevier’s commitment to best practice in the use of bibliometrics.

 

Why is Impact Factor so widely used?

Well, simply because people love a number, and Impact Factor has been around for decades. Because it has been so widely used over the years, people have kept applying it to more and more purposes, including ones beyond its intended use.

 

I hear people from Thomson say all the time, “It was never intended to be used as a personalized tool.” But, unfortunately, bibliometric numbers will always be used for a multitude of purposes, not all of which will be appropriate.

 

Such as being used to evaluate individual researchers?

Yes. Individuals using Impact Factor for their own “ratings” is pretty widespread. One of the principal reasons that researchers use Impact Factor is for their funding applications. In fact, I’ve heard stories of researchers taking the Impact Factors of the journals in which they published five articles over a two-year period, adding them up, and dividing by five to get their own personal Impact Factor. This is an average of an average, and it’s not a good thing.
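
To make the pitfall concrete, here is a minimal Python sketch of that flawed calculation; all of the numbers are hypothetical.

    # The flawed "personal Impact Factor": averaging the Impact Factors of
    # the journals an author published in, rather than counting citations
    # to the author's own papers. All numbers are hypothetical.
    journal_ifs = [50.0, 3.2, 1.1, 4.5, 2.7]  # IF of the journal behind each of five articles
    personal_if = sum(journal_ifs) / len(journal_ifs)
    print(personal_if)                        # 12.3 -- an average of averages

    # The author's actual citation counts can tell a very different story:
    citations = [0, 1, 0, 2, 1]               # the paper in the IF-50 journal was never cited
    print(sum(citations) / len(citations))    # 0.8 citations per paper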

 

The papers that an individual published could be zero-cited even in a journal with an Impact Factor of 50, so taking the journal’s position as a proxy for individual quality can be misleading. That’s why so many research bodies are trying to move toward individual citation metrics from Web of Science or Scopus, or toward indicators developed specifically for individuals, such as the h-index. So it’s starting to change.

 

So you are seeing more of a move towards individual metrics than in previous years?

Yes. Much of that is due to the h-index, which has really taken off; its provision by Thomson Reuters and Scopus has helped. The h-index is a good indicator, and so far it is being used for its intended purpose. The problem is that a lot of people misunderstand its meaning or its limitations, which all metrics have.
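
For readers unfamiliar with the metric: a researcher has h-index h if h of their papers have been cited at least h times each. A minimal Python sketch of the computation, using hypothetical citation counts:

    # h-index: the largest h such that the researcher has h papers
    # with at least h citations each.
    def h_index(citation_counts):
        ranked = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank  # this paper still clears the bar
            else:
                break
        return h

    print(h_index([10, 8, 5, 4, 3]))  # 4: four papers cited at least 4 times each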

 

As a publisher, how much does Elsevier consider Impact Factor in its decision making?

First, let me state categorically that, as a publisher, Elsevier does not encourage manipulating Impact Factor. We would never work with our editors to see how we can artificially inflate Impact Factor.

 

At the end of the day, Impact Factor is the most widely recognized proxy for journal quality, so it is in our interest to ensure that the metric remains identified with quality rather than with manipulation. We strongly advocate raising Impact Factor by publishing the best articles we can.

Publishers have gotten a lot of bad press about manipulation of Impact Factor, and nine times out of 10 they do not have a centralized policy to manipulate the system. We don’t do that, nor do the majority of publishers.

 

So you have seen evidence of people “gaming the system” — manipulating Impact Factor?

I’ve been studying Impact Factor for 11 years now, and I have seen probably the widest variety of ways people have tried to “game” Impact Factor. “Impact Factor engineering,” as someone once called it.

 

You have to understand that there are some people in our industry for whom, when Impact Factors are published, it’s their life … it’s what they’ve been waiting for all year. It’s bigger than Christmas. And people will do their utmost to get the highest Impact Factor possible.

 

Like an inordinate number of self-citations?

Sometimes. Self-citations are healthy when done correctly and policed well, but I have seen a number of purposeful, manipulative self-citation practices over the past few years that were put in place simply to inflate the number. I know that Thomson Reuters began tracking self-citation a few years ago, and I’m glad to see that it is now flagged within the Journal Citation Reports.

 

Have you seen other ways of “gaming the system”?

The manipulation of article types was quite prevalent about five or six years ago. Some journals were reclassifying published items as article types that are not counted as citable items in the Impact Factor algorithm, shrinking the denominator (articles) while leaving the numerator (citations) intact.
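
To see why this works, recall that a journal’s Impact Factor for year Y is the citations received in Y to items published in Y-1 and Y-2, divided by the number of citable items published in Y-1 and Y-2. A minimal sketch of the arithmetic, with hypothetical numbers:

    # Impact Factor = citations this year to the prior two years' content,
    # divided by the citable items published in those two years.
    citations = 400      # numerator: citations received (hypothetical)
    citable_items = 200  # denominator: articles and reviews (hypothetical)
    print(citations / citable_items)         # 2.0

    # Reclassify 50 articles as non-citable front matter: citations to them
    # still count in the numerator, but they vanish from the denominator.
    print(citations / (citable_items - 50))  # ~2.67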

 

We’re glad to see that it is now practically impossible to do that, given how closely Thomson works to ensure that these types of manipulations don’t appear in the Journal Citation Reports.

 

Perhaps the most common way to raise Impact Factor (and this is not really a manipulation) is to publish more review articles. We know that review articles are cited three times as often, on average, as regular articles, so many publications will publish more reviews to raise their Impact Factors.

This is legitimate, if you even want to call it a “manipulation” at all, because publishing good review material is a deliberate effort to improve quality, better inform your readership, and earn more citations.

 

There’s also been an idea that publishing your best research at the beginning of the year will boost Impact Factor, the rationale being that those articles have more time to accumulate citations within that JCR year. Research in this area has shown only a marginal citation increase in these cases, but people have certainly tried it.

 

As a publisher, do you ever challenge the Impact Factor rankings published by Thomson?

We do a lot of analysis on our journals and work with the Thomson Reuters JCR team to make sure that our article counts are accurately represented in Thomson’s counts.

 

All in all, we’ve had a very positive working relationship. I would certainly say to publishers that if they have any doubts, work with Thomson to get some agreement. We’ll never get 100 percent agreement, but we will get a much better understanding of each other’s position.
