Thomson Reuters speaks with Jim Pringle about Impact Factor

by 06-26-2008 03:26 PM - edited 06-16-2010 04:17 PM

Today, Jim Pringle, Thomson Reuters Vice President of Product Development, shares with us his insights into research performance metrics, including his unique perspective on the much-talked-about Journal Impact Factor.

TS: Most bibliometricians would agree that Impact Factor has grown to be one of the most widely used evaluative research metrics. While Impact Factor is not without its share of criticism, has Impact Factor positively contributed to the industry?

JP: Absolutely. People tend to overlook the good things that Impact Factor has achieved in the journal publishing industry, particularly in terms of encouraging sound editorial practice.

An article recently appeared in The Journal of the Royal Society of Medicine called “Life and times of the impact factor: retrospective analysis of trends for seven medical journals (1994-2005) and their Editors’ views.” The author posed the question to several former medical journal editors, “What did you do in your tenure to raise your journal’s Impact Factor?” They responded by saying things like, “hiring editorial staff,” “courting researchers,” “careful article selection,” “improving ‘services to authors,’ such as ‘fast-track’ publication.” I tend to think these are the components of good editorial practice. And, lo and behold, in doing those, their journals’ Impact Factors went up over time.

TS: What are your feelings about institutions making funding decisions based on Impact Factor?

JP: Generally speaking, institutions can ask themselves two questions when using Impact Factor, and the distinction is significant. The first question is: “Are the researchers at my institution publishing in journals that have significant Impact Factors?” That’s a question about the journal, not a question about individual achievement. Do they get into the top journals in their fields? That is a legitimate question, and one that Journal Citation Reports can answer.

The other question is: What is the impact of the individual’s work? And that’s where the possible “misuse” comes in. Drawing conclusions about individual performance is not the proper way to interpret Impact Factor.

Suppose an institution rewards its researchers for publishing in high-impact journals. This is a controversial practice in the publishing world, and while I agree that it probably should be discouraged, the institution is at least applying Impact Factor correctly: it is not drawing a correlation between Impact Factor and the work of the individual.

TS: Lately, there has been a lot of talk about individuals manipulating Impact Factor. How big of a concern is this?

JP: “Manipulation” implies that there’s a concerted effort to distort Journal Impact Factor, and frankly, we don’t know how common that is. If anyone is actually manipulating Impact Factor, then it’s safe to say it is a very small percentage. But in the public discourse, people seem to inflate the exception and bring it to the fore. It isn’t accurate to assume that people are “gaming the system.”

TS: What should users remember when using Impact Factor in their evaluations?

JP: The most fundamental thing to remember is that all of the metrics in Journal Citation Reports can be used only to evaluate journals. It sounds simple, but it’s so important. All the comparisons you make should be about journals — not individuals, not departments.
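
As background to this point, the Impact Factor is a journal-level ratio: a journal's 2008 Impact Factor, for example, is the number of citations received in 2008 by items the journal published in 2006 and 2007, divided by the number of citable items it published in those two years. A minimal sketch in Python, with invented counts, shows why the metric describes a journal's output as a whole rather than any single author's article:

```python
# Minimal sketch of the Journal Impact Factor calculation.
# The citation and item counts below are invented for illustration.

def impact_factor(cites_in_year: int, citable_items_prior_two_years: int) -> float:
    """E.g. the 2008 Impact Factor: citations received in 2008 to items
    published in 2006-2007, divided by the citable items (articles and
    reviews) published in 2006-2007."""
    return cites_in_year / citable_items_prior_two_years

# A journal whose 2006-2007 output of 150 citable items drew
# 420 citations in 2008 has a 2008 Impact Factor of 2.8.
print(impact_factor(420, 150))  # 2.8
```

Because the ratio averages over everything the journal published, it says nothing about how often any one article, author, or department was cited.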

Another important caveat is that journals should be compared only with like journals, particularly journals within the same discipline. Impact Factors vary greatly by discipline, and citation patterns and trends differ widely from one field to the next.

It’s also important to remember that any journal included in the database is a good journal; we hold a high standard for inclusion. If your Impact Factor is .002 higher than mine, the two numbers are different, but not materially so. What matters more than the exact number is the journal’s general rank and category.
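
To illustrate that point, here is a small sketch; the journal names and Impact Factors below are invented, not real JCR data:

```python
# Sketch of a rank-in-category comparison; the journal names and
# Impact Factors are invented for illustration.

category = {
    "Journal A": 3.104,
    "Journal B": 2.802,
    "Journal C": 2.800,  # only .002 below Journal B
    "Journal D": 0.911,
}

ranked = sorted(category.items(), key=lambda kv: kv[1], reverse=True)
for rank, (name, jif) in enumerate(ranked, start=1):
    print(f"{rank}. {name}  {jif:.3f}")

# Journals B and C differ by .002, but they hold the same general
# position in the category; the rank carries the signal, not the
# third decimal place.
```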

TS: Do we need more metrics for evaluating journal and individual performance?

JP: Generally speaking, the more metrics, the better. That is, the more metrics, the more opportunities to find an answer to the question you’re exploring. Over the years, we have created a number of metrics within our products to give customers a more well-rounded picture.

And we keep our eyes open for other new metrics that can give us new perspective. For example, we were among the first in the industry to embrace h-index.

H-index is a very interesting metric, and it has certain properties that make it a good way to get a standardized measure of individual research output. But it’s only beginning to be tested. The bibliometric community has just gotten hold of it. How it’s normalized across disciplines, what it really can be used for … that remains to be seen.
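
For readers new to it, the h-index is defined as the largest h such that a researcher has h papers cited at least h times each. A minimal sketch, using made-up citation counts:

```python
# Minimal sketch of the h-index; the citation counts are made up.

def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Six papers cited 25, 8, 5, 3, 3, and 1 times: three of them have
# at least 3 citations each, but no four have at least 4, so h = 3.
print(h_index([25, 8, 5, 3, 3, 1]))  # 3
```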


Comments
by menexis on 04-05-2009 12:30 AM
I once had the opportunity to meet Jim Pringle at a conference in Los Angeles where he was speaking. Very smart man.