The Journal Impact Factor and alternatives

When it comes to selecting a journal, the Journal Impact Factor (JIF) often plays a key role. It is commonly interpreted as reflecting the quality of a journal and the articles it contains. However, there is some controversy as to whether this interpretation is justified.

Background and sources of information

Journal Impact Factors are published annually in the Journal Citation Reports (JCR), which are marketed as a commercial product. JIFs are calculated on the basis of the journals listed in the so-called “Core Collection” of the multidisciplinary citation database Web of Science and the citation frequencies recorded in that database. The JCR therefore covers only a portion of the world's journals, not all the academic titles published worldwide. Likewise, citation frequency is determined from the citations recorded in Web of Science rather than from all conceivable citations.

More and more open access journals and journals with an open access option are endeavouring to become part of the Web of Science in order to obtain a JIF. 

It is important to remember that open access journals are often likely to have been launched relatively recently. However, the Journal Impact Factor cannot be calculated for a journal until it has been in the Web of Science for at least three years.

Numerous university libraries subscribe to the Journal Citation Reports. These can then be accessed through the Web of Science, though only by users who are in the IP range of the subscriber institution or users who have remote access or VPN.

The Journal Citation Reports tool allows users to filter the list of results by open access status.

Another way of determining which open access journals have a JIF is to consult the Ulrichsweb periodicals directory. This website allows users to filter the list of journals by open access status and by inclusion in the Journal Citation Reports (i.e. journals which have a JIF), though it does not provide the actual Journal Impact Factor values themselves.

Ulrichsweb is also a commercial product, which means it operates under the same general conditions as Web of Science: access is only available if the university library has subscribed to the database and if the user is located within the institution’s IP range or, alternatively, if the necessary steps have been taken to grant users access remotely or via VPN.

Publishers with open access programmes and overviews of impact factors

Many publishers with open access programmes also provide an overview of impact factors, either in the form of a list or on the websites of the individual journals.
Examples:

Bentham Science
BMJ
Copernicus

Criticism of the Journal Impact Factor

  • The Journal Impact Factor was originally developed to assist libraries in selecting the best journals to subscribe to in their subject areas. That means it is ultimately a tool for evaluating academic journals. However, nowadays it is also used to evaluate the research work of scientists and academics.
  • The frequency with which articles are cited is often taken as an indication of the quality of a journal and the articles it contains. However, citation frequency can only be interpreted as an indication of a scholarly article's impact, not as a solid assessment of the quality of the results themselves.
  • The Journal Citation Reports with the Journal Impact Factors for a given year are always issued in the summer of the following year and are based on the two years preceding the reporting year. For example, the JIF of a journal for 2021 is calculated by counting the citations made in 2021 to articles the journal published in 2019 and 2020 and dividing this by the number of citable items the journal published in 2019 and 2020. The numerator includes citations to every single item that appeared in the journal over those two years, but the denominator counts only certain document types (a worked sketch follows this list).
  • A Journal Impact Factor of 1.9 means that the items from 2019 and 2020 were cited an average of 1.9 times in 2021. But the arithmetic mean is a statistically inappropriate summary here, both because the numerator and denominator cover different sets of items and because citation counts are highly skewed: a few articles are cited far more often than 1.9 times, while many are cited less often than that, or not at all. The JIF therefore says nothing about how frequently an individual article has been cited.
  • In addition, the JIF is not 'normalised' across disciplines; in other words, it does not take into account that citation practices differ from one discipline to the next. This makes comparisons of JIFs across disciplines invalid. Citation behaviour can even vary between research fields within a single discipline, so comparing the JIFs of journals in the same discipline is also of limited use.
  • The default two-year citation window on which the calculation is based is too short to judge the impact of publications in many disciplines, where it takes longer for scientific results to be read and assessed. Even the additional five-year citation window provided in the Journal Citation Reports is still too short for some disciplines, especially in the social sciences and humanities, though it is probably adequate for the life sciences.
  • The Journal Impact Factor is also susceptible to manipulation, because it can be artificially boosted by journals citing their own articles.
  • The citation impact of a journal also depends on the types of documents it publishes. Review articles, for example, are cited significantly more frequently than other document types because they offer a concise overview of the key subject areas of a discipline. Journals that publish many review articles can thus increase their citation counts and raise their impact factor.
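
To make the calculation and the averaging problem described above concrete, here is a minimal sketch in Python. All numbers are invented for illustration; only the two-year window and the numerator/denominator asymmetry follow the definition given in the list.

    # Minimal JIF sketch (invented numbers).
    # JIF(2021) = citations in 2021 to items published in 2019-2020,
    #             divided by the "citable" items published in 2019-2020.
    citations_2021 = 190     # citations to ALL items, including e.g. editorials
    citable_items = 100      # denominator counts only certain document types
    jif_2021 = citations_2021 / citable_items
    print(f"JIF 2021: {jif_2021:.1f}")   # 1.9

    # The arithmetic mean hides a skewed distribution: a few articles
    # attract most of the citations, while many are cited rarely or never.
    from statistics import mean, median
    per_item = [80, 40, 25, 15, 10, 8, 6, 4, 2] + [0] * 91   # invented, sums to 190
    print(mean(per_item), median(per_item))   # mean 1.9 vs median 0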

These criticisms mean that a JIF is not adequate as the sole metric for selecting a journal, and is certainly not a suitable means of evaluating the work of individual scientists or scholars.

The German Association of Scientific Medical Societies (AWMF) has also spoken out against the sole use of Journal Impact Factors to evaluate research work (see the AWMF paper).

Criticism of the Journal Impact Factor and of its use in assessing the merits of research has also come from other institutions in the scientific community, such as the German Science and Humanities Council and the German Research Foundation. These institutions have called on the community to move away from journal-based metrics as a central pillar of the evaluation process. Initiatives such as the “San Francisco Declaration on Research Assessment” also argue that other research results should be incorporated in the evaluation process, such as software and research data. Moreover, some experts have called for greater transparency in constructing the databases and calculating the indicators used for research evaluations (see, for example, the “Leiden Manifesto for research metrics”).

Other citation-based metrics and altmetrics

Criticism of the Journal Impact Factor has led to the development of other metrics, some of which are presented below: 

SCImago Journal Rank (SJR) and Source Normalized Impact per Paper (SNIP)

SJR and SNIP are also journal metrics, in this case based on data from the bibliographic database Scopus. Although Scopus is a commercial database, the indicator values are freely available online. Unlike the Journal Impact Factor, both these measures work with a three-year citation window. 

The SJR indicator is calculated in a similar way to the impact factor: the number of citations in the reporting year to articles from the three preceding years is counted and then divided by the number of articles that appeared in those three years. However, the SJR indicator also incorporates an additional factor, namely the prestige of the journals the citations come from (how frequently is the citing journal itself cited by others?).
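
The idea of prestige weighting can be illustrated schematically. The following toy sketch is not the official SCImago algorithm; it merely shows the underlying principle, which resembles PageRank: a citation counts for more if it comes from a journal that is itself frequently cited. The citation matrix is invented.

    # Toy prestige-weighted ranking of three journals (NOT the official SJR).
    # cites[i][j] = citations from journal i to journal j in the citation window.
    cites = [
        [0, 10, 2],
        [5,  0, 1],
        [8,  4, 0],
    ]
    n = len(cites)
    prestige = [1.0 / n] * n          # start with equal prestige

    for _ in range(50):               # iterate until the values stabilise
        new = [0.0] * n
        for i in range(n):
            out = sum(cites[i]) or 1  # total outgoing citations of journal i
            for j in range(n):
                # a citation transfers a share of the citing journal's prestige
                new[j] += prestige[i] * cites[i][j] / out
        total = sum(new)
        prestige = [p / total for p in new]

    print(prestige)  # journals cited by prestigious journals score higher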

The SNIP indicator controls for the differences in citation behaviour in different scientific disciplines by calculating the mean citation frequency of the articles in a journal and normalising this to the mean citation frequency of articles in the field. This makes comparisons between journals more accurate. A SNIP value of 1 means the journal is average in its field in terms of citation rates.
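
A simplified numerical illustration of this normalisation, with invented values (the actual SNIP calculation by CWTS defines the 'field' and its citation potential in a more refined way):

    # Simplified SNIP-style normalisation (invented numbers).
    journal_cites_per_paper = 4.2   # mean citations per paper in the journal
    field_cites_per_paper = 2.1     # mean citations per paper in the field

    snip_like = journal_cites_per_paper / field_cites_per_paper
    print(snip_like)   # 2.0 -> cited twice as often as the field average
    # A value of 1.0 would mean the journal is exactly average for its field.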

Although both metrics endeavour to eliminate the weaknesses of the Journal Impact Factor system by making targeted modifications, they can still only be used to compare journals. They are not suitable for evaluating the work of individual scholars or scientists.

h-index or Hirsch index

The h-index is a metric that focuses on the productivity and citation impact of an individual scientist's work. It is calculated by ranking the scientist's papers in descending order of the number of citations they have received; the h-index is the highest rank at which the citation count is still at least as large as the rank. The table below illustrates how the h-index is calculated:

Rank | Citation frequency
1    | 45
2    | 30
3    | 23
4    | 10
5    | 8
6    | 6
7    | 4
8    | 2

An h-index of 6 means that the author has at least six papers that have each been cited at least six times. The h-index is not without its flaws either: for example, it does not take into account differences in citation behaviour between scientific disciplines, so comparisons between authors from different disciplines are not necessarily meaningful. In addition, two authors may have entirely different publishing histories and yet the same h-index. This makes it particularly difficult to compare scientists whose research careers differ significantly in length, as the following example shows:

 

Rank | Author A | Author B
1    | 63       | 14
2    | 53       | 12
3    | 43       | 11
4    | 34       | 10
5    | 25       | 9
6    | 16       | 8
7    | 7        | 7
8    | 1        | 6
9    | –        | 4
10   | –        | 3
11   | –        | 2
12   | –        | 1

Author A has published eight papers, Author B twelve. Although A's top-ranked papers have been cited significantly more frequently than B's, both authors have an h-index of 7. For this reason, the h-index is not a suitable tool for making comparative evaluations of individual authors' careers and achievements.
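
The calculation itself takes only a few lines. The following Python sketch uses the citation counts from the table above and confirms that both authors end up with an h-index of 7 despite their very different records:

    def h_index(citations):
        """Largest h such that h papers have been cited at least h times each."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    author_a = [63, 53, 43, 34, 25, 16, 7, 1]             # 8 papers
    author_b = [14, 12, 11, 10, 9, 8, 7, 6, 4, 3, 2, 1]   # 12 papers
    print(h_index(author_a), h_index(author_b))           # 7 7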

In principle, the h-index can also be used for other quantities of documents. For example, it could be applied to entire journals or subject areas. To do this, the articles published in a journal or a specific subject area are once again ranked by citation frequency as in the examples above. A number of databases, including Scopus, provide access to h-index rankings.

Alternative Metrics: Altmetrics

In response to the problems of citation-based impact measurement described above, alternative non-traditional metrics have recently been proposed as a means of measuring research work and publication impact. These are often referred to as altmetrics, an amalgamation of the words alternative and metrics. Altmetrics rely heavily on social media to determine the impact of academic publications (e.g. the dissemination of an article via Twitter, Facebook or bookmarking systems such as Mendeley).

Altmetrics are currently at an early stage of development, and there is still a lack of clarity as to how they can be accurately interpreted. The general goal, however, is to keep trying out alternatives to traditional citation impact metrics. Altmetrics also offer a means of gathering data on other kinds of output in science and research (e.g. published research data, or blog posts) and of measuring their dissemination in near-real time.

Importantly, altmetrics are item-level metrics: they measure the impact of a specific item such as a scholarly article or blog post. By contrast, metrics such as the Journal Impact Factor measure the impact of a journal as a whole and can only give averaged values for the individual articles in it. A range of tools is available to help researchers gather alternative metrics for their publications.

Tools for gathering altmetrics

Altmetric Explorer
ImpactStory
Plum Analytics 
Webometric Analyst

Publishers with alternative metrics

PLOS ONE  
Wiley
Taylor & Francis

Disclaimer

Important note: The information and links provided here do not represent any form of binding legal advice. They are solely intended to provide an initial basis to help get you on the right track. ZB MED – Information Centre for Life Sciences has carefully checked the information included in the list of FAQs. However, we are unable to accept any liability whatsoever for any errors it may contain. Unless indicated otherwise, any statements concerning individual statutory norms or regulations refer to German law (FAQ updated 08/2022).

Contact

Dr. Jasmin Schmitz
Head of Publication Advisory Services

Phone: +49 (0)221 478-32795

References

Herrmann-Lingen, C. et al. (2014). Evaluation of medical research performance – position paper of the Association of the Scientific Medical Societies in Germany (AWMF). GMS Ger Med Sci, 12:Doc11.

Recommendations on the Transformation of Academic Publishing: Towards Open Access of January 2022, Wissenschaftsrat.

Deutsche Forschungsgemeinschaft | AG Publikationswesen (2022). Academic Publishing as a Foundation and Area of Leverage for Research Assessment. Zenodo.

San Francisco Declaration on Research Assessment of 16 December 2012, DORA. (accessed 29/11/2022)

Hicks, D. et al. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520, 429–431.

Further information

Schmitz, J. (2022). Tipps & Tricks: Den Journal Impact Factor für Zeitschriften ermitteln, in denen man publiziert hat [Tips & tricks: determining the Journal Impact Factor for journals you have published in]. ZB MED-Blog, 17 October 2022. (German only)

 

Reputation in science - do the evaluation parameters still apply? (in German, English subtitles)