
The Journal Impact Factor and alternatives

When it comes to selecting a journal, the Journal Impact Factor (JIF) often plays a key role. The JIF is frequently interpreted as reflecting the quality of a journal and the articles it contains. However, whether this interpretation is justified is a matter of controversy.

Background and sources of information

Journal Impact Factors are published annually in the Journal Citation Reports (JCR), a commercial product distributed by Clarivate, a provider of bibliometric and research data. JIFs are calculated on the basis of the journals listed in the multidisciplinary Web of Science citation database, specifically its Core Collection, and the citation frequencies recorded there. As a result, the JIF only reflects a subset of the world’s journals rather than all academic titles published globally. In addition, citation frequencies are derived solely from citations indexed in Web of Science, not from all possible citation sources.

An increasing number of open access journals and journals with an open access option are seeking to be indexed in Web of Science in order to obtain a JIF.

Open access journals are often relatively new titles. However, a Journal Impact Factor cannot be calculated for a journal until it has been indexed in Web of Science for at least three years. The detailed view within the Journal Citation Reports also displays the Immediacy Index, which shows how often a journal’s articles are cited in the year they are published and thus indicates the current citation trend.

Numerous university libraries subscribe to the Journal Citation Reports. These can be accessed via Web of Science, but only by users within the institution’s IP range or those with remote or VPN access.

The Journal Citation Reports tool allows users to filter the list of results by open access status.

Another way of determining which open access journals have a JIF is to consult the Ulrichsweb periodicals directory. This website allows users to filter the list of journals both by open access status and by inclusion in the Journal Citation Reports, i.e. to identify journals that have a JIF; however, it does not display the actual Journal Impact Factor values themselves.

Ulrichsweb is also a commercial product, which means it operates under the same general conditions as Web of Science: access is only available if the university library has subscribed to the database and if the user is located within the institution’s IP range or, alternatively, if the necessary steps have been taken to grant users access remotely or via VPN.

Publishers with open access programmes and overviews of impact factors

Many publishers with open access programmes also provide an overview of impact factors, either in the form of a list or on the websites of the individual journals.
Examples:

Bentham Science
BMJ
Copernicus

Criticism of the Journal Impact Factor

  • The Journal Impact Factor was originally developed to assist libraries in selecting the best journals to subscribe to in their subject areas. That means it was essentially designed to evaluate academic journals. However, nowadays it is also used to evaluate the research outputs of scientists and academics.
  • The frequency with which articles are cited is often taken as an indication of the quality of a journal and the articles it contains. Citation frequency can only indicate the impact of a scholarly article; it does not provide a robust assessment of the quality of its results.
  • The Journal Citation Reports with the Journal Impact Factors for a given year are always issued in the summer of the following year and are based on the two years preceding the reporting year. For example, a journal’s JIF for 2021 is calculated by counting the citations made in 2021 to items the journal published in 2020 and 2019 and dividing this by the number of articles the journal published in 2020 and 2019. The numerator of this ratio includes citations to every single item that appeared in the journal over those two years, but the denominator only counts certain document types.
  • A Journal Impact Factor of 1.9 means that, on average, each article from 2020 and 2019 was cited 1.9 times in 2021. However, because the denominator covers only a subset of the items that actually appeared and citation counts are highly skewed, the arithmetic mean is a statistically misleading summary: a few articles will have been cited far more often than 1.9 times, while many will have been cited less often than that, or not at all. The JIF therefore says nothing about how frequently an individual article has been cited (see the worked sketch after this list).
  • In addition, the JIF is not normalised across disciplines; it does not account for discipline-specific differences in citation practices. This invalidates comparisons of journal JIFs across disciplines. Citation behaviour can also vary between subfields within a single discipline, which further limits the usefulness of comparing JIFs even within the same subject area. 
  • The default citation window of two years, which is taken as a basis for the calculations, is too short to judge the impact of publications in many disciplines. In some disciplines it takes longer for scientific results to be viewed and assessed. Even the additional citation window of five years which is used in the Journal Citation Reports is still too short for some disciplines, especially those in the social sciences and humanities, though it is generally considered acceptable for the life sciences.
  • The Journal Impact Factor is also susceptible to manipulation, because it can be artificially boosted by journals citing their own articles.
  • The citation impact of a journal also depends on which types of documents it publishes. Review articles, for example, are cited significantly more often than other document types because they provide a concise overview of key topics in a discipline and are therefore frequently referenced in academic papers. As a result, journals that publish many review articles can increase their citation counts and thus raise their impact factor.
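
To make the calculation criticised above concrete, here is a minimal worked sketch in Python using invented citation counts (for simplicity it ignores the mismatch between numerator and denominator and treats all items as citable). It reproduces a JIF of 1.9 and contrasts the mean with the median to show why a skewed citation distribution makes the average misleading.

```python
# Minimal sketch with invented numbers (not real data): a hypothetical journal
# published 40 citable items in 2019 and 2020; most were never cited in 2021,
# while a handful were cited heavily.
from statistics import mean, median

citations_in_2021_per_item = [28, 12, 8, 6, 5, 4, 3, 3, 2, 2, 1, 1, 1] + [0] * 27

jif_2021 = mean(citations_in_2021_per_item)   # total citations / number of citable items
print(f"JIF 2021: {jif_2021:.1f}")            # 1.9
print(f"Median:   {median(citations_in_2021_per_item)}")  # 0 (most items were not cited at all)
```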

Taken together, these criticisms suggest that the JIF is not adequate as the sole metric for selecting a journal, and is certainly not a suitable means of evaluating the work of individual scientists or scholars.

The German Association of Scientific Medical Societies (AWMF) has also spoken out against the sole use of Journal Impact Factors to evaluate research (see the AWMF paper).

Criticism of the Journal Impact Factor and of its use in assessing the merits of research has also come from other institutions in the scientific community, such as the German Science and Humanities Council and the German Research Foundation. These institutions have called on the community to move away from journal-based metrics as a central pillar of the evaluation process. Initiatives such as the “San Francisco Declaration on Research Assessment” also argue that other research results should be incorporated in the evaluation process, such as software and research data. Moreover, some experts have called for greater transparency in constructing the databases and calculating the indicators used for assessing research (see, for example, the “Leiden Manifesto for research metrics”). Initiatives such as the Coalition for Advancing Research Assessment (CoARA) have likewise urged a general shift towards qualitative indicators in research assessment and advocate a more considered use of quantitative metrics.

Other citation-based metrics and altmetrics

Criticism of the Journal Impact Factor has led to the development of other metrics, some of which are presented below: 

SCImago Journal Rank (SJR) and Source Normalized Impact per Paper (SNIP)

SJR and SNIP are also journal metrics, in this case based on data from the bibliographic database Scopus. Although Scopus is a commercial database, the indicator values are freely available online. Unlike the Journal Impact Factor, both these measures work with a three-year citation window. 

The SJR indicator is calculated in a similar way to the impact factor: the number of citations in the reporting year to articles from the three preceding years is counted and then divided by the number of articles that appeared in those three years. However, the SJR also incorporates an additional factor, namely the prestige of the citing journals (how frequently are those journals themselves cited by others?).

The SNIP indicator controls for the differences in citation behaviour in different scientific disciplines by calculating the mean citation frequency of the articles in a journal and normalising this to the mean citation frequency of articles in the field. This makes comparisons between journals more accurate. A SNIP value of 1 means the journal is average in its field in terms of citation rates.
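
As a rough illustration of the normalisation idea behind SNIP, the following sketch divides a hypothetical journal’s mean citation rate by a hypothetical field average. All numbers are invented, and the actual SNIP calculation uses a more elaborate correction based on the citation potential of the field.

```python
# Simplified sketch with invented numbers; the real SNIP indicator applies a
# more refined field correction, but the basic idea is a ratio of averages.
from statistics import mean

journal_citations_per_paper = [4, 7, 2, 0, 5, 3, 6, 1]  # hypothetical journal
field_mean_citations_per_paper = 3.5                     # hypothetical field average

normalised_value = mean(journal_citations_per_paper) / field_mean_citations_per_paper
print(f"{normalised_value:.2f}")  # 1.00 -> the journal is average for its field
```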

Although both metrics endeavour to eliminate the weaknesses of the Journal Impact Factor system by making targeted modifications, they can still only be used to compare journals. They are not suitable for evaluating the work of individual scholars or scientists.

h-index or Hirsch index

The h-index is a metric that focuses on the productivity and citation impact of an individual scientist's work. It is calculated by ranking a scientist's papers in descending order of the number of citations they have received. The h-index is the highest rank at which the paper still has at least as many citations as its rank position; in other words, an author has an h-index of h if h of their papers have each been cited at least h times. The table below illustrates how the h-index is calculated:

Rank | Citation frequency
1    | 45
2    | 30
3    | 23
4    | 10
5    | 8
6    | 6
7    | 4
8    | 2

An h-index of 6 means that the author has at least six papers that have been cited at least six times. The h-index is also not without its flaws – for example, it does not take into account the differences in citation behaviour between different scientific disciplines. That means it is not necessarily feasible to compare authors from different disciplines. In addition, two different authors may have entirely different publishing histories and yet still have the same h-index. This makes it particularly difficult to compare scientists who have been involved in research for significantly different lengths of time, as can be seen from the following example:

 

Rank | Author A | Author B
1    | 63       | 14
2    | 53       | 12
3    | 43       | 11
4    | 34       | 10
5    | 25       | 9
6    | 16       | 8
7    | 7        | 7
8    | 1        | 6
9    |          | 4
10   |          | 3
11   |          | 2
12   |          | 1

Author A has published eight papers, Author B twelve. Although A's top-ranked papers have been cited significantly more frequently than B's, both authors have an h-index of 7. For this reason, the h-index is not a suitable tool for making comparative evaluations of individual authors’ careers and achievements.
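
The ranking procedure described above translates directly into a short function. The sketch below (in Python, using the citation counts from the two tables) confirms that both example authors end up with an h-index of 7:

```python
def h_index(citations):
    """Largest h such that at least h papers have been cited at least h times."""
    ranked = sorted(citations, reverse=True)      # rank papers by citation count
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank                              # the paper at this rank still qualifies
        else:
            break
    return h

author_a = [63, 53, 43, 34, 25, 16, 7, 1]               # 8 papers
author_b = [14, 12, 11, 10, 9, 8, 7, 6, 4, 3, 2, 1]     # 12 papers
print(h_index(author_a), h_index(author_b))             # 7 7
```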

In principle, the h-index can also be applied to other sets of documents, for example to entire journals or subject areas. To do this, the articles published in a journal or a specific subject area are again ranked by citation frequency, as in the examples above. A number of databases, including Scopus, provide access to h-index rankings.

Field-normalised citation rates

The Journal Impact Factor and h-index also face criticism because they fail to account for citation behaviours specific to a given field. In contrast, field-normalised citation rates compare an article’s citation rate against the average for its discipline. A value greater than 1 indicates that the article is cited more frequently than the average. Values for the Field Citation Ratio (FCR) can be retrieved from the Dimensions database. A basic version of this database is available free of charge, though registration is required.
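
By way of illustration, a field-normalised rate for a single article is essentially its citation count divided by the average citation count of comparable articles in its field (a simplified sketch with invented numbers; the FCR used in Dimensions additionally normalises by publication year and subject category):

```python
# Simplified sketch with invented numbers: article-level field normalisation.
article_citations = 12
field_average_citations = 8.0     # hypothetical average for comparable articles

field_normalised_rate = article_citations / field_average_citations
print(f"{field_normalised_rate:.1f}")  # 1.5 -> cited more often than the field average
```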

Alternative Metrics: Altmetrics

Given the limitations of citation-based impact measures described above, alternative, non-traditional metrics have been proposed to assess research and publication impact. These are commonly referred to as altmetrics, a blend of the words alternative and metrics. Altmetrics rely heavily on social media and related platforms to capture the impact of academic publications, for example the dissemination of an article via Mastodon, Bluesky or Facebook, or its inclusion in bookmarking systems such as Mendeley. They can also capture other research outputs, such as published research data or blog posts, and measure their dissemination in near-real-time.

Importantly, altmetrics are item-level metrics: they measure the impact of a specific item such as a scholarly article or blog post. In contrast, metrics such as the Journal Impact Factor measure the impact of a journal as a whole and can only provide averaged values for the articles it contains. A range of tools is available to help researchers gather alternative metrics for their publications.

Tools for gathering altmetrics

Altmetric Explorer
ImpactStory
Plum Analytics 

Publishers with alternative metrics

PLOS ONE  
Wiley
Taylor & Francis
German Medical Science – GMS

See also

Reputation and research assessment in academia – a brief introduction
Journal quality and standing: which aspects are relevant to open access?

Disclaimer

Important note: The information and links provided here do not represent any form of binding legal advice. They are solely intended to provide an initial basis to help get you on the right track. ZB MED – Information Centre for Life Sciences has carefully checked the information included in the list of FAQs. However, we are unable to accept any liability whatsoever for any errors it may contain. Unless indicated otherwise, any statements concerning individual statutory norms or regulations refer to German law (FAQ updated 12/2025).

Contact

Dr. Jasmin Schmitz
Head of Publication Advisory Services

Phone: +49 (0)221 999 892 665
Send mail

References

Herrmann-Lingen, C. et al. (2014). Evaluation of medical research performance – position paper of the Association of the Scientific Medical Societies in Germany (AWMF). GMS Ger Med Sci, 12:Doc11.

Wissenschaftsrat (2022). Recommendations on the Transformation of Academic Publishing: Towards Open Access. January 2022.

Deutsche Forschungsgemeinschaft | AG Publikationswesen (2022). Academic Publishing as a Foundation and Area of Leverage for Research Assessment. Zenodo.

San Francisco Declaration on Research Assessment of 16 December 2012, DORA. (accessed 29/11/2022)

Hicks, D. et al. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520, 429–431.

Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings Of The National Academy Of Sciences, 102(46), 16569–16572.

CoARA – Coalition for Advancing Research Assessment. (accessed 15/01/2026)

Related links

Journal Citation Reports
Immediacy Index
Ulrichsweb
Scopus Journal Metrics
SCImago Journal Rank (SJR)
Source Normalized Impact per Paper (SNIP)
Field Citation Ratio (FCR)
Dimensions

Further information

Schmitz, J. (2022). Tipps & Tricks: Den Journal Impact Factor für Zeitschriften ermitteln, in denen man publiziert hat. ZB MED-Blog, 17. Oktober 2022. (German only)


Jasmin Schmitz in video podcast on reputation in science
