
Reputation and research assessment in academia – a brief introduction

This article offers a brief overview of the topic. Individual factors and subject-specific aspects would require a broader examination, and cannot be considered in depth here.

Introduction

When it comes to building and maintaining a scientific reputation, the following elements are key:

  • the quality of research proposals; 
  • the number of publications (output);
  • the overall number of citations (impact);
  • the acquisition of projects and third-party funding.

Other factors that can help build a reputation include the quality of teaching activities; the number of Bachelor’s, Master's and doctoral theses supervised; the degree of participation in the respective scientific community (in the form of editorships, for example); and the ability to make research results accessible to the general public.

Quality of research proposals and results

One of the basic prerequisites for building a reputation is the ability to come up with ideas for research that will yield new insights and/or solve a particular problem in the respective field while also taking into account the current state of research and employing suitable methods. Whether or not this goal has been accomplished is determined in the peer review process, in which the research work is assessed by experts in the relevant field.

Number of publications (output)

Publications are an important tool for disseminating scholarly knowledge and are therefore essential to building and enhancing a scientific reputation. If a scientist does not publish their research, they have almost no chance of being recognised as a scholar in their academic community. Publishing remains the bedrock of an academic career, as summed up by the popular adage “publish or perish”. 
Once a work is published, its impact will also be determined by the type of publication and the reputation of the publishing medium or publisher. In the life sciences, it is important to publish papers in peer-reviewed academic journals that have achieved a certain standing in the respective scientific community. In some life-science communities, it is also important to verify two other points: firstly, that the journal is indexed in the PubMed database and, secondly, that the journal has a Journal Impact Factor (JIF), which is a measure of the average number of times an article is cited in this journal within a certain period of time. 
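To make the JIF definition above concrete: the two-year Journal Impact Factor of a journal for a year Y is the number of citations received in Y by items the journal published in the two preceding years, divided by the number of citable items it published in those years. A minimal sketch of the calculation (the journal and its figures are hypothetical):

```python
def journal_impact_factor(citations_in_year, citable_items):
    """Two-year Journal Impact Factor for a year Y (illustrative sketch).

    citations_in_year: citations received in year Y by articles the
        journal published in years Y-1 and Y-2.
    citable_items: number of citable items published in Y-1 and Y-2.
    """
    return citations_in_year / citable_items

# Hypothetical journal: 1,200 citations in 2023 to articles from
# 2021 and 2022, which together comprised 400 citable items.
print(journal_impact_factor(1200, 400))  # → 3.0
```

Note that what counts as a "citable item" is defined by the database provider, which is one reason the indicator is hard to verify independently.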
In other fields, reputation-building may require a different approach: in the field of computer science, for example, conference papers for relevant conferences play a key role, while in the social sciences, humanities and law, much of a scholar’s reputation stems from publishing books with prestigious publishers.
Publications not only boost the reputation and prestige of the author, but also that of the academic institution they work for. Research funding bodies are also keen to see the projects they support generating output in the form of publications.

Number of citations (impact)

The effect of a publication on a scholar’s reputation also depends on the extent to which the published findings lead to a gain in knowledge and to other scientists building on these findings to achieve further developments in the field. One way to measure this is by looking at citation counts – that is, how many times a scholarly publication has been cited. This information is systematically tracked by citation databases such as Web of Science and Scopus. Google Scholar also records the citation frequency of articles. The number of citations an author has received is then used to calculate metrics such as the h-index or for benchmarks such as the field-weighted citation impact. Citation rates depend on the citation behaviour of each particular discipline. Furthermore, citation rates can only be measured for publications that are indexed in the corresponding databases. It is not possible to draw conclusions on the quality of a publication from the number of times it has been cited.
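As an illustration of how such metrics are derived from raw citation counts, the h-index mentioned above is defined as the largest number h such that an author has h publications with at least h citations each. A minimal sketch (the citation counts are hypothetical):

```python
def h_index(citation_counts):
    """Largest h such that h publications have at least h citations each
    (illustrative sketch)."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank  # the top `rank` papers all have >= `rank` citations
        else:
            break
    return h

# An author whose papers were cited 10, 8, 5, 4 and 3 times has an
# h-index of 4: four papers with at least four citations each.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

The example also shows the metric's limitation noted above: two very different careers can produce the same h-index, since the calculation discards everything except the rank threshold.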

Acquisition of projects and third-party funding

Research projects require a level of resources that is not generally available to institutions of higher education and non-university research institutions. To bring their projects to fruition, researchers therefore apply for funding from third-party bodies such as the German Federal Ministry of Education and Research (BMBF) and the German Research Foundation (DFG). The purpose of a funding application is to set out the research proposal and describe the funds required to put it into practice. Each application is then evaluated based on a number of criteria – for example, the extent to which the project meets the objectives of the respective funding scheme; the insights that will be generated, or the problems that will be solved; the question of whether suitable methods have been chosen to achieve the corresponding results; and the chance that the project will achieve the desired results if the requested funding is approved. Typically, the number of applications received by a funding programme exceeds the funds available, so a project is only likely to be selected if it satisfies the evaluation criteria. The number of projects approved and the amount of funds received can therefore have a positive effect on a scientist’s reputation, since these figures offer an indication of whether their research proposals are solution-oriented and/or innovative enough to be granted funding.
As well as submitting a proposal that meets the funding criteria, researchers also need to have relevant research experience; once again, this is demonstrated by factors such as the number and impact of their publications.

Research assessment

Assessments of a scholar’s research and reputation take place in various contexts within the scientific setting, for example:

  • assessment of researchers applying for appointments or research funding (career assessment);
  • approval of project funds by funding bodies;
  • assessment of funded research projects (ongoing or completed);
  • assessment of institutions (for comparisons, rankings, etc.).

Research assessment is necessary because the scientific community has scarce resources; in other words, there are generally more applications than appointments or funds available. In a sense, then, this assessment process serves to keep research running smoothly. At the same time, the nature of these assessments also has an impact on the culture and quality of research activities.
Research assessment can be divided into two categories: qualitative assessment using peer review, and quantitative assessment. The latter is based on bibliometric methods, which measure publication output and impact in order to generate corresponding indicators.

Criticism of research-assessment practices and calls for reform

Research-assessment practices have faced mounting criticism in recent years, especially for their focus on quantitative assessment and the problematic developments that have emerged as a result. The reduction of research accomplishments to a small number of quantitative parameters is regarded as particularly concerning, especially since many of these metrics are considered flawed. Journal-based indicators such as the Journal Impact Factor, and indicators that combine productivity and impact into a single number, such as the h-index, have come in for particular criticism. Journal-based indicators are seen as dubious because they equate the average citation frequency of the articles in a journal with the quality of the journal and of every article published in it. Moreover, some experts argue that the h-index is not fit for purpose because it gives such a limited picture of a scientist's accomplishments and offers almost no meaningful way to compare the careers of different scholars.
There is also concern that scientists seeking to build their reputation will focus solely on increasing their research output and impact, since these are the factors on which assessment systems are based. The worry is that this creates false incentives that cause the focus to drift away from quality assurance and compliance with good research practice.
A further criticism of quantitative assessment is that it is limited to certain types of publication, namely those that are indexed by the databases that are used to calculate indicators. Other types of publication that also count as research output – such as research data, blog posts and software – are given almost no recognition or consideration.
Critics also point to the lack of consideration for work that contributes to the quality assurance of research results. In particular, this includes open-science practices – in other words, the tools and practices that seek to open up the various stages of the research cycle in order to promote the transparency and reproducibility of results and thus increase the credibility of scientific findings.
Initiatives such as DORA, the Leiden Manifesto and CoARA are therefore campaigning for a major overhaul of current research-assessment practices. In particular, they see a need to move away from journal-based metrics as indicators of research quality. Other key calls for change include:

  • Broadening the qualitative assessment of research accomplishments, for example through peer review, and relegating quantitative indicators to a supporting role.
  • Greater recognition of open-science practices and procedures that are designed for the quality assurance of scholarly work, such as the pre-registration of research studies.
  • Broadening the scope of what should be recognised as scientific output.
  • Expanding the way in which impact is measured to encompass more than just the number of citations – for example, by evaluating how significant the research is for the respective subject area, or by measuring its social impact through mentions on social-media platforms, news sites and blogs, as captured in altmetric reports.
  • Boosting the transparency of research assessment itself; this is especially important when employing quantitative indicators, which should be based on data and calculation methods that are both transparent and verifiable.
  • Far greater recognition of the diversity of research accomplishments and of the other tasks carried out in academia, such as teaching, student supervision, committee work, communication of results outside the respective scientific community, collaboration and peer review.

The goal is to promote the practice of responsible research assessment. Despite the drawn-out discussions on research assessment and the slow pace of change, it is fair to assume that the existing systems of assessment and incentives will eventually be rethought and restructured. For example, the German Research Foundation has already changed its curriculum-vitae template for funding applications to take a more holistic approach to a researcher’s accomplishments. The new template also allows applicants to provide narrative information on their career and to include additional publication formats such as preprints, data sets and software packages in their list of research outcomes and findings.

Ranking systems can help with reputation-building

There are numerous ranking systems that can provide scientists with useful guidance on how to build their career and reputation, including the following:

  • Institutional rankings such as the Times Higher Education World University Rankings and the CHE University Ranking, which aim to highlight the best places to take an undergraduate or postgraduate degree, to conduct research, or to find employment.
  • Journal rankings that aim to help researchers find the right journal to communicate their academic findings. Examples include discipline-specific lists, such as the ABDC Journal Quality List for business journals, and rankings that cover multiple disciplines such as the Journal Citation Reports (JCR) and its Journal Impact Factor (JIF).
  • Conference rankings, which can also help scientists find the right place to communicate their scientific findings. Examples include the CORE Conference Ranking, which provides assessments of major conferences in the computing disciplines.

The main criticism levelled at ranking systems is the lack of transparency with which some of the lists are compiled. If you decide to use a ranking system as a decision-making aid, it is therefore important to take a careful look at the underlying methodology to check it is robust, and to determine whether the ranking system takes into account the criteria that matter in your own decision.
It is generally not a good idea to use rankings as the sole basis for making a decision. When it comes to choosing a journal or a conference, for example, it is important to check whether it is the right fit – in other words, whether your own scientific findings match the nature and overall thrust of the respective journal or conference. 

Promoting your own reputation and increasing your visibility

When scientists apply for funding or for an academic post, they are explicitly required to provide information on their career and their reputation. However, it is also possible to make this information freely accessible on a more general basis in order to raise your standing and increase the visibility of your scholarly work.
Options include the following, though there are also many other possibilities:

  • Create and maintain an ORCID profile: An Open Researcher and Contributor ID (ORCID) is a persistent identifier – a unique number – that makes it easy to distinguish between scholars and ensure they receive appropriate acknowledgement of their works. As well as listing your publications, an ORCID profile page can also be used to enter further information on your career, your projects, and the types of articles you have submitted to each publication. In many cases, an ORCID is now required for manuscript submissions to journals and conferences; all scholars should therefore seriously consider creating an ORCID profile.
  • Maintaining additional author profiles: Once they have indexed enough publications, databases such as Web of Science or Scopus and search engines such as Google Scholar create author profiles generated by algorithms. These are then automatically enriched with additional information such as the author’s h-index and the citation frequency of their individual publications. It is important to remember that such profiles generally only include publications that appear in journals or book series that are indexed by the respective database. The same applies to the number of citations. With a certain amount of patience, it is generally possible to correct this information manually, though it may be necessary to contact the service provider.
  • Publishing your CV together with a list of your publications and – if applicable – lectures on your own website or on your institution’s website: When posting a list of publications, it is important to distinguish between papers that have already been published and those that have not yet been published, as well as between peer-reviewed and non-peer-reviewed works such as preprints. You can also provide other information such as projects funded by third parties, committee work, peer-review activities and editorships.
  • Boosting visibility on social networks and blogs: Although the profile pages on social-media channels are not specifically designed to showcase the full extent of your scientific reputation, it is nonetheless possible to use these channels to disseminate publications, lectures and other scientific output and increase their visibility. Social-media platforms also offer a useful channel for presenting projects and demonstrating expertise. Blog posts can be a useful way to present your research findings in a different light and make them more accessible to readers outside your particular scientific community.
    Although such platforms do not fall within the remit of the institution where you work, it is nevertheless important to comply with your institution’s social-media policies and publishing guidelines when posting content related to your own work, as well as respecting any confidentiality agreements that may apply to projects or similar undertakings. When posting full texts that have already been published, it is important to comply with the rights of use and to cite the source work accordingly. 
    Regularly posting work to different channels can be a time-consuming process. It is therefore advisable to decide in advance which networks are worth engaging with, which content you intend to post there, and what purpose it should serve.

Reputation monitoring

Citation databases allow researchers to set up notifications to send an alert when one of their articles is cited. This is a useful tool for “reputation monitoring”, the process of checking the context in which your own work has been cited while also identifying research groups that are working on similar topics and that may be of interest for collaborative projects.
If the journal or publisher that published your work incorporates altmetrics from appropriate providers on their website, you can use the DOI to determine which social-media platforms, blogs and news sites have mentioned your publications. This is a good way to find out whether your scientific findings have been referenced in other contexts outside the realm of science communication.

See also

The Journal Impact Factor and alternatives
Selecting a journal: How to find a suitable journal for publishing research results
DOI, ORCID and ROR: What makes persistent identifiers so useful?
What is open science?
Peer Review: Why is it important?

Disclaimer

Important note: The information and links provided here do not represent any form of binding legal advice. They are solely intended to provide an initial basis to help get you on the right track. ZB MED – Information Centre for Life Sciences has carefully checked the information included in the list of FAQs. However, we are unable to accept any liability whatsoever for any errors it may contain. Unless indicated otherwise, any statements concerning individual statutory norms or regulations refer to German law (FAQ updated 11/2023).

Dr. Jasmin Schmitz


References

Position Statement and Recommendations on Research Assessment Processes of July 2020, Science Europe. (accessed 16/11/2023)
Towards a reform of the research assessment system – Scoping report of November 2021, European Commission, Directorate-General for Research and Innovation. (accessed 16/11/2023)
Academic Publishing as a Foundation and Area of Leverage for Research Assessment of May 2022, DFG Executive Committee Working Group on Publications. (accessed 16/11/2023)
Agreement on Reforming Research Assessment of July 2022. (accessed 16/11/2023)

Curry, S. et al (2020). The changing role of funders in responsible research assessment: progress, obstacles and the way ahead (RoRI Working Paper No.3). London, Research on Research Institute. (accessed 16/11/2023)
 
Package of Measures to Support a Shift in the Culture of Research Assessment of September 2022, DFG. (accessed 16/11/2023)

Related Links

PubMed
Web of Science
Scopus
Google Scholar
Field Weighted Citation Impact of 10 August 2020, Metrics Toolkit. (accessed 16/11/2023)  
Funding by the Federal Ministry of Education and Research (German only)
DFG – All Funding Programmes 
San Francisco Declaration on Research Assessment of 16 December 2012, DORA. (accessed 16/11/2023)

Hicks, D. et al. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520, 429–431.
 
World University Rankings
CHE University Ranking 
ABDC Journal Quality List
Journal Citation Reports
CORE Ranking Portal
ORCID