
The Precariousness of Academic Publishing in a Digital World

Published online by Cambridge University Press:  30 September 2024

Pil Maria Saugmann*
Affiliation:
The European Council of Doctoral Candidates and Junior Researchers (Eurodoc), Brussels, Belgium

Abstract

The world around us is growing increasingly digital and data-intensive, affecting our lives and practices as citizens and researchers in a multitude of ways. We have to ask how we ensure that academic research remains trustworthy and transparent as digitalization disrupts our practices. This article draws attention to the multifaceted nature of the challenges early-career researchers face with academic publishing in the digital era. Rather than zooming in on one aspect and losing track of the complexity of the problem, it addresses (1) the purpose of academic publishing, (2) the type of material to be published, (3) the role and use of AI and data in research, (4) the entanglement of academic publishing and research assessment, (5) the role of Open Science, and (6) what makes early-career researchers as a group different from other researchers.

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Academia Europaea Ltd

Introduction

When I was first invited to speak on behalf of the European Council of Doctoral Candidates and Junior Researchers (Eurodoc) at the Wenner-Gren symposium, ‘Publishing in Academia – Digital Challenges’, the title puzzled me. Somehow, it seemed to me that this title indicated that digital challenges are a niche set of challenges when it comes to academic publishing and can be separated from non-digital ones. As an early-career researcher (ECR) who qualifies as a millennial, my world is fundamentally digital, and such a distinction between digital and non-digital challenges struck me as artificial.

In the months since the symposium, I have come around; I now find that this title is timely and well chosen, and it points towards the future. The world around us is growing increasingly digital and data-intensive, affecting our lives and practices as citizens and researchers in a multitude of ways. I have titled this article ‘The precariousness of academic publishing in a digital world’ because, as I see it, the digital world where everyone has access to almost any information at any time makes the foundation of academic publishing precarious.

One way I see this precariousness is in the tendency to focus on text materials, such as articles or books, when discussing academic publishing. These are objects that previously could only be published in physical copies, yet could still be mass-produced and distributed at scale. Today, such materials are published in both physical and digital formats. For me, an ECR who has never experienced another version of academic publishing, the challenges with this form of publishing are not digital per se; they are simply the standard challenges of academic publishing. Genuinely digital challenges arise when we look instead at formats of research material that can only be shared effectively at scale in digital form, such as images, audio, and video and, as society and research grow more data-intensive, research data itself. If we focus mostly on the publishing of articles or books, we fail to recognize how different digital challenges can look depending on the type of material to be published. We have to ask how we ensure that academic publishing remains trustworthy and transparent as digital publishing disrupts what can be published and how we can publish it.

These days, speaking about data inevitably introduces a discussion about artificial intelligence, typically in the form of large language models such as ChatGPT, and their use. Such a discussion quickly requires one to address privacy concerns: as private citizens, we are all encouraged to be careful about what private data we share and with whom. At the same time, as researchers, we are encouraged to share our research as openly as possible, including our research data. These two viewpoints are not necessarily at odds with one another. However, there is a tension, and a potential challenge, in reconciling the privacy concerns of research subjects with Open Science policies.

Returning to the topic of the symposium, I want to draw attention to the digital challenges ECRs face with academic publishing. They are multifaceted, and zooming in on only one aspect means losing sight of the complexity of the challenge. Thus, addressing the digital challenges ECRs face regarding academic publishing entails addressing (1) the purpose of academic publishing, (2) the type of material to be published, (3) the role and use of AI and data in research, (4) the entanglement of academic publishing and research assessment, (5) the role of Open Science, and (6) what makes ECRs as a group different from other researchers.

It might have been natural to start with what makes ECRs unique as a group of researchers. However, I wish to begin elsewhere: first, in the section ‘What is the Purpose of Academic Publishing?’, with a discussion of the purpose of academic publishing, and second, in the section ‘The Digital Elephant in the Room’, with some reflections on how the increased use of AI in research calls for reflection on how to manage data responsibly. Then, in the section ‘What Makes Early-career Researchers Special?’, I address how precariousness and its possible consequences make the situation of ECRs unique, and I discuss how this precarity affects their adoption of Open Science practices. In the section ‘Open Research Data’, I elaborate on what makes the publishing of research data, and in particular Open Research Data, so complex. Finally, in the section ‘Three Thoughts on the Future’, as the title suggests, I offer three reflections on challenges that need to be addressed.

What is the Purpose of Academic Publishing?

With academic articles and books in mind, it is relatively easy to provide a generally accepted answer to this question, such as ‘to communicate research findings’ or ‘the purpose of academic publishing is to facilitate the dissemination of new knowledge and research findings to the research community and even beyond’. The publishing of academic articles and books plays a vital role by providing a platform for researchers to share their research not only with other researchers but also with students, policymakers, and the public. Through peer-reviewed journals and books, academic publishing contributes to the accumulation of collective knowledge. Publishing and sharing research also allows other researchers to check, challenge, and rigorously review it. The rationale for sharing research data is similar: by doing so, researchers contribute to the transparency, reproducibility, and credibility of their research. When research data are open, they become a resource that can be reused not only by other researchers but also by educators, policymakers, innovators, and the general public.

Academic publishing has evolved from handwritten manuscripts to today’s digital publishing. With the re-invention of the printing press in the fifteenth century, publishing, including academic publishing, was revolutionized by the mass production of written material, and since the seventeenth century academic journals have been a cornerstone of scholarly communication. The invention of the internet set off a second revolution in academic publishing, and today academic publishing involves both traditional print and electronic formats (Fyfe 2019).

As research in many cases relies on public funding, it can be argued that our collective knowledge is public property and that researchers should share their findings openly, so that everyone has access without barriers. This aligns with the principles of Open Science, under which ‘Science is to be as open as possible and only as closed as necessary’ (UNESCO 2021). Open Science can, as a value, serve as a compass for what good science is; at the same time, Open Science is also a concrete set of practices that individual researchers must implement to realize the vision described above.

Academic publishing does not exist in a vacuum. Since the 1980s, it has been an integral part of research assessment, and the emphasis in today’s research assessment system on journal impact factors poses a significant problem that initiatives such as the San Francisco Declaration on Research Assessment (DORA) aim to address (DORA 2022).

The lack of recognition for research that aims to reproduce previous results, combined with the publish-or-perish culture, contributes to what has been called the reproducibility crisis, in which researchers struggle to reproduce the findings of others (Baker 2016; Ioannidis 2005). The same lack of recognition downplays the importance of the work of peer reviewers, which, as argued by Flaherty (2022) and others, can be considered one of the reasons it is hard to find peer reviewers.

In an increasingly digital world, it is striking how the current criteria used in research assessment fail to acknowledge the impact of other formats, such as data or code. The strong focus in research assessment on publications in the form of articles or books makes it easy to neglect the importance of communicating research findings in ways other than the traditional ones (Khan et al. 2022).

The Digital Elephant in the Room

As researchers and citizens, we all face the challenge of addressing the changes that artificial intelligence (AI) is bringing. Over the past year, the media have carried countless articles discussing how AI in the form of ChatGPT, DALL-E 2, and the like will disrupt the labour market and change the educational system. The use of AI in our daily lives is not new, and most of us already rely on email spam filters or recommendation systems in streaming services to make our lives easier. All of these tools are constructed using similar techniques, namely machine learning (ML) (an introduction to AI can be found at the University of Helsinki MOOC centre).

Traditional computer programs can be viewed as rule systems: an algorithm receives data and then, according to the rules, processes the data to give us an output. If we already know the relation between the input data and the output, and it can be expressed as a mathematical formula, then in principle the algorithm is just an automation of something we could have done ourselves. The advantage of using the computer program is that it is faster and can process more data than we can, but what the program does is fully explainable to us.

However, the examples mentioned, from recommendation systems and spam filters to ChatGPT, are constructed differently. Here, one initially does not know the relation between the input data and the output. Instead, using ML techniques, such relations are found as statistical correlations through training. The specific training method can vary; generally, one speaks of supervised, unsupervised, or reinforcement learning.

All three methods of training modify an initial model. The information stored in the training data is carried over into the modified model. When presented with new data, the model has now learned to process them, and depending on the type of learning algorithm, it can transform the input data into different forms of output, such as recommendations or solutions to optimization problems. However, the exact relation between input and output can only be retrieved if the structure of the model is very simple or the dataset is small; otherwise, in contrast to traditional computer programs, it remains unexplainable.
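
To make the contrast concrete, here is a minimal Python sketch (an illustration of my own, not drawn from any of the cited works): a rule-based program whose input-output relation is a known formula, next to a supervised model that must recover an approximation of the same relation purely from example data.

```python
# Minimal sketch: a rule-based program (known formula) vs a supervised
# model that estimates the same relation from noisy training examples.
import numpy as np

# Rule-based: the relation is a formula we already know (fully explainable).
def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

# Supervised learning: only example pairs are available; the relation is
# estimated statistically from the training data.
rng = np.random.default_rng(0)
train_c = rng.uniform(-30, 40, size=200)                             # inputs
train_f = celsius_to_fahrenheit(train_c) + rng.normal(0, 0.5, 200)   # noisy labels

slope, intercept = np.polyfit(train_c, train_f, deg=1)               # "training"

print(f"learned relation: F ~ {slope:.2f} * C + {intercept:.2f}")
print("rule-based:", celsius_to_fahrenheit(20.0), "learned:", slope * 20.0 + intercept)
```

In the first case the relation is given; in the second it is estimated, and for models far more complex than a straight line the estimated relation is no longer something we can simply read off.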

At the very heart of such models lies the fact that they need to be trained on data in order to be effective, and thus the quality of the training data is crucial for the quality of the final model. This implies that if the outcome of an AI model is to be trustworthy, the training has to be unbiased. If the training data are intentionally or unintentionally biased, this bias will be inherited by the final model. For a recommendation system used to suggest what movie you should watch tonight, such a bias might only be a slight annoyance. However, if the AI model is instead used to make medical treatment recommendations or to suggest court sentences, any such bias can not only have enormous consequences for the individuals concerned but also affect our trust in the medical system, the judicial system, and democracy (Mittermaier et al. 2023; Hamilton 2023).
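
A deliberately simplified sketch (my own illustration, with hypothetical numbers rather than data from any of the cited studies) shows how a bias in historical decisions is inherited by a model trained on them:

```python
# Two groups have identical underlying scores, but the historical labels
# are harsher on group B; a model trained on those labels reproduces the
# disparity on new data without anyone programming it in.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B
score = rng.normal(0.0, 1.0, n)        # identical score distribution

# Biased historical decisions: group B needs a higher score to be accepted.
threshold = np.where(group == 0, 0.0, 0.5)
label = (score > threshold).astype(int)

# "Training": learn one acceptance threshold per group from the labels
# (a deliberately simple stand-in for a more complex model).
learned_thresholds = [score[(group == g) & (label == 1)].min() for g in (0, 1)]

# The learned model reproduces the disparity on new, identically distributed data.
new_score = rng.normal(0.0, 1.0, n)
for g in (0, 1):
    accept_rate = (new_score > learned_thresholds[g]).mean()
    print(f"group {'AB'[g]}: learned threshold {learned_thresholds[g]:.2f}, "
          f"acceptance rate {accept_rate:.2%}")
```

The ‘model’ here is trivially simple on purpose; the point is only that the disparity in the labels reappears in the learned behaviour.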

The Use of AI in Research

Moving closer to home, AI tools could be used for initial peer review and thus to judge what is published. However, the problem of bias is encountered here too, because, as before, the tool is trained on a dataset, and if the dataset or the initial model is biased, this will carry over into the verdict, whether it is delivered in the courtroom or the editor’s office.

If we take it a step further, we can think of using AI tools to check the grammar of our articles, to find references for specific paragraphs, or to identify where counter-arguments are needed. Leaving aside the question of where the line is to be drawn between good scientific practice and bad, there is also the risk that the tool inherits a bias from its training data, and this must be addressed.

The above examples were chosen to illustrate how AI tools can support researchers by identifying different kinds of gaps in their work while leaving the actual research to the researcher. It is, however, straightforward to think of situations where AI tools do part of the research itself, such as drafting paragraphs for articles or books or formulating counter-arguments. Exactly where the line between scientifically acceptable and unacceptable practices lies is a discussion that needs to be had, but it is not what I wish to highlight here.

Instead, I want to highlight that using any AI tool relying on machine learning techniques comes with questions and worries about potential bias in the data. Sometimes these questions are quickly answered and the worries can be dispelled; sometimes they cannot.

The above examples work well as illustrations of how AI tools can be used (and misused) within most research fields, but they are still only the tip of the iceberg when it comes to the potential uses of AI in research. If we have enough data, we can also use the same techniques to study the world around us and uncover new knowledge.

In physics, ML techniques have been used to rediscover Newton’s laws of gravity, and we can envision how they might be used to find unknown laws of physics (Li 2021; de Silva et al. 2020). The status of such physical laws certainly comes with epistemic questions attached. The issue of the potential epistemic value of correlations found using AI based on ML techniques is not a problem unique to physics.
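
As a toy illustration of this idea (a sketch of my own, not the method used in the works cited above), one can generate synthetic gravitational-force ‘measurements’ and let a simple regression recover the inverse-square exponent:

```python
# Toy sketch with synthetic data: recover the exponent p in F = G*m1*m2 / r**p
# from noisy "measurements" by linear regression in log-log space.
import numpy as np

G, m1, m2 = 6.674e-11, 5.0e3, 2.0e3      # gravitational constant, arbitrary masses
rng = np.random.default_rng(1)

r = rng.uniform(1.0, 100.0, size=500)                     # separations in metres
F = G * m1 * m2 / r**2 * rng.lognormal(0.0, 0.05, 500)    # noisy forces

# log F = log(G*m1*m2) - p * log r, so the slope of a straight-line fit gives -p.
slope, intercept = np.polyfit(np.log(r), np.log(F), deg=1)

print(f"recovered exponent p = {-slope:.3f} (expected 2)")
print(f"recovered constant G*m1*m2 = {np.exp(intercept):.3e} (true {G*m1*m2:.3e})")
```

Whether an exponent recovered in this way counts as a discovered law or merely a well-fitting correlation is exactly the epistemic question raised above.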

The examples provided by Mittermaier et al. (2023) and Hamilton (2023) of how AI can be used to diagnose patients or to suggest sentences for criminal offences can be reformulated as research questions. Say that researchers discover a correlation between where people live, or their employment situation, and how they are sentenced in court. Is the research finding to be shared a correlation or a bias, and how can this be determined?

As in other cases, working with data in research comes with ethical considerations. Among these are privacy concerns, which must be addressed to ensure responsible use and potential data sharing.

In the case of ML models, privacy concerns can be used, and misused, as an argument against sharing research data. However, one can also turn the argument around and use it to contend that the data AI models are trained on, and the code behind them, need to be made open. Because the model has been trained on the data, the data are in a sense embedded in the tool, and there is no guarantee that privacy-sensitive data cannot be retrieved from it. Thus, one could also argue that if training data should not be shared publicly owing to privacy issues, then such data should not be used in the first place.

Responsible use and sharing of data in research that relies on ML models is a challenge that needs to be addressed. As many research fields that were previously not considered data-heavy grow more reliant on data, more researchers need to be trained to handle these challenges. This is especially important when it comes to ECRs, as they are the ones with the longest part of their career ahead of them.

How do we ensure that AI models are used responsibly in research? To what degree should we expect researchers to understand the AI models they use? To what degree do we need to train this and the next generation of researchers in the responsible use of AI models, data handling, and Open Science practices? These questions must be tackled if research is to be reproducible and research practices are to be transparent in the future.

Researchers need to be trained on these topics, and on how AI will shape research.

What Makes Early-career Researchers Special?

The challenges mentioned above impact not only early-career researchers but more senior researchers as well. However, depending on where you are in your academic career, they will affect you differently.

A shift in research practices has taken place over the last 30 years. Compared with 30 years ago, more early-career researchers publish in academic journals during their doctoral education. While doing so, doctoral candidates often publish in the same journals and under the same conditions as senior researchers; thus, the work they submit meets the professional standards of the field (Kendal et al. 2022).

In 2020, there were around 650,000 doctoral candidates and just under 2 million other researchers in Europe (the numbers can be found through Eurostat 2023). Of these 2 million, around 30% were employed in the higher-education sector, comprising not only researchers with permanent employment but also ECRs other than doctoral candidates, such as postdocs. This means that the number of ECRs likely exceeds the number of researchers in the higher-education sector with permanent employment, and, as was also pointed out in a recent editorial in F1000, ECRs are not a homogeneous group (Mohammed 2023). Thus, it should be remembered that different groups of ECRs also face different challenges when it comes to publishing.

However, there are two conditions that most ECRs have in common: they have precarious working conditions, and many of them are likely to leave academia (Hnatkova et al. 2022; Boman et al. 2017). These two conditions are particularly worth remembering when discussing any challenges ECRs face, as they make ECRs, as a group, significantly different from more senior researchers.

Precariousness is not only a question of the lack of permanent employment; it also concerns what this entails for the individual at their workplace and in society. Precarious employment can mean reduced access to social security, such as sick leave, parental leave, unemployment benefits, or pension savings, compared with what would be considered the norm. Precariousness in the form of not having access to parental leave, or not being able to get a mortgage because of non-permanent employment, makes it hard to plan your professional and private life. Depending on your particular situation in life, precariousness has different consequences, and not everyone can afford them equally well. Thus, precariousness is a barrier to diversity (OECD 2021).

Academia is the workplace of ECRs, and their type of contract and funding influences their access to support at their workplace and their working conditions. If your working life consists of a sequence of short-term contracts or scholarships, you will likely have shorter or longer periods without a contract or scholarship. While you have an employment contract or a scholarship, you will likely have an official affiliation with an academic institution, and it is likely that the institution, say through its library, provides a range of services related to publishing. However, when your contract or scholarship runs out, you will likely lose the right to use these services.

Whether an ECR is employed or financed by a scholarship can also have consequences for which services are provided by the university with which they are affiliated. In some places, career guidance programmes are only offered to those who are employed; similarly, the type of affiliation can influence whether an ECR is eligible to be a member of a labour union, and thus whether they have access to professional help in disputes at their workplace (OECD 2021; Tress Academic 2022).

No doubt, many universities wish to provide a good working environment for their ECRs. However, my experience as an ECR is that the consequences of precarious working conditions are forgotten or treated as an outlier problem. At the same time, ECRs in many situations have limited representational rights compared with others and therefore also lack a formal platform for raising their issues and concerns (Pizzolato et al. 2023; Kent et al. 2022).

The point I wish to stress with the above is that when discussing challenges that ECRs face compared with other researchers, one always has to ask how precarious working conditions influence these challenges in the best- and worst-case scenarios.

ECRs and Open Science

Several studies have explored the Open Science practices of ECRs (Berezko et al. 2021; Nicholas et al. 2020; Gownaris et al. 2022; Toribio-Flórez et al. 2021, among others). ECRs are found to be favourably inclined towards Open Science practices, and they generally see many benefits in Open Science (see Gownaris et al. 2022 and Allen and Mehler 2019). However, they do not necessarily practise what they preach, meaning that they do not necessarily publish with open access.

When asked which challenges they experience with Open Science, three themes recur: lack of impact, lack of financing, and lack of knowledge. Others have discussed and analysed these challenges thoroughly, and I refer the reader to the references mentioned above for such an in-depth discussion. What I want to focus on here is what it means to consider these three issues through the lens of the precarious conditions that ECRs experience.

While publishing with open access can lead to higher citation rates (Lawrence 2001; Langham-Putrow et al. 2021; MacCallum and Parthasarathy 2006), ECRs still perceive it as coming at the cost of impact, meaning that they expect Open Science practices not to be valued, rewarded, or even considered in research assessment, as discussed by Khan et al. (2022). If your chances of obtaining funding for your next position depend on the journal impact factor of where you published your current research, then the indirect cost of choosing open-access publishing can be unaffordable.

However, the costs of publishing with open access are not only indirect. There are also direct costs, such as Article Processing Charges (APCs). The question then is: who is to pay the fee? Some institutions cover these fees centrally, but where this is not the case, the question of who pays remains. One option is that the fees are covered by the researcher’s funding; however, for many ECRs, the use of this funding will not be theirs to decide upon, as they are not the primary grant holder. And, if all other options fail, do we expect ECRs to pay such fees out of pocket? It must be acknowledged that financial barriers pose an obstacle for some ECRs when it comes to adopting Open Science practices, and, as argued by Bahlai et al. (2019), Open Science is not equally open to everyone.

At the 2023 Wenner-Gren symposium, during a discussion about whether publishers can legally forbid researchers from sharing preprints, and thereby pose a legal challenge to Open Science, it was suggested that this would be interesting to test and settle legally.

As an ECR, it struck me that the imaginary researcher in the example was assumed to be protected by labour laws and that the university, as the employer, would protect and support the researcher. Not only does this directly assume that the researcher is employed by the university; indirectly, it also assumes that this employment is permanent. For me and other ECRs, the question remains: if such a dispute were to arise, where would it leave us if we were financed by scholarships, were between contracts, or had moved on to a career outside academia?

Most ECRs have little legal training and cannot answer such questions themselves. The situation described above worries me. In general, when we discuss the legal protections researchers have in their profession, many of these are actually tied to regular labour law or otherwise connected to stable, if not permanent, employment, and for the majority of researchers in academia today this is not the case. This is something academic institutions need to address better, whether it concerns potential conflicts with publishers, access to research data, or something else.

Open Research Data

Open-access publishing is only one of many Open Science practices relevant to ECRs. Open Science, in general, aims at increasing transparency in research. Other practices, such as open research data, open code, open hardware, open infrastructure, and open educational resources, must also be considered (Dolinar et al. 2023).

Open Research Data refers to the practice of making research data openly accessible to other researchers and the public; in its fullest version, this is done without restrictions, barriers, or limitations. However, this is easier said than done. Research data are the raw, factual information collected, observed, or generated during research activities. Even though numerical values and/or text are typical data formats, it should be remembered that research data can take various other forms. Data are as varied and diverse as research itself. For example, in the humanities, data will include cultural artefacts and textual materials, ranging from historical documents to modern visual artworks. In the natural sciences, such as physics, data include experimental measurements and observations that may be generated from particle collisions in high-energy experiments, astronomical observations, or computational simulations. As the examples show, data can be quantitative or qualitative, created with research in mind or for other purposes, and come in various formats. Any discussion of implementing Open Research Data practices has to encompass all of this.

Historically, having access to data meant having access to where the research data were physically stored, and this limited who could have access. Today, much of what can be regarded as data can exist in a digital format, and thus, anyone, in theory, can have access to almost all data at any time.

One widely discussed way to support this move is to implement data management plans (DMPs) more rigorously. A DMP is a document that outlines essential aspects of research data management throughout a research project and after it has ended. DMPs describe how research data will be collected, organized, stored, shared, and preserved. By specifying metadata standards, data formats, access protocols, and preservation strategies, DMPs are a tool to ensure that data are findable, accessible, interoperable, and reusable. Thus, DMPs align with the FAIR (Findable, Accessible, Interoperable, and Reusable) principles, which aim to enhance the (re)usability of research data. In the broader scope of Open Science, DMPs align with the movement’s emphasis on transparency, reproducibility, and accountability.
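
As a sketch of the kind of information a DMP records, here is a minimal, machine-readable outline in Python; the field names and example values are illustrative assumptions of my own rather than any official DMP template or standard:

```python
# Illustrative sketch of the information a DMP typically records; the field
# names and values below are assumptions, not a prescribed DMP schema.
data_management_plan = {
    "project": "Example lattice-simulation study",          # hypothetical project
    "data_description": "Raw simulation outputs and analysis scripts",
    "collection": "Generated by numerical simulations on a university cluster",
    "metadata_standard": "DataCite metadata schema",         # assumed choice
    "storage_during_project": "Institutional storage with nightly backups",
    "sharing": {
        "repository": "A general-purpose repository such as Zenodo",  # assumption
        "licence": "CC BY 4.0",
        "access": "Open, after an embargo if required by the publisher",
    },
    "preservation": "Deposited dataset retained for at least 10 years",
    "responsible_person": "Principal investigator / data steward",
}

# FAIR in brief: findable (persistent identifier + metadata), accessible
# (clear access protocol), interoperable (open formats), reusable (licence
# and documentation).
for field, value in data_management_plan.items():
    print(f"{field}: {value}")
```

Real DMP templates are provided by funders and institutions; the point here is only that the plan makes explicit, in advance, how the FAIR principles will be met.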

However, while some studies show that open research data are likely to increase the use of the data and the related citation rates (e.g., Woods and Pinfield 2022; Piwowar et al. 2007; Piwowar 2013), it also adds another item to the list of things that ECRs and other researchers must do. ECRs already report facing challenges in adopting open data practices owing to limited resources, lack of training and education, and concerns about data privacy and intellectual property rights. Thus, moving to open research data is a complex task, and if it is to work, ECRs must be trained and supported in doing so.

Three Thoughts on the Future

I want to finish by returning to the title of this article, ‘The Precariousness of Academic Publishing in a Digital World’, and offer my perspective as an ECR on how to address this precariousness.

I start with the easiest one: the concrete suggestion that university libraries implement much more comprehensive online guides to support the training of ECRs and other researchers in adopting Open Science practices.

My second point is to highlight the importance of reforming the research assessment system. This is not an easy task, but right now the research community has a platform in the Coalition for Advancing Research Assessment (CoARA) that offers a unique possibility to do so, and this opportunity should be used.

The final point I wish to make is that when we discuss academic publishing, we should acknowledge that it has strong ties to academic freedom. Freedom to learn, freedom to teach, and freedom to do research require that research is published and shared. Across Europe, we see academic freedom being threatened, and this should worry us and make us question our current practices, because academic freedom in all its forms is a prerequisite for democracy (West 2022). For me, this is by far the hardest challenge to address, but I see the need for ECRs to be included and considered when it comes to academic freedom.

I have chosen these three topics because, as I see it, the first is straightforward to address, the second is timely, and the third, albeit hard, is absolutely necessary to address for the future of democracy. In the following subsections, I turn to each of these topics in greater detail.

The Role of Libraries in the Training of ECRs in Open Science Practices

Learning how and where to get your research published is part of being a doctoral candidate and often something you undertake with the support of your supervisor. In general, it is not uncommon for ECRs to follow senior researchers’ advice about where to publish. This is not problematic in itself, and many senior researchers do an excellent job of supervising and supporting ECRs when they are new to publishing. However, if the senior researchers are not familiar or comfortable with open-science practices, then we cannot expect them to be the ones who train ECRs in such practices either.

Thus, the existing challenges with training ECRs in open-access publishing are likely to grow when it comes to training them in open research data practices. Tools such as DMPs will become necessary in research fields where they previously would have been considered overkill, and in many situations it will be just as likely, if not more likely, that the ECR supports the senior researcher in such practices rather than the other way around.

When ECRs report that they lack knowledge about Open Science practices and need better training, this should be taken seriously, and one straightforward path to addressing the challenge is to expand the (online) support that university libraries offer.

While many universities have adopted comprehensive Open Science policies, which are often easily accessible through either the university website or the university library’s website, the same cannot be said about guides and support. For inspiration on what comprehensive online support for open-access practices in general could look like, I recommend the resources offered by the Europe-wide initiative FOSTER and by Leiden University in the Netherlands, although it should be mentioned that FOSTER is no longer maintained. For inspiration on support for Open Research Data, Leiden University, Oxford University in the United Kingdom, and Aarhus University in Denmark all offer comprehensive online guides.

These guides are useful to varying degrees, and they will certainly not suit all researchers perfectly; however, they can serve as a good starting point. I would recommend including local ECRs and more senior researchers, from different research fields, when developing and maintaining such guides.

A Necessary Reform of Research Assessment Systems

The challenges researchers face with academic publishing are, however you phrase it, entangled with the challenges of the current research assessment paradigm. This connection must be addressed if we are to tackle these challenges and implement better, more sustainable practices that avoid repeating the problems of the current academic publishing system (Deutsche Forschungsgemeinschaft 2022).

Many people have argued that it is necessary to reform the European research assessment system. The open question, however, is what such a reform should lead to in terms of actual changes. The heavy overemphasis on one metric has proved problematic for many reasons, not only because it can fail to recognize contributions in the form of peer review, reproducibility research, the sharing of data, and other Open Science practices. It has also been argued that it acts as a barrier to increasing diversity (Swidor-Cios et al. 2021) and that it fails to recognize different kinds of impact, such as the public value of research (Molas-Gallart 2014) or researchers’ engagement with society (Rauchfleisch et al. 2021).

Some suggestions have been made to address this issue. It has been suggested, and also implemented in several places, that one should limit the focus on journal impact factors in research assessment by limiting the number of articles included in the assessment process (Kendal et al. 2022). A number of best practices for addressing issues with the current hiring, promotion, and tenure of researchers can be found in Moher et al. (2018). It has also been argued that a reform of the research assessment system needs to address all aspects: what is being assessed, the procedure behind the assessment, who the assessors are and what their roles are, the environments in which the research takes place, as well as the coordination of all of this (Aubert Bonn and Bouter 2021). A point also worth mentioning is that what constitutes responsible research assessment is likely to be continuously adjusted (Nature 2022).

I want to draw attention to the Coalition for Advancing Research Assessment (CoARA), as I believe that it currently provides the best platform for reforming European research assessment. With the launch of its first ten working groups and five national chapters, CoARA could provide the necessary drive to change the research assessment system.

Each of the ten working groups represents an essential perspective on what reforming research assessment must address. Nevertheless, as an ECR, I believe that it is essential to include an early- (and mid-) career researcher perspective in such a discussion, because they are the ones who will be subject to research assessment the longest. Therefore, I find the working group ‘Early- and Mid-Career Researchers (EMCRs) – Assessment and Research Culture’ particularly interesting.

However, as this year’s topic of the Wenner-Gren symposium was ‘Publishing in Academia – Digital Challenges’, it is also worth mentioning that the working groups ‘Recognizing and Rewarding Peer Review’ and ‘Multilingualism and Language Biases in Research Assessment’ focus on issues brought up and discussed explicitly during the symposium.

So far, only five national chapters have been formed, and if this is taken to be a sign of the national interest in the topic across Europe, then there is reason for concern. I hope this is not the case, but the answer will depend on how much traction the working groups gain.

However, as the community of researchers, academic institutions, and other stakeholders has argued that the current research assessment system needs to be reformed, we also have a shared responsibility to make this happen. I am not suggesting that every one of us actively join one of CoARA’s working groups; that would likely make them highly dysfunctional. However, I want to argue that if we care about research, then we all need to contribute just a little.

If you know nothing or close to nothing about what CoARA is and does, then either reading the agreement or looking into how your university or other organization is contributing is an excellent place to start. However, if you already have some knowledge, consider what you can do to spread this knowledge, and consider how your organization can contribute to the ten working groups, as they are likely to need contributions in different forms from the larger research community if they are to be successful.

Academic institutions must extend the invitation to participate in reforming the research assessment system to the researchers who will be most affected by it, namely doctoral candidates, postdocs, and other early-career researchers. As a group, we have a long career in academia ahead of us, and thus those who remain will feel the full impact of the research assessment system for many years to come.

The Question of Academic Freedom

To conclude, the greatest digital challenge with academic publishing we face today is how digital the world has become. The emergence of AI models adds another layer of pressure to agree on how data are used and shared responsibly, and to implement Open Research Data practices that ensure the data are unbiased and the research reproducible. Getting researchers to adopt Open Science practices will require training, the necessary infrastructure, and the acknowledgement and rewarding of such practices in the research assessment system. As a growing number of researchers working in academia have (very) precarious working conditions, doing all of this requires that this precarity is properly taken into account and that measures are taken to avoid its negative consequences.

However, academia does not exist in a vacuum outside of society. Academia educates society by fostering critical thinking and inquiry-based learning, also in a digital world where anyone can access almost any information at any time. Ensuring this role, and ensuring that research is as unbiased as possible, that research methods are transparent, and that knowledge is a public property, are key pillars of European democracy (West 2022).

Fulfilling this role requires academic freedom and is, at the same time, a prerequisite for that same academic freedom. As researchers, we should not only focus on our own individual academic freedom but also stand up for that of others. This includes standing up for citizens’ access to research and knowledge, for students’ right to an education that supports their critical thinking, for our colleagues’ right to teach, conduct research, and do outreach without fear of repercussions, and for their right to disagree with us.

I like to think of academic freedom as a conversation and as both a right and a responsibility we all share in a democracy, although, of course, it looks different depending on whether you are a researcher, a student, or another citizen. I find the phrasing by Blessinger and de Wit (2018), that academic freedom is a common good in a democracy, to be right on point.

It is a conversation we have in the decision-making bodies that contribute to securing institutional autonomy, and here ECRs and other researchers without permanent employment are often excluded, as they can lack the representational rights enjoyed by students and researchers with permanent employment. From the perspective of an ECR, it is clear that this should be addressed. It should be a right and a responsibility of all researchers to partake in the conversation that is academic freedom, and ECRs should be denied neither the right nor the responsibility.

Acknowledgements

The author would like to acknowledge and thank Irina Dumitru and Sara Pilia for helpful comments and suggestions for improvement.

About the Author

Pil Maria Saugmann holds a PhD in theoretical physics from Stockholm University. Her research has been directed towards condensed-matter physics and particularly different lattice systems that could serve as a route to realizing quantum simulators. Her academic interests also include issues related to diversity and equal opportunity, theory of science, citizen science, research communication, digitalization of society, and conditions for research, researchers, and research education. She is currently the Vice-President of the European Council of Doctoral Candidates and Junior Researchers (Eurodoc).

References

Allen, C and Mehler, DMA (2019) Open science challenges, benefits and tips in early career and beyond. PLOS Biology 17(5), e3000246. https://doi.org/10.1371/journal.pbio.3000246
Aubert Bonn, N and Bouter, L (2021) Research assessments should recognize responsible research practices: narrative review of a lively debate and promising developments. In Valdess E and Lecaro JA (eds), Handbook of Bioethical Decisions. Vol. II: Scientific Integrity and Institutional Ethics. Springer, pp. 441–472. https://doi.org/10.1007/978-3-031-29455-6
Bahlai, B, Bartlett, LJ, Burgi, KR, Fournier, AMV, Keiser, CN, Poisot, T and Whitney, KS (2019) Open science isn’t always open to all scientists. American Scientist 107(2), 78–82. https://www.americanscientist.org/article/open-science-isnt-always-open-to-all-scientists
Baker, M (2016) 1,500 scientists lift the lid on reproducibility. Nature 533(7604), 452–454. https://doi.org/10.1038/533452a
Berezko, O, Medina, LMP, Malaguarnera, M, Almeida, IaT, Żyra, A, Seang, S, Björnmalm, M, Hnatkova, E and Tata, M (2021) Perspectives on Open Science and scholarly publishing: a survey study focusing on early career researchers in Europe. F1000Research 10, 1306. https://doi.org/10.12688/f1000research.74831.1
Blessinger, P and de Wit, H (2018) Academic freedom is essential to democracy. University World News, 6 April 2018. https://www.universityworldnews.com/post.php?story=20180404101811251
Boman, J, Brecko, B and Berzelak, N (2017) 2017 Career Tracking Survey of Doctorate Holders. Project report. Strasbourg: European Science Foundation (ESF).
de Silva, BM, Higdon, D, Brunton, SL and Kutz, JN (2020) Discovery of physics from data: universal laws and discrepancies. Frontiers in Artificial Intelligence 3. https://doi.org/10.3389/frai.2020.00025
Deutsche Forschungsgemeinschaft (2022) Academic Publishing as a Foundation and Area of Leverage for Research Assessment. AG Publikationswesen. Zenodo. https://doi.org/10.5281/zenodo.6538163
Dolinar, M, Dahle, S, Rutkowska, J, Dengo, N, Mueller, K and Saugmann, PM (2023) Eurodoc Statement on Open Science 2023. Zenodo. https://doi.org/10.5281/zenodo.8105597
DORA (2022) San Francisco Declaration on Research Assessment. https://sfdora.org/read/ (accessed 27 September 2023).
Flaherty, C (2022) Peer-review crisis creates problems for journals and scholars. Inside Higher Ed, 22 June 2022. https://www.insidehighered.com/news/2022/06/13/peer-review-crisis-creates-problems-journals-and-scholars
Fyfe, A (2019) What the history of copyright in academic publishing tells us about Open Research. LSE Impact Blog, 15 November 2019. https://blogs.lse.ac.uk/impactofsocialsciences/2019/06/03/what-the-history-of-copyright-in-academic-publishing-tells-us-about-open-research/
Gownaris, NJ, Vermeir, K, Bittner, M, Gunawardena, L, Kaur-Ghumaan, S, Lepenies, R, Ntsefong, GN and Zakari, IS (2022) Barriers to full participation in the Open Science life cycle among career researchers. Data Science Journal 21. https://doi.org/10.5334/dsj-2022-002
Hamilton, M (2023) A ‘black box’ AI system has been influencing criminal justice decisions for over two decades – it’s time to open it up. The Conversation, 26 July 2023. https://theconversation.com/a-black-box-ai-system-has-been-influencing-criminal-justice-decisions-for-over-two-decades-its-time-to-open-it-up-200594
Hnatkova, E, Degtyarova, I, Kersschot, M and Boman, J (2022) Labour market perspectives for PhD graduates in Europe. European Journal of Education 57(3), 395–409. https://doi.org/10.1111/ejed.12514
Ioannidis, JPA (2005) Why most published research findings are false. PLOS Medicine 2(8), e124. https://doi.org/10.1371/journal.pmed.0020124
Kendal, D, Lee, KE, Soanes, K and Threlfall, CG (2022) ‘The great publication race’ vs ‘abandon paper counting’: benchmarking ECR publication and co-authorship rates over past 50 years to inform research evaluation. F1000Research 11, 95. https://doi.org/10.12688/f1000research.75604.1
Kent, BA, Holman, C, Amoako, E, Antonietti, A, Azam, J, Ballhausen, H, Bediako, Y, Belasen, AM, Carneiro, CFD, Chen, Y, Compeer, EB, Connor, CAC, Crüwell, S, Debat, HJ, Dorris, ER, Ebrahimi, H, Erlich, JC, Fernandez-Chiappe, F, Fischer, F and Weissgerber, TL (2022) Recommendations for empowering early career researchers to improve research culture and practice. PLOS Biology 20(7), e3001680. https://doi.org/10.1371/journal.pbio.3001680
Khan, H, Almoli, E, Franco, MC and Moher, D (2022) Open science failed to penetrate academic hiring practices: a cross-sectional study. Journal of Clinical Epidemiology 144, 136–143. https://doi.org/10.1016/j.jclinepi.2021.12.003
Langham-Putrow, A, Bakker, C and Riegelman, A (2021) Is the open access citation advantage real? A systematic review of the citation of open access and subscription-based articles. PLOS ONE 16(6), e0253129. https://doi.org/10.1371/journal.pone.0253129
Lawrence, S (2001) Free online availability substantially increases a paper’s impact. Nature 411(6837), 521. https://doi.org/10.1038/35079151
Li, Z (2021) From Kepler to Newton: explainable AI for science. arXiv.org, 24 November 2021. https://arxiv.org/abs/2111.12210
MacCallum, C and Parthasarathy, H (2006) Open access increases citation rate. PLOS Biology 4(5), e176. https://doi.org/10.1371/journal.pbio.0040176
Mittermaier, M, Raza, MM and Kvedar, JC (2023) Bias in AI-based models for medical applications: challenges and mitigation strategies. npj Digital Medicine 6(1). https://doi.org/10.1038/s41746-023-00858-z
Mohammed, F (2023) Postdocs meet publishers: what can be done to make publishing better for ECRs? F1000, 17 February 2023. https://f1000.com/open_thinking/postdocs-publishing-better-ecrs/
Moher, D, Naudet, F, Cristea, IA, Miedema, F, Ioannidis, JPA and Goodman, SN (2018) Assessing scientists for hiring, promotion, and tenure. PLOS Biology 16(3), e2004089. https://doi.org/10.1371/journal.pbio.2004089
Molas-Gallart, J (2014) Research evaluation and the assessment of public value. Arts and Humanities in Higher Education 14(1), 111–126. https://doi.org/10.1177/1474022214534381
Nature (2022) Editorial: research evaluation needs to change with the times. Nature 601(7892), 166. https://doi.org/10.1038/d41586-022-00056-z
Nicholas, D, Jamali, HR, Herman, E, Xu, J, Boukacem-Zeghmouri, C, Watkinson, A, Bravo, BR, Abrizah, A, Świgoń, M and Polezhaeva, T (2020) How is open access publishing going down with early career researchers? An international, multi-disciplinary study. Profesional de la Información 29(6). https://doi.org/10.3145/epi.2020.nov.14
OECD (2021) Reducing the precarity of academic research careers. OECD Science, Technology and Industry Policy Papers, No. 113. Paris: OECD Publishing. https://doi.org/10.1787/0f8bd468-en
Piwowar, H (2013) Data reuse and the open data citation advantage. PeerJ 1, e175. https://doi.org/10.7717/peerj.175
Piwowar, H, Day, R and Fridsma, DB (2007) Sharing detailed research data is associated with increased citation rate. PLOS ONE 2(3), e308. https://doi.org/10.1371/journal.pone.0000308
Pizzolato, D, Elizondo, AR, Bonn, NA, Taraj, B, Roje, R and Konach, T (2023) Bridging the gap – how to walk the talk on supporting early career researchers. Open Research Europe 3, 75. https://doi.org/10.12688/openreseurope.15872.1
Rauchfleisch, A, Schäfer, MS and Siegen, D (2021) Beyond the ivory tower: measuring and explaining academic engagement with journalists, politicians and industry representatives among Swiss professors. PLOS ONE 16(5), e0251051. https://doi.org/10.1371/journal.pone.0251051
Swidor-Cios, E, Solymosi, K and Srinivas, M (2021) Why science needs a new reward and recognition system. Nature 595(7869), 751–753. https://doi.org/10.1038/d41586-021-01952-6
Toribio-Flórez, D, Anneser, L, deOliveira-Lopes, FN, Pallandt, M, Tunn, I and Windel, H (2021) Where do early career researchers stand on open science practices? A survey within the Max Planck Society. Frontiers in Research Metrics and Analytics 5, 586992. https://doi.org/10.3389/frma.2020.586992
Tress Academic (2022) #106: Precariously employed: the daily hassles of early career researchers. 9 October 2022. https://tressacademic.com/precariously-employed/
UNESCO (2021) Recommendation on Open Science. Paris: UNESCO. https://doi.org/10.54677/mnmh8546
West, DM (2022) Why academic freedom challenges are dangerous for democracy. Brookings, 8 September 2022. https://brookings.edu/articles/why-academic-freedom-challenges-are-dangerous-for-democracy/
Woods, HB and Pinfield, S (2022) Incentivising research data sharing: a scoping review. Wellcome Open Research 6, 355. https://doi.org/10.12688/wellcomeopenres.17286.2