
Abstract

Recent years have seen a great deal of experimentation around the basic concept of the journal. This chapter surveys some of the more novel or interesting developments in this space, including new business models, new editorial models, and new ways in which the traditional functions of the journal can be disaggregated into separate services.

It is not the strongest of the species that survives, nor the most intelligent, but the one most responsive to change.
Charles Darwin

Introduction

For a long period following the invention of the academic journal in the 17th century, the journal evolved very little, and certainly did not change substantially from its original format. It was perhaps with the formalization of the peer review process (as we have come to know it) that the concept of the journal made its greatest leap. Other than that development, very little changed in over 300 years…

However, since the advent of the internet, the ‘traditional’ concept of the academic journal has come under pressure. Although subscription titles moved online quite early in the internet era, it has only been with the more recent, and still accelerating, move towards Open Access (an innovation made possible by the enabling technology of the internet) that there has been considerable experimentation around what a journal is, or could be.

Because of the size of this industry and the scale of experimentation under way, the examples in this chapter are not intended to be comprehensive, but simply to highlight some representative experiments that are happening today.

Novelty in the Business Model

Clearly the predominant business model in journal publishing has always been the subscription model; however, there have been several recent developments that aim to provide alternatives to this established model. The most obvious alternative in today’s marketplace is the Open Access ‘author pays’ APC (Article Publication Charge) model, which has been used successfully by the likes of BioMed Central, PLOS, Hindawi, and Frontiers, to name a few. This model will be well understood by most readers, so we shall not dwell on it here. Instead, it is interesting to consider some of the other Open Access business models in the marketplace today:

Free to Publish

eLife is a highly selective, open access journal which is entirely financed by research funders (the Max Planck Society, the Wellcome Trust, and the HHMI). Because of this funding, the journal does not charge authors a publication fee (hence it is free to publish in, as well as free to read). eLife is perhaps the most recent, and most visible, example of a ‘free-free’ journal, but clearly many other journals are funded by entities such as institutions or societies to the extent that their costs can be entirely covered.

Individual Membership Models

PeerJ1 (see Fig. 1) is an open access journal which offers authors a lifetime Membership, in return for which they can publish freely for life. Membership fees start at $99 (allowing authors to publish once per year) and rise to $299 (which allows authors to publish as many articles as they wish). All co-authors need to be paying Members at the appropriate Membership level. Membership fees can be paid before editorial acceptance, or authors can submit for free and become Members only at the time of acceptance (for a slight price increase). This model is very new, and clearly orthogonal to a traditional APC (it shifts payment from a “fee per publication” to a “fee per author”), so it is being watched with some interest to see how it might affect the publisher’s approach towards its ‘customers’ (who are now Members).

Figure 1. The PeerJ homepage.
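To make the shift from a “fee per publication” to a “fee per author” concrete, the short Python sketch below compares the two payment structures. The entry-level membership price is taken from the text above; the APC figure and the example group are invented purely for illustration and do not reflect any publisher’s actual prices.

```python
def membership_cost(n_authors, fee_per_author=99):
    """One-off cost under a per-author membership model: every co-author
    buys a lifetime membership once (entry-level plan assumed here)."""
    return n_authors * fee_per_author

def apc_cost(n_articles, apc=1350):
    """Cost under a per-article model: each article pays a flat charge.
    The APC figure used here is purely illustrative."""
    return n_articles * apc

# A single article with five co-authors:
print(membership_cost(5))  # 495  -- paid once; covers future papers too, subject to each plan's limits
print(apc_cost(1))         # 1350 -- incurred again for every subsequent article
```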

APC Fees Centrally Negotiated

SCOAP3 (the Sponsoring Consortium for Open Access Publishing in Particle Physics) has been a multi-year project by a consortium of high energy physics laboratories aiming to flip their entire field from a subscription basis to open access. SCOAP3 has raised committed funding from major libraries and research centers worldwide, and has used that commitment to approach the publishers of the main subscription titles in the field, asking them to tender for the Open Access business of the community. The arrangement comes with a promise that the committed institutions will no longer subscribe to the journals, but will instead fund the open access fees of researchers in the field.

Novelty in the Editorial Model

For a very long period there was no formal peer review process in the sciences, and as recently as the 1970s many journals did not formally peer review everything they published. Since that time, however, the process of peer review has become firmly intertwined with the concept of the scholarly journal. The peer review which has evolved in this way has largely tried to do two things at the same time: (i) comment on the validity of the science, and suggest ways in which it could be improved; and (ii) comment on the interest level (or ‘impact’) of that science, in an attempt to provide a sorting, or filtering, mechanism for the mass of submitted content. However, with the advent of the internet, the development of online-only ‘author pays’ business models, and the development of improved methods of search and discovery, this second aspect of peer review has become increasingly less important.

It was in this environment that PLOS ONE (from the Public Library of Science) was conceived (* Disclosure - the author of this chapter previously ran PLOS ONE) - a journal which would peer review all content, but only ask the reviewers to comment on the scientific and methodological soundness of an article, not on its level of interest (or ‘impact’, or ‘degree of advance’). PLOS ONE has proven to be wildly successful - it launched in December 2006, and in 2012 alone it published 23,464 articles (approximately 2.5% of all the content indexed by PubMed that year, making it the largest journal in the world by several multiples), and it continues to grow year on year. This success did not go unnoticed - PLOS ONE established a new category of journal, the “Open Access Megajournal”, and with it gave rise to a number of similar journals employing similar editorial criteria.

The archetypal features of a MegaJournal seem to be:

  1. editorial criteria which judge articles only on scientific soundness;
  2. a very broad subject scope (for example, the whole of genetics, or the whole of the social sciences);
  3. an open access business model (typically APC-based) in which each article covers its own costs;
  4. a large editorial board of academic editors (as opposed to a staff of professional editors).

Although some megajournals are not completely transparent about their editorial criteria (perhaps for fear of harming their brand), a non-exhaustive list of these ‘MegaJournals’ would currently include (in no particular order)

In addition, it has been argued that the BMC Series, the “Frontiers in…” Series, and the ISRN Series from Hindawi (all of which apply similar editorial criteria, but attempt to retain a subject-specific identity for each journal in their series) should also be considered megajournals, although they typically seem to downplay that association.

Novelty in the Peer Review Model

Although the megajournals have successfully implemented a single change to the peer review process (not judging on the basis of impact), other innovators have journal (or ‘journal-like’) products which are trying further variations on peer review. Some illustrative examples are:

Hindawi’s ISRN Series algorithmically identifies suitable peer reviewers (mainly from its pool of Editorial Board members), and then operates an anonymous peer review process which culminates in a ‘vote’ among the peer reviewers. If all of the reviewers agree to either accept or reject the article, then that decision stands. However, if there is any disagreement, the reviews are shared with all of the reviewers and they are invited to change their opinion. After this second round, the decision goes with the majority opinion (even if it is not unanimous). Once accepted, the authors are given the option of voluntarily revising their manuscript based on the feedback. This process is described, for example, on Hindawi’s website.
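As a rough illustration of that two-round voting scheme, here is a minimal Python sketch. The function name, vote labels, and handling of the second round are illustrative assumptions of my own; Hindawi’s actual implementation is not public.

```python
from collections import Counter

def isrn_style_decision(first_round, second_round=None):
    """Illustrative sketch of the two-round voting scheme described above.

    Each round is a list of votes, each being "accept" or "reject".
    Round one: a unanimous vote decides the outcome immediately.
    Round two: after reviewers have seen each other's reports and recast
    their votes, the majority opinion decides (unanimity not required).
    """
    if len(set(first_round)) == 1:
        return first_round[0]            # unanimous in round one
    if second_round is None:
        return "second round needed"     # reports are shared, votes recast
    tally = Counter(second_round)
    return "accept" if tally["accept"] > tally["reject"] else "reject"

# Example: reviewers split 2-1 at first, then the majority accepts.
print(isrn_style_decision(["accept", "reject", "accept"]))
# -> second round needed
print(isrn_style_decision(["accept", "reject", "accept"],
                          second_round=["accept", "reject", "accept"]))
# -> accept
```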

By contrast, F1000 Research receives submissions and places them publicly online before peer review. Reviews are then solicited (typically from the Editorial Board) and reviewers must sign their reports. The reports can be ‘approved’, ‘approved with reservations’, or ‘not approved’ (Fig 2). Provided an article accrues at least one ‘approved’ and two ‘approved with reservations’ reports (or simply two ‘approved’ reports), it is considered to be ‘published’ and is subsequently indexed in PubMed Central and Scopus. Authors are not obliged to revise their manuscripts in light of the feedback. Because content is posted online even before peer review has started, F1000 Research is composed of content in varying states of peer review, and as such can be thought of as a cross between a journal and a preprint server.

Figure 2. The article listing for F1000 Research, highlighting the way that evaluations are presented.
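The publication threshold just described reduces to a simple rule, sketched below in Python. The function name and status labels are illustrative assumptions, not F1000 Research’s actual code or API.

```python
def meets_publication_threshold(reports):
    """Sketch of the rule described above: an article clears the bar with
    two "approved" reports, or with one "approved" report plus two
    "approved with reservations" reports."""
    approved = reports.count("approved")
    reservations = reports.count("approved with reservations")
    return approved >= 2 or (approved >= 1 and reservations >= 2)

print(meets_publication_threshold(["approved", "approved"]))        # True
print(meets_publication_threshold(["approved",
                                   "approved with reservations",
                                   "approved with reservations"]))  # True
print(meets_publication_threshold(["approved", "not approved"]))    # False
```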

The PLOS Currents series of six titles can also be thought of as a cross between a journal and a preprint server. There is no publication fee, and content is submitted using an authoring tool which automatically generates XML. The content is evaluated by an editorial board, which typically looks to make a very rapid decision on whether or not to publish (rather than recommending revisions or performing a full peer review). Because of this process, articles can go online within a day or two of submission, and are then indexed by PubMed Central. Although an interesting concept, to date the PLOS Currents titles have not received a large number of submissions.

And then, of course, there are the true ‘preprint servers’. Two defining features of the preprint server are that (i) the content is typically not peer reviewed, and (ii) it is usually free (gratis) to submit to. Because a preprint server is rarely thought of as a ‘journal’ (and this chapter is devoted to innovations around the journal concept) we shall not spend much time on them, other than to provide an illustrative list which includes the arXiv (in physics and mathematics), PeerJ Preprints (in the biological and medical sciences), Nature Precedings (now defunct), FigShare (a ‘preprint-like’ product covering all of academia), the Social Science Research Network (SSRN), and Research Papers in Economics (RePEc).

The Disaggregation of the Journal Model

Historically, a journal has performed several functions such as ‘archiving’, ‘registration’, ‘dissemination’, and ‘certification’ (there are others, but these four have historically defined the journal). However, it is increasingly evident that these functions do not all need to take place within a single journal ‘container’ – instead, each function can be broken off and handled by a separate entity.

Several interesting companies are arising to take advantage of this way of thinking. Although each service can be thought of as an attempt to ‘disaggregate’ the journal, they are also being used by existing journals to enhance and extend their features and functionality. As with preprint servers, none of these products is really thought of as a journal, but it is certainly useful to be aware of them when considering the future development of the journal model:

Rubriq is a company which attempts to provide “3rd party peer review” – authors submit to Rubriq, which then solicits (and pays) appropriate peer reviewers to provide structured reports back to the author. Another example of this kind of thinking is Peerage of Science. By using services like these, authors can either improve their articles before submitting them to a journal, or they can offer their solicited peer reviews to the journal of their choice in the hope that this will shorten the review process. (* Disclosure - the author of this chapter is on the (unpaid) Advisory Panel of Rubriq)

Figure 3. The Peerage of Science homepage.

‘Altmetrics’, or ‘Article-Level Metrics’ (ALM), are tools which aim to provide third-party, ‘crowdsourced’ evaluation of published articles (or other scholarly content). If we envision a future state of the publishing industry in which most content is open access, and a large proportion of it has been published by a megajournal, then it is clear that there are considerable opportunities for services which can direct readers to the most relevant, impactful, or valuable content for their specific needs. Pioneered by the likes of PLOS (who made their ALM program entirely open source), the field is being pushed forward by newly formed companies. The major players in this space are ImpactStory (previously Total Impact), Altmetric, and Plum Analytics, all of which attempt to collate a variety of altmetrics from many different sources and present them back at the level of the individual article (Fig 4).

Figure 4. The Impact Story homepage, highlighting which sources they index and how they present metrics for a single article.
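As a rough sketch of what such collation involves, the Python fragment below groups hypothetical per-source counts into a single article-level profile keyed by DOI. The DOIs, source names, and counts are invented for illustration and do not represent any provider’s real data or API.

```python
from collections import defaultdict

# Invented per-source counts; a real service would harvest these from
# provider APIs (reference managers, social media, citation databases, ...).
raw_counts = [
    {"doi": "10.1234/example.1", "source": "twitter",   "count": 42},
    {"doi": "10.1234/example.1", "source": "mendeley",  "count": 17},
    {"doi": "10.1234/example.1", "source": "citations", "count": 3},
    {"doi": "10.1234/example.2", "source": "twitter",   "count": 5},
]

def collate_article_level_metrics(records):
    """Group raw per-source counts into one profile per article (by DOI)."""
    profiles = defaultdict(dict)
    for record in records:
        profiles[record["doi"]][record["source"]] = record["count"]
    return dict(profiles)

print(collate_article_level_metrics(raw_counts))
# {'10.1234/example.1': {'twitter': 42, 'mendeley': 17, 'citations': 3},
#  '10.1234/example.2': {'twitter': 5}}
```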

One particularly interesting provider of altmetrics is F1000Prime (from the “Faculty of 1000”). Previously known simply as ‘Faculty of 1000’, F1000Prime makes use of a very large board of expert academics who are asked to evaluate and rate published articles, regardless of publisher. In this way they form an independent review board which attempts to direct readers towards the articles of most significance. Although F1000Prime evaluates perhaps only 2% of the literature (and hence is not comprehensive), an approach like this clearly fits well with the concept of disaggregating the role of ‘content evaluation’ (or ‘content filtering’) from the journal process.

Conclusion

As can be seen from this chapter, there is a great deal of experimentation happening in the journal space today. It remains to be seen which experiments will succeed and which will fail, but it is certainly the case that the last decade has seen more experimentation around the concept of the journal than in its entire 350-year history prior to this point. We live in interesting times.


  1. Disclosure: The author of this chapter is a Co-Founder of PeerJ

