Open-process Academic Publishing


Publishing and knowledge production in academia could be significantly improved if aspects of the cooperative models developed in software and networking communities were adopted. The Open Access movement does this partially, by focusing on the openness of the final result. Yet the most important attribute of the development of the Internet, the Web and their communication-cooperation tools is the openness of the entire process of production. The novelty, which can take many forms, lies in the organizational structures, decision making and cooperation. This article argues that journals adopting a form of open-process approach could benefit from an increased quality of submissions and publications, a faster and more responsive pace of research, and from attracting more risk-taking and innovative authors. Through a clearer structure and visibility of tasks, equally important internal benefits for journals are possible: recognition of the most important workers and decision making in their hands, easier and improved project management, attracting new volunteers and reducing the impact of counter-productive participants. If these changes were implemented well, such open-process journals would gain readership and reputation. Open-process academic publishing can take procedurally and technologically complex forms, so a simple transition model is suggested: how to start with an email list and the right cultural safeguards.

The Internet Model = why Open Access is not enough

Publishing and peer review processes in academia follow outdated and closed models. Key flaws are the lack of transparency in the pre-publication process, the lack of dialogue in both pre- and post-publication phases, and a linear use of digital media that only scratches the surface of the possibilities for greater reflexivity and dialogue in the service of more powerful, effective and responsive knowledge production (Cope and Kalantzis 2009). The history of peer review is closely tied to state and royal censorship: academics take turns disciplining each other and providing a sense of order and an assurance that good science is produced, so that the contract between the state and science is preserved (Biagioli 2002:12-13).

At least in the areas i operate in (social sciences and humanities), these processes should be far more, if not entirely, open, with a provision for privacy in special cases. I call this model Open-process academic publishing. The name deliberately distinguishes it from Open Access (Suber 2007), which refers only to the outcome of academic knowledge production being open. The suggestion is not to open the processes in random ways, but in ways in which this openness – fundamentally based on volunteer participation – brings and enables more structure, more internalized working discipline, more commitment, and more ability to improve cooperation with deliberate precision, all with the goal of improving the outcomes. Since a ‘culture of open processes was essential in enabling the Internet to grow and evolve as spectacularly as it has’ (Crocker 2009), we could call it The Internet Model (software + networking). Its potential screams to be reused and hacked for other areas of production. Academia, especially its publishing side, seems to me capable of embracing such volunteer-core open-process cooperation.

The model proposed here brings only a few new aspects, mainly those related to the work done in the Open Organizations project (Geer, Malter, and Prug 2005a). It is an abstraction, a theoretical development of decades of work in software and networking, and of related concepts and practices, especially their open-process part, which has already been partly reused in news production (Arnison 2003).

What are my motives, you might ask? I am a PhD student dreading the idea of being drawn into the existing closed model. In social sciences and humanities (dozens of journals that i checked), authors mostly have very little idea how long it will take for a submission to be processed, what the stages in the process are, or how to engage with it, other than waiting for an unknown length of time. Quite a few journals do state some of these elements on their web pages, but processing still takes several months and often years, and it does not embrace open processes for better cooperation. Given what is possible, and what we can observe in the production of software and networking, the current practice makes very little sense to me. Geared against innovation, seemingly ‘most appropriate for papers that contain little that is new’, on average with less capable researchers often judging the work of the best ones (Armstrong 1997:6) – i find the current state of academic publishing depressing and unacceptable. The most unacceptable element is that we are supposed to produce new knowledge. And yet, with all the existing tools and processes for communication and cooperation – processes that gave us the Internet, the Web, and most of what’s good about them – in academia, in terms of our working processes and ways of cooperation, we still mostly operate as if very little of this open, volunteer-based cooperation had actually happened; we mostly ignore it.

The discipline of Information Systems is not alone in having ‘leaders explicitly advising new faculty not to innovate if they want a career’ (Whitworth and Friedman 2009a), and the anti-innovation culture starts earlier. I was part of a class of twenty first-year PhD students at the Sociology department of the London School of Economics in 2008 who were given the same advice. To increase our chances of being published, we were advised not to innovate, but instead to stick to what is familiar, in order to make it easy for editors to accept our work. Avoidance of innovation and risk taking, and conformance to the publishing system which discourages them, is now part of academic training in some disciplines.

Instead of enabling better cooperation, which is the key to knowledge production, the Internet and electronic tools are increasingly used in academic institutions to enlarge and multiply bureaucratic procedures, regulations and managerial control, changing the university radically in the process (Dyer-Witheford 2005). That seems to be the trend (Sievers 2008:242-3). While managers are imposing more control in many aspects (Bousquet 2008:12-13, 59-70), we need to ask why academics are so slow in adopting these new tools and processes. One aspect, which this paper does not deal with, and which requires a separate study, is their possible use for the improvement of internal processes within university departments: self-governance, labour relations, and the organization of work in all its aspects. The other aspect is the production of knowledge, most of it revolving around writing and publishing journal papers. Is the situation as rotten as this recent paper boldly states?

Academics are now gate–keepers of feudal knowledge castles, not humble knowledge gardeners. They have for over a century successfully organized, specialized and built walls against error. [...] As research grows, knowledge feudalism, like its physical counterpart, is a social advance that has had its day. (Whitworth and Friedman 2009a)

The Open Access movement and academic blogging are examples of positive adoption, and they inspired me to get involved and recently start writing in the open, on blogs, about Open Access. However, blogging is limited to individuals working on their own, linking and holding discussions through comments [1]. It does not apply the full software-networking Internet model, which is not a surprise – it is not meant to be about collective, organised, prolonged production work. Still, i am tempted to argue that blogs, pingbacks (Langridge and Hickson 2002), discussions in comments (Adio et al. 2009), and the intense circulation of new posts and comments via RSS (RSS Advisory Board 2009) amongst clusters of inter-linked blogs are all elements of an early form of open-process cooperation developing in academia. It is not developing in an institutional setting but, for now, in a self-administered, out-of-institutions way. Which is a good thing; it carries the volunteer-core spirit, an essential part of the open-process aspect of the Internet Model. I would not fully agree that ‘science is already a wiki [...] just a really, really inefficient one – the incremental edits are made in papers instead of wikispace’ (Wilbanks 2009). However, there are several aspects of wikis, blogs and comments that could lend themselves well to the creation of new forms of scientific production, a step forward from the current journal model. Hence my argument below for adding a new type of journal article, one suited to a faster, more responsive, easier to assess production of theory, and more suited to how we live and work today. However, adding a new type of academic article to the existing publishing models is not sufficient. We need to change the publishing processes too, to make this possible.

Within the boundaries and key concepts that define the Open Access (OA) movement, the possibilities of opening up, and radically changing for the better, the actual processes of academic production and publishing, based on the reuse of the existing models developed in software and networking, are limited. Hence, i will leave out a more detailed direct comparison with OA. The reasons for change are many and are developed in detail below. While i fully agree with OA goals, and i am working on implementing and promoting them, OA falls much too short of what, given the models and tools we have at our disposal, could and should be done in academia.

The primary limitation of OA is its focus on the Open Source paradigm and its central attribute: the openness of the final product. This is not a surprise, given that Open Source was the dominant concept signifying the success of software and networking communities at the time the OA ideas were created.

Today, i claim, we need a paradigm shift. Even if OA had incorporated most of the main methodological points about cooperation that Open Source was representing, it still would not have been enough. Open Source is a very limited subset of the methodology that made software and networking communities so successful. By successful i mean inspiring hundreds of thousands of international volunteers to engage in various cooperative models of creating high quality software and sets of ground-breaking network protocols, and further inspiring an even larger number of people in other spheres to reuse and adopt some of their methods. To recapture what was lost in Open Source, we need an Open Process and The Internet Model to replace it, and thus to expose the world to the revolutionary potential of the reuse of these models in many spheres of society, particularly in knowledge production. I will focus here on what i think ought to be done to improve what academic publishing already does, with the focus on the work of journals.

Open-process publishing and reviewing advantages

The following benefits could be gained with open-process publishing and peer reviewing:

1) Quality of submissions would increase considerably over time – because new authors would see the history of the entire process and learn from it (an archive of all submissions, peer reviews, editorial board comments, etc), and because they would be less likely to submit badly written texts that make no adjustments to publicly stated journal guidelines – a big problem for editors, i am told repeatedly, is the large amount of low quality initial submissions. In the current system, with externally invisible submissions, the reputation cost of submission for authors is too low: they can submit any rubbish without adjusting it to the journal’s guidelines. The only people who see these disrespectful acts (towards the work of editors, especially volunteer work), and who associate them with the author’s name, are the editors. If submissions were openly visible, the cost of submitting random, unadjusted, low quality, undeveloped papers would be far higher, since such disrespectful behaviour would be publicly linked to the author. The journal Atmospheric Chemistry and Physics has been operating an open, two-stage peer review process for years, and the results do confirm the logic of my hypothesis:

‘public peer review and interactive discussion deter authors from submitting low-quality manuscripts, and thus relieve editors and reviewers from spending too much time on deficient submissions. [...] The deterrent is particularly important, because reviewing capacities are the most limited resource in the publication process.’ (Koop 2006)

2) Quality and innovation in published texts would increase too – because of point one above, and because opening the whole, or most, of the publishing process would improve the quality of peer and editorial board reviews, for the same reputation-cost reasons stated in point one. Doing low quality, superficial peer or editorial reviews would be publicly exposed, and vice versa – the possibility of lost or gained reputation as an editor or peer reviewer would be a motivating factor [2]. In the current model, all of that work is visible only to the few who participate [3]. One of the widest-ranging studies, a review of 68 papers concerning peer review, paints a rather depressing picture. At the time of writing it, Armstrong had been a professor for over thirty years, had founded two journals and had served on fourteen editorial boards. He puts emphasis on the anonymity aspect of reviewing and the lack of reward, thus confirming what i concluded speculatively: ‘reviewers generally work without extrinsic rewards. Their names are not revealed, so their reputations do not depend on their doing high quality reviews’. Although ‘reviewers typically have less experience with the problem than do the authors’, they contribute no new data or analyses, and they spend between two and six hours on a review, often after waiting for months to do it. Overall, reviewers set their opinions against the scientific work of authors, often differing from other reviewers (Armstrong 1997:5). To complicate the whole thing further, academics are impressed by, and prefer, ‘complex procedures’ and ‘obscure writing’. Amongst the several suggestions Armstrong makes is to have authors nominate one of the reviewers. This is especially important for innovative work, the type of work that provides ‘useful and important new findings that advance scientific knowledge [...] which typically conflicts with prior beliefs’ and requires a paradigm shift (Armstrong 1997:2).
Another suggestion he makes is open peer reviewing, since ‘disclosure of reviewer identity allows for a deeper dialogue among interested parties [...] while once the article is pronounced “peer reviewed” and published, there is little record of the process and no means of further development’ (Phillips, Bergen, and Heavner 2009). Such an open process would create lasting relationships and build reputations for good reviewers. The logic of reputation works well in life in general, and it can work well via online tools too – eBay is a good example of quite a successful model of closely attaching behaviour to a name. Peer reviewers could still easily stay anonymous if they chose to – they could send their review to the editors, who could forward it to the open-process system. In that case, they would lose the reputation they could have gained for a well done, signed review.

3) Journals that implement this process well would attract more agile and risk-taking authors – because through open-process publishing it makes more sense for authors to take more risks (this might sound counter-intuitive at first) and to stay less within the known/accepted knowledge boundaries, since they can rely on the peer and editorial assessments of their work being done in public. This in turn can lead to less politically correct, career-opportunist position taking from both authors and reviewers, and to an opportunity for bolder leaps from both sides. In short, openness would steer reviewing assessment to focus more on the merit of the work assessed, hence authors could be more confident in submitting more risk-taking, less compromise-driven works. This would lead us away from ‘The modern academic system that has become almost a training ground for conformity’ (Whitworth and Friedman 2009a), and away from the devaluing publish-or-perish model whose low-risk but well-referenced style of writing has made overall research difficult to assess. It would encourage ground-breaking authors to publish their new research early, and discourage the mediocre authors who often, by the sheer number of low-risk publications, prosper in the current play-it-safe system. They develop careers through volume publishing, which suffocates knowledge production (it clogs the production pipeline: editors, reviewers and publishers all waste time) while helping individual careers thrive (it gets authors jobs and research grants). Armstrong’s research again confirms this: since a wide variety of research points out that it is common for reviewers to reject ground-breaking papers, ‘it is more rewarding (for researchers) to focus on their own advancement rather than the advancement of science. Why invest time working on an important problem if it might lead to controversial results that are difficult to publish?’ (Armstrong 1997:15)

If open-process publishing were widespread, rewriting the same papers for different journals – again for the sake of careerism, to get research points and an extra publication – would be far easier to spot and expose. The current opaque system makes this easy for low-risk careerists, although Open Access is contributing to changing that for the better. Open Process would reduce it drastically. If mailing lists were an early implementation model (submissions, editorial and peer reviews, revisions – everything sent to an open mailing list; see below for how this would work), spotting a submission which is a rewritten version of an already published paper would be simple. We could use any good web search engine to check for key paragraphs and concepts together with the author’s name, and it would soon be clear whether the author had already published on the topic, where, and exactly what. Simultaneously, the participation of the wider community of reviewers would increase the chance of innovative, risk-taking work being spotted, and it would help to develop and publish it (Beel and Gipp 2008) [4].
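The overlap-spotting step described above can also be sketched mechanically. The following is a minimal illustration – an assumption of mine, not any journal’s actual tool – of comparing a new submission against an already published text using word n-gram ‘shingles’ and Jaccard similarity; the function names and the 0.5 threshold are arbitrary choices for illustration:

```python
# Sketch: flag a submission that substantially overlaps an earlier paper.
# Word 5-gram "shingles" compared via Jaccard similarity. All names and
# the 0.5 threshold are illustrative assumptions, not an existing tool.

def shingles(text, n=5):
    """Return the set of lowercase word n-grams in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Similarity of two shingle sets: |A & B| / |A | B|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def looks_rewritten(submission, published, threshold=0.5):
    """True if the submission shares most of its phrasing with a published text."""
    return jaccard(shingles(submission), shingles(published)) >= threshold
```

In an open mailing-list archive, a script like this could be run over every new submission against the archive of earlier papers; a web search engine achieves much the same effect by indexing distinctive phrases.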

4) Journals that implement this process well would significantly raise the dynamics and pace of research – because some of the most in-depth debates that now happen on academic blogs [5] could, thanks to faster and open-process peer reviewing and commenting, be integrated into journals in some form. The form could be shorter, still referenced as academic papers are, and the arguments even more focused than those in an average 8000-word paper. My impression is that most journal papers revolve around a few core ideas (often a single one), not necessarily always connected closely enough to require a single paper. Today, i believe that some of these ideas originate in blog posts. We could enable those high quality 700-800 word blog posts to be submitted, first as rough drafts, and then in a fully referenced, short, still burst-like form of 1500-2000 words [6]. Since the argument would be shorter and more focused, it would be easier to evaluate, which would mean a shorter turnaround in peer reviewing and publishing, and hence a sooner opportunity for those whose work relates to it to respond [7]. The cycle of publishing would thus follow more closely the way we research, especially for senior academics for whom ‘research is often done when a few precious hours can be salvaged from a deluge of other responsibilities’ (Weber 1999). It would also help avoid the fate that ‘Many journal papers are out of date before they are even published’, with the rather frustrating truth that many experience personally: ‘In the glacial world of academic publishing one rejection can delay publication by two–four years’ (Whitworth and Friedman 2009a). In addition, there are situations when a rapid response from scientists could be immensely beneficial (Varmus 2009).
PLoS Currents is a recently started project providing a platform for the fast publishing of scientific papers on specific issues (the worldwide H1N1 influenza A virus outbreak is the first (Public Library of Science n.d.)), using a board of expert moderators instead of in-depth peer review in order to get papers shared as rapidly as possible [8].

5) Journals would gain readership and reputation – because of all the above, and because of the internal benefits below and their public visibility. That is, provided they remain in a form which still justifies calling them journals. Several authors consider that the future of academic publishing will be focused on articles, with a possibility of moving towards ‘public research environments’ (Mietchen 2009) that will displace the notion of journals. One thing is more certain: journals do not have a single future (Nielsen 2009c). Different platforms are already emerging, and we will be seeing more of them in the near future. Scientific blogs are the places where emerging models are discussed. There are big obstacles to a more collaborative model emerging. Academic journal publishing is a hugely profitable industry (Cope and Kalantzis 2009), achieving its profits through a paradox: privatizing the work done by communities funded mostly by the state, then selling access back to those who produce it via library subscriptions. In the health sciences, and within most established institutions, ‘the current publication and review process is controlled and fiercely defended by those who benefit from it’ (Phillips et al. 2009). For Nielsen, science lacks both the tools (infrastructure) and the incentives for radically open collaboration: why would one write and comment on blogs if that does not count when grants and jobs are given (Nielsen 2009a)? Perhaps that is true in physics, where he works, although i doubt it. I believe cooperation on blogs and in comments, and the existing journal system, can and do co-exist, to the benefit of the participants both in producing better work and in enhancing their careers. For example, early exposure of this piece on my blog resulted in the text being improved. Benjamin Geer and i started from some opposing views.
However, after a few rounds of clarifications in the blog comments, i understood his main concern about having early versions of a text in the open when they are not ready and might have major flaws that the author will address as the work progresses. It led to Geer making a concrete proposal for how to improve the model. An important concern was addressed, and the model i started creating here was significantly improved – thanks to the work being done on the blog at an early stage, and thanks to both of us being willing to discuss and trying to understand each other’s ideas and concerns. On the back of the early release, i also had a presentation accepted at a conference [9], received an invitation to give a lecture to students in my department based on the text, encouragement to submit the text to a journal, and further suggestions for improvements. I integrated some of the suggestions and i am submitting the text to a journal. Clearly, so far, i have benefited a lot from early exposure and from developing the text in the open. It also did not limit my publishing options; quite the opposite, i think it has increased them – although we will only be able to tell with hindsight, once the text is published or rejected.

It is important to note that this type of open work and early releasing is not always possible, as i realized immediately while writing another, political, text during the same weeks in which i was writing this one. This confirms that there will be different platforms, writing and cooperation scenarios and methodologies for different situations, scientific fields and communities. Our thinking has to be open if we are to increase the possibility of benefiting from the rupture of the centuries-old model of scientific collaboration and publishing. A journal that provided the environment for early discussions like the one that happened on my Hack the State blog (a mailing list with certain cultural safeguards, as suggested below, would suffice) would gain readership and reputation. Some authors, for some texts, would gladly expose their early drafts in such an environment.

Internal benefits for journals

In addition, there are enormous internal benefits for journals, all of which would contribute to their increased organizational health and development:

1) A clearer structure and visibility of tasks and processes helps an organization recognize its own most important workers – by breaking a large task (publish a new issue) down into a set of defined and openly recorded smaller steps, the more precise and transparent allocation of tasks and responsibilities exposes who does what, how and when. This is crucial, since such a practice, system and structure of work rewards those who do more, better and more timely work. In organizations, especially volunteer ones (most editorial boards/collectives in social sciences and humanities), recognizing contribution, and the lack of it, is one of the keys to the survival and improvement of the project. Often, in projects where a structure of openly defined, recorded and visible smaller tasks does not exist, the majority of recognition for the work collectively done falls to the wrong people, i.e. to those who have better social connections and who are in a more visible position within the communities in which the journal/project operates. This default mode of disorganization is a source of constant damage for the project. It kills the spirit, rightly, of the harder working, most important participants. In addition, it frequently makes them either imitate the behaviour of those who collect the recognition (contribute less, collect more reputation towards your career progress), or leave the project. This in turn requires the constant recruitment of new project members who will either be blind to the unjust distribution of rewards (reputation), or accept it as it is. If we can take it as relevant, given the differences in the fields of operation, recent research has shown that contributors to popular websites are motivated by the attention they get. The attention comes from the volume of contributions. Users who get no attention tend to stop (Wu, Wilkinson, and Huberman 2009).
Although the work of a contributor to such websites is significantly different from that of a volunteer in a collectively produced journal, there are some parallels. Translated into our context here, it suggests that making the work on tasks visible and publicly interactive (a key point of open-process publishing) is likely to award most attention to those who do most of the work, which is a positive outcome for any project that relies on retaining its most productive members.

2) Increased focus on implementation work and continuously carried out processes – defining the workflow steps and stages exposes the necessary implementation work that has to be continuously carried out. It puts the emphasis on the organization, group or collective as a set of ongoing processes. It also exposes other kinds of work as less important, and hence those who do them as less essential to the existence of the project and the group.
Many loosely structured volunteer groups suffer from participants who talk and communicate a lot, and often object a lot as well, but contribute little to the implementation work. Frequently, these participants hinder other key group members – on whose contributions the project and group rely – from getting on with their tasks. Reducing the influence of talk- and communication-intensive participants who do not contribute much to the implementation work is highly positive for the survival, development and quality of the work produced.

In other words, structured open processes make it possible for an organization, collective or group not to be open and welcoming to any kind of participation, internally or externally, but to be selective instead. More of this kind of openness means more structure, more internalized working discipline, more commitment, and more ability to improve cooperation with precision. In slightly more abstract terms, the more a whole is exposed and defined, and its workings and operations known and visible, the more likely we are to be able to adjust it and reshuffle it to make it do what the participants in the whole want it to do. Open processes enable this, hence the open-process in the name. Closed processes allow more corruption of organizational goals: the less we know about the processes, the components and their relations, the more individuals can utilize the results of collective work, or of the work of others, for their own goals and benefits (in academia, careerism).

In Free Software terms, the long-term freedoms to act and produce collectively do not come cheaply, and have to be defined, developed and defended. The key pre-requisite for the four Free Software freedoms (defined as ethical demands) to cooperate and share is universal free access to software source code. What is missing from the Free Software definition (although it was frequently present in Richard Stallman’s work, and in the work of software and networking communities), and what is needed to give us an accurate picture of the cooperative model discussed here, is what is visible in the principles of the Internet Engineering Task Force (IETF, see below).

In short, to explain the success of the Internet model, having the source code is not sufficient. Other key components must be present: defined goals, open participation (anyone can join) and open work processes, respect for and focus on competence, a volunteering core, the rough-consensus-and-running-code decision making principle (voting used only in extreme circumstances), and defined responsibilities (protocol ownership in the IETF case, the maintainer in the FS case, or the package maintainer in the case of GNU/Linux distributions).

This is precisely why the Open Access (OA) concept and movement are not enough; nor was it their goal to implement a successful open volunteer cooperation on the trail of the Internet software-networking model. Put briefly, a specific organizational model is necessary too. Using the Open Source paradigm, a business-friendly and self-declared ethics-free [10] version of Free Software, is even more misleading, because of its emphasis on the source code alone. It is the least useful model and concept here, since it lacks both explicitly defined ethics – which are what makes it possible in the first place to define, develop and defend sharing and cooperation in Free Software – and a defined organizational model. To explain this successful model, i propose the following formula: The Internet Model = Free Software + IETF. In other words: software + networking. Or, even better: ethics + organization.

To the existing Internet Model, i would add the following attributes as highly beneficial: first, a mapped workflow of all working groups, components and their relations; and second, defined decision making, participation and exclusion processes. The first can be achieved by splitting the work into stages (recognizable, definable points in collaboration), designating working groups with known tasks and participants, and mapping their relations and inter-processes, so that the dependencies between the stages, working groups and other components of the total group activity are visible. All of this is geared towards enabling and focusing on the openness of processes and on the contributions of those who carry out most of the implementation work. Such work is the bloodstream of collective work: without its movement, groups, collectives and organizations cannot produce. With open processes at each stage of work, the possibility opens up for new workers to join and participate in only selected parts of the overall production.

3) Easier project management – increased task modularity and real-time visibility of status (full status of a submission = stage + state, see below; anyone can at any time check the stage and state of any submission on the web system used) allow for better project management, easier allocation and delegation of tasks, and a more precise sense of progress and problems. All of this benefits the general work spirit and time and resource assessments, and keeps authors who submit papers, and all other parties involved, correctly informed at all times about the full status of a submission.

4) Decision making into the hands of the people who matter most – because who does what, when and how becomes visible, and because those who continuously carry out implementation work matter most for the organization, decision making can be more in their hands. For example, the Marxist Internet Archive (MIA) addresses this by defining a volunteer, and hence the decision makers, through work contributions: ‘MIA volunteers are people who have, in the most recent six-month period, made at least three separate contributions over a period of three weeks to six months’. (Marxist Internet Archive Admin Committee 2009)

In the Open Organizations project, we defined this similarly: ‘anyone doing implementation work in the group, or has done such work in the recent past (e.g. within the past two months), can participate in its decision making’ (Geer, Malter, and Prug 2005b).
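Rules like MIA's are precise enough to be checked automatically against a contribution log. Here is a minimal sketch in Python, under my own interpretation of the rule (at least three contributions in the most recent six months, spread over at least three weeks); the function name and the exact day counts are my assumptions, not MIA's specification:

```python
from datetime import date, timedelta

def is_decision_maker(contribution_dates: list[date], today: date) -> bool:
    """Rough check of an MIA-style rule: at least three separate
    contributions within the most recent six months, spread over a
    period of at least three weeks."""
    six_months = timedelta(days=183)
    recent = sorted(d for d in contribution_dates if today - d <= six_months)
    if len(recent) < 3:
        return False
    # Contributions must span at least three weeks, not arrive in one burst.
    return recent[-1] - recent[0] >= timedelta(weeks=3)
```

A journal board could run such a check over its web system's activity log whenever a decision is put to the group, making the link between implementation work and decision-making rights mechanical and transparent.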

5) Attract new volunteers and reduce the impact of existing counter-productive internal participants – utilizing the above task and process openness and visibility, journal editorial boards could use decision-making rules similar to MIA's to attract volunteers. By linking decision-making rights to defined implementation work, it would be recognized that certain types of work, which could be done by external participants, matter more than the mere presence of existing internal talk- and communication-intensive participants. To reduce risk, only certain decision-making rights could be given to new participants to start with, until the existing board is assured they are fit to carry out editorial work in line with the journal's long-term goals and strategies. This opens up groups and projects to new participants who would from the beginning adopt the culture (habits) of doing the implementation work, while simultaneously reducing detrimental influence. It could also lead to the justified exclusion, or sidelining, of existing internal talk- and communication-intensive participants. In the context of volunteer self-managed groups, this is a positive culture to develop. Existing software, like Open Journal Systems (OJS), could be extended to enable this process. An option for privacy, with reasons stated, could be added to the open-process workflow.

Modular process: stages and states

To summarise, open-process academic publishing would amount to the following being open: initial submission, editorial collective and individual comments, peer reviews, further peer comments, author comments back to reviewers, all the subsequent drafts, and the final published or rejected text.

One objection is that authors would want only their final version used and quoted, or at least to have the last version clearly recognised and marked as final. A way both to increase the chances of that, and to modularise and define the work so as to create conditions for the above open processes and their benefits, would be to introduce the concept of submission stage-and-state, using the software web tools at our disposal to implement it. When a submission comes in (into an openly visible web queue, imagine it like an RSS feed in the sidebar of a website), it is at the stage called First Draft. As the submission moves through the stages of the publishing process, its full status changes accordingly. This defines our publishing workflow. Each stage could be in one of two states: a) in process (state = awaiting), or b) written (state = received) – both as seen from the perspective of journal editors. We could call its stage and state (awaiting or received) together the full status. The awaiting and received states of each stage can be an important functional addition, so that involved parties can be notified when the state of a stage changes. For example, when the editorial board sends a paper for peer review, the full status could read First Draft, Peer Review (awaiting). When the reviews come back, the full status could change to First Draft, Peer Review (received). Here is how the whole workflow could look, with each stage having its own queue containing all of the papers in that stage:

1. First Draft – incoming article, initial submission;
2. First Draft, Editorial Review – assigned to the next round of editorial board review (awaiting), and editorial review complete (received);
3. First Draft, Peer Review – sent for peer reviewing (awaiting), and peer review complete (received).

When required, the process could continue by repeating the same steps, starting from the Second Draft, until the editorial board's final decision on the paper is reached. Each journal would need to decide on its own stages, and it would be interesting to see differences in editorial models becoming visible. The key is to have the stages defined and ordered, no matter what they are. OJS, which gives us a fine-grained yet clear chart of the workflow that the software enables, can be a good starting point (Public Knowledge Project 2008:12). To clarify: a stage is a defined key step in the publishing process. Each stage can be in one of the pre-defined states. Full status is stage + state. It tells us the location of a submitted paper in the process, what is currently going on with it, and whose turn it is to act on it.
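The stage-and-state model is simple enough to sketch in a few lines of code. A minimal illustration in Python – the stage names and the function are assumptions for illustration, not part of OJS or any existing system:

```python
from enum import Enum

# Illustrative stages; each journal would define its own ordered list.
STAGES = [
    "First Draft",
    "First Draft, Editorial Review",
    "First Draft, Peer Review",
    "Second Draft",
    "Published",
    "Rejected",
]

class State(Enum):
    AWAITING = "awaiting"   # the journal is waiting for this stage's output
    RECEIVED = "received"   # the stage's output has come back to the editors

def full_status(stage: str, state: State) -> str:
    """Full status = stage + state, as seen by editors, authors and readers."""
    if stage not in STAGES:
        raise ValueError(f"unknown stage: {stage}")
    return f"{stage} ({state.value})"
```

For example, `full_status("First Draft, Peer Review", State.AWAITING)` yields the string 'First Draft, Peer Review (awaiting)' that could appear in a submission queue on the journal's website.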

If the editor in charge of the paper's peer-reviewing process decides that a new revision is required, the status could be changed to Second Draft (awaiting). Changing the stage and state of a submission could be done with an action as simple as the editor changing drop-down menus with the available stages and their states. The web system would automatically carry out the required actions – OJS does this already within its defined workflow. For example, when an editor changes the stage of a submission to Second Draft (awaiting), the system would send the peer reviews received for the first draft, and a note, to the author (email CC to the editor). Simultaneously, ‘Recent Changes’ – a web page which would, like on wiki systems, record each stage and state change of all the papers currently in the process – could be updated. When a new draft addressing the points raised in the peer reviews is received (web submission by the author), the full status automatically changes to Second Draft (received); and so on, until we get to the Published or Rejected stage – or some more fine-grained final full status.
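The automation just described – an editor changes a drop-down, the system logs the change to 'Recent Changes' and notifies the involved parties – can be sketched as follows. All names here are hypothetical stand-ins for illustration, not OJS code:

```python
class Submission:
    """A paper moving through the journal's workflow."""
    def __init__(self, title: str):
        self.title = title
        self.stage = "First Draft"
        self.state = "awaiting"

recent_changes = []  # would be rendered as a wiki-style 'Recent Changes' page
outbox = []          # stand-in for automatic email notifications

def set_status(submission: Submission, stage: str, state: str) -> None:
    """Record a stage/state change, log it, and queue notifications."""
    submission.stage, submission.state = stage, state
    status = f"{stage} ({state})"
    recent_changes.append(f"'{submission.title}' -> {status}")
    # The real system would email the author (CC the editor), possibly
    # attaching the peer reviews received for the previous draft.
    outbox.append(f"To author, CC editor: '{submission.title}' is now {status}")
```

One editor action thus produces all the side effects at once, which is exactly what makes the full status trustworthy: the public log and the notifications can never drift out of step with the submission's actual state.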

I have not used proprietary software for web-based journals, but I would be surprised if something like this does not already exist. However, although such existing proprietary systems for managing the academic publishing process were not designed to enable open collaboration based on this model, we can still, and should, learn from them. More importantly, the picture I have presented here is of a quite developed system. We do not need to wait for that point of software development in order to start developing our writing and publishing practices with the currently existing tools and online cooperative models.

Existing tools, as simple (or as complex, with thousands of plugins and themes) as this blog, and freely available wikis and content management systems (Drupal, Joomla), can be customized well enough to enable us to start working with open-process collaborative practices, with a significant degree of labour-saving automation and other benefits, now. Many of the web systems that we could start using now to implement aspects of this proposal are available in commercial hosting packages, with high levels of fine-grained point-and-click installation, backup and administration, for less than a few hundred pounds per year. This includes all the Internet bandwidth that an average journal might need and, in comparison to what was available only a few years ago, is a huge increase in the affordability of web systems. It is the human element – seeing the potentially positive benefits, seeing them as larger than the risks associated with these changes and the risk of remaining in current closed models, changing the habits of editorial boards – that is the biggest obstacle. Finding the right web-based technology is far less of a problem: we could improvise to start with the simplest solutions, and add complexity later, in small incremental steps. By doing so, we would follow one of the most fundamental architectural design principles of building the Internet over the past decades, simplicity: ‘3.5 Keep it simple. When in doubt during design, choose the simplest solution’ (Carpenter 1996).

What if software was developed through closed models?

If the currently existing closed academic publishing models had been used instead of open-process cooperation, it is very unlikely that we would have ended up with the software that runs the blog on which the first versions of this text were written. WordPress has 7100+ available plugins that can be installed with a click on the web interface – a self-hosted installation is required, and in the case of the vast majority of plugins no further technical knowledge is needed. It is even less likely that we would have ended up with the open protocols (Internet Engineering Task Force 2009; Internet Mail Consortium n.d.) and networks that enabled the standardised networking we know as the Internet today.

Here is how closed collaboration could have looked without the software and networking communities we have had since the 1970s, without hackers, without the Internet Engineering Task Force, Free Software and Open Source production:

  • Most likely, the Internet in today’s form would not have existed. Instead, we would have had closed, commercial (pay-to-view), competing networks, where exchange between the networks would in many cases have been impossible, or expensive and unaffordable to many. There were commercial attempts to close the World Wide Web into separate networks, both in its early phase (AOL) and during the height of broadband expansion (a danger that was discussed at the time).
  • Open mailing lists as central hubs where work on software, networks and protocols is debated would not exist.
  • IRC/online chat channels devoted to those projects would not exist.
  • All communication on patches (Wall n.d.) prior to patch submission (Bird, Gourley, and Devanbu 2007) – problems, improvements, priorities, suggestions, ideas – would be strictly between source code maintainers and new contributors, and not in any way open or visible to other contributors, nor to the public. Generations of software and network engineers could not learn from each other’s publicly available work and detailed discussions, but would have to rely only on learning through educational institutions and employment. Overall, in comparison with today’s model, it would all happen in conditions of extreme isolation.
  • Comments in the source code (Kotula 2000) (part of the cooperation in engineering, and often of the submission process too) would not exist, or they would be invisible to anyone other than employees of the companies producing the software.
  • There would be no blogs, nor news stories commenting on discussions that happen on mailing lists and other places of cooperative software and networking production. Instead, we would rely on PR coming from companies that produce the software.
  • Most likely, neither blogs nor wikis (Cunningham 2009) would have been invented in the first place, or they would have remained minor, undeveloped software niches.
  • Only the final executable software would be available. Perhaps, in some cases, the final source would be available too, but for different purposes than it is today.

For those who have participated in open-process software and networking cooperation, and those who know how it works: this closed version is a depressing picture, one that would have given us neither the software nor the Internet as we know them today – and especially not the number of people involved in building them, nor the diversity of applications we have today.

If you have not had the privilege of participating in any of this, and if you have any doubts about how exactly the Internet was built and what I am referring to, here are the working principles of the IETF. It is a large, open, international community of network designers, operators, vendors, and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet. It is open to any interested individual. The IETF Mission Statement is documented in RFC 3935 (H. Alvestrand 2004), and it operates on the following principles, worth quoting in full (WG = working group):

  • Open process – any interested person can participate in the work, know what is being decided, and make his or her voice heard on the issue. Part of this principle is our commitment to making our documents, our WG mailing lists, our attendance lists, and our meeting minutes publicly available on the Internet.
  • Technical competence - the issues on which the IETF produces its documents are issues where the IETF has the competence needed to speak to them, and that the IETF is willing to listen to technically competent input from any source. Technical competence also means that we expect IETF output to be designed to sound network engineering principles – this is also often referred to as ‘engineering quality’.
  • Volunteer Core – our participants and our leadership are people who come to the IETF because they want to do work that furthers the IETF’s mission of “making the Internet work better”.
  • Rough consensus and running code – We make standards based on the combined engineering judgment of our participants and our real-world experience in implementing and deploying our specifications.
  • Protocol ownership – when the IETF takes ownership of a protocol or function, it accepts the responsibility for all aspects of the protocol, even though some aspects may rarely or never be seen on the Internet. Conversely, when the IETF is not responsible for a protocol or function, it does not attempt to exert control over it, even though it may at times touch or affect the Internet.

If you are an academic, or a student, try thinking of the open-process knowledge publishing and production proposals this way: if the above working principles had not existed, we would not have had the Internet, the blog on which this text was developed, nor most of the tools you use in daily life and work to communicate and cooperate. I doubt you would prefer such a situation. If you would not, why not implement similar open processes in academia? If we are to judge open-process cooperation models by their results in software and networking protocols, we are missing a lot in knowledge production by staying closed. I am not blind to the political consequences of this proposal, quite the contrary, but discussing them would take too long – I’ll leave that for other texts.

Have no doubt, I see (clearly enough to keep working on showing its plausibility) a future of volunteer-driven, open-process, direct-and-participatory democratic state-forms along the lines of this proposal, but that too is a matter for a different text. Changing closed academic publishing (closed given the existing opportunities for radically more open cooperation), bringing it to where it could be in the light of existing volunteer-driven open-process cooperation, in the age of Internet Model production, is enough for a single text.

One of the most important reasons why the IETF and Free Software spread and were successful is that the results were immediately (or soon enough for people to notice them, value them, and join the work) visible and operational (working examples). The same principle cannot be applied to theory, at least not in the social sciences and humanities: we cannot quickly see a theory implemented and working. However, we can make the processes of open cooperation immediately operational and visible through the adoption and development of open-process methods.

A Simple Transition: the Linux kernel development process

The above elaboration is perhaps too complex to be implemented straight away, to be the next step in a move from a closed-access journal to an open-process one. Ideally, we need a simple transition model: one that requires a minimum of both additional labour and capital investment at the beginning (most editorial boards are volunteers already stretched to their limits), and that will scale, if required, at a later stage. As Benjamin Geer correctly suggested, the Linux kernel development process is one such model. It is well tested, having worked well in software for over a decade.

Here is how such a model would work: the editor has gathered a group of scholars who have the time and interest to do peer review; Linus Torvalds, the main author of the Linux kernel, calls them his ‘lieutenants’. There is an open mailing list, and a web site that says: ‘If you want to publish an article in this journal, you must propose your idea on the mailing list before you write the article.’

People show up on the mailing list and say things like, ‘I’m thinking of writing an article explaining X, etc., etc.’ The lieutenants (and the other subscribers) say, ‘That won’t work unless you deal with Y somehow. Also, you’ve assumed that X=Q, which is doubtful. Go and read Z and think about it some more.’ Thus they prevent submissions that are based on ignorance, well-known fallacies, etc. And they do this much more quickly than traditional peer review, because they don’t have to read an 8,000-word article to find out that there’s a serious problem: they can find and fix bugs at the design stage rather than the implementation stage. As is well known, it is much cheaper and quicker to fix bugs at the design stage. Indeed, to improve peer reviewing, some of Armstrong’s suggestions are very close to ours: ‘With an early acceptance procedure, researchers could find out whether it was worthwhile to do research on a controversial topic before they invested much time. An additional benefit of such a review is that they receive suggestions from reviewers before doing the work and can then improve the design.’ (Armstrong 1997:17)

After the initial discussion, the author goes away and produces a rough draft, which can be incomplete, or even just an outline with implementation notes (data to be gathered later, etc.), and posts it back to the mailing list. Then people on the list say, ‘OK, that looks better, but you need to make sure you deal with A’s argument, and get data on B, etc.’ Thus, by the time an author submits an actual article, the editor and the peer reviewers already have a pretty good idea of what is in it. The author also has a good idea of how receptive the reviewers are to the article, and thus how likely it is to be published. This helps everyone avoid wasting time on submissions that have no chance of being accepted, and yet, most importantly, the quality-control role of the peer-reviewing process is maintained.

The lieutenants do not have to do all the reviewing themselves, because authors comment on each other’s work in progress on the list. It is in their interest to do so, because the tougher they are on each other, the less likely it is that flawed articles will slip through the process, and the better the journal’s reputation will become, making it a more prestigious place to get published. This means less work for the lieutenants. It also means the development of a community of peer reviewers whose interest lies in increasing the reputation of the journal in which they publish.

Instead of publishing issues on a regular basis, the journal can publish each article electronically whenever it is ready. Articles get published when the community consensus is that they’re good enough to publish. At any given time, if there are no finished articles, the journal does not have to publish anything; thus there is no pressure to lower standards or to rush the process in order to meet a deadline.

A print issue can be treated as a ‘best of’, or a special/themed issue, containing only a selection of what has been published online. This process would make a journal a lively place of activity, with authors always kept up to date about what is going on with their submissions, and with the possibility for any journal reader to get engaged, on a volunteer basis, through this open process.

Over time, the editor should become more of a coordinator, like Linus, whose role is mainly to establish a general editorial line (e.g. it’s a political journal, and not one on culture, yet papers and issues on culture are welcome if done from angles productive for political debates and issues) and to arbitrate between the lieutenants when they disagree.

All that is needed for this process to start being used is an open mailing list. For early-stage ideas, authors can write emails directly to the mailing list, and reviews can be done as replies. At a later stage, authors can email Word or OpenOffice documents, and reviewers can use commenting features – a system everyone is familiar with.

There is a readily available, slightly more advanced option: a WordPress plugin that enables online commenting on text written in blog pages, where comments appear alongside the paragraph being commented on (Fitzpatrick 2007). There are plenty of examples on their website; see The Iraq Study Group Report with comments. The fanciest customized interface for this extension is the one used for McKenzie Wark’s 2007 book Gamer Theory (Harvard University Press). Pages are shown like a deck of cards, there are arrows underneath for next/previous navigation, and on the right-hand side is a scrolling box with comments.

To make it simple to start with, all that is needed is an open, archived, easy-to-backup mailing list. Other parts of the open process can be improved later. However, it is important to remember that WordPress and Drupal could be good extensions of the mailing list. Hosting for both is cheaply available, and in the case of WordPress there is point-and-click backup and restore functionality. For the needs of most academics, blogging software provides an incredibly rich and easily extensible cooperation platform (minimal, often no, technical knowledge required). It is a vast collection of functions whose combined impact is far greater than that of several separate pieces of software.

Open-process peer reviewing and citing early drafts

There is one significant problem with processes as open as we are suggesting here. Although authors might like the more extensive peer reviewing that is likely to happen on an open mailing list, it is to be expected that most of them would not want their work cited, or used anywhere, before the final version accepted by the journal is ready – or, at minimum, before they post a copy publicly for review on their blog, as some authors do. It would be extremely difficult, if not impossible, to prevent that with technical solutions. Yet there is a cultural safeguard parallel from Linux kernel development that we can reuse.

If a Linux kernel is released with a serious bug, people get annoyed, and the author of the offending code might be publicly embarrassed. However, if you post buggy code on the Linux kernel mailing list and someone notices, the worst thing that will happen to you is that you will have to fix it. Why? Because everyone knows it is not safe to download source code from mailing lists and expect it to work properly. This is a cultural thing: it is accepted that free-software mailing lists are for hashing out ideas, not for finished work. Everything about them screams: ‘Danger, Construction Work’.

Therefore, we think that peer review could be open (as also suggested by Armstrong) if it had the right cultural safeguards. There would have to be some principle like ‘respect for peer review’, which would mean that citing journal-mailing-list messages and preliminary drafts in academic articles would be considered a huge taboo. Academic ethics would have to include the idea that you can criticise preliminary drafts as much as you want, but only on the journal mailing list. If you want to criticise them anywhere else, you have to wait until the final version, or a draft version approved by the author, is published. In this case, we believe, authors could be made comfortable with proposing preliminary ideas and subsequent drafts on a mailing list, without having to fear that they will be attacked while in the middle of writing.

Final Words

When I started writing this article, I thought there were multiple risks, drawbacks, significant additional labour investments, transition plans, and other reasonably raised issues to be addressed in order for this proposal to make sense to the editorial boards who will be deciding whether or not to try adopting elements of open-process academic publishing and peer reviewing. What I found through research surprised me. I have become convinced that successful journals that do not take risks and change towards open-process participatory publishing in some way risk losing the most. They risk losing relevance in their field to new journals that could capture the attention of the academic community in a given field by embracing elements of open-process possibilities as their competitive advantage. In medicine, the PLoS One journal started from scratch in 2006. Today, it is one of the largest journals in the world by volume: peer reviewed, open access, and with rich use of commenting tools and automatically generated article metrics. Its primary publishing criteria are the validity of data and methodology, while it leaves originality and importance for readers to judge. Its downside is the highly problematic principle that authors pay publishing costs, although this is somewhat balanced by a fee-waiver system and by reviewers not knowing whether authors pay or not. More importantly, PLoS (PLoS 2009), PLoS One and the other innovative examples I came across still use only a small part of what the open-process paradigm offers. It should not come as a surprise if we soon see journal success stories based on innovation in publishing and reviewing models in the social sciences, humanities and arts.

The key claim of this text can now be summed up: the best opportunities for enabling cooperative and participatory open-process knowledge production arise not from a combination of Open Access and article metrics alone, but from the adoption and customization of the open-process paradigm elaborated here. This way, ground-breaking ideas and a more cooperative process of knowledge creation will be encouraged.


[1] There are reputable journals already allowing comments directly in texts; blue squares in the text are user-made comments.

[2] See (Kaplan 2005) as an example of a proposal to make reviewers account for their comments.

[3] See (Fitzpatrick 2010) book draft for an extensive analysis of the problems of anonymity in peer reviewing.

[4] See how peer review functions could be developed and improved with a cooperative approach, through a new system, Scienstein. For a more technical explanation of Scienstein, see (B. Gipp, J. Beel, and Hentschel n.d.).

[5] See (Nielsen 2009b), ‘Is scientific publishing about to be disrupted?’, especially the part where he discusses how the New York Times cannot compete in providing scientific writing with the many top scientists and their blogs.

[6] (Armstrong 1997:22-23) suggests alternative forms of articles, including publishing peer reviews electronically.

[7] See (Gura 2002:258-260) for an open peer reviewing model which starts with fully finished articles.

[8] See (Mietchen n.d.). In the spirit of Open Process, he provided several excellent comments and references, some of which I incorporated in the text.

[9] CSA 2010, 18th-20th March 2010, Berkeley, USA – presentation in the Technology stream.

[10] Ethics-free claim is entirely untrue. Ethics of Open Source is a capitalist one. See (Prug 2007).

[11] This is an early, yet largely functional, version of plugin installation/removal and activation/deactivation. Its weakest side is that it does not fetch the list of currently available plugins live, but instead provides a fixed list with each release (Kukreti n.d.).

[12] Also developed through open cooperative processes with the final results open too – see email standards (Internet Mail Consortium n.d.).

[13] See (Whitworth and Friedman 2009b) example of a democratic knowledge exchange system design (Figure 1) for a complex system.

[14] The idea for this section was provided by Benjamin Geer in the comments of the blog after the initial text version was written there. I used his words verbatim for most of the section. You can see his original contribution on the blog. This kind of cooperation is precisely what the text advocates. It was unexpected, but not a complete surprise, to get an example of the open-process cooperation while writing the text.

[15] See (Atmospheric Chemistry and Physics Editors n.d.), where the first draft gets posted on a website for an eight-week open discussion, after which it gets edited, to finally enter the peer-reviewing process.

[16] See (Callaos 2009) as an example of what they call multi-methodological approach, using a combination of top-down and bottom-up, blind and open peer reviewing.

[17] The plugin (Tejeda n.d.) has a new name, Digress, and a new release, with active development ongoing.


“Kernel Trap.” (Accessed September 20, 2009).

Adio, Sarah, Johann Jaud, Bettina Ebbing, Matthias Rief, and Günther Woehlke. 2009. “Dissection of Kinesin’s Processivity.” PLoS ONE 4:e4612.

Armstrong, J. Scott. 1997. “Peer Review for Journals: Evidence on Quality Control, Fairness, and Innovation.” Science and Engineering Ethics 3:63-84.

Arnison, Matthew. 2003. “Open publishing is the same as free software.” (Accessed September 20, 2009).

Atmospheric Chemistry and Physics Editors. n.d. “Atmospheric Chemistry and Physics – Review Process.” (Accessed October 4, 2009).

Beel, J, and B Gipp. 2008. “Collaborative Document Evaluation: An Alternative Approach to Classic Peer Review.” Proceedings of World Academy of Science, Engineering and Technology 31:10.

Biagioli, Mario. 2002. “From Book Censorship to Academic Peer Review.” Emergences: Journal for the Study of Media & Composite Cultures 12:11.

Bird, Christian, Alex Gourley, and Prem Devanbu. 2007. “Detecting Patch Submission and Acceptance in OSS Projects.” P. 26 in Proceedings of the Fourth International Workshop on Mining Software Repositories. IEEE Computer Society (Accessed September 20, 2009).

Bousquet, Marc. 2008. How the university works : higher education and the low-wage nation. New York: New York University Press.

Callaos, Nagib. 2009. “Participative Peer-to-Peer Reviewing: PPPR.” Orlando, Florida, USA (Accessed October 6, 2009).

Carpenter, B. 1996. “RFC 1958 (rfc1958) – Architectural Principles of the Internet.” (Accessed September 23, 2009).

Cope, Bill, and Mary Kalantzis. 2009. “Signs of epistemic disruption: Transformations in the knowledge system of the academic journal.” First Monday 14. (Accessed October 5, 2009).

Crocker, Stephen D. 2009. “How the Internet Got Its Rules.” The New York Times, April 7 (Accessed September 20, 2009).

Cunningham, Ward. 2009. “Wiki Design Principles.” (Accessed September 20, 2009).

Dyer-Witheford, Nick. 2005. “Cognitive capitalism and the contested campus.” European Journal of Higher Arts Education.

Fitzpatrick, Kathleen. 2007. “CommentPress: New (Social) Structures for New (Networked) Texts.” Journal of Electronic Publishing 10.

Fitzpatrick, Kathleen. 2010. “One: Peer Review – anonymity.” in Planned Obsolescence: Publishing, Technology, and the Future of the Academy. New York University Press (Accessed October 7, 2009).

Geer, Richard Malter, and Toni Prug. 2005a. “Introduction to Open Organizations.” (Accessed September 20, 2009).

Geer, Richard Malter, and Toni Prug. 2005b. “Open Organizations: Guidelines for Volunteer Working Groups.” (Accessed September 20, 2009).

Gipp, B., J. Beel, and C. Hentschel. n.d. “Scienstein: A research paper recommender system.” Pp. 309–315 in International Conference on Emerging Trends in Computing.

Gura, Trisha. 2002. “Scientific publishing: Peer review, unmasked.” Nature 416:258-260.

Alvestrand, H. 2004. “RFC 3935 – A Mission Statement for the IETF.” (Accessed August 6, 2009).

Internet Engineering Task Force. 2009. “Official Internet Protocol Standards.” (Accessed September 21, 2009).

Internet Mail Consortium. n.d. “IETF Request For Comments defining E-mail.” (Accessed November 7, 2009).

Internet Mail Consortium. n.d. “Internet Mail Standards.” (Accessed September 21, 2009).

Kaplan, David. 2005. “How to Fix Peer Review.” The Scientist 19:10.

Koop, Thomas. 2006. “An open, two-stage peer-review journal.” Nature. (Accessed September 23, 2009).

Kotula, Jeffrey. 2000. “Source Code Documentation: An Engineering Deliverable.” P. 505 in Technology of Object-Oriented Languages, International Conference on, vol. 0. Los Alamitos, CA, USA: IEEE Computer Society.

Kukreti, Utkarsh. n.d. WordPress Plugin Manager. (Accessed September 20, 2009).

Langridge, Stuart, and Ian Hickson, eds. 2002. “Pingback 1.0 Specification.” (Accessed September 20, 2009).

Marxist Internet Archive Admin Committee. 2009. “Marxist Internet Archive Volunteers.” (Accessed September 22, 2009).

Mietchen, Daniel. n.d. “Open-process academic publishing: Some more comments.” (Accessed September 29, 2009).

Mietchen, Daniel. 2009. “What would science look like if it were invented today?” (Accessed September 29, 2009).

Nielsen, Michael. 2009a. “Doing science in the open.” (Accessed September 29, 2009).

Nielsen, Michael. 2009b. “Is scientific publishing about to be disrupted?” (Accessed October 3, 2009).

Nielsen, Michael. 2009c. “There is no single future for scientific journals.” (Accessed September 29, 2009).

Phillips, Carl, Paul Bergen, and Karyn Heavner. 2009. “Pre-Submission and Post-Publication Reviews as Partial Solutions to the Fundamental Inadequacy of Public Health Science Peer Review.” Orlando, Florida, USA.

PLoS. 2009. “PLoS Progress Report.”

Prug, Toni. 2007. “Hacking ideologies, part 2: Open Source, a capitalist movement.” (Accessed September 15, 2009).

Public Knowledge Project. 2008. “OJS in an Hour.”

Public Library of Science. n.d. “PLoS Currents: Influenza – Rapid Access to Research in Progress.” (Accessed September 29, 2009).

RSS Advisory Board. 2009. “RSS 2.0 Specification (version 2.0.11).” (Accessed September 21, 2009).

Sievers, Burkard. 2008. “The psychotic university.” ephemera 8:238.

Suber, Peter. 2007. “Open Access Overview (definition, introduction).” (Accessed September 20, 2009).

Tejeda, Eddie A. n.d. (Accessed October 3, 2009).

Varmus, Harold. 2009. “A new website for the rapid sharing of influenza research.” Official Google Blog. (Accessed September 29, 2009).

Wall, Larry. n.d. Patch. Free Software Foundation (Accessed September 20, 2009).

Weber, Ron. 1999. “The journal review process: a manifesto for change.” Communications of the AIS 2:3.

Whitworth, Brian, and Rob Friedman. 2009a. “Reinventing academic publishing online. Part I: Rigor, relevance and practice.” First Monday 14. (Accessed September 20, 2009).

Whitworth, Brian, and Rob Friedman. 2009b. “Reinventing academic publishing online. Part II: A socio–technical vision.” First Monday 14. (Accessed September 20, 2009).

Wilbanks, John. 2009. “Publishing science on the web.” Common Knowledge. (Accessed September 20, 2009).

Wu, Fang, Dennis M Wilkinson, and Bernardo A Huberman. 2009. “Feedback loops of attention in peer production.” 0905.1740. (Accessed September 30, 2009).


Acknowledgments

I thank Benjamin Geer, whose early discussion and comments made a key contribution by providing a simple transition model, and Daniel Mietchen, whose comments and references provided valuable additional lines of research. Most of their contributions can be seen in their original form in comments on my blog. Both provided additional feedback on the submitted version of the paper, which helped to sharpen and clarify the arguments further. I began writing this paper on the blog as a set of recommendations for the academic journal Historical Materialism, encouraged by discussions with Demet Dinler, a member of its editorial board.



3 comments to Open-process Academic Publishing

  • Patrick

    I find the whole idea quite interesting, and the fact that it works for software can certainly give us hope. A few points though:

    1. I wouldn’t emphasise as much the software paradigm, and would probably scrap the part “What if software were developed through closed models”. I don’t think it brings a lot to the discussion of publication, except to show that open processes are successful in programming.

    2. The speed of innovation is not as clear cut… True, open-source software has been at the heart of important innovations. But commercial companies have also been very quick to foster major developments… They also put more focus on usability/GUI than open-source contributors might.

    3. I am also at the LSE (Economics) and have certainly been given the opposite advice: to be bold and daring! So no generalisation… it’s probably highly dependent on the topic, and its internal dynamism (the two issues being obviously linked in both directions).

    4. Also, I think one point that you do not mention at all is the potential unintended consequences of non-anonymous refereeing.

    In a given specialty, there are not that many people worldwide who can comment. It means that the people working on the same topic as you are going to be people you will interact with repeatedly over your career.
    If you are not anonymous anymore, and you get rejected for (what you believe to be) the wrong reason, you’d probably be mad at the referee and will want to reject his next work… In the long term, it’s probably better for the quality of the research if it’s very hard to publish and everyone tries to shoot down bad ideas, but one also needs to think about the incentives on the writer’s side… If it’s so hard, maybe it’s not that fun as a career.
    The argument also goes the other way around: if you have to referee someone who is known to be influential (not necessarily for publication, but for other academic issues), you might not want to reject that person if you are going to apply to his university, for instance.

    Just to say that these “reputation” issues should be taken into account when considering non-anonymous refereeing, which is part of the Open-Process you propose.

  • Mike

    Sorry for the dated reply, but your argument regarding the limits of content exchange such as blogging, and the preoccupation with final versions (OA), has a parallel in observations made within intelligence agencies in the United States.

    This blogger suggests that “living intelligence” should be applied to scientific publication.

    As suggested in your text, work on more advanced models of original content creation is an extremely small field.

    The arguments made by Chris Rasmussen are similar but are born out of furthering the open principles behind internal company collaboration or Enterprise 2.0, which is similar to your argument of furthering open source and OA models.

    The videos within these links are especially good at explaining the background and idea.

    I wanted to share these links because the ideas overlap substantially.

  • Very interesting proposal.

    The Wiki Encyclopedia of Law is applying some of your proposals. In general, and taking into account that an encyclopedia is not an academic journal, the outcome is positive.
