- 1 The Internet Model: why Open Access is not enough
- 2 Open-process publishing and reviewing advantages
- 3 Internal benefits for journals
- 4 Modular process: stages and states
- 5 What if software was developed through closed models?
- 6 A Simple Transition: the Linux kernel development process
- 7 Open-process peer reviewing and citing early drafts
- 8 Final Words
The Internet Model: why Open Access is not enough
This is an early version of the text. The latest version of this text is here.
Publishing and peer review processes in academia are currently closed models. In my view, at least in the areas i operate in (social sciences and humanities), these processes should be far more, if not entirely, open, with a provision for privacy in special cases. I call this model Open-process academic publishing. The name deliberately distinguishes it from Open Access, which refers only to the final outcome of academic knowledge production being open. The suggestion is not to open the processes in random ways, but in ways in which this openness — fundamentally based on volunteer participation — brings/enables more structure, more internalized working discipline, more commitment, and more ability to improve cooperation/collaboration with deliberate precision – all with the goal of improving the outcomes. “[...] culture of open processes was essential in enabling the Internet to grow and evolve as spectacularly as it has”; hence, we could call it The Internet Model (software/FS + networking/IETF). Its potential screams to be reused and hacked for other areas of production. Academia, especially its publishing side, seems to me capable of embracing such volunteer-core open-process cooperation.
The model proposed here brings only a few new aspects, mainly those related to the work done in the Open Organizations project. It’s an abstraction, a theoretical development of a model developed for decades in software and networking; related concepts and practices, especially their open-process part, have already been reused in news production.
What are my motives, you might ask? I’m a first-year PhD student, and i’m dreading the idea of being drawn into the existing closed model – a model where, in social sciences and humanities (dozens of journals that i checked), you mostly have no idea how long it will take for your submission to be processed, what the stages in the process are, or how to engage with it (other than wait). Quite a few journals do have all these elements stated on their webpages, but it still takes years, it still doesn’t embrace openness for better cooperation, and it still makes no sense to me. I find the current state of academic publishing depressing and unacceptable. The most unacceptable element is that we’re supposed to produce new knowledge. And yet, with all the existing tools and processes for communication and cooperation – processes that gave us the Internet and most of what’s good about it – in academia, in terms of our working processes and ways of cooperation, we still mostly operate as if very little of this open, volunteer-based cooperation had actually happened – we mostly ignore it.
Instead of enabling better cooperation, which is the key to knowledge production, the Internet and electronic tools are increasingly used in academic institutions to enlarge and multiply bureaucratic procedures, regulations and managerial control – that seems to be the trend. Fine, managers are trying to do what they think their jobs are, but what about academics? Why are they not adopting these new tools and processes? Is the situation as rotten as this recent paper boldly states:
Academics are now gate–keepers of feudal knowledge castles, not humble knowledge gardeners. They have for over a century successfully organized, specialized and built walls against error. [...] As research grows, knowledge feudalism, like its physical counterpart, is a social advance that has had its day. (Whitworth, Friedman, First Monday, Volume 14, Number 8 – 3 August 2009)
The Open Access movement and academic blogging are examples of positive adoption, and they inspired me to get involved and recently start writing in the open, on blogs, about Open Access. Good quality academic blogging is great, but it is limited to individuals working on their own, linking and having discussions through comments. It doesn’t apply the full software-networking Internet model, which isn’t a surprise – blogging is not meant to be about collective, organised, prolonged production work. Still, i’m tempted to argue that blogs, pingbacks, discussions in comments, and the intense circulation of new posts and comments (via RSS) amongst clusters of inter-linked blogs are all elements of an early form of the open-process part of the Internet Model developing in academia – not in an institutional setting, but, for now, in a self-administered, out-of-institutions way. Which is a good thing – it carries the volunteer-core spirit, an essential part of the Internet Model’s open-process side. John Wilbanks recently wrote on his blog: “science is already a wiki [...] just a really, really inefficient one – the incremental edits are made in papers instead of wikispace” – it is in this light that i see blogs and blog comments as a new form of scientific production which could be integrated into the institutional setting and journal paper production, and improved on. Hence my argument below for adding a new type of journal paper, one suited to a faster, more responsive, easier-to-assess production of theory, more suitable to how we work today. However, for this to happen, we can’t just add a new type of academic paper to the existing publishing models. We need to change the publishing processes too, to make this possible.
Within Open Access, the possibility of opening up – radically changing for the better – the actual processes of academic production and publishing, based on the existing models developed in software and networking, is dismissed as not relevant, not required, and not good for the goals of OA initiatives. I have little desire to argue with such positions, since to me they seem to come from a different discursive universe, and we would be wasting our energies trying to reconcile our light-years-apart positions. The reasons for change are many and developed in detail below. The best place for a substantial critique of the existing model and its problems, Reinventing academic publishing online. Part I: Rigor, relevance and practice, was published in First Monday days after i finished writing the first draft of this text – i strongly recommend it as complementary reading. While i fully agree with OA goals, and i’m working on implementing and promoting them, OA falls way too short of what, given the models and tools at our disposal, could and should be done in academia.
The primary limitation of OA is its focus on only one part of the Open Source paradigm: the openness of the final product. Which is not a surprise, given that this was the most dominant concept signifying the success of the software and networking communities at the time the OA ideas were created.
Today, i claim, we need a paradigm shift. Even if OA did incorporate most of the main methodological points about collaboration that Open Source was representing, it still would not be enough. Open Source is a very limited subset of the methodology that made software and networking communities so successful. Hence, to re-capture what was lost in Open Source, we need Open Process and The Internet Model to replace it, and thus to expose the world to the revolutionary potential of re-using these models in many spheres of society, particularly in science. I will develop in detail the shortcomings of the Open Source model, and the reasons for adopting new concepts, in a paper i’m currently writing, with the provisional title Open Process & The Internet Model. As soon as an alpha version of the paper is ready, i’ll publish it here on the blog and keep improving it live, increasing the version number with each improvement, following the practice i started with this text. Here, i’ll focus on what i think ought to be done to improve what academic publishing already does, with the focus on the work of journals.
Open-process publishing and reviewing advantages
The following benefits could be gained with open-process publishing and peer reviewing:
1) Quality of submissions would increase a lot over time – because new authors would see the history of the entire process and learn from it (an archive of all submissions, peer reviews, editorial board comments, etc), and because they would be less likely to submit badly written texts with no adjustments to publicly stated journal guidelines (a big problem for editors, i get told over and over, is the large amount of low quality initial submissions). In the current system, with externally invisible submissions, the cost of submission for authors is too low: they can submit any rubbish without adjusting it to the journal’s guidelines. The only people who see these disrespectful (towards the volunteer work of editors) acts, and who associate them with the author’s name, are editors. If submissions were openly visible, the cost of submitting random, unadjusted, low quality, undeveloped papers would be far higher, since such disrespectful behavior would be publicly linked to the author.
2) Quality of published texts would increase in general – because of a) point 1, and b) opening the whole, or most, of the publishing process would also improve the quality of peer and editorial board reviews, for the same reasons as in point 1. Doing low quality, superficial peer or editorial reviews would be publicly exposed, and vice versa – the possibility of lost, or gained, reputation as an editor or peer reviewer would be a motivating factor. In the current model, all of that work is visible only to the few who participate. The logic of reputation works well in life in general, and it can work well via online tools too – eBay is a good example of a quite successful model of closely attaching behavior to a name.
3) Journals that do this process well would attract more agile and risk-taking authors – because through open-process publishing it makes more sense for authors to take more risks (this might sound counter-intuitive at first) and stay less within known/accepted knowledge boundaries, since they can rely on the peer and editorial assessments of their work being done in public – which in turn can lead to less politically correct, career-opportunist position taking from both authors and reviewers, and to an opportunity for bolder, leap-taking steps from both sides. In short, openness would steer reviewing assessment to be more focused on the merit of the work assessed (of course, different academic communities will have different notions of merit in their fields), hence authors could be more confident in submitting such riskier, less compromise-driven works. Which would lead us away from “The modern academic system has become almost a training ground for conformity.” (Whitworth and Friedman, 2009), and away from the devaluing publish-or-perish model, whose low-risk but well-referenced style of writing has made overall research difficult to assess. It would encourage ground-breaking authors to publish their new research early, and suppress mediocre authors who often prosper in the current play-it-safe system, and develop careers, by the sheer number of low-risk publications – volume publishing that suffocates knowledge production (it clogs the pipeline; editors, reviewers and publishers all waste time) while letting individual careers thrive (it gets authors jobs and research grants). If open-process publishing were widely spread, re-writing the same papers for different journals – again for the sake of careerism, to get research points and another publication – would be far easier to spot and expose. The current opaque system makes it easy for low-risk careerists, although Open Access is contributing to that changing for the better.
Open Process would reduce this drastically: if mailing lists were used as an early implementation model (submissions, editorial and peer reviews, revisions – everything gets sent to an open mailing list), spotting a submission which is a rewritten version of an already published paper would be trivial: one could use any good web search engine to check for key paragraphs and concepts together with the author’s name, and it would quickly be clear whether the author has already published on the topic, where, and what.
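The duplicate-spotting idea above can be sketched in a few lines. This is only an illustration of the principle, not a production plagiarism detector (and the function names are mine): it compares word n-grams (“shingles”) of two openly archived texts, which is roughly what a web search for key paragraphs does for you.

```python
def shingles(text, n=8):
    """Break a text into overlapping n-word 'shingles' (word n-grams)."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(new_submission, published, n=8):
    """Jaccard similarity between two texts' shingle sets: 0.0 (no
    shared n-grams) to 1.0 (identical). A high score flags a likely
    rewrite of an already published paper."""
    a, b = shingles(new_submission, n), shingles(published, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)
```

With an open archive of all submissions, an editor (or anyone) could run every new submission against the archive and flag anything above a chosen threshold for a closer look.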
4) Journals that do this process well would significantly raise the dynamics/pace of research – because some of the most in-depth debates that now happen on academic blogs could, thanks to faster and open-process peer reviewing and commenting, move to journals. The form could be shorter, still referenced as academic papers are, and the argument even more focused than those in an average 8,000-word paper. My impression is that most long journal papers revolve around a few core ideas, often not connected closely enough to necessarily require a single longer paper. Today, i believe that some of these ideas originate in blog posts. We could enable those high quality 700-800 word blog posts to be submitted in a fully referenced, short, burst-like form of 1,500-2,000 words. Because the argument would be shorter and more focused, it would be easier to evaluate, which would mean shorter turnaround in peer reviewing and publishing, and hence an earlier opportunity for those whose work relates to it to respond. The cycle of publishing would thus follow more closely how we research, especially for senior academics for whom “research is often done when a few precious hours can be salvaged from a deluge of other responsibilities” (Weber, 1999). It would also help avoid the fate that “Many journal papers are out of date before they are even published”, and a rather frustrating truth that many experience personally: “In the glacial world of academic publishing one rejection can delay publication by two–four years” (Whitworth and Friedman, 2009).
Internal benefits for journals
In addition, there are enormous internal benefits for journals that would contribute to their increased organizational health and development:
1) Clearer structure and visibility of tasks and processes contributes to recognizing one’s most important workers – because more precise (due to breaking the work down into defined and openly recorded smaller steps) and more transparent allocation of tasks and responsibilities exposes who does what and how, it rewards those who do more and better work – and in volunteer organizations (most editorial boards/collectives), recognizing contribution, and the lack of it, is one of the keys to the survival and improvement of the organization. Often, recognition falls to the wrong people, i.e. to those who have better social connections, who are in the more visible positions. And that, rightly, kills the spirit of the harder working, most important participants.
2) Increased focus on implementation work and continuously carried out processes – because defining workflow steps and stages exposes the necessary implementation work that has to be continuously carried out – it puts emphasis on an organization/group/collective as a set of ongoing processes. It also exposes other kinds of work as less important, and hence those who do them as less essential for the existence of the group/organization.
In practice: many loosely structured volunteer groups/organizations/collectives suffer from participants who talk and communicate a lot, often object a lot as well, but contribute little to the implementation work tasks. Frequently, these types of participants hinder other key participants — on whose work the organization relies — from getting on with their tasks. Reducing the influence of these talk&communication intensive participants who don’t contribute much to the implementation work is highly positive for the survival, development and quality of work the organization/group/collective produces.
In other words: structured open processes make it possible for an organization/collective/group not to be open and welcoming to any kind of participation, internally or externally, but to be selective instead. More of this kind of openness means more structure, more internalised working discipline, more commitment, and more ability to improve cooperation/collaboration with precision. In slightly more abstract terms: the more a whole is exposed and defined, and its workings/operations known/visible, the more we can adjust it, reshuffle it, to make it do what participants in the whole want it to do. Open processes enable this, hence open-process in the name. Closed processes allow more corruption of organizational goals: the less we know about the processes, components and their relations, the more individuals can utilise them for their own goals and benefits (in academia, careerism).
In Free Software terms, long term freedoms to act and produce collectively do not come cheaply, and have to be defined, developed and defended. The key pre-requisite for the four Free Software freedoms (defined as ethical demands) to cooperate and share is universal free access to software source code. What is missing from the Free Software definition, to give us an accurate picture of the collaborative model discussed here, is visible in the IETF principles (see below).
In short, to explain the success of the Internet model, having source code isn’t sufficient. Another key component must be present: goal-directed, quality-focused volunteer cooperation developed within a specific organizational model with the following set of attributes: open participation (anyone can join) and open processes, competence, a volunteer core, the rough consensus and running code decision making principle, and defined responsibilities (protocol ownership, in the IETF case).
This is precisely why Open Access is not enough to implement a successful open volunteer collaboration on the trail of the Internet software-networking model. One needs a specific organizational model too. And using the Open Source paradigm (a movement that is a business-friendly and declaratively ethics-free version of Free Software) is even more misleading, because of its emphasis on the source code alone. Open Source is the least useful model/concept of all to help us think this through, since it lacks both defined ethics (which is what makes it possible in the first place to define, develop and defend one’s freedoms in Free Software) and a defined organizational model. What we need to explain this successful model is this formula: The Internet Model = Free Software + IETF. In other words: software + networking. Or even better: ethics + organization. Which is where we arrive at a set of incredibly intriguing political points that ought to be developed here, but i’ll leave that for another text. (Small technical note: email subscription to a specific blog category, one used exclusively for publishing longer in-depth texts, will be offered to readers who’d like to be informed when the next text in the Hacking The State series gets published on this blog.)
To the existing Internet model, i would add the following organizational attributes as highly beneficial: mapped components and relations (stages — recognizable, definable points in collaboration; working groups; their relations and inter-processes), defined decision making, and defined participation and exclusion models. All of this is geared towards enabling and focusing on the contributions of those who carry out most of the implementation work – this type of work is the bloodstream of an organization; without its movement, organizations cannot produce.
3) Easier project management – because increased task modularity and real-time status visibility (full status of a submission = stage + state, see below; anyone can at any time check the stage&state of any submission on the web system used) allows for better project management, easier allocation/delegation of tasks, and a more precise sense of progress and problems. Which is all good for the general work spirit and time/resource assessments, and for keeping authors who submit papers, and all other parties involved, correctly informed at all times about the stage&state of the submission.
4) Decision making in the hands of the people who matter most – because who does what and how becomes visible, and because those who continuously carry out implementation work matter most for the organization, decision making can be more in their hands.
For example, the Marxists Internet Archive (MIA) addresses this by defining a volunteer, and hence defining decision makers, through work contributions: “MIA volunteers are people who have, in the most recent six-month period, made at least three separate contributions over a period of three weeks to six months”.
In the Open Organizations project, we defined this similarly: “Anyone who is doing implementation work in the group, or has done such work in the recent past (e.g. within the past two months), can participate in its decision-making.”
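Rules like these are concrete enough to automate on an open web system. Below is a minimal sketch (the function name and parameter names are mine; the thresholds follow the MIA wording quoted above) of a check for whether a contributor currently qualifies as a decision-making volunteer.

```python
from datetime import date

def is_volunteer(contribution_dates, today, window_days=182,
                 min_contributions=3, min_span_days=21):
    """MIA-style rule (a sketch): at least `min_contributions` separate
    contributions within the most recent six months (~182 days),
    spread over a span of at least three weeks (~21 days)."""
    # Keep only contributions inside the six-month window.
    recent = sorted(d for d in contribution_dates
                    if 0 <= (today - d).days <= window_days)
    if len(recent) < min_contributions:
        return False
    # The contributions must span at least three weeks.
    return (recent[-1] - recent[0]).days >= min_span_days
```

An open journal could derive `contribution_dates` directly from its open record of editorial and reviewing work, so the list of current decision makers is itself openly visible and automatically up to date.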
5) Attract new volunteers and reduce the impact of existing counter-productive internal participants – utilizing the above task/process openness and visibility, journal editorial boards could use decision making rules similar to MIA’s to attract volunteers. Through linking decision making rights to defined implementation work, it would be recognized that certain types of work that can be done by an external participant matter more than the mere presence of existing internal talk&communication intensive participants. To reduce risk, only certain decision making rights could be given to new participants to start with, until the existing board is assured they are fit to carry out the journal’s long term goals and strategies.
This opens up organizations to new participants who would from the beginning adopt the culture (habits) of doing the implementation work, and it reduces the detrimental influence of, and eventually leads to the exclusion of, existing internal talk&communication intensive participants. Developing such exclusion habits and processes is also a positive part of the culture.
Existing software, like Open Journal Systems (OJS), could be extended to enable this process. An option for privacy, with reasons stated, could be added to the open-process workflow.
Modular process: stages and states
To summarise, this open process would amount to the following being open: the initial draft, editorial collective/individual comments, peer reviews, further peer comments, author comments back to reviewers, all the subsequent drafts, and the final published/rejected text.
One objection is that authors would want only their final version used and quoted, or at least to have the final version clearly recognised and marked as final. A way both to increase the chances of that and to modularise and define the work so as to create the conditions for the above open processes and their benefits would be to introduce the concept of submission stage&state, using the software web tools at our disposal to implement it. So it is clear that when a submission comes in (into an openly visible web queue – imagine it like an RSS feed in the sidebar of a website), it is at the stage First Draft. As the paper moves through the stages of the publishing process, its full status (stage + state) changes accordingly. This defines our publishing workflow.
The First Draft – Editorial Review stage would be a submission with an editorial board review either in process (state = awaiting) or written (state = received); the next stage would be First Draft – Peer Review. The awaiting and received states of each stage can be an important functional addition, so that involved parties can be notified when the state of a stage changes. For example, when the editorial board sends the paper for peer reviews, the full status could read First Draft – Peer Review (awaiting); when the reviews come back, the full status could change to First Draft – Peer Review (received). To clarify:
- A stage is a defined step in the process.
- Each stage can be in one of the pre-defined states.
- Full status is stage+state – it tells us where the submitted paper is in the process and what’s currently going on with it, i.e. whose turn it is to act on it.
If the editor in charge of the paper’s peer reviewing process decides that a new revision is required, the status could be changed to Second Draft (awaiting). Changing the stage and state of the submission could be done with an action as simple as the editor changing drop-down menus with the available stages and their possible states. The web system would automatically carry out the required action (Open Journal Systems does this already within its defined workflow). For example, when the editor changes the stage of a submission to Second Draft (awaiting), it would send the peer reviews received for the first draft and a note to the author (email CC the editor), and perhaps update the RecentChanges web page which would, like on wikis, note each stage and state change of all the papers/submissions currently in the process. When a new draft incorporating peer reviews is received (web submission by the author), the full status automatically changes to Second Draft (received); and so on, until we get to the Published or Rejected status – or some more fine grained final outcome full status.
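The stage&state mechanics described above fit naturally into a small state machine. The sketch below is only an illustration (the class and stage names follow the examples in this text; the notify callback is a stand-in for whatever the system does on a change, e.g. mailing an open list or updating a RecentChanges page): a single drop-down-style change drives both the full status and the open notification.

```python
from enum import Enum

class Stage(Enum):
    EDITORIAL_REVIEW = "First Draft - Editorial Review"
    PEER_REVIEW = "First Draft - Peer Review"
    SECOND_DRAFT = "Second Draft"
    PUBLISHED = "Published"
    REJECTED = "Rejected"

class State(Enum):
    AWAITING = "awaiting"
    RECEIVED = "received"

class Submission:
    """Tracks a paper's full status (stage + state); every change
    triggers the notify callback, making the workflow openly visible."""

    def __init__(self, title, notify):
        self.title = title
        self.notify = notify  # called with a message on each change
        self.stage = Stage.EDITORIAL_REVIEW
        self.state = State.AWAITING

    @property
    def status(self):
        # Full status = stage + state, e.g. "Second Draft (awaiting)"
        return f"{self.stage.value} ({self.state.value})"

    def set_status(self, stage, state):
        self.stage, self.state = stage, state
        self.notify(f"{self.title}: {self.status}")
```

An editor changing a drop-down would simply call `set_status`; the open log (or a RecentChanges page) is then just the accumulated notifications, which anyone can follow.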
I haven’t used proprietary software for web based journals, but i’m quite certain that something like this already exists. However, although existing systems for managing the academic publishing process were not designed to enable open collaboration based on a volunteer drive, we can still, and should, learn from them. The picture i presented here is a highly developed system. We don’t need to wait to get to that point.
Existing tools, as simple (or as complex, with thousands of plugins and themes) as this blog, and freely available wikis and CMS systems (Drupal), can be customised well enough to enable us to start working with these open-process collaborative practices, with a significant degree of labour saving automation, now. Many of the web systems that we could start using now to implement a simplified version of this proposal, including various wikis, WordPress and Drupal, are available in hosting packages that allow quite amazing levels of fine grained point-and-click installation, backup and administration (in comparison to what was available only a few years ago) for less than a few hundred pounds/dollars/euros per year (including all the Internet bandwidth that an average journal might need). It is the human element — seeing the potentially positive benefits, seeing them as larger than the risks associated with those changes and the risk of remaining in the current closed mode, changing the habits of editorial boards — that is the biggest obstacle. Finding the right web based technology is far less of a problem.
What if software was developed through closed models?
If the currently existing closed academic publishing process had been used instead of the open-process collaboration which has been at the core of Free Software and Open Source production, it is very unlikely that we would have ended up with the software that runs this blog, with its 6000+ available plugins which can be installed with a click on the web interface (no technical knowledge needed), nor would we have ended up with open protocols (developed through open collaborative processes, with the final results open too – see email standards) and networks that enabled the standardised networking that we know as the Internet today.
Here’s how closed collaboration could have looked without the Internet Engineering Task Force, Free Software and Open Source production:
- most likely, the Internet in today’s form would not have existed. Instead, we would have had closed, commercial (pay to view), competing networks where the exchange between the networks would have been in many cases impossible, and/or expensive and not affordable to many (i vaguely remember a good text on this possible alternative outcome, but can’t recall it)
- mailing lists as central hubs where work on software, networks and protocols is debated would not exist
- IRC/online chat channels devoted to those projects would not exist
- all communication on patches prior to patch submissions – problems, improvements, priorities, suggestions, ideas – would be strictly between source maintainers and the new contributor, and not in any way open or visible to other contributors, nor to the public
- comments in the source code (part of the cooperation in engineering and often of the submission process too) would not exist, or be invisible
- there would be no blogs or news stories commenting on discussions that happen on mailing lists; we would rely on PR from companies
- most likely no blogs nor wikis would have been invented in the first place, or they would have remained minor, undeveloped software niches
- only the final executable software would be available. Perhaps, in some cases, the final source would be available too.
For those who have participated in open software/networking collaboration, and/or those who know how it works: isn’t this closed version quite a depressing picture? And one that would never have given us either the software or the Internet as we know it today?
If you haven’t had the privilege of participating in this, and if you have any doubts about how exactly the Internet was built and what i am referring to, here are the working principles of the Internet Engineering Task Force (IETF),
a large open international community of network designers, operators, vendors, and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet. It is open to any interested individual. The IETF Mission Statement is documented in RFC 3935.
which operates on those principles, worth quoting in full (WG = working group):
- Open process – any interested person can participate in the work, know what is being decided, and make his or her voice heard on the issue. Part of this principle is our commitment to making our documents, our WG mailing lists, our attendance lists, and our meeting minutes publicly available on the Internet.
- Technical competence – the issues on which the IETF produces its documents are issues where the IETF has the competence needed to speak to them, and that the IETF is willing to listen to technically competent input from any source. Technical competence also means that we expect IETF output to be designed to sound network engineering principles – this is also often referred to as “engineering quality”.
- Volunteer Core – our participants and our leadership are people who come to the IETF because they want to do work that furthers the IETF’s mission of “making the Internet work better”.
- Rough consensus and running code – We make standards based on the combined engineering judgment of our participants and our real-world experience in implementing and deploying our specifications.
- Protocol ownership – when the IETF takes ownership of a protocol or function, it accepts the responsibility for all aspects of the protocol, even though some aspects may rarely or never be seen on the Internet. Conversely, when the IETF is not responsible for a protocol or function, it does not attempt to exert control over it, even though it may at times touch or affect the Internet.
If you are an academic or a student, think of open-process knowledge publishing/production proposals, and of this particular proposal, this way: if the above IETF and free/open software principles had not existed, we would not have had the Internet, nor this blog, nor most of the tools you use in daily life/work to communicate and cooperate/collaborate. Would you prefer such a state of the world? And if you wouldn’t, why not implement similar open processes in academia? If we’re to judge open collaboration models by the results in software and networking protocols, we are missing a lot by staying closed. I’m not blind to the political consequences of this proposal, quite the contrary, but discussing them would take too long – i’ll leave that for another post.
Have no doubt, i see (clearly enough to keep working on showing its plausibility) a future of volunteer-driven, open-process, direct-and-participatory democratic state-forms along the lines of this proposal, but that too is a matter for a different text. Changing closed academic publishing, bringing it to where it could be in this age of volunteer-driven open-process cooperation, the age of Internet Model production, is enough for one blog post.
Many people have been saying things similar to what i'm saying here (in a less structured and developed form, but with the spirit of the Internet Model present) over the last decade, on many mailing lists, blogs, and panels. It would be useful to have all of this listed in one place, to see what kind of negative answers they were given at the time, and to address those objections that — from the standpoint of a desire for open-process academic publishing and peer reviewing — make sense (i'll try to catalog some of them on a wiki page here).
Overall, i have one thing to say to those who share my views on this topic: one of the most important reasons why the IETF and Free Software spread and succeeded is that the results were immediately (or soon enough for people to notice them, value them, and join the work) visible and operational (working examples). The same principle cannot be applied to theory, at least not in the social sciences and humanities: we can't see theory implemented and working quickly. However, we can make the processes of open cooperation immediately operational and visible. In other words, to make it happen, we can do it ourselves, now. If enough of us do it, and do it well, closed journals, books, and cooperation processes, and closed-access knowledge (both the production process and the final products) in general, will become history.
A Simple Transition: the Linux kernel development process
The above elaboration is perhaps too complex to be implemented straight away, as the next step in a move from a closed-access journal to an open-process one. Ideally, we need a simple transition model: a model that will require a minimum of both additional labour and capital investment at the beginning (most editorial boards are volunteers already stretched to their limits), and that will scale, if required, at a later stage. As Benjamin Geer correctly suggested (and wrote in comments almost this entire section – see the comments below the text), the Linux kernel development process is one such model. It is well tested, having worked well in software for over a decade.
Here's how such a model would work: the editor gathers together a group of scholars who have the time and interest to do peer review; Linus Torvalds, the main author of the Linux kernel, calls such people his "lieutenants". There's an open mailing list, and a web site that says: "If you want to publish an article in this journal, you must propose your idea on the mailing list before you write the article."
People show up on the mailing list and say things like, “I’m thinking of writing an article explaining X, etc., etc.” The lieutenants (and the other subscribers) say, “That won’t work unless you deal with Y somehow. Also, you’ve assumed that X=Q, which is doubtful. Go and read Z and think about it some more.” Thus they prevent submissions that are based on ignorance, well-known fallacies, etc. And they do this much more quickly than traditional peer review, because they don’t have to read an 8,000-word article to find out that there’s a serious problem: they can find and fix bugs at the design stage rather than the implementation stage. As everyone knows, it’s much cheaper and quicker to fix bugs at the design stage.
After the initial discussion, the authors then go away and produce rough drafts, which can be incomplete, or even just outlines with implementation notes (data to be gathered later, etc.). They post their drafts back to the mailing list. Then people on the list say, "OK, that looks better, but you need to make sure you deal with A's argument, and get data on B, etc." Thus by the time an author submits an actual article, the editor and the peer reviewers already have a pretty good idea of what's in it. The author also has a pretty good idea of how receptive the reviewers are to the article, and thus how likely it is to get published. This helps everyone avoid wasting time on submissions that have no chance of being accepted, and yet, most importantly, the quality-control role of the peer-reviewing process is maintained.
The lieutenants don't have to do all the reviewing themselves, because authors comment on each other's works in progress on the list. It's in their interest to do so, because the tougher they are on each other, the less likely it is that flawed articles will slip through the process, and the better the journal's reputation will become, thus making it a more prestigious place to get published. This means less work for the lieutenants. It also means the development of a community of peer reviewers whose interest is to increase the reputation of the journal in which they publish.
Instead of publishing issues on a regular basis, the journal can publish each article electronically whenever it’s ready. Articles get published when the community consensus is that they’re good enough to publish. At any given time, if there are no finished articles, the journal doesn’t have to publish anything; thus there is no pressure to lower standards or to rush the process in order to meet a deadline.
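The submission lifecycle described above — idea proposed on the list, open discussion, drafts posted and revised, publication only when consensus says an article is ready — could be sketched as a simple state machine. The following is purely an illustrative sketch: the state names and transitions are my own invention, not part of any existing journal system or kernel tooling.

```python
# Illustrative sketch of the open-process submission lifecycle.
# State names and transitions are hypothetical, not from any real system.

ALLOWED_TRANSITIONS = {
    "idea-proposed":    {"under-discussion", "withdrawn"},
    "under-discussion": {"draft-posted", "withdrawn"},
    "draft-posted":     {"under-discussion", "submitted", "withdrawn"},
    "submitted":        {"published", "draft-posted", "withdrawn"},
    "published":        set(),  # terminal: appears online when ready
    "withdrawn":        set(),  # terminal
}

class Submission:
    def __init__(self, title):
        self.title = title
        self.state = "idea-proposed"
        self.history = [self.state]

    def advance(self, new_state):
        # Only moves agreed by the process are allowed; there is no
        # issue deadline forcing a premature jump to "published".
        if new_state not in ALLOWED_TRANSITIONS[self.state]:
            raise ValueError(f"cannot move from {self.state!r} to {new_state!r}")
        self.state = new_state
        self.history.append(new_state)

paper = Submission("Open-process publishing")
for step in ("under-discussion", "draft-posted", "submitted", "published"):
    paper.advance(step)
print(paper.history)
# → ['idea-proposed', 'under-discussion', 'draft-posted', 'submitted', 'published']
```

The point of the sketch is the absence of any deadline-driven transition: an article reaches "published" only via list discussion and consensus, and the `draft-posted` ↔ `under-discussion` loop can repeat as many times as the reviewers require.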
A print issue can be treated as a "best of", or a special/themed issue, containing only a selection of what has been published online. This process would make a journal a lively place of activity, with authors always kept up to date on what is going on with their submissions, and with the possibility for any journal reader to get engaged, on a volunteer basis, through this open process.
Over time, the editor should become more of a coordinator, like Linus, whose role is mainly to establish a general editorial line (e.g. it’s a political journal, and not one on culture, yet papers and issues on culture are welcome if done from angles productive for political debates and issues) and to arbitrate between the lieutenants when they disagree.
All that is needed for this process to start being used is an open mailing list. For early-stage ideas, authors can write emails directly to the mailing list – reviews can be done in replies. At a later stage, authors can email Word or OpenOffice documents, and reviewers can use commenting features – a system everyone is familiar with.
There is a readily available, slightly more advanced option: a WordPress (the software running this blog, freely available and incredibly easy to install and use) extension called CommentPress – online commenting on text written in blog pages, where comments appear alongside the paragraph being commented on. There are plenty of examples on their website; check The Iraq Study Group Report with comments. The fanciest customised interface for this extension is the one used for McKenzie Wark's 2007 book Gamer Theory (Harvard University Press). Pages are shown like a deck of cards, there are arrows underneath for next/previous navigation, and on the right-hand side is a scrolling box with comments.
To make it all simple to start with, all that is needed is an open, archived, easy-to-back-up mailing list. Other parts of the open process can be improved later. One thing i discovered lately is how powerful blogging platforms have become. For the needs of most academics without special figures in their texts (maths, physics, chemistry, etc.), blogging software like this WordPress seems to me miles ahead of Microsoft Word or OpenOffice as a convenient, yet incredibly richly and easily (point and click, no technical knowledge required) extensible working platform. I say platform deliberately, and not software, nor blog, nor website, because it provides multiple functions in one, and collecting them together in an easy-to-use place produces a result with far greater impact than what one would get from the several separate pieces of software that would otherwise be required to do what advanced platforms like WordPress do. But the details of that are also best left for another text. It is enough to remember that WordPress would be, in my opinion, a brilliant extension of the mailing list, and is free to set up and point-and-click to back up and restore.
Open-process peer reviewing and citing early drafts
One of the problems with a process as open as the one we're suggesting here is that although authors might like the more extensive peer reviewing that is likely to happen on an open mailing list, most of them would probably not want their work cited or used anywhere before the final version accepted by the journal is ready. It would be extremely difficult, if not impossible, to prevent that with technical solutions. Yet there is a parallel cultural safeguard from Linux kernel development that we can reuse.
If a Linux kernel is released with a serious bug, people get annoyed, and the author of the offending code might be publicly embarrassed. But if you post buggy code on the Linux kernel mailing list and someone notices, the worst thing that will happen to you is that you’ll have to fix it. Why? Because everyone knows that it’s not safe to download source code from mailing lists and expect it to work properly. This is a cultural thing: it’s accepted that free-software mailing lists are for hashing out ideas, not for finished work. Everything about them screams ‘Danger: Construction Work’.
Therefore, we think that peer review could be open if it had the right cultural safeguards. There would have to be some principle like ‘respect for peer review’, which meant that citing journal-mailing-list messages and preliminary drafts in academic articles (or newspaper articles!) would be considered a huge taboo. Academic ethics would have to include the idea that you can criticise your opponents’ preliminary drafts as much as you want, but only on the journal-mailing-list. If you want to criticise them anywhere else, you have to wait until the final version is published. In this case, we believe, authors could be made comfortable with proposing preliminary ideas and subsequent drafts on a mailing list, without having to fear that they will be attacked while in the middle of writing.
Finally, there are multiple risks, drawbacks, additional labour investment, transition plans, and other reasonably raised issues to be addressed, in order for this proposal to make sense to editorial boards and editors who will be making decisions whether to accept elements of open-process academic publishing and peer reviewing, or not. I’ll write on those in a separate post. Also, consider this a rough first draft. I’ll keep revising it, probably in its own wiki page on this blog.
As to probable objections that this proposal is speculation with no empirical side to it: thanks to good work in the social sciences and humanities, it is widely accepted today (widely enough for me) that empiricism and idealist speculation are both dead concepts. However, not only am i happy to speculate to an extent — based on my subjectively objective reading of reality (which is the position that matters most, since there are no neutral objective positions), which has little in common with an empiricist one — i believe it is necessary to do so. Only practice can prove our speculations right or wrong. And even when we are proven wrong, i'm perfectly happy to live and die by Beckett's: "Try again. Fail again. Fail better."
In many ways, my work on open-process collaboration in academia is a "try again" of a project that could easily be seen as quite a failed one – Open Organizations. I don't care about those assessments either. I'm happy to keep trying, and keep failing, if necessary. The worst possible scenario, and the only one i fear, is not to try. Failure is fine. Especially when performed in the open.