Open-process academic publishing


The Internet Model = why Open Access is not enough

This is an early version of the text. The latest version is here.

Publishing and peer review processes in academia are currently closed models. In my view, at least in the areas i operate in (social sciences and humanities), these processes should be far more, if not entirely, open, with a provision for privacy in special cases. I call this model Open-process academic publishing. The name deliberately distinguishes it from Open Access, which refers only to the final outcome of academic knowledge production being open. The suggestion is not to open the processes in random ways, but in ways in which this openness – fundamentally based on volunteer participation – brings more structure, more internalized working discipline, more commitment, and more ability to improve cooperation/collaboration with deliberate precision, all with the goal of improving the outcomes. The “[...] culture of open processes was essential in enabling the Internet to grow and evolve as spectacularly as it has” (RFC 3935); hence, we could call it The Internet Model (software/FS + networking/IETF). Its potential screams to be reused, hacked, for other areas of production. Academia, especially its publishing side, seems to me capable of embracing such volunteer-core open-process cooperation.

The model proposed here brings only a few new aspects, mainly those related to the work done in the Open Organizations project. It’s an abstraction, a theoretical development of a model evolved over decades in software and networking; related concepts and practices, especially their open-process part, have already been reused in news production.

What are my motives, you might ask? I’m a first year PhD student, and i’m dreading the idea of being drawn into the existing closed model – a model where, in social sciences and humanities (dozens of journals that i checked), you mostly have no idea how long it will take for your submission to be processed, what the stages in the process are, or how to engage with it (other than wait). Quite a few journals do state all these elements on their webpages, but it still takes years, it still doesn’t embrace openness for better cooperation, and it still makes no sense to me. I find the current state of academic publishing depressing and unacceptable. The most unacceptable element is this: we’re supposed to produce new knowledge, and yet, with all the existing tools and processes for communication and cooperation – processes that gave us the Internet and most of what’s good about it – our working processes and ways of cooperating in academia still mostly operate as if very little of this open, volunteer based cooperation had actually happened. We mostly ignore it.

Instead of enabling better cooperation, which is the key to knowledge production, the Internet and electronic tools are increasingly used in academic institutions to enlarge and multiply bureaucratic procedures, regulations and managerial control – that seems to be the trend. Fine, managers are trying to do what they think their jobs are, but what about academics? Why are they not adopting these new tools and processes? Is the situation as rotten as this recent paper boldly states:

Academics are now gate–keepers of feudal knowledge castles, not humble knowledge gardeners. They have for over a century successfully organized, specialized and built walls against error.  [...] As research grows, knowledge feudalism, like its physical counterpart, is a social advance that has had its day. (Whitworth, Friedman, First Monday, Volume 14, Number 8 – 3 August 2009)

The Open Access movement and academic blogging are examples of positive adoption, and they inspired me to get involved and recently start writing in the open, on blogs, about Open Access. Good quality academic blogging is great, but it is limited to individuals working on their own, linking and having discussions through comments. It doesn’t apply the full software-networking Internet model, which isn’t a surprise – blogging is not meant to be about collective, organised, prolonged production work. Still, i’m tempted to argue that blogs, pingbacks, discussions in comments, and the intense circulation of new posts and comments (via RSS) amongst clusters of inter-linked blogs are all elements of an early form of the open-process part of the Internet Model developing in academia – not in an institutional setting, but, for now, in a self-administered, out-of-institutions way. Which is a good thing – it carries the volunteer-core spirit, an essential part of the Internet Model’s open-process side. John Wilbanks recently wrote on his blog: “science is already a wiki [...] just a really, really inefficient one – the incremental edits are made in papers instead of wikispace” – it is in this light that i see blogs and blog comments as a new form of scientific production which could be integrated into the institutional setting and journal paper production, and improved on. Hence my argument below for adding a new type of journal paper, one suited to a faster, more responsive, easier to assess production of theory – more suitable to how we work today. However, for this to happen, we can’t just add a new type of academic paper to the existing publishing models. We need to change the publishing processes too, to make this possible.

Within Open Access, the possibility of opening up – radically changing for the better – the actual processes of academic production and publishing, based on the existing models developed in software and networking, is dismissed as not relevant, not required, and not good for the goals of OA initiatives. I have little desire to argue with such positions, since to me they seem to come from a different discursive universe, and we would be wasting our energies trying to reconcile standing positions separated by light years. The reasons for change are many and are developed in detail below. The best place for a substantial critique of the existing model and its problems, Reinventing academic publishing online. Part I: Rigor, relevance and practice, was published in First Monday days after i finished writing the first draft of this text – i strongly recommend it as complementary reading. While i fully agree with OA goals, and i’m working on implementing and promoting them, OA falls way too short of what, given the models and tools we have at our disposal, could and should be done in academia.

The primary limitation of OA is its focus on only one part of the Open Source paradigm: the openness of the final product. This is not a surprise, given that openness of the final product was the most dominant concept signifying the success of the software and networking communities at the time the OA ideas were created.

Today, i claim, we need a paradigm shift. Even if OA did incorporate most of the main methodological points about collaboration that Open Source represented, it still would not be enough. Open Source is a very limited subset of the methodology that made software and networking communities so successful. Hence, to re-capture what was lost in Open Source, we need an Open Process and The Internet Model to replace it, and thus to expose the world to the revolutionary potential of re-using these models in many spheres of society, particularly in science. I will develop in detail the shortcomings of the Open Source model, and the reasons for adopting new concepts, in a paper i’m currently writing, with the provisional title Open Process & The Internet Model. As soon as an alpha version of the paper is ready, i’ll publish it here on the blog and keep improving it live, increasing the version number with each improvement, following the practice i started with this text. Here, i’ll focus on what i think ought to be done to improve what academic publishing already does, with the focus on the work of journals.

Open-process publishing and reviewing advantages

The following benefits could be gained with open-process publishing and peer reviewing:

1) The quality of submissions would increase a lot over time – because new authors would see the history of the entire process and learn from it (an archive of all submissions, peer reviews, editorial board comments, etc), and because they would be less likely to submit badly written texts with no adjustments to publicly stated journal guidelines (a big problem for editors, i get told over and over, is the large amount of low quality initial submissions). In the current system, with externally invisible submissions, the cost of submission for authors is too low: they can submit any rubbish without adjusting it to the journal’s guidelines. The only people who see these disrespectful (towards the volunteer work of editors) acts, and who associate them with the author’s name, are editors. If submissions were openly visible, the cost of submitting random, unadjusted, low quality, undeveloped papers would be far higher, since such disrespectful behavior would be publicly linked to the author.

2) The quality of texts published would increase in general – because of a) point 1, and b) opening the whole, or most, of the publishing process, which would also improve the quality of peer and editorial board reviews, for the same reasons as in point 1. Low quality, superficial peer or editorial reviews would be publicly exposed, and vice versa – the possibility of lost, or gained, reputation as an editor or peer reviewer would be a motivating factor. In the current model, all of that work is visible only to the few who participate. The logic of reputation works well in life in general, and it can work well via online tools too – eBay is a good example of a quite successful model of closely attaching behavior to a name.

3) Journals that do this process well would attract more agile and risk taking authors – because through open-process publishing it makes more sense for authors to take more risks (this might sound counter-intuitive at first) and stay less within the known/accepted knowledge boundaries, since they can rely on the peer and editorial assessments of their work being done in public – which in turn can lead to less politically correct, career-opportunist position taking from both authors and reviewers, and to an opportunity for bolder, leap-taking steps from both sides. In short, openness would steer reviewing assessment to be more focused on the merit of the work assessed (of course, different academic communities will have different notions of merit in their fields), hence authors can be more confident in submitting such riskier, less compromise driven works. This would lead us away from “The modern academic system has become almost a training ground for conformity.” (Whitworth and Friedman, 2009), and away from the publish-or-perish devaluing model, whose low-risk but well-referenced style of writing has made overall research difficult to assess. It would encourage ground-breaking authors to publish their new research early, and it would suppress mediocre authors who often prosper in the current play-it-safe system by the sheer number of low-risk publications, developing careers through volume publishing – suffocating for knowledge production (it clogs the pipeline: editors, reviewers and publishers all waste time) while letting individual careers thrive (it gets authors jobs and research grants). If open-process publishing were widespread, re-writing the same paper for different journals – again for the sake of careerism, to get research points and another publication – would be far easier to spot and expose. The current opaque system makes it easy for low-risk careerists, although Open Access is contributing to that changing for the better. Open Process would reduce it drastically: if mailing lists were an early implementation model (submissions, editorial and peer reviews, revisions, everything gets sent to an open mailing list), spotting a submission which is a rewritten version of an already published paper would be trivial – one could use any good web search engine to check for key paragraphs and concepts together with the author’s name, and it would quickly be clear whether the author has already published on the topic, where, and what (a minimal sketch of such a check appears after this list).

4) Journals that do this process well would significantly raise the dynamics/pace of research – because some of the most in-depth debates that now happen on academic blogs could, thanks to faster and open-process peer reviewing and commenting, move to journals. The form could be shorter, still referenced like academic papers, and the argument even more focused than in an average 8000-word paper. My impression is that most long journal papers revolve around a few core ideas, often not connected closely enough to necessarily require a single longer paper. Today, i believe, some of these ideas originate in blog posts. We could enable those high quality 700-800 word blog posts to be submitted in a fully referenced, short, burst-like form of 1500-2000 words. Because the argument would be shorter and focused, it would be easier to evaluate, which would mean shorter turnaround in peer reviewing and publishing, and hence a sooner possibility for those whose work relates to it to respond. The cycle of publishing would thus follow more closely how we research, especially for senior academics for whom “research is often done when a few precious hours can be salvaged from a deluge of other responsibilities” (Weber, 1999). It would also help avoid the destiny of “Many journal papers are out of date before they are even published.”, with a rather frustrating truth that many experience personally: “In the glacial world of academic publishing one rejection can delay publication by two–four years” (Whitworth and Friedman, 2009).

5) Journals would gain readership and reputation – because of all the above, and because of the internal benefits below and their public visibility.
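On the duplicate-submission check mentioned in point 3: as a rough illustration of how little machinery it needs, here is a minimal Python sketch that flags a new submission whose key passages overlap heavily with an already published text. Instead of querying a web search engine, it compares texts directly by shared word sequences; the function names, shingle size and threshold are all hypothetical assumptions of mine, not an existing tool.

    def shingles(text, n=8):
        """All n-word sequences in a text, lowercased."""
        words = text.lower().split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap(submission, published):
        """Fraction of the submission's word sequences found in a published text."""
        a, b = shingles(submission), shingles(published)
        return len(a & b) / len(a) if a else 0.0

    def looks_rewritten(submission, published_papers, threshold=0.3):
        """Flag a submission that shares many key passages with an earlier paper."""
        return any(overlap(submission, paper) >= threshold for paper in published_papers)

With an open mailing list archive, the published_papers input is simply the archive itself – the open process already produces everything such a check needs.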

Internal benefits for journals

In addition, there are enormous internal benefits for journals that would contribute to their increased organizational health and development:

1) Clearer structure and visibility of tasks and processes contributes to recognizing one’s most important workers – because more precise (broken down into defined and openly recorded smaller steps) and more transparent allocation of tasks and responsibilities exposes who does what and how, it rewards those who do more and better work – and in volunteer organizations (most editorial boards/collectives), recognizing contribution, and the lack of it, is one of the keys to the survival and improvement of the organization. Often, recognition falls to the wrong people, i.e. to those who have better social connections, who are in the more visible positions. And that, rightly, kills the spirit of the harder working, most important participants.

2) Increased focus on implementation work and continuously carried out processes – because defining workflow steps and stages exposes the necessary implementation work that has to be continuously carried out – it puts emphasis on an organization/group/collective as a set of ongoing processes. It also exposes other kinds of work as less important, and hence those who do them as less essential to the existence of the group/organization.

In practice: many loosely structured volunteer groups/organizations/collectives suffer from participants who talk and communicate a lot, often object a lot as well, but contribute little to the implementation work. Frequently, these types of participants hinder other key participants – on whose work the organization relies – from getting on with their tasks. Reducing the influence of these talk&communication intensive participants who don’t contribute much to the implementation work is highly positive for the survival, development and quality of work of the organization/group/collective.

In other words: structured open processes make it possible for an organization/collective/group not to be open and welcoming to any kind of participation, internally or externally, but to be selective instead. More of this kind of openness means more structure, more internalised working discipline, more commitment, and more ability to improve cooperation/collaboration with precision. In slightly more abstract terms, the more a whole is exposed and defined, and the more its workings/operations are known/visible, the more we can adjust it, reshuffle it, make it do what the participants in the whole want it to do. Open processes enable this – hence open-process in the name. Closed processes allow more corruption of organizational goals: the less we know about the processes, the components and their relations, the more individuals can utilise them for their own goals and benefits (in academia, careerism).

In Free Software terms, the long term freedoms to act and produce collectively do not come cheaply; they have to be defined, developed and defended. The key pre-requisite for the four Free Software freedoms (defined as ethical demands) to cooperate and share is universal free access to software source code. What is missing from the Free Software definition to give us an accurate picture of the collaborative model discussed here is what is visible in the IETF principles (see below).

In short, to explain the success of the Internet model, having source code isn’t sufficient. Another key component must be present: aimed (goals defined), quality focused volunteer cooperation in a specific organizational model with the following set of attributes – open participation (anyone can join) and open processes, competence, a volunteer core, the rough consensus and running code decision making principle, and defined responsibilities (protocol ownership, in the IETF case).

This is precisely why Open Access is not enough to implement a successful open volunteer collaboration on the trail of the Internet software-networking model. One needs a specific organizational model too. And using the Open Source paradigm (a movement that is a business friendly and declaratively ethics-free version of Free Software) is even more misleading, because of its emphasis on the source code alone. Open Source is the least useful model/concept of all to help us think this through, since it lacks both a defined ethics (which is what makes it possible in the first place to define, develop and defend one’s freedoms in Free Software) and a defined organizational model. What we need to explain this successful model is this formula: The Internet Model = Free Software + IETF. In other words: software + networking. Or even better: ethics + organization. Which is where we arrive at a set of incredibly intriguing political points that ought to be developed here, but i’ll leave that for another text. (A small technical note: email subscription to a specific blog category, one used exclusively for publishing longer in-depth texts, will be offered to readers who’d like to be informed when the next text in the Hacking The State series gets published on this blog.)

To the existing Internet model, i would add the following organizational attributes as highly beneficial: mapped components and relations (stages – recognizable, definable points in collaboration; working groups; their relations, their inter-processes), defined decision making, and defined participation and exclusion models. All of this is geared towards enabling and focusing on the contributions of those who carry out most of the implementation work – such work is the bloodstream of an organization; without its movement, organizations can not produce.

3) Easier project management – because increased task modularity and real-time status visibility (full status of a submission = stage + state, see below; anyone can at any time check the stage&state of any submission on the web system used) allow for better project management, easier allocation/delegation of tasks, and a more precise sense of progress and problems. Which is all good for the general work spirit and for time/resource assessments, and keeps authors who submit papers, and all other parties involved, correctly informed at all times about the stage&state of the submission.

4) Decision making in the hands of the people who matter most – because who does what and how becomes visible, and because those who continuously carry out implementation work matter most to the organization, decision making can be more in their hands.

For example, the Marxists Internet Archive (MIA) addresses this by defining a volunteer, and hence defining decision makers, through work contributions: “MIA volunteers are people who have, in the most recent six-month period, made at least three separate contributions over a period of three weeks to six months”.

In the Open Organizations project, we defined this similarly: “Anyone who is doing implementation work in the group, or has done such work in the recent past (e.g. within the past two months), can participate in its decision-making.” (A sketch of such a rule in code appears after this list.)

5) Attract new volunteers and reduce the impact of existing counter-productive internal participants – utilizing the above task/process openness and visibility, journal editorial boards could use decision making rules similar to MIA’s to attract volunteers. By linking decision making rights to defined implementation work, it would be recognized that certain types of work that external participants could do matter more than the mere presence of existing internal talk&communication intensive participants. To reduce risk, only certain decision making rights would be given to new participants to start with, until the existing board is assured they are fit to carry out the journal’s long term goals and strategies.

This opens up the organization to new participants who would from the beginning adopt the culture (habits) of doing the implementation work, and it reduces the detrimental influence of, and eventually leads to the exclusion of, existing internal talk&communication intensive participants. Which (exclusion habits and processes) is also a positive culture to develop.
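On the decision-making rules quoted in point 4 above: to show how mechanical such rules can become once contributions are openly recorded, here is a minimal Python sketch of the MIA criterion. The function name, the 183-day window and the exact date arithmetic are my illustrative assumptions, not code from MIA or Open Organizations.

    from datetime import date, timedelta

    def can_vote(contribution_dates, today, window_days=183, minimum=3, spread_days=21):
        """MIA-style rule: at least `minimum` separate contributions in the most
        recent six-month window, spread over at least three weeks."""
        recent = sorted(d for d in contribution_dates
                        if today - d <= timedelta(days=window_days))
        if len(recent) < minimum:
            return False
        # A one-off burst doesn't count: contributions must span three weeks or more.
        return (recent[-1] - recent[0]).days >= spread_days

    # Three contributions spread over two months within the window: eligible.
    log = [date(2009, 6, 1), date(2009, 7, 10), date(2009, 8, 1)]
    print(can_vote(log, today=date(2009, 8, 15)))  # True

Once the work log is open, decision-making rights stop being a matter of social connections and become a checkable property of recorded contributions.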

Existing software, like Open Journal Systems (OJS), could be extended to enable this process. An option for privacy, with reasons stated, could be added to the open-process workflow.

Modular process: stages and states

To summarise, these open processes would amount to the following being open: the initial draft, editorial collective/individual comments, peer reviews, further peer comments, author comments back to the reviewers, all subsequent drafts, and the final published/rejected text.

One objection is that authors would want only their final version used and quoted, or at least to have the final version clearly recognised and marked as final. A way to both increase the chances of that, and to modularise and define the work so as to create the conditions for the above open processes and their benefits, would be to introduce the concept of submission stage&state, using the software web tools at our disposal to implement it. So that it is clear that when a submission comes in (into an openly visible web queue – imagine it like an RSS feed in the sidebar of a website), it is at the stage First Draft. As the paper moves through the stages of the publishing process, its full status (stage + state) changes accordingly. This defines our publishing workflow.

In the First Draft – Editorial Review stage, a submission would have an editorial board review either in process (state = awaiting) or written (state = received); the next stage would be First Draft – Peer Review. The awaiting and received states of each stage can be an important functional addition, so that involved parties can be notified when the state of a stage changes. For example, when the editorial board sends the paper out for peer reviews, the full status could read First Draft – Peer Review (awaiting); when the reviews come back, the full status could change to First Draft – Peer Review (received). To clarify:

  • A stage is a defined step in the process.
  • Each stage can be in one of the pre-defined states.
  • Full status is stage+state – it tells us where the submitted paper is in the process and what’s currently going on with it, i.e. whose turn it is to act on it.

If the editor in charge of the paper’s peer reviewing process decides that a new revision is required, the status could be changed to Second Draft (awaiting). Changing the stage and state of a submission could be an action as simple as the editor changing drop down menus with the available stages and their possible states. The web system would automatically do the required actions (Open Journal Systems does this already within its defined workflow). For example, when the editor changes the stage of a submission to Second Draft (awaiting), it would send the peer reviews received for the first draft and a note to the author (email, CC the editor), and perhaps update a RecentChanges web page which would, like on wikis, note each stage and state change of all the papers/submissions currently in the process. When a new draft incorporating the peer reviews is received (web submission by the author), the full status automatically changes to Second Draft (received); and so on, until we get to the Published or Rejected status – or some more fine grained final outcome full status.
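To make the workflow concrete, here is a minimal Python sketch of the stage&state machine described above. The stage names follow the examples in this section; the class, the method names and the notification hook are hypothetical illustrations of mine, not features of Open Journal Systems.

    from enum import Enum

    class Stage(Enum):
        EDITORIAL_REVIEW = "First Draft - Editorial Review"
        PEER_REVIEW = "First Draft - Peer Review"
        SECOND_DRAFT = "Second Draft"
        PUBLISHED = "Published"
        REJECTED = "Rejected"

    class State(Enum):
        AWAITING = "awaiting"
        RECEIVED = "received"

    class Submission:
        def __init__(self, title, author_email):
            self.title = title
            self.author_email = author_email
            self.stage = Stage.EDITORIAL_REVIEW
            self.state = State.AWAITING
            self.history = []  # openly visible log of every status change

        def full_status(self):
            # Full status = stage + state, e.g. "First Draft - Peer Review (awaiting)"
            return f"{self.stage.value} ({self.state.value})"

        def set_status(self, stage, state):
            """The editor's drop-down action: change stage/state, log it, notify."""
            self.stage, self.state = stage, state
            self.history.append(self.full_status())
            self.notify()

        def notify(self):
            # Placeholder for emailing the author (CC the editor) and updating
            # a wiki-style RecentChanges page.
            print(f"[RecentChanges] {self.title}: {self.full_status()}")

    paper = Submission("Open-process publishing", "author@example.org")
    paper.set_status(Stage.PEER_REVIEW, State.AWAITING)   # sent out for peer review
    paper.set_status(Stage.PEER_REVIEW, State.RECEIVED)   # reviews came back
    paper.set_status(Stage.SECOND_DRAFT, State.AWAITING)  # revision requested

Anything beyond this – the web queue, the RSS feed, the email notifications – is plumbing that existing blog and wiki platforms already provide.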

I haven’t used proprietary software for web based journals, but i’m quite certain that something like this already exists. However, although existing systems for managing the academic publishing process were not designed to enable open, volunteer driven collaboration, we can still, and should, learn from them. The picture i presented here is a highly developed system. We don’t need to wait to get to that point.

Existing tools, as simple (or as complex, with thousands of plugins and themes) as this blog, and freely available wikis and CMS systems (Drupal), can be customised well enough for us to start working with these open-process collaborative practices, with a significant degree of labour saving automation, now. Many of the web systems we could start using now to implement a simplified version of this proposal, including various wikis, WordPress and Drupal, are available in hosting packages that allow quite amazing levels of fine grained point-and-click installation, backup and administration (compared to what was available only a few years ago) for less than a few hundred pounds/dollars/euros per year (including all the Internet bandwidth an average journal might need). It is the human element – seeing the potential benefits, seeing them as larger than both the risks associated with those changes and the risk of remaining in the current closed mode, and changing the habits of editorial boards – that is the biggest obstacle. Finding the right web based technology is far less of a problem.

What if software were developed through closed models?

If the currently existing closed academic publishing processes had been used instead of the open-process collaboration at the core of Free Software and Open Source production, it is very unlikely that we would have ended up with the software that runs this blog, with its 6000+ available plugins which can be installed with a click in the web interface (no technical knowledge needed); nor would we have ended up with open protocols (developed through open collaborative processes, with the final results open too – see email standards) and the networks that enabled the standardised networking we know as the Internet today.

Here’s how closed collaboration could have looked without the Internet Engineering Task Force, Free Software and Open Source production:

  • most likely, the Internet in today’s form would not exist. Instead, we would have closed, commercial (pay to view), competing networks, where exchange between the networks would in many cases be impossible, and/or expensive and unaffordable to many (i vaguely remember a good text on this possible alternative outcome, but can’t recall it)
  • mailing lists as central hubs where work on software, networks and protocols is debated would not exist
  • IRC/online chat channels devoted to those projects would not exist
  • all communication on patches prior to submission – problems, improvements, priorities, suggestions, ideas – would be strictly between the source maintainers and the new contributor, and not in any way open or visible to other contributors, nor to the public
  • comments in the source code (part of the cooperation in engineering, and often of the submission process too) would not exist, or would be invisible
  • there would be no blogs or news stories commenting on discussions that happen on mailing lists; we would rely on PR from companies
  • most likely, no blogs or wikis would have been invented in the first place, or they would have remained minor, undeveloped software niches
  • only the final executable software would be available; perhaps, in some cases, the final source would be available too.

For those who have participated in open software/networking collaboration, and/or those who know how it works: isn’t this closed version quite a depressing picture? And one that would never have given us the software, nor the Internet, as we know them today?

If you haven’t had the privilege of participating in this, and if you have any doubts about how exactly the Internet was built and what i am referring to, here are the working principles of the Internet Engineering Task Force (IETF),

a large open international community of network designers, operators, vendors, and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet. It is open to any interested individual. The IETF Mission Statement is documented in RFC 3935.

which operates on these principles, worth quoting in full (WG = working group):

  • Open process – any interested person can participate in the work, know what is being decided, and make his or her voice heard on the issue. Part of this principle is our commitment to making our documents, our WG mailing lists, our attendance lists, and our meeting minutes publicly available on the Internet.
  • Technical competence – the issues on which the IETF produces its documents are issues where the IETF has the competence needed to speak to them, and that the IETF is willing to listen to technically competent input from any source. Technical competence also means that we expect IETF output to be designed to sound network engineering principles – this is also often referred to as “engineering quality”.
  • Volunteer Core – our participants and our leadership are people who come to the IETF because they want to do work that furthers the IETF’s mission of “making the Internet work better”.
  • Rough consensus and running code – We make standards based on the combined engineering judgment of our participants and our real-world experience in implementing and deploying our specifications.
  • Protocol ownership – when the IETF takes ownership of a protocol or function, it accepts the responsibility for all aspects of the protocol, even though some aspects may rarely or never be seen on the Internet. Conversely, when the IETF is not responsible for a protocol or function, it does not attempt to exert control over it, even though it may at times touch or affect the Internet.

If you are an academic or a student, think of open-process knowledge publishing/production proposals, and of this particular proposal, this way: if the above IETF and free/open software principles had not existed, we would not have had the Internet, nor this blog, nor most of the tools you use in daily life/work to communicate and cooperate/collaborate. Would you prefer such a state of the world? And if you wouldn’t, why not implement similar open processes in academia? If we’re to judge open collaboration models by the results in software and networking protocols, we are missing a lot by staying closed. I’m not blind to the political consequences of this proposal, quite the contrary, but discussing them would take too long – i’ll leave that for another post.

Have no doubt, i see (clearly enough to keep working on showing its plausibility) a future of volunteer driven, open-process, direct-and-participatory democratic state-forms along the lines of this proposal, but that, too, is a matter for a different text. Changing closed academic publishing, bringing it to where it could be in this age of volunteer driven open-process cooperation, the age of the Internet Model production, is enough for one blog post.

As for the many people who have been saying, during the last decade, on many mailing lists, blogs and panels, things similar to what i’m saying here (in a less structured and developed form, but the spirit of the Internet Model is there), it would be useful to have it all listed in one place, to see what kind of negative answers they were given at the time, and to address those objections that – from the standpoint of a desire for open-process academic publishing and peer reviewing – make sense (i’ll try to catalog some of it on a wiki page here).

Overall, i have one thing to say to those who share my views on this topic: one of the most important reasons why the IETF and Free Software spread and succeeded is that the results were immediately (or soon enough for people to notice, value them, and join the work) visible and operational (working examples). The same principle can not be applied to theory, at least not in social sciences and humanities – we can’t see theory implemented and working quickly. However, we can make the processes of open cooperation immediately operational and visible. In other words, to make it happen, we can do it ourselves, now. If enough of us do it, and do it well, closed journals, books, cooperation processes, and closed access knowledge (both production processes and final products) in general, will become history.

A Simple Transition: the Linux kernel development process

The above elaboration is perhaps too complex to be implemented straight away, to be the next step in the move from a closed access journal to an open-process one. Ideally, we need a simple transition model: a model that requires a minimum of both additional labour and capital investment at the beginning (most editorial boards are volunteers already stretched to their limits), and that will scale, if required, at a later stage. As Benjamin Geer correctly suggested (and wrote, in comments, almost this entire section – see the comments below the text), the Linux kernel development process is one such model. It is well tested, as it has been working well in software for over a decade.

Here’s how such a model would work: the editor gathers together a group of scholars who have the time and interest to do peer review; Linus Torvalds, the main author of the Linux kernel, calls them his “lieutenants”. There’s an open mailing list, and a web site that says: “If you want to publish an article in this journal, you must propose your idea on the mailing list before you write the article.”

People show up on the mailing list and say things like, “I’m thinking of writing an article explaining X, etc., etc.” The lieutenants (and the other subscribers) say, “That won’t work unless you deal with Y somehow. Also, you’ve assumed that X=Q, which is doubtful. Go and read Z and think about it some more.”  Thus they prevent submissions that are based on ignorance, well-known fallacies, etc. And they do this much more quickly than traditional peer review, because they don’t have to read an 8,000-word article to find out that there’s a serious problem: they can find and fix bugs at the design stage rather than the implementation stage. As everyone knows, it’s much cheaper and quicker to fix bugs at the design stage.

After the initial discussion, the authors then go away and produce rough drafts, which can be incomplete, or even just outlines with implementation notes (data to be gathered later, etc.). The author posts the draft back to the mailing list. Then people on the list say, “OK, that looks better, but you need to make sure you deal with A’s argument, and get data on B, etc.” Thus, by the time an author submits an actual article, the editor and the peer reviewers already have a pretty good idea of what’s in it. The author also has a pretty good idea of how receptive the reviewers are to the article, and thus how likely it is to get published. This helps everyone avoid wasting time on submissions that have no chance of being accepted, and yet, most importantly, the quality control role of the peer reviewing process is maintained.

The lieutenants don’t have to do all the reviewing themselves, because authors comment on each other’s works in progress on the list. It’s in their interest to do so, because the tougher they are on each other, the less likely it is that flawed articles will slip through the process, and the better the journal’s reputation will become, thus making it a more prestigious place to get published. This means less work for the lieutenants. It also means the development of a community of peer reviewers whose interest becomes to increase the reputation of the journal in which they publish.

Instead of publishing issues on a regular basis, the journal can publish each article electronically whenever it’s ready. Articles get published when the community consensus is that they’re good enough to publish. At any given time, if there are no finished articles, the journal doesn’t have to publish anything; thus there is no pressure to lower standards or to rush the process in order to meet a deadline.

A print issue can be treated as a “Best of”, or a special/themed issue, containing only a selection of what has been published on-line. This process would make a journal a lively place of activity, with authors always kept up to date about what is going on with their submissions, and with the possibility for any journal reader to get engaged, on a volunteer basis, through this open process.

Over time, the editor should become more of a coordinator, like Linus, whose role is mainly to establish a general editorial line (e.g. it’s a political journal, and not one on culture, yet papers and issues on culture are welcome if done from angles productive for political debates and issues) and to arbitrate between the lieutenants when they disagree.

All that is needed for this process to start being used is an open mailing list. For early stage ideas, authors can write emails directly to the mailing list – reviews can be done as replies. At a later stage, authors can email Word or Open Office documents, and reviewers can use commenting features – a system everyone is familiar with.

There is a readily available, slightly more advanced option: a WordPress (the software running this blog, freely available and incredibly easy to install and use) extension called CommentPress – online commenting on texts written in blog pages, where comments appear alongside the paragraph being commented on. There are plenty of examples on their website; check The Iraq Study Group Report with comments. The fanciest customised interface for this extension is the one used for McKenzie Wark’s 2007 Gamer Theory (Harvard University Press) book. Pages are shown like a deck of cards, there are arrows underneath for next/previous navigation, and on the right hand side is a scrolling box with comments.

To make it all simple to start with, all that is needed is an open, archived, easy to backup mailing list. Other parts of the open process can be improved later. One thing i discovered lately is how powerful blogging platforms have become. For the needs of most academics without special figures in their texts (maths, physics, chemistry, etc), blogging software like this WordPress seems to me miles ahead of Microsoft Word or Open Office as a convenient, yet incredibly richly and easily (point and click, no technical knowledge required) extensible working platform. I say platform deliberately – not software, nor blog, nor website – because it provides multiple functions in one, and the collection of them together in an easy to use place produces a result with an impact far greater than what one would get from the several separate pieces of software required to perform what advanced platforms like WordPress do. But the details of that are also best left for another text. It is enough to remember that WordPress would be, in my opinion, a brilliant extension of the mailing list, and is free to set up and point-and-click to backup and restore.

Open-process peer reviewing and citing early drafts

One of the problems with a process as open as we’re suggesting here is that although authors might like the more extensive peer reviewing that is likely to happen on an open mailing list, most of them would likely not want their work cited, or used anywhere, before the final version accepted by the journal is ready. It would be extremely difficult, if not impossible, to prevent that with technical solutions. Yet there’s a cultural safeguard from Linux kernel development that we can reuse.

If a Linux kernel is released with a serious bug, people get annoyed, and the author of the offending code might be publicly embarrassed. But if you post buggy code on the Linux kernel mailing list and someone notices, the worst thing that will happen to you is that you’ll have to fix it. Why? Because everyone knows that it’s not safe to download source code from mailing lists and expect it to work properly. This is a cultural thing: it’s accepted that free-software mailing lists are for hashing out ideas, not for finished work. Everything about them screams ‘Danger: Construction Work’.

Therefore, we think that peer review could be open if it had the right cultural safeguards. There would have to be some principle like ‘respect for peer review’, meaning that citing journal-mailing-list messages and preliminary drafts in academic articles (or newspaper articles!) would be considered a huge taboo. Academic ethics would have to include the idea that you can criticise your opponents’ preliminary drafts as much as you want, but only on the journal mailing list. If you want to criticise them anywhere else, you have to wait until the final version is published. In this case, we believe, authors could be made comfortable proposing preliminary ideas and subsequent drafts on a mailing list, without having to fear being attacked while in the middle of writing.

Final Words

Finally, there are multiple risks, drawbacks, additional labour investments, transition plans, and other reasonably raised issues to be addressed in order for this proposal to make sense to the editorial boards and editors who will be deciding whether or not to accept elements of open-process academic publishing and peer reviewing. I’ll write on those in a separate post. Also, consider this a rough first draft. I’ll keep revising it, probably on its own wiki page on this blog.

As to probable objections that this proposal is speculation with no empirical side to it: thanks to the good work in social sciences and humanities, it is widely accepted today (widely enough for me) that empiricism and idealist speculation are both dead concepts. However, not only am i happy to speculate to an extent – based on my subjectively objective reading of reality (the position that matters most, since there are no neutral objective positions), which has little in common with an empiricist one – i believe it is necessary to do so. Only practice can prove our speculations right, or wrong. And even when we are proven wrong, i’m perfectly happy to live and die by Beckett’s “Try again. Fail again. Fail better.”

In many ways, my work on open-process collaboration in academia is a “try again” of a project that could easily be seen as quite a failed one – Open Organizations. I don’t care about those assessments either. I’m happy to keep trying, and keep failing, if necessary. The worst possible scenario, and the only one i fear, is not to try. Failure is fine. Especially when performed in the open.

This is an early version of the text. The latest version is here.


 


29 comments to Open-process academic publishing

  • benjamingeer

    As I was falling asleep last night (not having read your text above yet), I was imagining this:

    I’m imagining a journal that uses the Linux kernel development process. The editor has gathered together a group of scholars who have the time and interest to do peer review; they’re like Linus’s “lieutenants”. There’s an open mailing list, and a web site that says: “If you want to publish an article in this journal, you must propose your idea on the mailing list before you write the article.”

    People show up on the mailing list and say things like, “I’m thinking of writing an article explaining X, etc., etc.” The lieutenants (and the other subscribers) say, “That won’t work unless you deal with Y somehow. Also, you’ve assumed that X=Q, which is doubtful. Go and read Z and think about it some more.” Thus they prevent submissions that are based on ignorance, well-known fallacies, etc. And they do this much more quickly than traditional peer review, because they don’t have to read an 8,000-word article to find out that there’s a serious problem: they can find and fix bugs at the design stage rather than the implementation stage. As everyone knows, it’s much cheaper and quicker to fix bugs at the design stage.

    After the initial discussion, the authors then go away and produce rough drafts, which can be incomplete, or even just outlines with implementation notes (data to be gathered later, etc.). Then people on the list say, “OK, that looks better, but you need to make sure you deal with A’s argument, and get data on B, etc.”

    Thus by the time an author submits an actual article, the editor and the peer reviewers already have a pretty good idea of what’s in it. The author also has a pretty good idea of how receptive the reviewers are to the article, and thus how likely it is to get published. This helps everyone avoid wasting time on submissions that have no chance of being accepted.

    The lieutenants don’t have to do all the reviewing themselves, because authors comment on each other’s works in progress on the list. It’s in their interest to do so, because the tougher they are on each other, the less likely it is that flawed articles will slip through the process, and the better the journal’s reputation will become, thus making it a more prestigious place to get published. This means less work for the lieutenants.

    Instead of publishing issues on a regular basis, the journal can publish each article whenever it’s ready. Articles get published when the community consensus is that they’re good enough to publish. At any given time, if there are no finished articles, the journal doesn’t have to publish anything; thus there is no pressure to lower standards or to rush the process in order to meet a deadline.

    Over time, the editor should become more of a coordinator, like Linus, whose role is mainly to establish a general editorial line (e.g. it’s a monolithic kernel, not a modular one, so arguments in favour of a modular kernel are not welcome) and to arbitrate between the lieutenants when they disagree.

    I suggest we think about this model and see if there are any problems with it, before inventing a new model from scratch.

    • toniprug

      Hi Ben, i hope you don’t mind that i integrated the comments you made here into a couple of sections and added you to the metadata of the text as a contributor. This text badly needed a transition model, something that can be implemented now, without much fuss. That’s the key. And you provided it with the Linux kernel suggestion. In addition to your suggestion, WordPress could be a brilliant writing/commenting/reviewing platform for those who’re ready to try something other than Microsoft Word or Open Office. It’s also free to get on WordPress, and very cheaply available on self-hosting plans.

      Also, it became clear to me from our discussion that you’re right in insisting that citing early drafts will be a big problem for most authors, and by discussing it here we also came to a conclusion, again arriving at your comparison with the Linux kernel mailing list culture as a safeguard – that’s a great, simple solution too, so i included it as well.

      One thing the text still perhaps lacks is addressing some of Stevan’s objections to peer review changes on his OA list. Most of his strong views come from a fundamental misunderstanding of what open-process software development is and how it differs from academia. Given the prominence of his work in the OA movement, it might be a good idea to show why his views are mistaken. I’m not much worried about it, but i would assume that the objections he is raising will be raised by others as well. What i’m more tempted to do is post a call for collaboration to his list – to ask all those dozens of people who asked, who emailed proposals for peer review changes, people who were all told they were wrong by Stevan and a few others, to post those views/ideas/critiques here, addressing how we could do peer review well through open processes. We might get some interesting dissenting ideas from there – ideas that could not be discussed much, and were discussed on a mistaken basis, on the OA list.

    • toniprug

      Forgot to mention that i’d like to work a bit further on this piece – not too much, just to reference it better with relevant academic works in this field – and submit it to a journal (i have a few papers in mind i’d like to look at for possible links between the open process idea and academic knowledge production). Is submitting it for publishing ok with you, given the significant inclusion of your comments? And do you think the text is still missing some important aspects, something that has to be covered before it makes sense to submit it? I was thinking of First Monday; they’re an Internet-only journal with Creative Commons licencing for everything they publish.

  • toniprug

    I can imagine this model working well for new journals, but i doubt it would make sense to the existing ones.

    I like the design stage idea. It also goes along with the idea of writing software in small loops, getting feedback frequently (like Extreme Programming does).

    It does work like that in some cases in academic journals. I recently contacted a journal, saying that some of my work could fit with what they’re publishing, but i wasn’t sure what exactly. So i was asked by one of the editors to submit a 500 word proposal, which will be discussed amongst the editors, and we’ll take it from there. It’s very close to what you’re saying, just not in the open. I would prefer it to be in the open though, for all the reasons i state in the text above.

    Your concerns about having only the final, published version visible and cited are not addressed by this. Did you change your mind? Would you be comfortable publishing in a Linus-model journal? That would make all your drafts visible online, which means that anyone could cite them. I think that’s great.

    Why not “release early, release often” = publish early, publish often? How can you lose anything by an earlier version, not as good as the later one, getting cited? My thinking on this is that if i liked your early draft and cited you, i would be delighted to find out that your published version is much improved. It would motivate me to go back to my piece and improve it based on your improvements. Think of it as a library/class in a large software project. You first link to (or use a class from) an early version. When a new version gets released, you get all the benefits of the improvements. But it is the early linking with smaller libraries that enables larger, more powerful software to be written. How would software develop without use/require statements? And isn’t writing papers also participation in a highly modular process, where we move in small steps, but we all include each other’s steps (papers)? In the same sense, enabling others to read your early version means enabling them to write better papers. Imagine if the authors whose work you’re using to build up your arguments were publishing their latest early drafts (if you work with the works of living authors) – i imagine you would appreciate early access to their latest work, and that it would make your paper better. If that’s the case, then you should do the same: publish early, publish often.

    Also, when i mentioned the “Given enough eyeballs, all bugs are shallow” principle, i was told on a couple of occasions by friends and colleagues that there aren’t that many eyeballs in academia, i.e. not many people interested. I think such an argument seems truthful only because of the current state of academic production processes.

    If we look at academic blogs, they’re incredibly lively places of debate. I also get the strong sense that it’s on these blogs, through individual action and resources, that academics, especially those starting their careers now (or in the early stages of their careers), test their ideas – although very few (like http://roughtheory.org, an entire PhD written on the blog) do it extensively and explicitly state that. In that light, i see academic blogs as an early stage of the opening up of knowledge production processes, one that already embraces both of the above (release early + bugs shallow) software collaborative principles. Transferring those practices into journals seems a logical next step to me. It is most likely already the case that there are academics used to testing and debating ideas for papers in the open on their blogs while simultaneously working in still closed publishing workflows. Which makes the transition to open-process publishing less disruptive.

    Although, from my experience in the corporate sector, disruption can be an excellent method for enabling invention in production. Why not reuse/hack it for the production of knowledge?

    Disrupting journals as they are can’t in any way be a bad thing, but it seems i’m quite isolated in the view that the quality of knowledge produced in social sciences and humanities is terrible – which is the point from which i desire, propose and hope for radical change towards the Internet model, definitely a disruptive one for the current state of low-quality closed publishing.

  • benjamingeer

    Suppose there’s a huge mistake in one of my early drafts, something utterly illogical, nonsensical or badly misinformed, and it gets caught and fixed during peer review. Suppose someone reads this early draft, and immediately cites it and criticises it in their own article, thus making me look like an idiot. I then have to publish a response to their article to defend myself, pointing out that the mistake was corrected in the final draft. Meanwhile the article that criticised me has been cited all over the Internet, the whole world thinks I’m an idiot, and my reputation is permanently damaged; my attempts to defend myself can never catch up. (Especially if the mainstream media that cited the article criticising me don’t cite my self-defence, e.g. because my argument threatens their interests somehow.)

    Or imagine a more subtle problem: in an early draft, I make a spectacular argument that seems plausible but has a subtle flaw. I get cited all over the Internet for making this breathtaking claim. Soon, lots of people notice the flaw, and thus conclude that the argument is completely wrong. Meanwhile, on the peer-review mailing list, we’ve realised that the argument can be modified slightly in order to be made valid. I publish the modified version of the article, but now it’s very difficult to get people to pay attention to it, because most people are convinced that my argument has been discredited.

    To put it simply, if people can cite early drafts, that means that in effect, there’s no difference between early drafts and published articles. It would make the review process appear irrelevant.

    In order to prevent problems like this, I’d be happy to participate in a Linus-style journal, as long as there were safeguards in place to keep the early drafts from being cited outside the peer-review mailing list. Maybe it would be enough just to put an automatic message at the top of each draft: “This is an early draft and is probably completely wrong. Do not cite it anywhere.”

  • toniprug

    Lots of papers that i read as PDFs from conferences contain such a notice; i’ve used it myself once. So, no big deal there. You could easily participate in an open-process model.

    Your worst-case scenario is an example of an attack/defend mode of thinking; i don’t see things that way – when i see a terrible work, with such gigantic problems, i ignore it. So, no, i don’t think those kinds of attacks happen a lot, at least not in the books/papers that i read. Most of the stuff i read is people using work they can build on, not trying to construct their central argument by attacking.

    As to your other, subtler, problem, it seems like a more plausible case, but not in the material/areas i work on. Social sciences and humanities are not mathematics. A good thinker cannot be dismissed because of a subtle but important problem. Quite the contrary, my experience is that brilliant thinkers do make precisely the sort of subtle mistakes you describe, and they do it all the time – but those mistaken points are never so structurally important that disputing them, exposing them as incorrect, can bring the rest of the work down. In other words, i don’t think that theories in the social sciences and humanities hinge on any single, small point/argument in a larger structure. In those disciplines, structures do not collapse if one small bit falls. Or, at least, this definitely does not apply to anything i read that i can think of right now, which is primarily philosophy and political sociology.

    Even in the case of hard-core philosophy, where those structural key points are more likely to be found, it still doesn’t bring down the entire work of a philosopher, since good and interesting thinkers these days usually work in wide areas, writing on a variety of subjects.

  • benjamingeer

    You seem to assume that people really bother to understand a theory, in order to judge its strengths and weaknesses, before deciding whether to use it or dismiss it. That’s not my experience at all. Here’s an example. I use Bourdieu’s theory a lot, and my impression is that many academics who use it, as well as many who dismiss it entirely, have only a very superficial understanding (or misunderstanding) of it. People who use it often reduce it to a theory of ‘cultural capital’, and ignore the rest (without which the idea of cultural capital is meaningless). Those who dismiss it have all sorts of misconceptions, which appear regularly in books and peer-reviewed journals. ‘Bourdieu’s theory is deterministic… He’s a Marxist… He reduces everything to a conflict between bourgeois and bohemians… He never mentions X or defines what he means by Y…’ All of it false. (Note: Bourdieu was very prolific and wrote about a wide variety of subjects, so there’s a high probability of being wrong when saying that he ‘never mentioned’ some particular thing.)

    Academics often seem to form a first impression about a particular theory or thinker very quickly, e.g. by skimming a textbook written by someone who is hostile to the theory in question. Once this first impression is formed, they often seem to be very reluctant to change it later on, even when confronted with evidence that contradicts their impression. This is particularly the case with Bourdieu’s theory, which tends to arouse the hostility of academics because it challenges a lot of what many of them believe about themselves.

    And all these misconceptions manage to survive even though people have only been exposed to final published versions of Bourdieu’s works. If they were exposed to half-baked early drafts, it would be that much easier for them to dismiss unfamiliar ideas without giving them serious consideration.

    Your worst-case scenario is an example of an attack/defend mode of thinking

    Academia is a field of struggle. Academics do try to discredit each other. I’ve seen back-and-forth attacks between rival academics, oozing hostility, published over several issues of a peer-reviewed journal: ‘Response to X… Response to Y’s Response… Response to X’s Response to My Response to His Response…’ etc. I’ve heard stories of academics who seethe with hatred when one of their rivals gets up to speak at a conference. There’s even a book that accuses Bourdieu of ‘sociological terrorism’. (When someone uses the word ‘terrorism’ to refer to academic research, you know there’s real hatred involved.)

    I don’t want to attack people; I’d rather just talk about what I’m doing. But often, in order to challenge an orthodoxy, you have to challenge the people who uphold it. Moreover, I’m doing research on controversial topics, using controversial theories, so I’d be foolish to think I won’t be attacked. Academia is all about recognition and credibility, and people can gain recognition by attacking others. The great thing about peer review currently is that it’s a way to get useful critiques without paying a high price for your mistakes. It’s a safe area where you can screw up without giving your opponents a chance to damage your career. If peer review is going to be open, I think it’ll need some way of preserving that safety.

    • toniprug

      Ok, this makes a lot of sense:

      The great thing about peer review currently is that it’s a way to get useful critiques without paying a high price for your mistakes. It’s a safe area where you can screw up without giving your opponents a chance to damage your career.

      And it sounds like an extension of the way supervision works. My sessions with supervisors are precisely a chance to learn through mistakes without worrying too much about being mistaken, while having a certain extra freedom to experiment and test ideas in their early stages. Still, i could be the odd one out, but i’m not sure that i would mind doing my supervision in the open, on a blog (writing drafts + getting comments that way), for example. As to ‘Response to X and Y’, yes, i’ve seen those too. I don’t remember once reading anything useful in these. The antagonistic model doesn’t work for me; i don’t find it productive, not just in my own work, but with others too. This isn’t to say that one shouldn’t have antagonistic positions/ideas in one’s work – quite the contrary, they are necessary to move forward. But there are other ways to develop one’s work. I just hope to avoid those ‘Response to X and Y’ situations.

      Overall, yes, i now see clearly why you need to be free from attacks during the writing. I might come round to your views one day. But at the moment, my optimism for working in the open is a combination of arguments coming from other fields (RFCs, FS, the Internet Model) and a gut instinct that this is how i should proceed with knowledge production in academic work too. As to the ‘empirical evidence’ on how things operate right now in academia, i admit, your arguments for a healthy dose of closed peer reviewing are strong. I still can’t think that way; i don’t have the feel for your logic, although i can rationally understand it. In those situations, and i’ve learned this lesson in other situations in life, intuition, gut instinct, the ‘feel for things’ is the one to follow. Which at first might seem quite an unscientific way to proceed about something that’s meant to be science. But that’s what my ‘empirical evidence’ of all kinds of life situations suggests: when unsure and in conflict about arguments-VS-intuition, follow the latter.

  • benjamingeer

    If a Linux kernel is released with a serious bug, people get annoyed, and the author of the offending code might be publicly embarrassed. But if you post buggy code on the Linux kernel mailing list and someone notices, the worst thing that will happen to you is that you’ll have to fix it. Why? Because everyone knows that it’s not safe to download source code from mailing lists and expect it to work properly. This is a cultural thing: it’s accepted that free-software mailing lists are for hashing out ideas, not for finished work. Everything about them screams ‘Danger: Construction Work’.

    So I think peer review could be open if it had the right cultural safeguards. There would have to be some principle like ‘respect for peer review’, which meant that citing journal-mailing-list messages and preliminary drafts in academic articles (or newspaper articles!) would be considered a huge taboo. Academic ethics would have to include the idea that you can criticise your opponents’ preliminary drafts as much as you want, but only on the journal-mailing-list. If you want to criticise them anywhere else, you have to wait until the final version is published. In that case, I’d feel fine about proposing my rubbish preliminary ideas on a mailing list.

  • benjamingeer

    Of course, some people might hold back their criticisms until the final version is published, just to be able to embarrass their opponents in public. But at least this would be no worse than what happens now.

  • benjamingeer

    I thought of a possible problem. Inevitably, if people are announcing preliminary ideas on a mailing list, the mailing list archives will be used to settle the question of who thought of a particular idea first. I’m not sure it would be possible to keep that sort of controversy from spreading outside the list.

  • toniprug

    I think you got it right: i can see cultural safeguards allowing both the advantages of open peer review and protection from use until the author feels the work is ready. That’s the kind of model i would be happy with too.

    Still, what i’m thinking of, in order to make early criticism easier, to make it less painful to open up early, is to start using TODO notes, not in a separate file, but straight in the text. Here’s an example of real use in one of my current drafts: [TODO: this processes generalisation is terribly weak]. It indicates the need for further work and invites free-for-all (well, almost) critique. It’s a sign that says: if you’d like to get involved in this project, hack here. I’m also thinking about including a ChangeLog, so that a reader faced with a text that has several releases can see the main changes between releases without having to read the full diff. Although, WordPress’s coloured diffs are a pleasant surprise.
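
    To show what i mean, here is a quick toy sketch (my own, assuming plain-text drafts and the [TODO: ...] convention above; the filename is just an example) that pulls the notes out so would-be collaborators can see where to hack:

      # todo_notes.py -- toy sketch: list inline [TODO: ...] notes from a draft
      import re
      import sys

      TODO = re.compile(r"\[TODO:\s*(.*?)\]", re.DOTALL)

      def todos(path):
          # Return the text of every [TODO: ...] note found in the file.
          with open(path, encoding="utf-8") as f:
              return TODO.findall(f.read())

      if __name__ == "__main__":
          # Usage: python todo_notes.py draft-v3.txt
          for path in sys.argv[1:]:
              for note in todos(path):
                  print(path, "- hack here:", note)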

    • Rob Friedman

      Toni and Ben, I just came across this extensive exchange between you, and it reinforces some of the basic premises that Brian and I have been trying to give voice to for a while.

    In about two weeks First Monday will publish Part 2 of the article Toni referenced early in the original post, and I’d be very curious to see your reactions to the implementation suggestions we offer, given that they’re right in line, Toni, with your Open Process plan, as well as with where you and Ben ended up regarding open and closed critiques.

    One thing that got jumbled by even our best peer reviewers (ones we didn’t know, ones we did, and one who chose to reveal himself) is the distinction between Open Access and what you’re calling Open Process. For us, OA is a business model, a production and consumption mechanism that has good and bad effects depending on what role you’re playing and how wealthy your institution is, whereas OP is a knowledge-sharing and collaboration mechanism that, through social networking tools and behaviors, can come to redefine how researchers and their networks behave. It’s a community of thinkers we want to engage and focus on, as ideas are born free and should stay that way (even Jefferson thought so).

    We’re not as interested in promoting or dismantling the journal industry, even the OA side (imagine the size of the shot we’d need in our sling), as we are in promoting at least an attempt at many of the OP actions you’re describing here, which I hope you refine along the way.

      If you haven’t seen them already, you may want to check out Joe Esposito’s 2004 piece in FM on OA, as well as Bill Cope and Mary Kalantzis’s April 2009 piece, giving their take on the problem with academic journals. If the editorial team of FM was receptive to these arguments and ours, you’re on target in wanting to give your OP ideas a voice there, too.

  • [...] Current generation of MySociety projects is focused only on some of these principles. This proposal is based on the premise that we can be more productive in furthering MySociety goals by renewing the approach, using the above, improved and extended, principles and methodology. I call it Open Process, or The Internet Model (first developed here). [...]

  • [...] have recently read, and I highly recommend, Toni Prug’s and Benjamin Geer’s work-in-progress essay on the limits of academic publishing and the need to renovate it in a new, technologically radical [...]

  • Dear Toni,

    that’s a wonderful project that you are suggesting here. We agree that Open Access is a good first step towards openness in academic research, but, as you say, it is just the output that is open. The open process is what we really aim for, as is generally the case in the production of Free Software, or more generally in “commons-based peer production” processes (Yochai Benkler, Michel Bauwens).

    With the Free Knowledge Institute we maintain definitions of and references to these important forms of producing free knowledge; see e.g. http://freeknowledge.eu/definitions

    Another person interested in this topic wrote about the “Open Scholar”, see here: http://www.academicevolution.com/2009/08/the-open-scholar.html

    best,

    Wouter

  • Hi Toni,

    I’m only halfway through this page but need to go offline now, so just a brief hello. I got here through the two-part essay by Brian Whitworth and Rob Friedman which you cite, and am currently writing up the second part of a very similar essay in a wiki, hoping for others to jump in — perhaps this may be of interest to you. One comment on the “Open-process publishing and reviewing advantages” section: the quality of submissions did indeed rise over time at the journal Atmospheric Chemistry and Physics, which uses a two-stage submission model and public peer review (which may or may not be anonymous). This is an indicator from the real-journal world that the Linux kernel system would indeed have effect 1.

    More comments to follow later.

    Daniel

  • benjamingeer

    @Daniel, many thanks for pointing out the journal Atmospheric Chemistry and Physics. Their review procedure looks amazing. That’s an experiment that deserves a lot more attention.

  • Ben, Atmospheric Chemistry and Physics is the oldest of a whole series of journals published by the European Geosciences Union that use this two-stage public peer review system (for an example discussion, see here) and that have inspired other journals to follow, e.g. Economics. Technically, there is no problem in extending this two-stage model to the n-stage model used in version control systems, as the Linux kernel method implies.
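
    To make the n-stage point concrete, here is a toy sketch (the stage names are my own invention for illustration, not EGU’s actual workflow) of a submission moving through an arbitrary number of public review rounds, version-control style:

      # review_stages.py -- toy sketch of an n-stage public review pipeline
      from dataclasses import dataclass, field

      @dataclass
      class Submission:
          title: str
          stage: int = 0                     # 0 = initial public draft
          history: list = field(default_factory=list)

          def advance(self, note):
              # Record the completed round, then move to the next stage.
              self.history.append((self.stage, note))
              self.stage += 1

      paper = Submission("Open-process publishing")
      paper.advance("public discussion opened")
      paper.advance("revised after interactive comments")
      paper.advance("accepted for final publication")
      print(paper.stage, paper.history)      # three rounds so far; more can follow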

  • Some further comments:

    Content

    - Good framing of the discussion, though at places lacking in references.
    - On “discussions in comments”, see here and here.
    - If you do not comment in detail on the “different discursive universe”, you might as well shorten or delete that phrase.
    - Open-process publishing and reviewing advantages, (1): a good reference on the Atmospheric Chemistry and Physics model is here.
    - Open-process publishing and reviewing advantages, (3): plagiarism detection already works quite well now; some tools are listed here.
    - Open-process publishing and reviewing advantages, (4): on speeding up the publication process, see here (my comment).
    - Open-process publishing and reviewing advantages, (5): the readership and even reputation of open-process publishers may increase, but “journals” in the sense we know them may well cease to exist (in fact, already now there is but one journal — the scientific literature), since the open-process handling of submissions will naturally focus on the article level (as long as these exist) and later perhaps on individual submissions to the global knowledge system, and be this a single wiki edit (e.g. via tools like WikiTrust). On incremental publishing, see here and here and here.
    - Internal benefits for journals, general: given my reservations on the last point, it may be worth considering exchanging the term “journal” for something else in this section (I used “public research environment”), which will obviously affect other aspects of the phrasing.
    - Internal benefits for journals, (1): on the feedback loop between productivity and recognition, see here.
    - Internal benefits for journals, (4): the karma system in use at Slashdot may be relevant for this section, see here.
    - Modular process: stages and states: these stages fit well with text-based disciplines, but there may be more components (overview here).

    Typos and phrasing

    - “production work . Still,”
    - “John Wilibanks”
    - “what i think ought to done”
    - “publish and perish devaluing model. Model”
    - “argument even more focused that those in an average 8000 paper are”
    - “on whose work the organization relies on”
    - “in-dept texts(yes, I would like to subscribe)”
    - “or at to have”

  • My next set of comments did not go through here, so I posted it on my blog. I still haven’t finished the whole thing yet, so more comments to come.

    • toniprug

      Strange, your last set of comments was caught by the spam filter; i just approved it. I’ve had no time to look in depth at your comments, will do it in a few days’ time. I’m in the middle of finishing off a 14,000-word text which i couldn’t write on the blog – it’s too complex, politically provocative, and it took me months to get my head around it. I read piles of books and journal papers in the process of wrestling with the ideas in the text. It’s an example, at least for me personally, that i can’t do everything on the blog in the open. But i would love to go through an entirely open peer reviewing process once i submit it to the journal for which it is written – which won’t happen, as that option isn’t available. Still, i’ll ask and argue for it.

  • [...] interesting proposal for academic journals, going beyond the Open Access paradigm: The suggestion is not to open the processes in random [...]

  • This is just to let you know that I am writing up a blog post on a related topic, in a collaborative way that may be of interest to you.

  • [...] to developing alternative peer review processes. And here I’m inspired by the intervention of Toni Prug – Open-process academic publishing for interrogating the conservative, closed nature of academic peer review, and thinking through [...]

  • [...] in their research in myriad ways.  For a detailed argument in this direction see the following Hackthestate blog entry. It seems odd that philosophy, usually a pioneer in new ideas, is staunchly reluctant to consider [...]
