It’s probably time to come clean about my recent spate of posts on startups, Ruby, Python and so on. There are a few things about peer review and publishing in academia that I think could be better, so I set out to design an alternative process that retains the benefits of the current system while overcoming some of its problems. I think we’ve done that, and it turns out that I wasn’t the only one who thought things could be a lot better.
NICTA has provided pre-seed funding in the form of a couple of commercialisation grants to implement this new way of doing things. I’ve hired a top-notch graduate software engineer (who’s been working with me as a student for the past year and a half on unrelated projects) to help me deliver alpha and beta versions of the system over the next six months or so. For this project we’ll be working in startup mode; I’ll be making every effort to provide a small-company atmosphere for the engineer and others who join the project.
It turns out the solution to the problem can also be applied to (web) search, since it is essentially a nice way of ranking documents within communities. I can’t go into the details of the solution here, but I can list some of the things that I (and other researchers, as it happens) think could be better.
- Traditional peer review requires that authors trust reviewers to act in good faith – reviewers are not required to “put their money where their mouth is”, so to speak;
- Related to the above, traditional peer review gives no real incentive to support the good work of a group of competing scientists;
- Related to the above, traditional peer review provides no real incentive not to support the poor work of a colleague or friend;
- Traditional peer review gives no tangible recognition to the many hours of reviewing that scientists do – reviewing is just something you’re expected to do for the good of the scientific community;
- Traditional peer review gives no incentive to authors to self-review their work before submission, meaning reviewers get burdened with too many bad and mediocre papers;
- Metrics such as the h-index and g-index are somewhat arbitrary, do not give a direct indication of the esteem in which scientists are held by their peers, and are not indicative of a scientist’s current capacity to produce good work;
- Citation collusion is too easy to accomplish, but difficult to filter out when calculating the above metrics;
- There is not enough cross-fertilisation between fields, largely because closed communities are too common; and
- The publication process is too slow, often taking years for a journal paper and months for a conference paper.
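To make the complaint about metrics concrete: the h-index and g-index are each computed from nothing more than a sorted list of citation counts, and the same publication record can yield quite different numbers under the two definitions. A minimal sketch in Ruby (the citation counts are made-up illustrative data, not real figures):

```ruby
# h-index: the largest h such that the author has h papers
# with at least h citations each.
def h_index(citations)
  citations.sort.reverse.each_with_index do |c, i|
    return i if c < i + 1
  end
  citations.size
end

# g-index: the largest g such that the author's top g papers
# together have at least g**2 citations.
def g_index(citations)
  total = 0
  g = 0
  citations.sort.reverse.each_with_index do |c, i|
    total += c
    break if total < (i + 1)**2
    g = i + 1
  end
  g
end

# Made-up citation counts for one author's papers.
papers = [25, 8, 5, 3, 3, 1, 0]
puts h_index(papers)  # => 3
puts g_index(papers)  # => 6
```

Neither number says anything about who did the citing or why, which is part of the reason citation collusion is so hard to filter out of these metrics.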
These are some of the problems that researchers say they see with the current way of doing things. We believe our idea solves many of them. For example, under our system, which we are calling PubRes for the moment, citation collusion is futile. Under PubRes, you’d also be silly to lend support to a paper that you know isn’t very good (even if it is written by a colleague), and you’d be silly not to lend support to a good paper (even if it is written by a competing group of scientists or your worst enemy). There are some things we haven’t solved, like honorary authorship and ghost authorship, but these are problems I’d like to investigate in the future. Although I can’t reveal the details here, I can say that the underlying mechanics of PubRes are no more complicated than traditional peer review procedures (and probably much less complicated), even though it is a major departure from how things are done now. I can also say that the feedback we’ve received from people we’ve explained it to has been overwhelmingly positive, which is the main reason I’m still pursuing this.
NICTA are making sure we do this properly, so some of the grant money is being spent on figuring out the structure of the academic publishing market. We already know that the top three academic publishers had combined 2007 revenues in excess of $US3 billion, but that doesn’t say much. We’re currently doing some much deeper market research to get a better understanding of the domain.
It’s important to note that what we’re doing is completely different to all known attempts to bring science to the web. PubRes is not another CiteULike or Connotea. It’s not another arXiv.org. It’s not like PLoS One or PubMed Central. It’s different to ResearchGATE and Science Commons. While our implementation may contain elements of these existing tools, PubRes is a fundamentally new way of getting your research published, and it’s a new, much fairer (we think), more direct way of rating scientists and the papers that they write. One of our aims is also to make the whole reviewing, publishing and reading cycle a lot more fun.
With any luck, a public beta will be available early next year. Oh, and we think we’ve settled on Ruby and Ruby on Rails for the web tier, and no doubt there’ll be some AJAX in there to pull off a few nifty browser-side features we have in mind. Stay tuned.