Similar Projects
There are numerous open science and decentralised science (DeSci) projects in existence. How does the Referee project differ, and why do those differences matter? The answer reveals more of our philosophy about where change is most needed. Let’s start by looking at two of the more innovative approaches in the DeSci space.
DeSci Labs and the DeSci Foundation were founded by Vrije Universiteit Amsterdam professor Philipp Koellinger. Together, these organisations have an impressive vision for how scientific research could be improved, and their four-part series on the problems with academic publishing is worth reading. Key innovations include Autonomous Research Communities (ARCs), a Web3-Native Unit of Knowledge, and Secure Persistent Identifiers (PIDs). ARCs are decentralized collectives operating on blockchain technology to curate, validate, and share scientific knowledge securely and transparently, ensuring that the value generated by scientific discoveries is rightfully attributed and rewarded. These communities can also set attestations (constative statements or sets of criteria) that they find valuable, allowing authors to submit a research object to attest to those criteria. Web3-Native Units of Knowledge are intended to replace static PDFs with a dynamic, interoperable format that facilitates not just the creation and sharing of research but also its verification and reproducibility. Underlying this system are Secure PIDs, which offer a robust alternative to the fragile DOI system; these identifiers are designed to be unbreakable because they encode the content of the underlying object rather than merely pointing to its location.
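DeSci Labs’ Secure PID scheme is not specified here, but the underlying idea of content addressing can be illustrated with a minimal sketch: the identifier is derived from a hash of the object itself, so anyone holding the content can verify it, and it breaks if the content changes. The function names below are hypothetical, not DeSci Labs’ actual API.

```python
import hashlib
import json

def content_pid(research_object: dict) -> str:
    # Canonicalize the object so identical content always yields identical bytes.
    canonical = json.dumps(research_object, sort_keys=True, separators=(",", ":"))
    # The identifier encodes the content itself rather than pointing to a location.
    return "pid:sha256:" + hashlib.sha256(canonical.encode()).hexdigest()

def verify(research_object: dict, pid: str) -> bool:
    # Anyone holding the object can re-derive the PID and confirm integrity.
    return content_pid(research_object) == pid

paper = {"title": "Example Study", "dataset": "ipfs://example", "version": 1}
pid = content_pid(paper)
assert verify(paper, pid)

paper["version"] = 2           # any change to the content...
assert not verify(paper, pid)  # ...invalidates the identifier
```

A DOI, by contrast, is a registry entry that points at a location; it can silently dangle or be repointed, which is the fragility content-encoded identifiers are designed to avoid.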
Similar to DeSci Labs, the ResearchHub Foundation seeks to redefine how science is funded, reviewed, and published. Users can earn ResearchCoin (RSC), a community rewards token, for contributions such as uploading papers, commenting, and posting. They can also receive RSC from other users who appreciate their content or want to tip their papers. ResearchHub also offers an electronic lab notebook and has started a pilot for paying peer reviewers, typically $150 in RSC, for their efforts; this, too, is similar to the Referee project. Like DeSci Labs/Foundation, ResearchHub works within the labour-theory-of-value paradigm and has neither a common paper weakness enumeration framework nor a reliability scoring system.
Other Peer Review Efforts
The PubPeer Foundation is a California non-profit that seeks to improve the quality of scientific research by enabling innovative approaches for community interaction. It operates as an open forum where papers can be posted and members can comment on them, but there is no formal scoring, reputation staking, or downstream processes.
Review Commons is a journal-independent preprint review platform that follows the traditional model of requesting holistic narrative reviews for papers, with the goal of improving their candidacy for publication.
The STM Integrity Hub was created by academic publishers to provide an environment for checking submitted articles for research integrity issues.
Ants-Review is a blockchain protocol for incentivizing open and anonymous peer review, proposed in 2021 by Bianca Trovò (Sorbonne University) and Nazzareno Massari (MakerDAO). A winner of the 2020 ETHTurin hackathon, the protocol has only been implemented as a proof of concept.
VitaDAO’s The Longevity Decentralized Review (TLDR) is an on-demand peer review service. Articles from preprint servers are auto-posted daily for review. Reviewers are incentivized to review these papers and receive a share of the donations given to TLDR. Papers and reviews of papers are upvoted by users to measure quality quantitatively. In addition, authors can upvote and comment on reviews to improve feedback and help determine payouts.
DARPA developed the Systematizing Confidence in Open Research and Evidence (SCORE) program to create and deploy automated tools that assign "confidence scores" to social and behavioral science (SBS) research results and claims. The research relied on surveys and prediction markets to assess the replicability of SBS papers.
The Center for Open Science is conducting two research projects related to research evaluation:
Scaling Machine Assessments of Research Trustworthiness (SMART) seeks to advance the development of automated confidence evaluation of research claims.
The expanded Systematizing Confidence in Open Research and Evidence (SCORE) program builds on DARPA's SCORE project above, using prediction markets and AI agents to assess the reproduction, replication, and robustness of research findings.
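Neither SCORE effort's market design is detailed here, but a common mechanism for such replication markets is Hanson's logarithmic market scoring rule (LMSR), where the price of a "this finding replicates" share is read as the crowd's implied confidence score. The sketch below is a minimal, hypothetical illustration of that mechanism, not the actual SCORE implementation.

```python
import math

class ReplicationMarket:
    """Minimal LMSR market with two outcomes:
    a paper's key finding replicates (0) or fails to replicate (1)."""

    def __init__(self, liquidity: float = 50.0):
        self.b = liquidity        # larger b = prices respond more slowly to trades
        self.shares = [0.0, 0.0]  # outstanding shares per outcome

    def _cost(self) -> float:
        # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(q / self.b) for q in self.shares))

    def price(self, outcome: int) -> float:
        # Instantaneous price (softmax of shares); prices across outcomes sum to 1.
        exps = [math.exp(q / self.b) for q in self.shares]
        return exps[outcome] / sum(exps)

    def buy(self, outcome: int, amount: float) -> float:
        # A trader pays C(q_after) - C(q_before) to acquire `amount` shares.
        before = self._cost()
        self.shares[outcome] += amount
        return self._cost() - before

market = ReplicationMarket()
print(f"prior confidence: {market.price(0):.2f}")    # 0.50 before any trades
paid = market.buy(0, 40.0)                           # a forecaster backs replication
print(f"trade cost: {paid:.2f}")
print(f"updated confidence: {market.price(0):.2f}")  # ~0.69: the score moves up
```

The market price doubles as the paper's confidence score: forecasters profit only when they move the price toward the eventual replication outcome, which is what gives the score its incentive-compatible grounding.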
OpenMKT aims to increase the transparency of marketing research by tracking direct replications and retractions of marketing articles, preregistered studies with low p-values, and studies that provide evidence of systemic bias in the field. There is no formal scoring, reputation staking, or downstream processes.
SCINET is a decentralized research and investment platform focused on the life sciences. Built on the Internet Computer blockchain, it allows retail and institutional investors to invest directly in research and technology with security and authenticity. It is not concerned with evaluating the reliability of existing papers, reputation staking, or downstream processes.
Estimating the Reliability & Robustness of Research (ERROR) bills itself as a structured "bug bounty"-style program aimed at detecting, reporting, and correcting errors in scientific publications. It incentivizes both authors and reviewers with monetary rewards based on the severity of the errors discovered, or their absence. Despite this framing, the project resembles a reward/bonus system more than a traditional bug bounty program: it has no defined research-flaw taxonomy, is quite expensive (a minimum of CHF 750–2,750 per paper, depending on flaws), and does not leverage AI.
eLife publishes articles with fully open post-publication peer review; reviewers sign their reviews, and editorial decisions are transparent. It demonstrates the growing acceptance of open peer review in established journals.
F1000Research is a post-publication peer review platform where papers are published immediately after basic checks and then undergo open, invited peer review. It pioneered a model that eliminates "gatekeeping" prior to sharing findings.
PeerJ is a membership-based open access journal with optional open peer review. It shows how alternative financial models (e.g., author memberships) can sustain open access publishing.
OpenReview is a platform originally popularized by machine learning conferences such as ICLR; it enables open discussion, commenting, and identity-visible or anonymous reviews. It illustrates how open peer review can be integrated into mainstream, high-stakes conferences.
Peerage of Science was a collaborative peer-review system aimed at improving transparency and reviewer recognition, operating as a platform for multiple journals. Now defunct, it represented an early attempt to unify peer reviewers across many journals under a shared set of standards.
ScienceOpen aggregates research outputs with an open evaluation layer, allowing registered users to rate and review articles publicly. It demonstrates the potential for a "social network" layer on top of published work.
Hypothes.is is a web-based annotation tool that lets researchers add public or private comments to scholarly articles and other web content. It highlights the benefits (and limitations) of decentralized commentary for scholarly works, without formal scoring or structured review.
Clarivate Analytics, through its Web of Science reviewer recognition service (formerly Publons), tracks, verifies, and showcases peer-review activity; reviewers build a public record of their reviewing history.
Peer Community In (PCI) is a network of communities (e.g., PCI Ecology, PCI Evolutionary Biology) recommending preprints; reviews are open, and authors can revise based on feedback. It shows how discipline-specific "communities of trust" can operate to evaluate research collectively.
Reviewer Credits is a platform for journals to recruit, manage, and reward peer reviewers.
Numerous blogs and Twitter/Bluesky accounts, such as Data Colada and Research Watch, document and question papers.
These projects are led mainly by academics, which tempers any desire to radically replace the current system. As Simine Vazire, professor of psychology at the University of Melbourne and editor-in-chief of Psychological Science, conceded on a Freakonomics podcast, "Our field doesn’t have a culture of open criticism. It’s not considered okay." For this reason, validation is best done by people outside the system, as it is in cybersecurity. Referee represents a more radical vision for knowledge curation but is very open to working with members of these projects to advance our mutual objectives.