What is The Unjournal? We commission and publish independent, public evaluations of research that
can inform high-stakes global decisions. We focus on economics, quantitative social science, forecasting,
and policy-relevant research—including development economics, global health, animal welfare,
AI governance, climate policy, and catastrophic risks.
Learn more →
Early prototype (March 2026). Coverage and scoring depth will improve as we expand sources and
incorporate human feedback. Scores are AI-generated suggestions to help identify candidates for evaluation.
How it works: Papers are automatically discovered from multiple academic sources. We currently scan NBER (economics working papers), arXiv (econ, quantitative finance), CEPR (European economics), EA Forum (effective altruism research links), Semantic Scholar (AI-powered search by cause area), OpenAlex/SSRN (social science preprints), and RePEc (economics working papers). New papers are fetched periodically and scored automatically by AI models against Unjournal's prioritization criteria.
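As an illustration of the discovery step, here is a minimal sketch of how a periodic fetcher might pull recent working papers from one source. The arXiv Atom API endpoint is real, but the category choice, result limit, and the dictionary structure are our own assumptions for illustration, not the tool's actual implementation.

```python
# Hypothetical sketch of the discovery step: fetch recent arXiv
# economics preprints via arXiv's public Atom API. The real tool
# scans several sources (NBER, CEPR, SSRN, ...) on a schedule.
import feedparser  # pip install feedparser

ARXIV_API = (
    "http://export.arxiv.org/api/query"
    "?search_query=cat:econ.GN"  # general-economics category (our choice)
    "&sortBy=submittedDate&sortOrder=descending"
    "&max_results=10"
)

def fetch_recent_arxiv_econ() -> list[dict]:
    """Return recent econ preprints as simple dicts (title, abstract, url)."""
    feed = feedparser.parse(ARXIV_API)
    return [
        {"title": e.title, "abstract": e.summary, "url": e.link}
        for e in feed.entries
    ]

if __name__ == "__main__":
    for paper in fetch_recent_arxiv_econ():
        print(paper["title"])
```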
Scores reflect evaluation priority: the expected value of commissioning an independent Unjournal evaluation, not an assessment of research quality. Priority considers whether the research is relevant to important global welfare decisions, whether independent evaluation would add value beyond existing peer review, whether the paper is at a stage where feedback can improve it, and whether the authors are likely to engage. A high priority score does not mean the research is good or bad; it means evaluation would be particularly valuable.
We welcome both team and public feedback.
Each paper is scored on five criteria (0–10 scale), including the following; a scoring sketch appears after the list:
Decision Relevance — Informs high-value global welfare decisions?
Timing Value — Working paper stage where feedback is actionable?
Methodological Potential — Is rigorous work feasible given the field and question?
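As a minimal sketch, here is how per-criterion scores could be stored and combined. Only the three criteria listed above are modeled, and the equal-weight average is an assumption for illustration; the tool's actual aggregation (and the remaining criteria) may differ.

```python
# Hypothetical sketch: per-criterion scores on a 0-10 scale,
# aggregated into a single priority score. The equal-weight mean
# is an assumption, not the tool's actual formula.
from dataclasses import dataclass

@dataclass
class CriteriaScores:
    decision_relevance: float        # informs high-value global welfare decisions?
    timing_value: float              # working-paper stage, feedback actionable?
    methodological_potential: float  # feasible rigor given field and question?

    def overall(self) -> float:
        """Equal-weight mean of the criteria (illustrative only)."""
        scores = (
            self.decision_relevance,
            self.timing_value,
            self.methodological_potential,
        )
        for s in scores:
            if not 0 <= s <= 10:
                raise ValueError("each criterion is scored 0-10")
        return sum(scores) / len(scores)

# Example: a paper scoring 8, 7, and 6 averages to 7.0 overall.
print(CriteriaScores(8, 7, 6).overall())  # 7.0
```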
Four-stage pipeline (a code sketch of the stage gates follows the list):
Suggesting — A paper is suggested (by AI or human), with a 0–100 percentile rating and discussion of relevance
Assessing — A second team member gives an independent rating and discussion (the assessor should not see the suggester's rating first)
Voting — If average rating ≥ 65%, the field group votes (Strong Yes to Strong No). Positive votes + strong case → moves to evaluation
Evaluation — An evaluation manager commissions 2+ public evaluations via PubPub
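To make the gates concrete, here is a hedged sketch of the advance rules: the suggester and assessor ratings (0–100 percentiles) are averaged, and the paper goes to a vote only at 65% or above. The weighted vote tally is our simplification of "positive votes + strong case", not the team's actual rule.

```python
# Hypothetical sketch of the pipeline's advance rules. Ratings are
# 0-100 percentiles; votes range from "Strong No" to "Strong Yes".
# The vote weights below are our simplification for illustration.

VOTE_WEIGHTS = {
    "Strong Yes": 2, "Yes": 1, "Neutral": 0, "No": -1, "Strong No": -2,
}

def should_go_to_vote(suggester_rating: float, assessor_rating: float) -> bool:
    """Stage 3 gate: the field group votes if the average rating is >= 65%."""
    return (suggester_rating + assessor_rating) / 2 >= 65

def should_commission_evaluation(votes: list[str]) -> bool:
    """Stage 4 gate (simplified): a positive weighted vote total."""
    return sum(VOTE_WEIGHTS[v] for v in votes) > 0

# Example: ratings of 70 and 64 average to 67, so the paper goes to a vote;
# a net-positive vote then moves it to commissioned evaluation.
if should_go_to_vote(70, 64) and should_commission_evaluation(
    ["Strong Yes", "Yes", "No"]
):
    print("Commission 2+ public evaluations via PubPub")
```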
Prioritization = expected value of evaluation, not quality endorsement.
Read more.
Comment directly on this page using the
Hypothes.is sidebar
(look for the < tab on the right edge of the page). Highlight any text and add your annotation —
visible to all Hypothes.is users. You can also use the feedback buttons on each paper card.
Vision: How this tool will work
We are building an efficient, AI-augmented prioritization pipeline:
AI discovery & preliminary rating — The tool finds, vets, and suggests research from multiple sources (NBER, arXiv, SSRN, EA Forum, etc.), giving a preliminary score and adding it to the prioritization database.
Human suggestions — Team members and the public can also add research directly as a "suggester" or "submitter," in which case the AI provides an additional analysis report.
Notifications — Sign up for alerts when new high-potential research in your area is added.
Team assessment — Team members review suggestions, find those of most interest, and give independent ratings. These may be used to continually train and improve the AI recommendation model.
Voting & decisions — The team votes (as in our current process), moving papers forward for commissioned evaluation.
The AI uses Unjournal's core principles and previous prioritization decisions as context.
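A rough sketch of what "uses Unjournal's core principles and previous prioritization decisions as context" could look like in practice: assembling a prompt that combines the principles with a few past decisions as worked examples. The prompt structure, the placeholder principles text, and the example decisions are all our assumptions; the downstream model call is deliberately left abstract.

```python
# Hypothetical sketch: build the context an AI scorer might receive.
# PRINCIPLES and past_decisions stand in for real Unjournal data.

PRINCIPLES = "Prioritize research that informs high-stakes global decisions..."

def build_scoring_prompt(paper_abstract: str, past_decisions: list[dict]) -> str:
    """Combine core principles, prior decisions (as few-shot examples),
    and the new paper's abstract into one prompt."""
    examples = "\n".join(
        f"- {d['title']}: rated {d['rating']}/100 because {d['reason']}"
        for d in past_decisions
    )
    return (
        f"Unjournal core principles:\n{PRINCIPLES}\n\n"
        f"Previous prioritization decisions:\n{examples}\n\n"
        f"New paper abstract:\n{paper_abstract}\n\n"
        "Suggest a 0-100 evaluation-priority rating with reasoning."
    )

# Example usage with made-up inputs:
prompt = build_scoring_prompt(
    "We estimate the effect of cash transfers on child mortality...",
    [{"title": "Deworming at scale", "rating": 78,
      "reason": "high decision relevance"}],
)
```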
We welcome your thoughts on this workflow — use the Hypothes.is sidebar or email
contact@unjournal.org.