Communicating results
- Evaluations and author responses will be given DOIs and will enter the bibliometric record
- Considering: assigning a 'publication tier' to authors' responses as a workaround for encoding the aggregated evaluation
- Hypothes.is annotation of hosted and linked papers and projects (we aim to integrate hypothes.is for collaborative annotation)
- Sharing evaluation data
We aim to elicit expert judgment from Unjournal evaluators efficiently and precisely. We aim to communicate this quantitative information concisely and usefully, in ways that will inform policymakers, philanthropists, and future researchers.
In the short run (our pilot phase), we will present simple but reasonable aggregations, such as simple averages of midpoints and confidence-interval bounds. Going forward, we are consulting and incorporating the burgeoning academic literature on aggregating expert opinion (see, e.g., Hemming et al., 2017; Hanea et al., 2021; McAndrew et al., 2020; Marcoci et al., 2022).
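As a concrete illustration of the pilot-phase approach, here is a minimal Python sketch that averages evaluators' midpoints and interval bounds separately. The `Evaluation` structure and its field names are illustrative assumptions for this example, not The Unjournal's actual data schema.

```python
# Minimal sketch of the pilot-phase aggregation: simple averages of
# evaluators' midpoints and confidence-interval bounds.
# NOTE: the Evaluation structure and rating scale are assumptions made
# for illustration, not The Unjournal's actual schema.
from dataclasses import dataclass
from statistics import mean


@dataclass
class Evaluation:
    midpoint: float  # evaluator's central rating, e.g., on a 0-100 scale
    ci_lower: float  # lower bound of the evaluator's stated interval
    ci_upper: float  # upper bound of the evaluator's stated interval


def aggregate(evals: list[Evaluation]) -> dict[str, float]:
    """Aggregate by averaging midpoints and interval bounds separately."""
    return {
        "midpoint": mean(e.midpoint for e in evals),
        "ci_lower": mean(e.ci_lower for e in evals),
        "ci_upper": mean(e.ci_upper for e in evals),
    }


# Example: three (hypothetical) evaluators' ratings of one paper
ratings = [
    Evaluation(midpoint=80, ci_lower=70, ci_upper=90),
    Evaluation(midpoint=65, ci_lower=50, ci_upper=75),
    Evaluation(midpoint=72, ci_lower=60, ci_upper=85),
]
print(aggregate(ratings))
# {'midpoint': 72.33..., 'ci_lower': 60.0, 'ci_upper': 83.33...}
```

Averaging the bounds separately is a deliberate simplification: it ignores correlations between evaluators and differences in their calibration, which is precisely what the expert-aggregation literature cited above addresses.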
Considering...
- Syntheses of evaluations and author feedback
- Input to prediction markets, replication projects, etc.
- Less technical summaries and policy-relevant summaries, e.g., for the EA Forum, Asterisk magazine, or mainstream long-form outlets