dc.contributor
Universitat Ramon Llull. Esade
dc.contributor.author
Menkveld, Albert
dc.contributor.author
Dreber, Anna
dc.contributor.author
Dumitrescu, Ariadna
dc.date.accessioned
2026-02-19T14:12:06Z
dc.date.available
2026-02-19T14:12:06Z
dc.date.issued
2021-11-23
dc.identifier.uri
https://hdl.handle.net/20.500.14342/5320
dc.description.abstract
In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: nonstandard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that participants underestimate this type of uncertainty.
dc.rights
Attribution 4.0 International
dc.rights.uri
http://creativecommons.org/licenses/by/4.0/
dc.title
Nonstandard errors
dc.type
info:eu-repo/semantics/article
dc.description.version
info:eu-repo/semantics/acceptedVersion
dc.identifier.doi
http://dx.doi.org/10.2139/ssrn.3961574
dc.rights.accessLevel
info:eu-repo/semantics/openAccess