Data Dictionary
This page documents the columns in the Replications Database. Each row in the database represents a single effect from an original study paired with a replication attempt of that effect.
| Column Name | Type | Required? | Description |
|---|---|---|---|
| original_url | string | yes | URL for the paper that contained the original experiment; we always use http://doi.org/{DOI} if available |
| replication_url | string | yes | URL for the paper that contained the replication experiment; we always use http://doi.org/{DOI} if available |
| description | string | yes | One-sentence description of the effect found in the original paper that the replication paper attempted to replicate |
| result | "success" OR "failure" OR "inconclusive" OR "reversal" | yes | Result of the replication. For human-curated rows, this is the overall judgement of a human who read the paper. For AI-curated rows, we asked the AI to follow whatever judgement the replication authors made in the paper; if the authors made no judgement (rare), we asked the AI to make its own. |
| replication_type | "direct" OR "close" OR "conceptual" OR "close experiment" OR "close extension" | yes | Type of replication; see our "Defining replication" page for more info |
| original_authors | semicolon-separated list | no | Full names of authors on the original paper |
| original_title | string | no | Title of the original paper in sentence case (only first word capitalized) |
| original_journal | string | no | Full name of the journal or venue for the original paper |
| original_volume | integer | no | Volume number for the original paper, if applicable |
| original_issue | integer | no | Issue number for the original paper, if applicable |
| original_pages | string | no | Page numbers for the original paper, if applicable |
| original_year | integer | no | Year the original paper was published in the journal |
| replication_authors | semicolon-separated list | no | Full names of authors on the replication paper |
| replication_title | string | no | Title of the replication paper in sentence case (only first word capitalized) |
| replication_journal | string | no | Full name of the journal or venue for the replication paper |
| replication_volume | integer | no | Volume number for the replication paper, if applicable |
| replication_issue | integer | no | Issue number for the replication paper, if applicable |
| replication_pages | string | no | Page numbers for the replication paper, if applicable |
| replication_year | integer | no | Year the replication paper was published in the journal |
| original_n | positive integer | no | Total sample size of the original experiment; for many papers this is the number of human subjects, for others it is the number of animals, cell cultures, or other experimental units |
| original_es | float | no | Raw effect size from the original experiment |
| original_es_type | string | no | Type of effect size (d = Cohen's d, OR = Odds Ratio, HR = Hazard Ratio, η² = Eta Squared, f = Cohen's f, f² = Cohen's f², R² = R Squared, φ = Phi Coefficient, r = Pearson Correlation, t = t-test, F = F-test, z = z-test, χ² = Chi-squared) |
| original_es_95_CI | [float, float] | no | 95% confidence interval for the effect size in the original study |
| original_p_value | float | no | Original p value |
| original_p_value_type | string | no | Whether the reported p value used =, <, or > |
| original_p_value_tails | "one" OR "two" OR "" | no | Whether the p value is one- or two-tailed |
| replication_n | positive integer | no | Total sample size of the replication experiment; for many papers this is the number of human subjects, for others it is the number of animals, cell cultures, or other experimental units |
| replication_es | float | no | Raw effect size from the replication experiment |
| replication_es_type | string | no | Type of effect size, using the same codes as original_es_type (d = Cohen's d, η² = Eta Squared, r = Pearson Correlation, etc.) |
| original_es_r | float | no | Original effect size converted to Pearson's r for cross-study comparability |
| replication_es_r | float | no | Replication effect size converted to Pearson's r for cross-study comparability |
| replication_es_95_CI | [float, float] | no | 95% confidence interval for the effect size in the replication |
| replication_p_value | float | no | Replication p value |
| replication_p_value_type | string | no | Whether the reported p value used =, <, or > |
| replication_p_value_tails | "one" OR "two" OR "" | no | Whether the replication p value is one- or two-tailed |
| field | string | no | Field categorization from our ontology |
| discipline | string | yes | Discipline categorization from our ontology |
| subdiscipline | string | no | Subdiscipline categorization from our ontology |
| tags | semicolon-separated list | no | Set of tags to enable search |
| validated | "yes" OR "no" | yes | Has the core required data been validated by a human at least once? |
| validated_person | string | no | Name of the person and/or organization who did the validation, if applicable |
| replication_initiative_tag | string | no | A tag that appears if the experiment was replicated as part of a major initiative to replicate many experiments (e.g. > 10) in a particular field. For instance, for Replication Project: Cancer Biology the tag is "RP:CB" |
| source | string | no | Notes on the source of the data |
| explanation | string | no | Free-text explanation from the AI curation pipeline, for AI-curated rows |
| confidence | "low" OR "medium" OR "high" | no | Confidence level of the AI-curated result |
| ai_version | string | no | Version number of the AI curation pipeline that generated this row |
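
As a sketch of how a consumer of the database might check rows against this dictionary: the column names and allowed values below come from the table above, while the `validate_row` helper and the dict-per-row representation are illustrative assumptions, not part of the database itself.

```python
# Illustrative row validation against this data dictionary.
# Column names and enum values are taken from the table above;
# the helper itself is a hypothetical consumer-side check.
REQUIRED = ["original_url", "replication_url", "description",
            "result", "discipline", "validated"]
RESULT_VALUES = {"success", "failure", "inconclusive", "reversal"}
REPLICATION_TYPES = {"direct", "close", "conceptual",
                     "close experiment", "close extension"}

def validate_row(row: dict) -> list[str]:
    """Return a list of problems found in one database row (empty if OK)."""
    problems = []
    for col in REQUIRED:
        if not row.get(col, "").strip():
            problems.append(f"missing required column: {col}")
    if row.get("result") and row["result"] not in RESULT_VALUES:
        problems.append(f"unknown result: {row['result']!r}")
    if row.get("replication_type") and row["replication_type"] not in REPLICATION_TYPES:
        problems.append(f"unknown replication_type: {row['replication_type']!r}")
    return problems
```

Semicolon-separated columns such as original_authors can then be split with `row["original_authors"].split(";")` before further processing.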
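
The original_es_r and replication_es_r columns store effect sizes converted to Pearson's r for cross-study comparability. This page does not specify which conversion formulas the database uses; as one common example, Cohen's d can be converted to r under the assumption of equal group sizes:

```python
import math

def cohens_d_to_r(d: float) -> float:
    """Convert Cohen's d to Pearson's r.

    Uses the standard conversion r = d / sqrt(d^2 + 4), which assumes
    two groups of equal size. This is an illustrative example; the
    database's actual conversion method may differ per effect size type.
    """
    return d / math.sqrt(d * d + 4)
```

For unequal group sizes, the constant 4 is typically replaced by a correction factor based on the two group sizes.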