Use our evidence finder to see where in the world high-quality studies have been conducted. Each pin provides a link to the original study. In future updates of the map, you'll be able to filter the results by client group, outcome area or study type.
The studies in this map come from our Evidence and Gap Map of Effectiveness Studies. You can also read about the Gap Map in our report. We will continue to add new studies as they are identified; if you know of any we have missed, please let us know.
See the standards of evidence used to create this map below.
Each study in the map has a rating for the quality of evidence. For individual studies the quality assessment is based on how well the study design deals with the technical issue of selection bias in assessing programme effectiveness. The categories are:
Randomised controlled trials:
A study in which people are randomly assigned to groups. One group receives the intervention being tested, and a second, control group receives a dummy intervention or no intervention at all. The groups are then compared to assess how effective the experimental intervention was.
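The comparison at the heart of a randomised trial can be sketched in a few lines of code. This is a toy simulation, not an actual analysis pipeline; the sample size, effect size and outcome distribution are all illustrative assumptions:

```python
import random

def simulate_rct(n=1000, effect=2.0, seed=42):
    """Toy RCT: randomly assign people to intervention or control,
    then compare mean outcomes. All numbers are illustrative."""
    rng = random.Random(seed)
    treated, control = [], []
    for _ in range(n):
        baseline = rng.gauss(10.0, 3.0)        # outcome a person would have anyway
        if rng.random() < 0.5:                 # random assignment is the key step
            treated.append(baseline + effect)  # intervention shifts the outcome
        else:
            control.append(baseline)
    # Because assignment was random, the difference in means estimates the effect.
    return sum(treated) / len(treated) - sum(control) / len(control)

print(round(simulate_rct(), 2))  # close to the true effect of 2.0
```

Because the groups differ only by chance at the outset, the simple difference in mean outcomes is an unbiased estimate of the intervention's effect.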
Natural experiments:
A study in which people are exposed to either experimental or control conditions by natural factors rather than by the researchers. While not controlled in the traditional sense, the assignment of interventions may still resemble randomised assignment. The groups are compared to assess how effective the experimental intervention was.
Regression discontinuity design:
A study in which people are assigned to intervention or control groups based on a cut-off threshold before the start of the test. Comparing the results from either side of the threshold (those who have received an intervention and those who haven’t) provides an estimate of the intervention’s effect where randomisation isn’t possible.
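As an illustration only, a naive discontinuity estimate can be formed by comparing mean outcomes in a narrow band on either side of the cut-off. Real analyses typically fit local regressions on each side; the data, cut-off and bandwidth below are invented for the sketch:

```python
import random

def rdd_estimate(scores, outcomes, cutoff, bandwidth):
    """Naive regression-discontinuity estimate: difference in mean outcomes
    just above versus just below the cut-off. A full RDD would fit local
    regressions on each side to remove the small trend bias this leaves in."""
    above = [y for x, y in zip(scores, outcomes) if cutoff <= x < cutoff + bandwidth]
    below = [y for x, y in zip(scores, outcomes) if cutoff - bandwidth <= x < cutoff]
    return sum(above) / len(above) - sum(below) / len(below)

# Illustrative data: an eligibility score decides who gets the intervention
# (score >= 50), and the intervention adds 5.0 to the outcome.
rng = random.Random(0)
scores = [rng.uniform(0, 100) for _ in range(5000)]
outcomes = [0.1 * s + (5.0 if s >= 50 else 0.0) + rng.gauss(0, 1) for s in scores]

print(round(rdd_estimate(scores, outcomes, cutoff=50, bandwidth=5), 2))
```

People just either side of the threshold are assumed to be similar in every respect except receipt of the intervention, so the jump at the cut-off estimates its effect.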
Interrupted time series:
A study that measures a particular outcome at multiple points in time, both before and after the introduction of an intervention. This aims to show whether the effect of the intervention is greater than any pre-existing or underlying factors over time.
Instrumental variables:
A statistical modelling approach in which a variable satisfying certain statistical requirements acts as a proxy for participation in the intervention.
Propensity score matching:
A study in which participants and non-participants in an intervention are assigned a 'propensity' score – a number that represents the probability of a person participating in the intervention based on observed characteristics. Using this score, a comparison group is constructed so that people who participated in the intervention can be compared with people who did not, but who had a similar probability of participating.
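A minimal sketch of the matching step, under strong simplifying assumptions: in practice the propensity model is estimated from the data (for example by logistic regression), whereas here the coefficients, covariates and outcomes are fixed, illustrative values:

```python
import math
import random

def propensity(age, income):
    """Toy propensity model: probability of joining the programme given
    observed characteristics. Coefficients are illustrative, not estimated."""
    z = -2.0 + 0.03 * age + 0.5 * income
    return 1.0 / (1.0 + math.exp(-z))

def matched_effect(participants, non_participants):
    """For each participant, find the non-participant with the closest
    propensity score (1-to-1 nearest-neighbour matching, with replacement)
    and average the outcome differences across matched pairs."""
    diffs = []
    for p_score, p_outcome in participants:
        match = min(non_participants, key=lambda c: abs(c[0] - p_score))
        diffs.append(p_outcome - match[1])
    return sum(diffs) / len(diffs)

# Illustrative data: participation is more likely for higher scores, and the
# outcome also depends on income, so a raw comparison would be confounded.
rng = random.Random(1)
participants, non_participants = [], []
for _ in range(2000):
    age, income = rng.uniform(20, 60), rng.uniform(0, 4)
    score = propensity(age, income)
    base = 5.0 + 2.0 * income + rng.gauss(0, 1)
    if rng.random() < score:
        participants.append((score, base + 3.0))  # true effect of 3.0
    else:
        non_participants.append((score, base))

print(round(matched_effect(participants, non_participants), 2))
```

Matching on the score balances the observed characteristics between the two groups, so the remaining outcome difference approximates the intervention's effect (here, close to 3.0).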
Other forms of matching
Difference-in-difference without matching
Before versus after (pre-/post-test) designs
For systematic reviews, we score each review using the 16-item checklist AMSTAR 2 ('Assessing the Methodological Quality of Systematic Reviews'). The 16 items cover: (1) PICOS in inclusion criteria, (2) ex ante protocol, (3) rationale for included study designs, (4) comprehensive literature search, (5) duplicate screening, (6) duplicate data extraction, (7) list of excluded studies with justification, (8) adequate description of included studies, (9) adequate risk of bias assessment, (10) report sources of funding, (11) appropriate use of meta-analysis, (12) risk of bias assessment for meta-analysis, (13) allowance for risk of bias in discussing findings, (14) analysis of heterogeneity, (15) analysis of publication bias, and (16) report conflicts of interest.
High reliability: 13-16
Medium to high reliability: 10-12
Low to medium reliability: 7-9
Low reliability: 0-6
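The banding above can be expressed as a small lookup, assuming a simple integer total of the items met across the 16-item checklist:

```python
def amstar_reliability(score: int) -> str:
    """Map an AMSTAR 2 checklist total (0-16) to the reliability
    bands used in the map."""
    if not 0 <= score <= 16:
        raise ValueError("AMSTAR 2 totals run from 0 to 16")
    if score >= 13:
        return "High reliability"
    if score >= 10:
        return "Medium to high reliability"
    if score >= 7:
        return "Low to medium reliability"
    return "Low reliability"

print(amstar_reliability(11))  # → Medium to high reliability
```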