Also known as the ‘what works’ map, it brings together 690 quantitative impact evaluations and effectiveness reviews of homelessness interventions. It also shows the level of confidence you can have in each study’s findings: high, medium or low. View the map’s report, take a look at its Standards of Evidence, and view the critical appraisal.
We are always adding studies to the growing evidence base. If you know of any we have missed, please let us know.
Study design: At least 3 RCTs, or 5 studies of other designs, with a combined sample size of at least 300
Attrition: High levels of attrition, especially differential attrition between the treatment and comparison groups, reduce the confidence we can have in study findings.
Outcome measure: For the study findings to be usable and meaningful there should be a clear description of the outcome measures, preferably using existing, validated approaches.
Baseline balance: We can have less confidence in study findings if there were significant differences between the treatment and comparison groups at baseline.
Additional criteria (assessed but not affecting the overall rating):
Blinding: The absence of blinding of participants and researchers can bias study findings, even where blinding is not feasible in practice.
Power calculations: Power calculations determine the sample size needed to detect an effect. Without them there is a risk of underpowered studies, and so a high likelihood of failing to identify effective programmes.
Description of intervention: A clear description of the intervention is needed to establish exactly what is being evaluated, so that effectiveness is not wrongly attributed to similar, but different, interventions.
Protocol registered before commencement of the review
Adequacy of the literature search
Justification for excluding individual studies
Risk of bias from individual studies being included in the review
Appropriateness of meta-analytical methods
Consideration of risk of bias when interpreting the results of the review
Assessment of the presence and likely impact of publication bias
PICOS in inclusion criteria
Rationale for included study designs
Duplicate data extraction
Adequate description of included studies
Reporting of sources of funding
Risk of bias assessment for meta-analysis
Analysis of heterogeneity
Reporting of conflicts of interest