Evidence and Gap Maps

See the big picture of evidence in homelessness – what do we know, and what don’t we know?

Version 1.0.2   Last updated 14/05/2018

The first Evidence and Gap Map on homelessness

Our Evidence and Gap Maps on homelessness bring together the highest quality evidence on homelessness from around the world to highlight where evidence does or doesn’t exist for specific interventions and outcomes. This will allow those working in homelessness to be more strategic about how they fill the gaps in knowledge. 

All types of evidence have their own value and help us answer different questions, so with our partners at the Campbell Collaboration, Heriot-Watt University and Queen’s University Belfast we will be creating a series of maps over time covering different types of evidence. We are currently working on a second map, on process evaluations.

This is the first edition of the map. Over the next few months we will seek to identify any studies we have missed – if you know of any, please send them to us. The project team will then complete the quality assessment, and full database records, for all studies. As the evidence base continues to grow, the map will be updated annually.

The map of effectiveness studies – which includes all relevant quantitative impact evaluations and effectiveness reviews – can be found below. Please note the map is not mobile-friendly, so it is best viewed on a desktop. Future maps will capture different types of evidence, including qualitative studies.

Update 05/05/2018: We’re experiencing some issues with the map loading in Internet Explorer. Please try a different browser while we look into this (no problems currently reported in Chrome or Firefox).

Read more about what the maps can tell us in the report and on the blog.

Find out more about the standards of evidence used to create this map below.

Share your thoughts at feedback@homelessnessimpact.org

Evidence standards

The Centre will be applying standards of evidence to each of our tools and maps, developed with our partners at the Campbell Collaboration.

Standards of Evidence for the map of effectiveness studies

Each study in the map has a rating for the quality of evidence. For individual studies the quality assessment is based on how well the study design deals with the technical issue of selection bias in assessing programme effectiveness. The categories are:

High reliability

Randomised controlled trials: 

A study in which people are randomly assigned to groups: one group receives the intervention being tested, while a control group receives a dummy intervention or no intervention at all. The groups are then compared to assess how effective the experimental intervention was.
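
As a concrete illustration, here is a minimal sketch in Python of how an effect estimate falls out of random assignment. Everything here is simulated – the sample size, effect size and variable names are our assumptions, not figures from any study in the map.

```python
# A minimal RCT sketch on simulated data (all numbers are illustrative).
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Randomly assign each person to intervention (1) or control (0).
treated = rng.integers(0, 2, size=n)

# Simulated outcome: a true intervention effect of 2.0 plus noise.
outcome = 10 + 2.0 * treated + rng.normal(0, 3, size=n)

# Because assignment is random, a simple difference in group means is an
# unbiased estimate of the intervention's effect.
effect = outcome[treated == 1].mean() - outcome[treated == 0].mean()
print(f"Estimated effect: {effect:.2f}")  # close to the true 2.0
```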

Natural experiments: 

A study in which people are exposed to either experimental or control conditions by natural factors. While not controlled in the traditional sense, the assignment of interventions may still resemble randomised assignment. These groups are compared to assess how effective the experimental intervention was.

Medium to high reliability

Regression discontinuity design: 

A study in which people are assigned to intervention or control groups according to whether they fall above or below a cut-off threshold measured before the intervention begins. Comparing the results from either side of the threshold (those who have received the intervention and those who haven’t) provides an estimate of the intervention’s effect where randomisation isn’t possible.
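
The logic can be sketched in a few lines of Python on simulated data. The ‘needs-assessment score’, cut-off, bandwidth and effect size below are illustrative assumptions, not drawn from any study in the map.

```python
# A minimal sharp regression discontinuity sketch on simulated data.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
score = rng.uniform(0, 100, size=n)  # e.g. a needs-assessment score
cutoff = 50.0
treated = score >= cutoff            # assignment is determined by the cut-off

# Outcome rises smoothly with the score, plus a jump of 4.0 at the cut-off.
outcome = 0.1 * score + 4.0 * treated + rng.normal(0, 2, size=n)

# Fit a local linear regression on each side of the threshold and take the
# gap between the two fits at the cut-off as the effect estimate.
bandwidth = 10.0
left = (score >= cutoff - bandwidth) & (score < cutoff)
right = (score >= cutoff) & (score < cutoff + bandwidth)
fit_left = np.polyfit(score[left], outcome[left], 1)
fit_right = np.polyfit(score[right], outcome[right], 1)
effect = np.polyval(fit_right, cutoff) - np.polyval(fit_left, cutoff)
print(f"Estimated effect at the cut-off: {effect:.2f}")  # close to the true 4.0
```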

Interrupted time series: 

A study that measures a particular outcome at multiple points in time, both before and after the introduction of an intervention. This aims to show whether any change in the outcome after the intervention goes beyond what the pre-existing trend would predict.
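
A minimal sketch of the idea, using simulated monthly data and a simple segmented regression. The trend, jump and time points are illustrative assumptions.

```python
# A minimal interrupted time series (segmented regression) sketch.
import numpy as np

rng = np.random.default_rng(2)
months = np.arange(48, dtype=float)
start = 24                            # the intervention begins at month 24
after = (months >= start).astype(float)

# Outcome follows an underlying trend of +0.2 per month, with a level
# jump of 5.0 once the intervention begins.
y = 20 + 0.2 * months + 5.0 * after + rng.normal(0, 1, size=48)

# Regressors: intercept, pre-existing trend, level change at the
# intervention, and change in slope after the intervention.
X = np.column_stack([np.ones_like(months), months, after, after * (months - start)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"Estimated level change at the intervention: {coef[2]:.2f}")  # ~5.0
```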

Low to medium reliability

Instrumental variables: 

A statistical modelling approach in which a variable that satisfies certain statistical requirements acts as a proxy for participation in the intervention.
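
To make this concrete, here is a minimal two-stage least squares sketch on simulated data, where a randomised offer of a programme serves as the instrument for actual participation. All names, sizes and effect values are illustrative assumptions.

```python
# A minimal instrumental variables (2SLS) sketch on simulated data.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
ability = rng.normal(0, 1, size=n)   # unobserved confounder
offer = rng.integers(0, 2, size=n)   # instrument: a randomised offer

# Participation depends on the offer AND the confounder, so a naive
# comparison of participants and non-participants is biased.
participate = (0.8 * offer + 0.5 * ability + rng.normal(0, 1, size=n) > 0.5).astype(float)

# Outcome: a true programme effect of 2.0, confounded by ability.
y = 1.0 + 2.0 * participate + 1.5 * ability + rng.normal(0, 1, size=n)

def ols(X, y):
    """Least-squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
# Stage 1: predict participation from the instrument alone.
stage1 = np.column_stack([ones, offer])
fitted = stage1 @ ols(stage1, participate)
# Stage 2: regress the outcome on the predicted participation.
iv_coef = ols(np.column_stack([ones, fitted]), y)
naive_coef = ols(np.column_stack([ones, participate]), y)
print(f"Naive OLS estimate: {naive_coef[1]:.2f}")  # biased upwards
print(f"2SLS (IV) estimate: {iv_coef[1]:.2f}")     # close to the true 2.0
```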

Propensity score matching:

A study in which participants and non-participants in an intervention are each assigned a ‘propensity’ score – a number representing the probability that a person participates in the intervention, based on their observed characteristics. Using this score, a matched comparison group is created, so that people who participated in an intervention can be compared with people who didn’t but had a similar probability of participating.
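
The two steps can be sketched on simulated data as below. The characteristics used, the scikit-learn logistic regression, and the simple nearest-neighbour matching rule are all illustrative assumptions, not the procedure applied to studies in the map.

```python
# A minimal propensity score matching sketch on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 2000
age = rng.normal(40, 10, size=n)
prior_support = rng.integers(0, 2, size=n).astype(float)
X = np.column_stack([age, prior_support])

# Participation depends on the observed characteristics.
p_participate = 1 / (1 + np.exp(-(-3 + 0.05 * age + 1.0 * prior_support)))
treated = rng.random(n) < p_participate

# Outcome: a true effect of 3.0 plus effects of the observed characteristics.
y = 5 + 3.0 * treated + 0.1 * age + 2.0 * prior_support + rng.normal(0, 1, size=n)

# Step 1: estimate each person's propensity score.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each participant to the non-participant with the nearest
# propensity score, then compare their outcomes.
t_idx = np.flatnonzero(treated)
c_idx = np.flatnonzero(~treated)
nearest = np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)
effect = (y[t_idx] - y[c_idx[nearest]]).mean()
print(f"Matched estimate of the effect: {effect:.2f}")  # close to the true 3.0
```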

Low reliability

Other forms of matching

Difference-in-difference without matching

Before versus after (pre- post- test) designs

Not included

Case studies

Qualitative assessments

For systematic reviews we score each study using AMSTAR 2 (‘Assessing the Methodological Quality of Systematic Reviews’), a 16-item checklist. The 16 items cover:

1. PICOS in inclusion criteria
2. Ex ante protocol
3. Rationale for included study designs
4. Comprehensive literature search
5. Duplicate screening
6. Duplicate data extraction
7. List of excluded studies with justification
8. Adequate description of included studies
9. Adequate risk of bias assessment
10. Report sources of funding
11. Appropriate use of meta-analysis
12. Risk of bias assessment for meta-analysis
13. Allowance for risk of bias in discussing findings
14. Analysis of heterogeneity
15. Analysis of publication bias
16. Report conflicts of interest

We score reviews as follows:

High reliability: 13-16

Medium to high reliability: 10-12

Low to medium reliability: 7-9

Low reliability: 0-6
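
For illustration, the bands above translate directly into a small Python function. The function name is ours, not part of AMSTAR 2 or the Centre’s tooling.

```python
def amstar2_rating(score: int) -> str:
    """Map an AMSTAR 2 checklist score (0-16) to a reliability band."""
    if not 0 <= score <= 16:
        raise ValueError("AMSTAR 2 scores run from 0 to 16")
    if score >= 13:
        return "High reliability"
    if score >= 10:
        return "Medium to high reliability"
    if score >= 7:
        return "Low to medium reliability"
    return "Low reliability"

print(amstar2_rating(11))  # Medium to high reliability
```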