
Evidence and Gap Maps

See the big picture of evidence on homelessness interventions – what do we know, and what don’t we know?

About the Evidence & Gap Maps

Our Evidence and Gap Maps bring together evidence on homelessness interventions from around the world, highlighting where evidence does or doesn’t exist on what works and why interventions succeed or fail. This helps target research investments more quickly, strategically and with greater impact.

The Effectiveness Map, or ‘what works’ map, captures impact evaluations and effectiveness reviews. The Implementation Issues Map, or ‘why things work or not’ map, focuses on the factors that affect the successful implementation of homelessness interventions. Together the two maps capture around 500 studies on interventions – the largest resource of its type anywhere in the world.

These EGMs have been created with our partners at The Campbell Collaboration and Heriot-Watt University, and both will be updated at regular intervals. The Effectiveness Map has now been categorised to show how much confidence we can have in the findings: high, medium or low.

The Effectiveness Map

Also known as the ‘what works’ map. Explore it to find 260 quantitative impact evaluations and effectiveness reviews of homelessness interventions, each categorised by how much confidence we can have in its findings: high, medium or low. View the report behind this map, as well as the Standards of Evidence and the critical appraisal.

Version 1.0.5   Last updated 22/08/2019

The Implementation Issues Map

Also known as the ‘why things work or not’ map. Use it to find 246 qualitative process evaluations that examined factors which help or hinder the successful implementation of homelessness interventions. At the moment the information is displayed in two digital tools – one on barriers and another on facilitators – but an integrated version will be available in future. View the report behind this map here.

Version 1.0.4   Last updated 16/07/2019

Update 5.5.18: The EPPI-Reviewer team is experiencing some issues with the map loading in Internet Explorer. Please try a different browser while they look into this issue (no problems currently reported in Chrome or Firefox). Please note that our maps are not yet mobile-friendly.

Learn more

Read more about the maps and what they tell us on our blog: see the post announcing our first map and the write-up about part two of our maps.

Feedback

If you know of any studies we missed, please share them at feedback@homelessnessimpact.org

Evidence Standards

The Centre is applying standards of evidence, developed with our partners at The Campbell Collaboration, to each of our tools and maps.

Standards of Evidence for the Effectiveness Map

Each study in the map has been rated as high, medium or low for ‘confidence in study findings’. For systematic reviews, this rating was made using the revised version of ‘A MeaSurement Tool to Assess systematic Reviews’ (AMSTAR 2). Primary studies were rated using a critical appraisal tool based on various approaches to risk-of-bias assessment.

The two tools – AMSTAR 2 and the primary study critical appraisal tool – assess a range of items regarding study design and reporting. Some of these items are designated as ‘critical’. The overall rating for a study is the lowest rating on any critical item.

Primary studies

Critical items


Study design: We have high confidence in findings from study designs best able to detect causal effects, such as randomised controlled trials.


Attrition: High levels of attrition, especially differential attrition between the treatment and comparison groups, reduce the confidence we can have in study findings.


Outcome measure:  For the study findings to be usable and meaningful there should be a clear description of the outcome measures, preferably using existing, validated approaches.


Baseline balance: We can have less confidence in study findings if there were significant differences between the treatment and comparison groups at baseline.


Other items (assessed but not affecting the overall rating):


Blinding: The absence of blinding of participants and researchers can bias study findings, even in cases where blinding is not feasible.


Power calculations: Power calculations help determine the sample size required. Without such calculations there is a risk of underpowered studies and so a high likelihood of not correctly identifying effective programmes.


Description of intervention: A clear description of the intervention is necessary to be clear what is being evaluated, so that effectiveness is not assigned to similar, but different, interventions.

Systematic Reviews

Critical items

Protocol registered before commencement of the review

Adequacy of the literature search

Justification for excluding individual studies

Risk of bias from individual studies being included in the review

Appropriateness of meta-analytical methods

Consideration of risk of bias when interpreting the results of the review  

Assessment of presence and likely impact of publication bias

Other items

PICOS in inclusion criteria

Rationale for included study designs

Duplicate screening

Duplicate data extraction

Adequate description of included studies

Report sources of funding

Risk of bias assessment for meta-analysis

Analysis of heterogeneity

Report conflicts of interest

For a complete description of the coding tools, click here for the primary study critical appraisal tool and here for AMSTAR 2.
