
SHARE Framework

A shared vision for ending homelessness — what needs to happen and how will we know we are succeeding?

Version 1   Last updated 10/05/2018

Our SHARE framework is currently in ‘beta’. In the coming months, we will be working with partners and collaborators to develop the framework further.

What needs to happen to put an end to homelessness?

Resources + Leadership

Evidence standards

The Centre will apply evidence standards to each of our tools and maps. These standards have been developed with our partners at The Campbell Collaboration.

Standards of Evidence for the map of effectiveness studies

Each study in the map is rated for the quality of its evidence. For individual studies, the rating is based on how well the study design deals with selection bias when assessing programme effectiveness. The categories are:

High reliability

Randomised controlled trials:

A study in which people are randomly assigned to groups. One group receives the intervention being tested, while a second, control group receives a dummy intervention or no intervention at all. The groups are then compared to assess how effective the experimental intervention was.
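To make the logic concrete, here is a minimal sketch in Python using simulated data (the sample size, outcome values, and effect size are purely illustrative, not drawn from any real study). Because assignment is random, a simple difference in mean outcomes between the two groups estimates the intervention's effect without selection bias:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Randomly assign half the sample to the intervention group (1)
# and half to the control group (0).
treated = rng.permutation(np.repeat([0, 1], n // 2))

# Simulated outcome: baseline variation plus a true effect of 2.0
# for those who received the intervention.
outcome = rng.normal(10, 3, n) + 2.0 * treated

# Random assignment means a simple difference in means estimates the effect.
effect = outcome[treated == 1].mean() - outcome[treated == 0].mean()
print(f"Estimated effect: {effect:.2f} (true effect: 2.00)")
```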

Natural experiments: 

A study in which people are exposed to either experimental or control conditions by natural factors. While not controlled in the traditional sense, the assignment of interventions may still resemble randomised assignment. These groups are compared to assess how effective the experimental intervention was.

Medium to high reliability

Regression discontinuity design: 

A study in which people are assigned to intervention or control groups according to whether they fall above or below a cut-off threshold set before the test begins. Comparing the results from either side of the threshold (those who have received the intervention and those who haven't) provides an estimate of the intervention's effect where randomisation isn't possible.
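A minimal sketch of the idea, again with simulated and purely illustrative data: fit a straight line on each side of the cut-off and read off the gap between the two predictions at the cut-off itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Running variable, e.g. a needs-assessment score; people below the
# cut-off receive the intervention.
score = rng.uniform(-1, 1, n)
cutoff = 0.0
treated = score < cutoff

# Simulated outcome: a smooth trend in the score plus a jump of 1.5
# at the cut-off caused by the intervention.
outcome = 5 + 2 * score + 1.5 * treated + rng.normal(0, 1, n)

# Fit a straight line on each side of the cut-off; the gap between the
# two fitted values at the cut-off estimates the intervention's effect.
b_below = np.polyfit(score[treated], outcome[treated], 1)
b_above = np.polyfit(score[~treated], outcome[~treated], 1)
effect = np.polyval(b_below, cutoff) - np.polyval(b_above, cutoff)
print(f"Estimated effect at the cut-off: {effect:.2f} (true: 1.50)")
```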

Interrupted time series: 

A study that measures a particular outcome at multiple points in time, both before and after the introduction of an intervention. The aim is to show whether any change following the intervention is greater than what the pre-existing or underlying trend would predict.
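A minimal sketch with simulated monthly data (all values illustrative): a segmented regression separates the level change caused by the intervention from the trend that was already under way.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(48)                  # e.g. 48 monthly measurements
after = (t >= 24).astype(float)    # intervention introduced at month 24

# Simulated outcome: a pre-existing downward trend plus a level drop
# of 4.0 once the intervention starts.
y = 50 - 0.2 * t - 4.0 * after + rng.normal(0, 1, len(t))

# Segmented regression: intercept, underlying time trend, and a step
# term for the intervention. The step coefficient is the estimated
# effect over and above the pre-existing trend.
X = np.column_stack([np.ones_like(after), t, after])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"Estimated level change: {coef[2]:.2f} (true: -4.00)")
```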

Low to medium reliability

Instrumental variables: 

A statistical modelling approach in which a variable that influences participation in the intervention, but has no direct effect on the outcome, acts as a proxy for participation. This allows the intervention's effect to be separated from confounding factors.
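A minimal sketch with simulated data (the instrument, confounder, and effect size are all illustrative) showing how an instrument can recover the true effect in a setting where the naive comparison of participants and non-participants is biased:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

# Unobserved confounder that raises both participation and the outcome,
# which biases a naive comparison of participants and non-participants.
u = rng.normal(0, 1, n)

# Instrument: e.g. a randomly offered encouragement to take part. It
# shifts participation but has no direct effect on the outcome.
z = rng.integers(0, 2, n)
participates = (0.5 * z + 0.5 * u + rng.normal(0, 1, n) > 0.5).astype(float)

# True effect of participating is 2.0; the confounder u also raises y.
y = 1.0 + 2.0 * participates + 1.5 * u + rng.normal(0, 1, n)

# The naive difference in means is biased upwards by the confounder...
naive = y[participates == 1].mean() - y[participates == 0].mean()
# ...while the instrumental-variables (Wald) estimate is not.
iv = np.cov(z, y)[0, 1] / np.cov(z, participates)[0, 1]
print(f"Naive estimate: {naive:.2f}, IV estimate: {iv:.2f} (true: 2.00)")
```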

Propensity score matching:

A study in which participants and non-participants in an intervention are each assigned a ‘propensity’ score – a number representing the probability that a person with their observed characteristics would participate. Using this score, a matched comparison group is created, so that people who participated in the intervention can be compared with non-participants who had a similar probability of participating.
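A minimal sketch with simulated, purely illustrative data, using scikit-learn's LogisticRegression to estimate the propensity scores and simple nearest-neighbour matching on those scores:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 2000

# Two observed characteristics that drive both participation and outcome.
x = rng.normal(0, 1, (n, 2))
p_true = 1 / (1 + np.exp(-(x[:, 0] + 0.5 * x[:, 1])))
treated = rng.random(n) < p_true

# Simulated outcome: the true effect of the intervention is 1.0.
y = 2 * x[:, 0] + x[:, 1] + 1.0 * treated + rng.normal(0, 1, n)

# Step 1: estimate each person's propensity to participate from the
# observed characteristics.
ps = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]

# Step 2: match each participant to the non-participant with the closest
# propensity score, then compare outcomes within the matched pairs.
t_idx = np.where(treated)[0]
c_idx = np.where(~treated)[0]
matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]
effect = (y[t_idx] - y[matches]).mean()
print(f"Matched estimate: {effect:.2f} (true: 1.00)")
```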

Low reliability

Other forms of matching

Difference-in-difference without matching

Before versus after (pre-/post-test) designs

Not included

Case Studies

Qualitative assessments

For systematic reviews, we score each study using AMSTAR 2 (‘Assessing the Methodological Quality of Systematic Reviews’), a 16-item checklist. The items cover:

1. PICOS in inclusion criteria
2. An ex ante protocol
3. Rationale for included study designs
4. A comprehensive literature search
5. Duplicate screening
6. Duplicate data extraction
7. A list of excluded studies with justification
8. Adequate description of included studies
9. Adequate risk of bias assessment
10. Reporting of sources of funding
11. Appropriate use of meta-analysis
12. Risk of bias assessment for the meta-analysis
13. Allowance for risk of bias in discussing findings
14. Analysis of heterogeneity
15. Analysis of publication bias
16. Reporting of conflicts of interest

We score reviews as follows

High reliability: 13-16

Medium to high reliability: 10-12

Low to medium reliability: 7-9

Low reliability: 0-6
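As a minimal illustration of this rubric (a hypothetical helper function, not the Centre's own tooling), the mapping from a review's AMSTAR 2 total to its reliability band could be written as:

```python
def review_reliability(amstar_score: int) -> str:
    """Map a 0-16 AMSTAR 2 total to the reliability bands above."""
    if not 0 <= amstar_score <= 16:
        raise ValueError("AMSTAR 2 scores run from 0 to 16")
    if amstar_score >= 13:
        return "High reliability"
    if amstar_score >= 10:
        return "Medium to high reliability"
    if amstar_score >= 7:
        return "Low to medium reliability"
    return "Low reliability"

print(review_reliability(11))  # Medium to high reliability
```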

Feedback

This is version 1 of the framework. We're excited to see how you use it, and we want your help to refine it.
