Primary studies
Critical items
Study design: We have high confidence in findings from study designs best able to detect causal effects, such as randomized controlled trials.
Attrition: High levels of attrition, especially differential attrition between the treatment and comparison groups, reduce the confidence we can have in study findings.
Outcome measure: For study findings to be usable and meaningful, the outcome measures should be clearly described, preferably using existing, validated approaches.
Baseline balance: We can have less confidence in study findings if there are significant differences between the treatment and comparison groups at baseline (a brief worked check of attrition and baseline balance follows these items).
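The attrition and baseline balance items amount to simple arithmetic and statistical checks. The sketch below is illustrative only: the trial figures, the covariate, and helper names such as attrition_rate are assumptions made for the example rather than part of the appraisal tool; it shows the kind of quantities an assessor would look at when applying these two items.

from scipy import stats

def attrition_rate(n_baseline: int, n_endline: int) -> float:
    # Share of the randomized sample lost to follow-up (hypothetical helper).
    return 1 - n_endline / n_baseline

# Hypothetical trial: 400 participants randomized to each arm, unequal follow-up.
treat_loss = attrition_rate(n_baseline=400, n_endline=320)   # 20% attrition
comp_loss = attrition_rate(n_baseline=400, n_endline=360)    # 10% attrition
differential = abs(treat_loss - comp_loss)                    # 10 percentage points

print(f"Attrition: treatment {treat_loss:.0%}, comparison {comp_loss:.0%}, "
      f"differential {differential:.0%}")

# Baseline balance on one hypothetical covariate, summarized as a
# standardized mean difference and a two-sample t-test from summary statistics.
mean_t, sd_t, n_t = 52.0, 14.0, 400
mean_c, sd_c, n_c = 49.5, 15.0, 400
pooled_sd = ((sd_t ** 2 + sd_c ** 2) / 2) ** 0.5
smd = (mean_t - mean_c) / pooled_sd                           # roughly 0.17 SD apart
t_stat, p_value = stats.ttest_ind_from_stats(mean_t, sd_t, n_t, mean_c, sd_c, n_c)

print(f"Baseline SMD {smd:.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")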
Other items (assessed but not affecting the overall rating)
Blinding: The absence of blinding of participants and researchers can bias study findings. This risk remains even when blinding is not feasible.
Power calculations: Power calculations help determine the required sample size. Without them, there is a risk that studies are underpowered and so more likely to miss genuinely effective programmes (a sample-size sketch follows these items).
Description of intervention: A clear description of the intervention is needed to establish exactly what is being evaluated, so that effectiveness is not attributed to similar but distinct interventions.
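To illustrate the power calculations item, the sketch below applies the standard normal-approximation formula for a two-arm comparison of means, n per arm ≈ 2(z_alpha + z_beta)^2 / d^2, where d is the standardized effect size. The function name and the effect size, significance level, and power values are assumptions chosen for the example, not thresholds used by the appraisal tool.

import math
from scipy.stats import norm

def required_n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    # Approximate per-arm sample size to detect a standardized mean
    # difference of `effect_size` in a two-arm trial (normal approximation).
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # desired power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A programme expected to shift the outcome by 0.3 standard deviations needs
# roughly 175 participants per arm at alpha = 0.05 and 80% power.
print(required_n_per_arm(effect_size=0.3))   # -> 175

A study enrolling far fewer participants per arm than such a calculation suggests would be considered at risk of being underpowered under this item.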