Part of CogTale’s mission is to provide easy-to-read information about evidence synthesis in the field of cognitive interventions.

Research Quality

Learn how to interpret your search findings using a range of resources available on the CogTale website. Three scales are used to evaluate risk of bias and quality of evidence in the studies uploaded to CogTale.

PEDro Scale

The PEDro scale is a free, online resource developed by the Centre for Evidence-Based Physiotherapy. The PEDro scale evaluates a study’s methodological quality, allowing for the identification of study results which are valid and useful.

The scale consists of 11 items. The first item (“eligibility criteria were specified”) evaluates external validity (i.e. how ‘generalizable’ the findings of the study are to the wider population). Items 2-9 assess the study’s internal validity (i.e. the degree to which you can be confident that the results of the study are caused by the independent variable, such as the intervention, rather than by other factors). Items 10-11 determine whether the study results are interpretable (i.e. whether sufficient statistical information has been provided). A point is awarded for each satisfied criterion. The total PEDro score is determined by summing the scores of criteria 2-11 (criterion 1 is excluded), so the methodological quality of a study is ranked on a total score out of 10.
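The scoring rule above can be sketched in a few lines of Python. This is our own illustration, not code from CogTale or PEDro; the boolean-list representation of item responses is an assumption.

```python
# Hypothetical sketch of PEDro scoring. Assumes the 11 item responses
# are recorded as booleans (True = criterion satisfied).

def pedro_score(items: list[bool]) -> int:
    """Total PEDro score: the sum of items 2-11 (item 1 is excluded)."""
    if len(items) != 11:
        raise ValueError("The PEDro scale has exactly 11 items")
    # Item 1 (eligibility criteria) assesses external validity and
    # does not contribute to the total score.
    return sum(items[1:])

# Example: a study satisfying items 1-9 but not 10-11 scores 8/10.
responses = [True] * 9 + [False, False]
print(pedro_score(responses))  # 8
```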

The PEDro scale was based on the Delphi list developed by Verhagen and colleagues (for more information on the Delphi list please review: Verhagen et al., 1998). The reliability and validity of the PEDro scale have also been established by several studies (see: de Morton, 2009; Maher et al., 2003, as examples), and in some cases it has been shown to be more comprehensive than other measures of methodological quality (see: Bhogal et al., 2005).


JADAD Scale

The JADAD scale is a commonly used tool for assessing the methodological quality of controlled trials. The scale consists of 7 items that assess three key methodological features of controlled trials: (1) randomisation (i.e. when study participants are assigned to a treatment or control group by chance), (2) blinding (i.e. minimising the risk that the prior expectations of participants and researchers influence the reporting of results), and (3) withdrawals and dropouts (i.e. participants who fail to complete the study).

Researchers respond to questions using a Yes/No format (e.g. “was the study described as randomised?”). For items 1-5, one point (+1) is awarded for each satisfied criterion. Items 6 and 7 attract a negative score (-1). The total score of the JADAD scale is determined by summing the scores of criteria 1-7, and the methodological quality of the study is therefore based on a total score out of 5 (where 5 is the best score a study can achieve).
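As with the PEDro scale, the JADAD scoring rule can be sketched briefly. Again, this is a hypothetical illustration; the boolean encoding of Yes/No responses is our assumption.

```python
# Hypothetical sketch of JADAD scoring. Assumes Yes/No responses are
# recorded as booleans (True = Yes / criterion endorsed).

def jadad_score(items: list[bool]) -> int:
    """Total JADAD score out of 5: items 1-5 add a point, items 6-7 deduct one."""
    if len(items) != 7:
        raise ValueError("The JADAD scale has exactly 7 items")
    positives = sum(items[:5])   # items 1-5: +1 each when satisfied
    negatives = sum(items[5:])   # items 6-7: -1 each when endorsed
    return positives - negatives

# Example: all five positive criteria met, no deductions -> 5/5.
print(jadad_score([True] * 5 + [False, False]))  # 5
```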

The JADAD has been shown to be an easy-to-use tool, with established reliability and external validity (see: Olivio et al., 2008 for an example).

Cochrane Risk of Bias (ROB) Scale

The Cochrane Risk of Bias (ROB) tool provides a framework for evaluating potential sources of bias in the design, conduct, analysis, and reporting of results of randomised controlled trials. The ROB tool evaluates the methodological quality of trials based on six bias domains:

  • Selection bias: was participant allocation random, and concealed?
  • Performance bias: were participants and study personnel blinded to which intervention a participant received?
  • Detection bias: were outcome assessors blinded to which intervention a participant received?
  • Attrition bias: is there missing data, and how was this data treated?
  • Reporting bias: is there evidence of selective reporting?
  • Other bias: are there other potential sources of bias not covered by the domains above?

Researchers formulate domain-level “judgements” about the risk of bias (i.e. “low risk”, “high risk”, or “unclear risk”) using evidence from the trial paper, trial protocol, and other sources. These domain-level judgements then provide the basis for an overall assessment of the risk of bias for the study being evaluated.

Meta-Analyses Explained

A meta-analysis is a statistical technique that combines the results of comparable studies in order to reach a single overall conclusion. For example, a researcher may conduct a meta-analysis examining the effectiveness of a particular cognitive intervention program in individuals with a diagnosis of Mild Cognitive Impairment (MCI). This would involve pooling the results of randomised controlled trials that assessed the intervention program of interest in participants with MCI.

There are several advantages to conducting meta-analyses. Firstly, when compared with the analysis of a single study, the results of a meta-analysis carry greater statistical power, owing to the increased sample size (i.e. number of participants) and the greater variability in the sample (i.e. age, gender, etc.) that typically result from combining studies. Meta-analyses also provide more precise estimates of the magnitude of the effect(s) of the treatment being investigated. This information can be helpful in determining how useful an intervention or treatment may be in the wider population.
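To make the idea of pooling concrete, here is a minimal sketch of one common way results are combined in a meta-analysis: fixed-effect inverse-variance weighting, in which each study is weighted by the precision of its estimate. The effect sizes and standard errors below are made-up illustrative numbers, not data from any real trials.

```python
import math

def pool_fixed_effect(effects, std_errors):
    """Fixed-effect meta-analysis: return the pooled effect and its standard error."""
    weights = [1 / se**2 for se in std_errors]  # precision (inverse-variance) weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Three hypothetical trials of the same intervention (e.g. Cohen's d values).
effects = [0.30, 0.45, 0.25]
std_errors = [0.10, 0.15, 0.12]
pooled, se = pool_fixed_effect(effects, std_errors)
print(f"pooled effect = {pooled:.3f} (SE = {se:.3f})")
```

Note how the pooled standard error is smaller than any single study's, reflecting the increased statistical power described above.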

Effect Sizes Explained

Put simply, an effect size measures the magnitude of an effect. For example, if one study group received a cognitive treatment and the other group received no treatment, then the effect size would measure the effectiveness of the cognitive treatment. In other words, the effect size tells us how much more effective the cognitive treatment was compared to no treatment.

Most studies use a measure of statistical significance (i.e. evidence that the observed differences are unlikely to be due to chance) to support their findings. However, statistical significance is limited in several ways. Firstly, it does not provide any information about the magnitude of the difference between the two treatments/measures/groups being assessed in the study (i.e. how much more effective the treatment was compared to no treatment, as described above). In addition, with a large enough sample, most studies will produce statistically significant results even when the intervention or treatment has only small effects. Small effects, even if significant, may have little clinical utility. Lastly, statistical significance cannot be compared across studies, which limits our ability to compare the results of different treatments across different studies.
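The second limitation can be demonstrated with a small simulation of our own (the numbers are invented for illustration): with a large enough sample, even a tiny true effect produces a very small p-value.

```python
import math
import random
from statistics import NormalDist, mean, stdev

# Simulate two large groups whose true difference is tiny (d of about 0.1).
random.seed(0)
n = 10_000
treatment = [random.gauss(0.1, 1.0) for _ in range(n)]  # small true effect
control = [random.gauss(0.0, 1.0) for _ in range(n)]

# Two-sample z-test on the difference in means.
diff = mean(treatment) - mean(control)
se = math.sqrt(stdev(treatment) ** 2 / n + stdev(control) ** 2 / n)
z = diff / se
p = 2 * (1 - NormalDist().cdf(abs(z)))

# The effect is small, yet the result is highly "significant".
print(f"effect size d = {diff:.2f} (small), z = {z:.1f}, p = {p:.2g}")
```

A small but clinically trivial effect can therefore look impressive if judged by the p-value alone, which is why effect sizes matter.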

Common Effect Sizes

Cohen’s d

Measures the effect size between two groups; is commonly used in meta-analysis.

Hedges’ g

Similar to Cohen’s d, but preferred when sample sizes are very small (e.g. fewer than 20 people).

Pearson’s r

Measures the strength and direction of a correlation between two variables.

Relative Risk (RR)

Reflects the probability of an event occurring in an exposed group, relative to the probability of the same event occurring in a non-exposed group.

Odds Ratio (OR)

Reflects the odds of a desired outcome in the intervention group relative to the odds of a similar outcome in the control group.
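The effect sizes listed above follow standard textbook formulas; here is a sketch applying them to made-up numbers. The function and variable names are our own, and the example data are purely illustrative.

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Standardised mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def hedges_g(d, n1, n2):
    """Cohen's d with a small-sample correction factor."""
    return d * (1 - 3 / (4 * (n1 + n2) - 9))

def relative_risk(events_exposed, n_exposed, events_control, n_control):
    """Probability of the event in the exposed group over that in the control group."""
    return (events_exposed / n_exposed) / (events_control / n_control)

def odds_ratio(events_treat, n_treat, events_control, n_control):
    """Odds of the outcome under treatment relative to the odds under control."""
    odds_t = events_treat / (n_treat - events_treat)
    odds_c = events_control / (n_control - events_control)
    return odds_t / odds_c

# Illustrative numbers: cognition score means 105 vs 100, SD 10, n = 15 per group.
d = cohens_d(mean1=105, mean2=100, sd1=10, sd2=10, n1=15, n2=15)
print(f"d = {d:.2f}, g = {hedges_g(d, 15, 15):.2f}")   # g is slightly smaller than d
print(f"RR = {relative_risk(20, 100, 10, 100):.1f}")   # 2.0
print(f"OR = {odds_ratio(20, 100, 10, 100):.2f}")      # 2.25
```

Notice that Hedges’ g shrinks Cohen’s d slightly, which is exactly the small-sample correction mentioned above.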

How do I interpret effect sizes?

While there are no set definitions for interpreting effect sizes, some values have been offered cautiously as a guideline or “rule of thumb”. For example, Cohen (1988) proposed that a d of .2 indicates a small effect, a d of .5 indicates a medium effect, and a d of .8 indicates a large effect. However, it is important when interpreting an effect size to refer to prior studies to see where your findings fit into the wider literature, and to also consider the methodological quality of the study, and the clinical significance of the findings (i.e. has the intervention resulted in a meaningful change in the participants’ lives?).
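As a convenience, Cohen’s (1988) rule-of-thumb cut-offs could be wrapped in a small helper like the hypothetical one below. These thresholds are guidelines only, for the reasons given above.

```python
# Hypothetical helper applying Cohen's (1988) rule-of-thumb cut-offs.
# Interpretation should also weigh prior literature, study quality,
# and clinical significance, not this label alone.

def interpret_cohens_d(d: float) -> str:
    size = abs(d)  # the sign only indicates direction
    if size >= 0.8:
        return "large"
    if size >= 0.5:
        return "medium"
    if size >= 0.2:
        return "small"
    return "negligible"

print(interpret_cohens_d(0.35))  # small
```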

