**Coding**: the procedure of extracting from primary studies the information needed to perform a meta-analysis.

**Cohen’s Kappa**: statistic used to evaluate the degree of agreement between raters/judges (for instance, it can be used to assess agreement between reviewers on the inclusion or exclusion of studies in a meta-analysis).
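
As a sketch, assuming two reviewers who each coded the same set of studies as include/exclude (the data below are invented), kappa can be computed from observed agreement and the agreement expected by chance:

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two equal-length lists of categorical ratings."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    categories = set(rater1) | set(rater2)
    # Observed agreement: proportion of items both raters coded identically.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected agreement under independence, from each rater's marginals.
    p_e = sum((rater1.count(c) / n) * (rater2.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical reviewers screening 10 studies ("in" = include)
r1 = ["in", "in", "out", "in", "out", "out", "in", "in", "out", "in"]
r2 = ["in", "out", "out", "in", "out", "out", "in", "in", "out", "in"]
print(round(cohens_kappa(r1, r2), 3))  # → 0.8
```

Here the reviewers agree on 9 of 10 studies (p_o = 0.9) while chance agreement is 0.5, giving kappa = 0.8.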

**Confidence interval (CI)**: the range within which the true effect size is likely to lie. Usually a 95% CI is computed for each study effect size and for the overall (combined) effect size.
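
A minimal sketch, assuming a normally distributed effect-size estimate with a known standard error (1.96 is the two-sided 95% normal critical value; the numbers are invented):

```python
# 95% confidence interval for an effect size, large-sample normal
# approximation: effect_size ± 1.96 * standard error.
def ci95(effect_size, se):
    half = 1.96 * se
    return (effect_size - half, effect_size + half)

lo, hi = ci95(0.50, 0.10)
print(round(lo, 3), round(hi, 3))  # → 0.304 0.696
```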

**Cumulative meta-analysis**: a meta-analysis performed first with one study, then with two studies, then with three, and so on, until all studies have been included; it shows how the combined effect size evolves as evidence accumulates.
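
The step-by-step pooling can be sketched as follows, here using fixed-effect inverse-variance weights and invented effect sizes and variances:

```python
# Cumulative meta-analysis: recompute the pooled estimate each time
# one more study is added.
def pooled(effects, variances):
    """Fixed-effect inverse-variance pooled effect size."""
    w = [1 / v for v in variances]
    return sum(wi * e for wi, e in zip(w, effects)) / sum(w)

effects = [0.30, 0.50, 0.20, 0.45]     # invented study effect sizes
variances = [0.04, 0.09, 0.02, 0.05]   # invented within-study variances

for k in range(1, len(effects) + 1):
    print(k, round(pooled(effects[:k], variances[:k]), 3))
```

Each printed line is the combined estimate after the first k studies; the final line equals the ordinary fixed-effect result for all studies.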

**Effect size**: a measure of the magnitude of a relationship or of a difference between groups. It can be based on means (raw unstandardized mean difference; standardized mean difference such as Cohen’s d or Hedges’ g), binary data (risk ratio, odds ratio, risk difference, etc.), correlations (r), or survival data (hazard ratio).
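
As an illustration, Cohen’s d and its small-sample-corrected variant Hedges’ g can be computed from two group means, standard deviations, and sample sizes (the numbers below are invented):

```python
import math

def cohens_d(m1, m2, sd1, sd2, n1, n2):
    """Standardized mean difference using the pooled standard deviation."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

def hedges_g(d, n1, n2):
    """Hedges' g: d with the small-sample correction J ~ 1 - 3/(4*df - 1)."""
    df = n1 + n2 - 2
    return d * (1 - 3 / (4 * df - 1))

d = cohens_d(105.0, 100.0, 10.0, 10.0, 20, 20)
print(round(d, 3), round(hedges_g(d, 20, 20), 3))  # → 0.5 0.49
```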

**Exclusion criteria**: criteria that define which studies should be excluded from a meta-analysis.

**Fixed-effect model**: statistical model used to combine the study effect sizes. Under this model (as opposed to the random-effects model), a single true effect size is assumed to be common to all the studies. In assigning a weight to each study, it takes into account only one source of variance: the within-study variance.
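
A minimal fixed-effect sketch, assuming each study supplies an effect size and its within-study variance (invented numbers): weights are the inverse variances, and the pooled estimate is the weighted mean.

```python
import math

# Fixed-effect pooling: w_i = 1 / v_i (inverse of the within-study variance).
effects = [0.30, 0.50, 0.20]     # invented study effect sizes
variances = [0.04, 0.09, 0.02]   # invented within-study variances

weights = [1 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se = math.sqrt(1 / sum(weights))  # standard error of the pooled estimate
print(round(pooled, 3), round(se, 3))  # → 0.268 0.108
```

Note how the most precise study (variance 0.02, weight 50) pulls the pooled estimate toward its value of 0.20.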

**Forest plot**: plot of effect sizes (with confidence intervals) of all the studies included in the meta-analysis.

**Gray (grey) literature**: literature produced at all levels of government, academia, business, and industry, in print and electronic formats, that is not controlled by commercial publishers. The term is also used for literature with limited dissemination; because it is difficult to retrieve, it is rarely included in meta-analyses.

**Heterogeneity**: term used to refer to the differences between the studies included in a meta-analysis.

**Inclusion criteria**: criteria that define which studies should be included in a meta-analysis.

**Meta-analysis**: quantitative synthesis of study findings available on a specific topic.

**Meta-regression**: statistical analysis used to test the effect of a continuous moderator.
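
As a sketch, a minimal fixed-effect meta-regression is a weighted least-squares fit of effect size on one continuous moderator, with inverse-variance weights (moderator values, effects, and variances below are invented):

```python
def wls_fit(x, y, w):
    """Weighted least-squares intercept and slope for one predictor."""
    sw = sum(w)
    xb = sum(wi * xi for wi, xi in zip(w, x)) / sw   # weighted mean of x
    yb = sum(wi * yi for wi, yi in zip(w, y)) / sw   # weighted mean of y
    b = (sum(wi * (xi - xb) * (yi - yb) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - xb) ** 2 for wi, xi in zip(w, x)))
    return yb - b * xb, b  # intercept, slope

doses = [10, 20, 30, 40]                 # hypothetical continuous moderator
effects = [0.20, 0.35, 0.55, 0.70]       # invented study effect sizes
weights = [1 / v for v in [0.04, 0.05, 0.04, 0.05]]

a, b = wls_fit(doses, effects, weights)
print(round(a, 3), round(b, 3))  # → 0.026 0.017
```

A positive slope here would suggest that studies with larger moderator values tend to report larger effects.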

**Moderator**: variable that might explain differences in the effect sizes. If the moderator is categorical, its effect is tested by a subgroup analysis; if the moderator is continuous, its effect is tested by a meta-regression.

**Primary study**: term used to refer to a study that is included in a meta-analysis.

**Publication bias**: exists when published studies differ systematically from unpublished studies (gray literature). Several methods for evaluating publication bias are available (e.g., the funnel plot, Rosenthal’s Fail-safe N, Orwin’s Fail-safe N, and Duval and Tweedie’s trim-and-fill method).
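
As one illustration, Rosenthal’s Fail-safe N estimates how many unpublished null-result studies would be needed to raise the combined one-tailed p-value above .05, assuming the common form N = (ΣZ)² / z_α² − k with z_α = 1.645 (the Z-values below are invented):

```python
# Rosenthal's Fail-safe N from the Z-values of k studies.
def fail_safe_n(z_values, z_alpha=1.645):
    k = len(z_values)
    return (sum(z_values) ** 2) / (z_alpha ** 2) - k

z = [2.1, 1.8, 2.5, 1.2, 2.9]  # invented study Z-values
print(round(fail_safe_n(z), 1))  # → 35.7
```

A large Fail-safe N relative to k is usually read as evidence that the combined result is robust to unpublished null findings.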

**Random-effects model**: statistical model used to combine the study effect sizes. Under this model (as opposed to the fixed-effect model), the true effects in the studies are assumed to have been sampled from a distribution of effects. In assigning a weight to each study, it takes into account two sources of variance: the within-study variance and the between-studies variance.
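
A sketch using the DerSimonian–Laird estimator: the between-studies variance tau² is estimated from Cochran’s Q and added to each within-study variance before weighting (the data are invented):

```python
def dl_pool(effects, variances):
    """DerSimonian-Laird random-effects pooled effect and tau^2."""
    k = len(effects)
    w = [1 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q: weighted squared deviations from the fixed-effect mean.
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)  # truncated at zero
    # Random-effects weights use within- plus between-studies variance.
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2

pooled, tau2 = dl_pool([0.10, 0.60, 0.90], [0.02, 0.03, 0.04])
print(round(pooled, 3), round(tau2, 4))  # → 0.518 0.1433
```

Because tau² is added to every study’s variance, the random-effects weights are more similar to one another than the fixed-effect weights, so small studies get relatively more influence.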

**Reliability generalization**: meta-analysis of Cronbach’s alpha coefficients.

**Search strategy**: strategy used to identify primary studies to be included in a meta-analysis.

**Subgroup analysis**: statistical analysis used to test the effect of a categorical moderator.

**Vote counting**: unlike meta-analysis, vote counting does not provide a quantitative summary of the results of primary studies; instead, it draws conclusions about an area of study by comparing the number of studies reporting significant results with the number reporting non-significant or negative results.
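
The tally itself is trivial, which is exactly the method’s limitation: only significance categories are counted, not effect magnitudes (the outcome labels below are invented):

```python
from collections import Counter

# Vote counting: classify each study's result and count the categories.
# "sig+" = significant positive, "sig-" = significant negative, "ns" = not
# significant.
outcomes = ["sig+", "sig+", "ns", "sig+", "ns", "sig-", "sig+", "ns"]
tally = Counter(outcomes)
print(tally["sig+"], tally["ns"], tally["sig-"])  # → 4 3 1
```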

**Weight**: in a meta-analysis, when study effect sizes are combined, a weight is assigned to each study. The weight is usually the inverse of the study’s variance, and its value depends on the model (fixed-effect vs. random-effects) chosen for combining the effect sizes.