For ISD, we use the matching percentiles method, as deployed by Lambsdorff (2007) in the Corruption Perceptions Index, whereby values are matched across cases on the basis of country rankings. The ranks of countries on each successive indicator added to the index are used to assign them equivalent values based on their position on that additional measure. Variables are iteratively added to produce the index.

The basic assumption behind this methodology is that for each of the dimensions of social development there is some latent value Li representing the objective level of that dimension in country i. Each of the available indicators yi represents level Li under a different functional transformation f and with varying degrees of measurement error εi, such that:

yi = f(Li) + εi

Because we are unable to estimate the functional form f, the aggregation methodology is nonparametric, with no assumptions regarding the linearity or otherwise of the distribution of the values in y. We merely assume that the relative position of countries on y reflects a better or worse underlying condition with respect to L. The ranks of successive indicators used in the index are then utilised in order to assign values to countries, based on the values assigned to the same sample of countries already in the measure. Thus if a new indicator is added to the index that has a sample of five countries, Botswana (6.8), Nigeria (5.5), Sudan (2.4), Burundi (3.1) and Tanzania (7.2), and the equivalent scores for these countries in the index thus far are 0.55, 0.40, 0.10, 0.11, and 0.35, then Tanzania will be assigned the maximum equivalent value of 0.55, Botswana the second value of 0.40, Nigeria 0.35, Burundi 0.11, and Sudan 0.10.
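The rank-matching step can be sketched in a few lines of Python. This is a hypothetical helper written to mirror the worked example above, not the ISD implementation; the function and variable names are illustrative:

```python
def matching_percentiles(existing_scores, new_indicator):
    """Assign each country in the new source's sample the existing-index
    value that matches its rank on the new indicator."""
    sample = list(new_indicator)
    # countries ranked best to worst on the new indicator
    by_new = sorted(sample, key=lambda c: new_indicator[c], reverse=True)
    # existing index values for the same sample, also sorted best to worst
    values = sorted((existing_scores[c] for c in sample), reverse=True)
    # the top-ranked country on the new source gets the top existing value, etc.
    return dict(zip(by_new, values))

existing = {"Botswana": 0.55, "Nigeria": 0.40, "Sudan": 0.10,
            "Burundi": 0.11, "Tanzania": 0.35}
new_source = {"Botswana": 6.8, "Nigeria": 5.5, "Sudan": 2.4,
              "Burundi": 3.1, "Tanzania": 7.2}
matched = matching_percentiles(existing, new_source)
```

Note that only the ordering of the new indicator is used; its cardinal values (6.8, 5.5, …) never enter the calculation, which is what makes the method nonparametric.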

The matching percentiles method is iterative, with each indicator added to the index in successive rounds that progressively refine the country scores (cf. Lambsdorff 2007). The indicators to be combined are first sorted S1, S2 … Sn across the n different sources. As successive indicators are added, the standard deviation of the estimate is held constant among the affected countries, to prevent their scores from tending toward the mean.
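One way the combination-with-constant-spread step could look is sketched below. This assumes the updated score is a simple average of the old score and the rank-matched value, rescaled so that the standard deviation among the countries covered by the new source is unchanged; Lambsdorff's exact procedure may differ in its details:

```python
import statistics

def combine_preserving_sd(old_scores, matched_values):
    """Average old scores with rank-matched new values, then rescale the
    affected countries so their standard deviation is unchanged.
    A sketch of the idea only, not the published CPI/ISD algorithm."""
    affected = list(matched_values)
    combined = {c: (old_scores[c] + matched_values[c]) / 2 for c in affected}
    mu = statistics.mean(combined.values())
    sd_new = statistics.stdev(combined.values())
    sd_old = statistics.stdev(old_scores[c] for c in affected)
    # stretch the combined scores back out so averaging does not
    # pull the affected countries toward the mean
    scale = sd_old / sd_new if sd_new else 1.0
    updated = dict(old_scores)  # countries outside the sample are untouched
    for c in affected:
        updated[c] = mu + scale * (combined[c] - mu)
    return updated

old = {"Botswana": 0.55, "Nigeria": 0.40, "Sudan": 0.10,
       "Burundi": 0.11, "Tanzania": 0.35}
matched = {"Tanzania": 0.55, "Botswana": 0.40, "Nigeria": 0.35,
           "Burundi": 0.11, "Sudan": 0.10}
updated = combine_preserving_sd(old, matched)
```

After the update the spread of the affected countries' scores equals their spread before the update, which is the stated purpose of holding the standard deviation constant.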

The matching percentiles method brings with it several advantages for creating a set of indices of this nature. First, it overcomes the problem of sampling bias. This is pervasive when a new data source only covers a limited and unrepresentative sample of countries, as country scores on the new indicator will reflect not only a difference in scaling (β) but also a difference in the constant (α). A further advantage of the technique is that it allows us to keep adding successive waves of indicators, even with very small samples, that can be used to continually 'refine' the country scores simply by using information on relative rankings. Whereas regression-based techniques of aggregation encounter difficulties in incorporating small-sample sources, because α and β cannot be reliably estimated when the sample size is very low, no such difficulties affect the matching percentiles technique. This is critically important here, where the present data remain incomplete, such that it will be necessary to keep adding new indicators in future years as successive data sources become available, even where such sources cover relatively few countries.

Why combine indicators?

The basic rationale is that all indicators have some level of measurement error. Observational error may exist because of unreliability in the instrument used to record a particular phenomenon: surveys, for example – the means used to gather many development indicators – may be subject to reporting biases or sampling error; official statistics on the other hand may have been compiled using different methodologies. There is also error that is attributable to the use of indicators with low concept validity, that is, when the selected indicator, however reliably gathered, only imperfectly corresponds to the latent variable under consideration. The percentage of women in employment, for example, is a weaker indicator of gender empowerment than the percentage of women in managerial occupations, as the former includes employment in subordinate positions.

We can deal with measurement error in several ways. The first and perhaps most obvious precaution is to employ greater scrutiny in the selection and consideration of indicators. This however presumes a high degree of knowledge on the part of the analyst: it can be difficult to assess the reliability of any given measure in isolation, especially in the absence of familiarity with the method used to generate those values. Validity is easier to determine, though here again we often have to rely on complex assumptions regarding the causal relationship between what we are measuring and what we seek to measure. For example, it may be open to contention whether civic activism is best measured by features of the institutional environment (the number of media organisations, freedom of information), features of citizen behaviour (engagement in local civic groups, participation in voting, petitions and demonstrations), or some other feature of that society (e.g. the number of international NGOs). We often face a trade-off between reliability, validity, and representativeness: a given indicator, such as the income ratio between different ethnic groups, may be a valid and reliable measure of social exclusion, but available for very few countries; a survey item on attitudes toward other ethnic groups is certainly valid and may be widely available, but subject to survey response bias. There is, in short, rarely a single indicator that adequately measures the concept we are trying to quantify.

The second strategy for mitigating measurement error, besides simply exercising rigor in selection, is to combine multiple indicators. Combining indicators does not eliminate measurement error, but if one assumes that errors are uncorrelated between data sources and that the size of the error is constant across items, then the combination of multiple sources will progressively reduce error as the number of indicators increases. The intuition here is simple: if error ε is randomly distributed around mean 0, the average of ε over n repeated draws will tend asymptotically to zero. Combination of multiple indicators has therefore become a standard means of quantifying concepts whose presence or absence is difficult to tap directly, as has recently been pioneered in studies of corruption and other dimensions of governance (Lambsdorff 2007; Kaufmann et al. 1999; Kaufmann et al. 2006).
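The error-reduction claim is easy to check with a small simulation. The code below is illustrative only (the latent values, error distribution, and sample size are arbitrary choices): each "country" has a latent value L, each indicator observes L plus independent noise, and averaging more indicators shrinks the mean absolute error of the combined score:

```python
import random
import statistics

random.seed(42)
latent = [random.uniform(0, 1) for _ in range(200)]  # 200 hypothetical countries

def mean_abs_error(n_indicators, noise_sd=0.3):
    """Mean absolute error of the average of n noisy indicators of L."""
    errors = []
    for L in latent:
        # each indicator is the latent value plus independent Gaussian noise
        estimate = statistics.mean(L + random.gauss(0, noise_sd)
                                   for _ in range(n_indicators))
        errors.append(abs(estimate - L))
    return statistics.mean(errors)

e1 = mean_abs_error(1)   # a single indicator
e9 = mean_abs_error(9)   # nine combined indicators
```

Under these assumptions the error of the combined score falls roughly as 1/√n, which is the asymptotic argument made above.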


Kaufmann, D., Kraay, A. and Zoido-Lobatón, P. (1999). 'Governance Matters', Policy Research Working Paper 2196, Development Economics Group, World Bank.

Kaufmann, D., Kraay, A. and Mastruzzi, M. (2006). 'Measuring Corruption: Myths and Realities', Development Outreach, World Bank.

Lambsdorff, J. G. (2007). ‘The Methodology of the Corruption Perceptions Index 2007’, Transparency International.

See Transparency International for the most recent information on the CPI and explanations of its methods.