Module 4 corresponds to impacts on beneficiary households, where LAPs seek to provide security and legal certainty of land ownership.

Module 4: Household Livelihoods

The various designs for evaluation

Impact evaluation requires a suitable design in order to obtain, analyse and interpret the information that will be used for decision-making after the evaluation. The design determines the ability to detect and understand the processes and impact of a programme in a context where multiple factors influence its operation and results1. Many designs have been proposed for programme evaluation; this tool proposes two types, one quantitative and one qualitative.

Quantitative designs

To ensure methodological rigour, an impact evaluation should take into account the counterfactual scenario, in other words, what would have happened without the project intervention. The main challenge is to identify a group which has not benefited from the intervention but whose characteristics are similar to those of the treatment group; this group, called the comparison group, must be equivalent to the treatment group in at least three respects3.

  1. Both groups should have similar characteristics in the absence of the programme.
  2. The average characteristics of the groups should be the same, although each unit of the treatment group does not need to be identical to each unit of the comparison group.
  3. Neither group should be exposed to other interventions during the evaluation period, or at least not exposed to them differently.
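As a minimal sketch (with hypothetical household data), the second condition can be checked by comparing group averages for an observed characteristic, for example plot size:

```python
# Hypothetical baseline characteristic: plot size (hectares) per household.
treatment = [1.2, 0.9, 1.5, 1.1, 1.3]
comparison = [1.0, 1.4, 1.2, 0.8, 1.6]

def mean(xs):
    return sum(xs) / len(xs)

# Units need not match one to one; only the group averages should be similar.
balance_gap = round(abs(mean(treatment) - mean(comparison)), 6)
print("balance gap:", balance_gap)
```

In practice this check would be run over every relevant observed characteristic, with a statistical test rather than a raw gap.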

Taking these two groups into consideration, the impact evaluation shows whether the project has had impacts, the extent of those impacts and who has benefited. Impact evaluation can thus provide a firm basis for correcting and formulating subsequent policies.

Determining the counterfactual scenario is essential for selecting among the various ex post evaluation techniques. These techniques fall into two categories: experimental (random) designs and quasi-experimental (non-random) designs4. It should be noted that, whatever technique is chosen, it remains a complex matter to separate the programme effect from hypothetical conditions that can be affected by the history, selection bias and/or contamination of the sample.

Experimental or random control designs

The experimental or randomized control design is considered the most technically robust evaluation design. It consists in the random selection of beneficiaries within a clearly defined group of individuals5. The random allocation of programme services or interventions creates two statistically identical groups: one participating in the programme (treatment group, Tr = 1) and one which, while meeting all the conditions for participation, remains outside the programme (control group, Co = 0). Because assignment is random, there should be no difference (in expected value) between the two groups.

Measurement of impact therefore involves, once the relevant programme intervention time has elapsed, quantifying the impact variable(s) for both the treatment group and the control group, simultaneously and over the same period, and then analysing the differences between the two. In operational terms, statistically representative samples of the two groups must be measured and the average impact of the programme on an outcome variable calculated.
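With hypothetical post-intervention data, the calculation described above reduces to a difference in group means (a sketch, not a full estimator with standard errors):

```python
# Hypothetical outcome variable (e.g. annual household income, USD) measured
# simultaneously in the treatment and control groups after the intervention.
treated_outcomes = [520, 480, 610, 550]
control_outcomes = [450, 470, 500, 460]

def mean(xs):
    return sum(xs) / len(xs)

# Under random assignment, the difference in means estimates the average
# impact of the programme on the outcome variable.
impact = mean(treated_outcomes) - mean(control_outcomes)
print("average impact:", impact)  # 540.0 - 470.0 = 70.0
```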

It should be noted that the model has a practical drawback: public policies rely on diagnostic methods for targeting and identifying beneficiaries, whereas randomization raises the question of the political feasibility of excluding from a programme, at random, a group of eligible beneficiaries who, as such, need its services. Random allocation can therefore be questioned when the purposes of compartmentalized and/or targeted policies are taken into account.

Quasi-experimental designs

The main difference between quasi-experimental designs and pure experimental designs is that participation in the programme is not determined by a random selection of beneficiaries. Instead, participation occurs:

  1. because individuals themselves choose to take part;
  2. because an official agent makes this decision;
  3. because of a combination of both.

In quasi-experimental designs, the counterfactual scenario is defined from individuals who are not taking part in the programme and who will form the comparison group6. It should be noted that this design presents a problem of validity, namely the impossibility of generalizing the results of the evaluation to the target population as a whole. This can occur, for example, when the samples are not representative, or when the programmes are not representative, either due to an effect of scale or because the treatment differs from the planned implementation.

This design includes constructed controls or pair-matching methods, which aim to obtain an ideal comparison that corresponds to the treatment group within a large population. The most widely used type of matching is propensity score matching, in which the comparison group is matched to the treatment group on a set of observed characteristics, using the "propensity score" (the predicted probability of participation given the observed characteristics). The more accurate the propensity score, the better the matching will be. A good comparison group comes from the same economic setting and has been given the same questionnaire, by similarly trained interviewers, as the treatment group (see Fact sheet questionnaire design).
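The matching step can be sketched as follows, assuming the propensity scores have already been estimated (the scores and unit labels here are hypothetical):

```python
# Hypothetical propensity scores: predicted probability of participation
# given observed characteristics (in practice from a logit/probit model).
treated_scores = {"T1": 0.72, "T2": 0.55, "T3": 0.80}
comparison_pool = {"C1": 0.50, "C2": 0.70, "C3": 0.78, "C4": 0.30}

# Nearest-neighbour matching: pair each treated unit with the comparison
# unit whose propensity score is closest to its own.
matches = {
    t: min(comparison_pool, key=lambda c: abs(comparison_pool[c] - score))
    for t, score in treated_scores.items()
}
print(matches)  # {'T1': 'C2', 'T2': 'C1', 'T3': 'C3'}
```

Real applications match on scores from a fitted participation model and often use calipers or matching with replacement; this sketch only shows the pairing logic.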

There are several processes for dealing with information from this design:

1. Double difference or difference-in-differences method
This compares a treatment group and a comparison group before the programme (first difference) and after it (second difference). When propensity scores are used, comparators whose scores fall outside the range observed for the treatment group must be omitted. This method corresponds to the quasi-experimental type because group selection follows eligibility and targeting criteria, which establish differences that are both observable and non-observable7.
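With hypothetical group means, the two differences combine as follows:

```python
# Hypothetical mean outcome (e.g. a land-value index) for each group,
# measured before (ex ante) and after (ex post) the programme.
treat_before, treat_after = 100.0, 130.0
comp_before, comp_after = 100.0, 110.0

first_diff = treat_after - treat_before   # change in the treatment group
second_diff = comp_after - comp_before    # change in the comparison group

# The double difference nets out trends common to both groups.
did_estimate = first_diff - second_diff
print("difference-in-differences:", did_estimate)  # 30.0 - 10.0 = 20.0
```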

2. Instrumental variables or statistical control method
This method uses one or more variables that influence participation but not the results once participation has taken place. It identifies the exogenous variation in results attributable to the programme, recognizing that the programme was set up intentionally rather than randomly. The "instrumental variables" are first used to predict participation in the programme, and the variation in the results indicator is then observed against the predicted values8.
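A minimal illustration with a binary instrument (all values hypothetical) is the Wald form of the instrumental-variables estimator: the ratio of the outcome difference to the participation difference across instrument values:

```python
# Hypothetical data per household: z = instrument (e.g. an eligibility rule
# unrelated to outcomes), d = actual participation, y = results indicator.
z = [1, 1, 1, 1, 0, 0, 0, 0]
d = [1, 1, 1, 0, 1, 0, 0, 0]
y = [12, 14, 13, 8, 11, 7, 6, 7]

def mean(xs):
    return sum(xs) / len(xs)

# Differences across instrument values: in outcomes and participation rates.
y_gap = mean([yi for yi, zi in zip(y, z) if zi]) - mean([yi for yi, zi in zip(y, z) if not zi])
d_gap = mean([di for di, zi in zip(d, z) if zi]) - mean([di for di, zi in zip(d, z) if not zi])

# Effect of participation, using only the exogenous variation.
iv_effect = y_gap / d_gap
print("IV estimate:", iv_effect)  # 4.0 / 0.5 = 8.0
```

Full applications use two-stage least squares with additional controls; this sketch shows only the logic of scaling the outcome gap by the participation gap.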

3. Reflexive comparisons method
For this method, a baseline or reference survey of the participants is carried out before the intervention, with a subsequent follow-up survey. The baseline survey provides the comparison group, and the effect is measured as the change in the results indicators before and after the intervention.
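With hypothetical survey data, the reflexive estimate is simply the average before/after change for the same participants:

```python
# Hypothetical results indicator for the same participants, measured in the
# baseline (reference) survey and in the follow-up survey.
baseline = [40, 55, 48, 60]
follow_up = [50, 58, 55, 65]

# Average before/after change; note the method attributes any external trend
# over the period to the programme, which is its main limitation.
changes = [after - before for before, after in zip(baseline, follow_up)]
reflexive_effect = sum(changes) / len(changes)
print("reflexive estimate:", reflexive_effect)  # 25 / 4 = 6.25
```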

Qualitative designs

Qualitative tools are largely used to understand and evaluate the social processes surrounding the implementation of a programme (disputes arising from the programme, or reasons why beneficiaries do not use the services offered) or organizational behaviour (organizational culture or climate)9.

Instead, an attempt is made to understand the processes, behaviour, conditions and the way in which the individuals or groups studied perceive the results11. Generally speaking, a qualitative investigation will have a high level of "validity" insofar as its results "reflect" as complete, clear and representative an image as possible of the reality or situation studied12.

This design includes qualitative methods such as participant observation, interviews, and workshops with focus groups. Qualitative designs can generally combine several models and techniques and can be combined with a quantitative design; these techniques thus provide key information about the perspectives of beneficiaries, the value the programmes have for them, processes that might have affected the results and a more detailed interpretation of the results observed in the quantitative analysis (see fact sheets Guide for workshops with focus groups, Cost analysis and time of registration procedures, Design and processing of household surveys, Monitoring disputes relating to land tenure).


1 De la Orden, A. (1990).
2 Gertler, P. et al. (2011); Blasco, J. & Casado, D. (2009); Bedi, T. et al. (2006); Baker, J. (2000).
3 Blasco, J. & Casado, D. (2009).
4 Baker, J. (2000).
5 Gertler, P. et al. (2011).
6 Blasco, J. & Casado, D. (2009).
7 For more information see: Ministerio de Hacienda, Gobierno de Chile (2009).
8 For more information see: Gertler, P. et al. (2011) and Sartori (2002).
9 Blasco, J. & Casado, D. (2009).
10 Baker, J. (2000).
11 Cohen, E. & Martínez, R. (2004); Baker, J. L. (2000).