

Planning and Designing for a Targeting Scheme

The wide variation of social contexts in which different activities are undertaken will often lead to wide differences in the final choice of targeting method.

Designing a targeting scheme

The broad goals and more specific operational objectives of a food and nutrition intervention drive the planning and development process. Selection of a targeting scheme is an integral part of the programme planning phase, thereby linking targeting directly to the programme's operational objectives and goals. The latter will normally spell out who are to be the target group(s), such as children under five years of age; pregnant and/or lactating mothers; internally displaced populations in specific areas; food-insecure households in urban, low-income neighbourhoods; or the landless rural poor. The target groups have to be defined in operational ways that allow them to be identified and located.

In practice, important political, cultural, logistic, technical and/or financial constraints often impose limitations on which targeting scheme can be selected and implemented. A targeting scheme cannot be designed on theoretical grounds alone, and the scheme that best supports the specific objectives of a given programme may, in practice, be very difficult or costly to implement.

The success of a targeted programme, as indeed of any programme, depends on detailed planning, efficient management and continuous monitoring and evaluation, with the results of the latter feeding back into improved planning and implementation of the programme. A number of important targeted nutrition programmes have been criticized for poor planning and weak management, or ineffectual monitoring and evaluation. Another important element for success is to establish a solid stakeholder group of partner institutions, and to involve the targeted communities (individuals, households) directly and early on in the planning (needs assessment) and programme management process.

The phase-out stage of a targeted programme should be foreseen and planned during the programme development stage. The phasing out of a programme presumably enters into effect when its objectives and goals have been achieved. However, there may be other reasons for phasing out a programme regardless of whether its objectives and goals have been achieved, such as funding limitations. For example, donors often prefer to spell out from the beginning when their commitments will finish. It is important that the phasing out of a targeted nutrition programme be a gradual process, especially when the programme makes a substantial contribution to the welfare and food intake of the poor. The financial, institutional, political and social sustainability of the programme and its effects also need to be carefully considered in the development and implementation processes. In other words, programme activities have to be planned and designed to strengthen the sustainability of programme effects after the programme has been formally phased out.

When designing a targeting scheme to meet the programme's given objectives, three fundamental characteristics of the targeting process must be considered and defined at the very beginning:

  1. Who designs, implements and monitors the targeting scheme? Normally the programme planners will design the targeting scheme during the development of the programme proposal. Thus, in the case of a public programme, staff and decision-makers of the government institution(s) responsible for the programme will also design the targeting scheme. Donors may participate in this process as part of programme proposal review and discussion. If the programme is to be implemented in partnership with non-governmental organizations (NGOs), they will also participate in the programme development process, and thus in designing the targeting scheme. In a community-based food or nutrition programme, community leaders and local political decision-makers may participate in deciding who in the community is to receive the programme goods and/or services. Programme staff implement the targeting scheme along with other programme activities. Programme supervisory personnel should monitor the implementation of the scheme to ensure that targeting criteria are correctly applied and, when necessary, corrective measures are designed and implemented.
  2. Who is to be targeted? The intended target population(s) is (are) defined by the programme objectives. However, it needs to be decided how the target population will be identified and how eligibility, as well as exit, criteria will be established. The eligibility and exit criteria need to be well understood by both the target and the non-target populations, as well as the programme staff, and to be correctly and consistently applied by the latter. Indicators for targeting need to be identified and, if possible, validated before they are applied. Such indicators may include a certain age group, sex, nutritional and health status, socio-economic status, geographic location, membership of a group affected by disaster or by a specific micronutrient deficiency, or the entire population. The selected criteria should be well understood and correctly applied by programme staff or those responsible for this task.
  3. How will targeting be done? Alternative targeting schemes may need to be considered, and the most appropriate scheme selected by weighing the technical, social, financial and institutional factors associated with each of the schemes under consideration. This is likely to involve consideration of trade-offs among these factors. Will targeting be done on the basis of nutrition-related indicators, such as anthropometric measurements or laboratory indices, or non-nutritional indicators, such as geographic, market-based or self-targeting schemes? The selected criteria should be well understood and correctly applied by programme staff.

The collection and analysis of data are important in the detailed planning of a targeting scheme. These activities also provide a baseline for future evaluation. The kind of assessment needed will depend on the type of programme to be implemented and its primary objectives. Generally, most food and nutrition programme planning will need to assess the factors discussed in the sections that follow.

An important concept in the assessment of targeting schemes is that any strategy will exclude some needy individuals while including others who would do well even without the programme's support. Many targeting strategies that have a small exclusion error have a large inclusion error, and vice versa. Thus, a trade-off between these two errors often has to be considered. Normally, when the main concern is reducing food and nutrition insecurity, minimizing undercoverage rates is more important than lowering leakage rates. If a limited programme budget is the main concern, reducing leakage should be given greater weight.
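The trade-off between these two errors can be made concrete. The following sketch, using invented household data, computes the undercoverage rate (share of needy households missed) and the leakage rate (share of programme recipients who are not needy) for a hypothetical targeting outcome:

```python
# Illustrative sketch (hypothetical data): undercoverage and leakage
# rates for a targeting scheme, given each household's true need status
# and whether the scheme selects it.

def targeting_errors(needy, selected):
    """needy, selected: parallel lists of booleans, one entry per household."""
    n_needy = sum(needy)
    n_selected = sum(selected)
    # Undercoverage (exclusion error): share of needy households NOT reached.
    missed = sum(1 for n, s in zip(needy, selected) if n and not s)
    undercoverage = missed / n_needy if n_needy else 0.0
    # Leakage (inclusion error): share of recipients who are not needy.
    non_needy_in = sum(1 for n, s in zip(needy, selected) if not n and s)
    leakage = non_needy_in / n_selected if n_selected else 0.0
    return undercoverage, leakage

needy    = [True, True, True, True, False, False, False, False]
selected = [True, True, False, False, True, False, False, False]
u, l = targeting_errors(needy, selected)
print(u, l)  # 2 of 4 needy missed -> 0.5; 1 of 3 recipients non-needy -> ~0.33
```

Tightening the eligibility rule in this example would lower leakage but raise undercoverage, which is exactly the trade-off described above.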

Assessing resources

Perhaps the most important step in the selection process of a targeting strategy is to assess the costs and marginal benefits of different targeting schemes. Most types of programme lend themselves to only two or three possible schemes. Food subsidy programmes can be designed with self-targeting or market-based targeting, alone or in combination with geographic targeting. Food distribution programmes can be designed to target administratively, through community-based decisions, or through a combination in which communities are selected administratively and then establish their own criteria for selecting households.

Assessing the total costs of each targeting scheme is particularly difficult. Apart from direct costs such as staff time and transport, there are also important non-monetary costs, especially the loss of benefits associated with denying services to an individual who may need them, or the loss of community support due to the denial of services to some of its members who the community feels, justifiably or not, should be included. The marginal costs of targeting need to be weighed against the cost savings, giving full consideration to the programme objectives. The relevant question to ask is: At the same level of achievement of programme objectives, what are the net cost savings from targeting and from each of the different targeting schemes? This involves consideration of inclusion and exclusion errors, or undercoverage and leakage rates. Large inclusion and exclusion errors raise the cost of targeting, but may still result in lower programme costs compared with those for non-targeted programmes with the same objectives. As already discussed, different targeted food and nutrition programmes involve different risks with respect to the leakage of benefits directed to the target population.
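The net-cost-savings question can be illustrated with a back-of-the-envelope comparison. All figures below are invented for illustration; the point is only that a targeted programme can cost less than a universal one even after paying screening costs and absorbing some undercoverage and leakage:

```python
# Sketch (invented numbers): cost of a universal programme versus a
# targeted one delivering the same benefit to (most of) the needy.

POP = 10_000          # total population
NEEDY = 3_000         # people the programme aims to reach
RATION_COST = 50      # cost of the benefit per recipient

# Universal: everyone receives the benefit; no targeting costs at all.
universal = POP * RATION_COST

# Targeted: a screening cost per person screened, plus benefits to the
# selected caseload, which includes some leakage to non-needy recipients
# and misses some needy people (undercoverage).
SCREENING_COST = 2
undercoverage, leakage = 0.10, 0.15
reached_needy = NEEDY * (1 - undercoverage)
recipients = reached_needy / (1 - leakage)   # total caseload incl. leakage
targeted = POP * SCREENING_COST + recipients * RATION_COST

print(universal, round(targeted))
```

With these assumed values the targeted scheme costs well under half of the universal one, although it reaches only 90 percent of the needy; whether that trade is acceptable depends on the programme objectives, as the text stresses.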

Cost comparisons across targeting schemes

The direct targeting costs consist of administrative and information costs. The administrative costs, in turn, comprise the costs associated with designing, testing, implementing (i.e. screening and monitoring programme participants), supervising (i.e. applying eligibility rules correctly) and evaluating the cost-effectiveness of the scheme. Information costs are associated with generating and analysing the data and information that are necessary when establishing criteria to define, and indicators to identify and characterize, the target group(s). Such data and information can also serve other purposes, such as programme monitoring and evaluation, and their costs should thus not necessarily all be assigned to targeting. Direct targeting costs vary according to the targeting scheme of the programme as follows:

Administratively targeted programmes require the collection of accurate information to determine eligibility and are generally considered to involve high information-gathering costs, as well as high administrative costs.

Self-targeting mechanisms, which attempt to direct programme benefits to a specific target population (such as those willing to supply labour in food-for-work projects), generally incur lower information costs than administrative targeting does because no eligibility criteria have to be established and eligibility does not need to be screened and monitored. However, detailed information and data are needed in order to understand market behaviour among the low-income or vulnerable group(s). Administrative costs are relatively low, consisting mainly of monitoring programme coverage.

Market-based targeting requires substantial information about demand and supply patterns and their determinants, as well as about changes over time to market conditions. It therefore has moderately high information costs, while administrative costs are incurred by the monitoring of subsidized food commodities in order to ensure that price reductions are passed on to consumers in the target groups.

Community-based targeting involves all community members, or only community leaders, in deciding how to allocate programme food or services within the community, relying on their empirical knowledge of the food insecurity or vulnerability of households or individuals. Direct information costs to the programme are, therefore, negligible and are borne by the community in the form of time costs. Administrative costs are also negligible.

Geographic and regional targeting, which are forms of administrative targeting, incur information costs that are likely to be moderate because of their reliance on existing secondary data sources, complemented by periodic rapid assessments. Administrative costs consist mostly of monitoring the allocation of programme resources in accordance with established regional priorities.

Household/individual targeting, which is another form of administrative targeting, has high information and administration costs that are similar to those for administrative targeting.


Selecting indicators for targeting

When choosing specific targeting indicators, the challenge is to maximize the usefulness and quality of the information for decision-making, while taking full consideration of the costs of collecting, processing and analysing that information. In deciding which indicator(s) to use for targeting, it should be kept in mind that the information provided by the indicator(s) should be:

  1. relevant and valid;
  2. accurate and reliable;
  3. timely;
  4. accessible;
  5. low-cost.


The targeting indicator must be relevant to the programme objective(s). If the programme objective is to prevent malnutrition, the targeting indicator must be capable of identifying people or populations who are at risk of malnutrition.

The importance of the relevance and validity of an indicator can be illustrated in the case of Guatemala, where an NGO providing food assistance used weight-for-height to screen for potential beneficiaries. The very small number of children identified by this screening test prompted the NGO to conclude that malnutrition was not a problem in Guatemala and that food aid should be discontinued. However, while the level of wasting was very low, Guatemala, at that time, had the highest rates of stunting in all of Latin America.

The selection of the indicator should also depend on its intended use - for either individual-level screening or the targeting of populations at some aggregate level. For example, mid-upper arm circumference (MUAC) is useful for community nutritional status screening purposes, but should not be used as a substitute for weighing when individual children are being selected for a supplementary feeding programme.

Linking the definition of a targeting indicator to broad programme objectives is not always simple. For example, there are usually multiple indicators for any given food security concept:

Nutritional status can be reflected by a variety of biochemical, clinical, anthropometric and dietary indicators. A variety of anthropometric indicators can be used for targeting purposes, depending on whether the objective is related to identifying stunted (height-for-age), wasted (weight-for-height) or underweight (weight-for-age) children, for example.

A targeting indicator must also be valid in different social, cultural and ecological settings. The determinants of food insecurity in one setting may be different from those in another. For example, in agro-ecological regions of Eritrea where there are predominantly pastoralist farmers, herd size is a good indicator of the risk of food insecurity, while in other regions of predominantly crop-based farming, this is not the case. Applying a monetary income measure in both urban and rural settings may grossly overestimate the food insecurity risks in rural areas. This means that targeting indicators may have to be determined locally, thereby losing (some) comparability across regions and making it more difficult to allocate resources by region at the central level.


The information provided by a targeting indicator is used to make decisions, and the more accurate it is (all things being equal), the better the decisions based on that information. This means that accuracy is important, which means in turn that the indicator must be subject to a minimum of systematic measurement errors. As a targeting indicator is likely to be applied repeatedly, particularly when the determinants of food insecurity are changing rapidly, the indicator must be reliable (subject to a minimum of random measurement errors). Monitoring the effectiveness of a targeting scheme with unreliable indicators will provide erroneous results and lead to misleading conclusions.


It is important that the indicator be able to provide information in a timely manner. Complex targeting indicators that require time-consuming data collection, processing and analysis may delay programme implementation. Under rapidly changing conditions that involve rapid population movements, such as in a natural disaster or a complex emergency, a targeting indicator or combination of indicators must be capable of identifying rapidly where the at-risk populations are, and what their immediate needs are.


The information provided by the indicator must be accessible and be open to interpretation by many different decision-makers and actors with different social and cultural backgrounds, professional orientations and levels of schooling. Highly complex indicators may make sense only to professionals with an adequate technical background, and this is not conducive to broader participation in targeting decision-making. This argues for simple, common-sense indicators, as long as these are appropriate. It also argues for broad participation in assessing alternative targeting indicators.


The cost of data collection is a common concern in targeting. Cost is typically related to the time, personnel and logistics costs associated with data collection, processing and analysis. These costs may vary significantly according to the indicator and data collection method used. Often, the use of low-cost indicators may imply difficult trade-offs in terms of their relevance or credibility. For example, low-cost indicators of income derived simply from household heads' lump-sum estimates of total household income are not likely to be as accurate as those calculated by aggregating the individual incomes reported by each household member.

There are normally a variety of ways of measuring an indicator. Estimates of crop production levels can be based directly on farmers' recall of production or on more complex crop estimation through field measurements. The method used to collect information when constructing a targeting indicator will influence the costs. Data collected during household visits by programme staff are likely to be more expensive than information obtained through on-site data collection efforts at programme facilities. Such costs are part of the cost of programme targeting and, to make targeting as efficient as possible, should be kept to a minimum.

In general, when screening individuals or estimating the proportion of the needy in a population for regional targeting purposes, it is necessary to classify individuals according to their nutritional status on the basis of a cut-off value. Typically, population-specific cut-off points need to be defined for targeting purposes in each case where they are to be used.

The choice of cut-off point may have important implications for the interpretation of an indicator and the understanding of food security conditions. While food-insecure households are often defined as those consuming less than 80 percent of the recommended minimum calorie intake, a reduction in the percentage of households consuming less than 70 percent of that recommended minimum may indicate important advances in reducing extreme food insecurity that would not be fully captured by an assessment based on the 80 percent cut-off. In Guatemala, the cut-off point below which children showed a greater response to supplementation was actually much higher than the standard cut-off of -2 SD below the NCHS reference median. Had the traditional cut-off been used, only a small proportion of children would have been considered at risk and eligible for supplementation, and a large proportion of needy children would have been missed by the test.

In an operational context, the optimal indicator is often applied at a fixed cut-off defined on the basis of programme objectives and resource availability. Even where technically defined cut-offs exist for certain benchmark indicators, programme managers may wish to target a particular subsegment of the population which is the most food-insecure or malnourished, or the most likely to benefit from an intervention. This is likely to be the case in programmes where the budget is insufficient to address the entire population suffering from food insecurity or malnutrition. Where programme resources are limited, the best cut-off for targeting is one that will deliver exactly the number of participants for which programme resources are available. In other words, the indicator cut-off used in the evaluation of proxies would be selected at a value that corresponds to the percentage of the population that can be served given available programme resources.
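Matching the cut-off to programme capacity amounts to taking a quantile of the ranked indicator values. A minimal sketch, using invented weight-for-age z-scores and an assumed budget of three places:

```python
# Sketch (hypothetical values): pick the indicator cut-off so that the
# number of eligible participants matches the places the budget can fund.

def capacity_cutoff(indicator_values, places):
    """Lower indicator value = greater need. Return the cut-off at or
    below which exactly `places` candidates qualify."""
    ranked = sorted(indicator_values)   # neediest first
    return ranked[places - 1]           # value of the last admitted candidate

weight_for_age_z = [-3.1, -2.6, -2.2, -1.9, -1.4, -0.8, -0.3, 0.4]
cutoff = capacity_cutoff(weight_for_age_z, places=3)
print(cutoff)  # -2.2: the three neediest children fall at or below this value
```

In this example the operational cut-off (-2.2 SD) happens to lie below the conventional -2 SD threshold, which is precisely the situation the paragraph above describes: resources, not the technical benchmark, determine the working cut-off.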


Using proxy indicators


Proxy indicators are alternatives to indicators that more directly reflect the phenomenon or characteristic to be measured, and they can serve as targeting indicators. A proxy indicator is not a perfectly equivalent substitute for the more direct indicator; proxy indicators are applied when they are simpler and less costly to construct than direct indicators, while still providing useful information. When direct indicators tend to include large measurement errors, as with household income measurement or daily food intake estimated through recall methodologies, proxy indicators may be just as valid and capable of discriminating well between the food- or nutrition-insecure and the food- or nutrition-secure. In other cases, the application of a proxy indicator for targeting could increase inclusion and/or exclusion errors. This risk needs to be weighed against the additional targeting costs associated with applying a direct indicator instead of a proxy indicator.

Food frequency questionnaires can be used to obtain information on micronutrient intake indirectly, through a measure of diet diversity. Such information is much less costly to obtain than that obtained through quantitative dietary recall methods or through biochemical measures of micronutrient status.

One major disadvantage of the use of proxy indicators is that they are typically context-specific. The wide variation of social contexts in which food security and nutrition activities are undertaken will often lead to wide differences in the choice of the proxy indicator that is most closely associated with the direct indicator.

Proxy indicators of household income include:

gender or age of the head of household;

presence of working-age individuals within the household (dependency ratio);

ethnic background, social class or caste;

size of family dwelling or number of rooms;

type of construction materials used for the roof, floor and walls of the dwelling;

ownership of key assets such as land, livestock and luxury goods;

geographic location of the household.


In order to decide whether a proxy indicator is valid, it must be tested to establish its degree of association with more direct indicators in each setting, using either quantitative or qualitative methods.

Quantitative methods. Data from household sample surveys typically make it possible to test various proxy measures against an indicator that is more directly relevant to the programme objective - the so-called "benchmark" indicator. For example, in order to select households with chronic food insecurity, measures such as per capita food expenditure or daily energy intake (or percentage of daily energy requirements consumed) may be appropriate as benchmark indicators. Similarly, if the targeting objective is to identify malnourished children under five years of age, a range of proxies might be tested against a benchmark measure derived from anthropometric measurements.

The choice of proxy indicator should be determined both by the strength of its statistical association with a benchmark indicator and by weighing the cost savings associated with using that proxy indicator instead of a more direct measure. Among the proxy indicators that have a statistically significant association with the benchmark indicator, the optimal proxy indicator for targeting will be the one that minimizes the undercoverage and leakage rates, subject to given targeting costs, including those for information collection and analysis.
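This evaluation step can be sketched in a few lines. The survey records, the expenditure benchmark and the two candidate proxies below are all invented for illustration; a real analysis would draw on household survey data and test statistical significance first:

```python
# Sketch (invented survey records): score candidate proxy indicators
# against a benchmark (here, per capita food expenditure below a
# threshold) by their undercoverage and leakage rates.

households = [
    # (food_expenditure, roof_is_thatch, owns_livestock)
    (40, True,  False), (55, True,  True), (60, False, False),
    (80, True,  False), (95, False, True), (120, False, True),
]
BENCHMARK_POOR = 70   # hypothetical expenditure cut-off for "food-insecure"

def error_rates(proxy_flags, benchmark_flags):
    needy = sum(benchmark_flags)
    selected = sum(proxy_flags)
    under = sum(b and not p for p, b in zip(proxy_flags, benchmark_flags)) / needy
    leak = sum(p and not b for p, b in zip(proxy_flags, benchmark_flags)) / selected
    return under, leak

benchmark = [exp < BENCHMARK_POOR for exp, _, _ in households]
proxies = {
    "thatch roof":  [roof for _, roof, _ in households],
    "no livestock": [not owns for _, _, owns in households],
}
for name, flags in proxies.items():
    u, l = error_rates(flags, benchmark)
    print(name, round(u, 2), round(l, 2))
```

With both error rates in hand for each proxy, the selection rule described above applies directly: among proxies with an acceptable statistical association, prefer the one with the lowest undercoverage and leakage at a given information cost.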

Qualitative methods. Qualitative methods can also be used to identify proxy indicators for targeting, particularly when data collection and analysis costs must be kept low or the technical capacity of the programme staff does not allow for complex statistical data analysis. Even if the proxy identified through qualitative methods shows only a weak statistical association with the benchmark indicator, some level of targeting is likely to result in better programme outcomes than random selection of beneficiaries and allocation of programme benefits.

Using gender of household head as a proxy for poverty

A study in Peru examined the validity of using the gender of household head, as commonly defined, as a reliable way of identifying poor households. Using reported data from a national survey, 17 percent of all households in Peru were classified as woman-headed. However, woman-headed households were not significantly over-represented among the poorest households, accounting for only 20 percent of the poorest segment (quintile) of the population. Targeting the poorest quintile of the population solely on the basis of woman-headship would, therefore, result in a significant level of undercoverage, equivalent to 80 percent of the households in the poorest segment of the population. It would also result in significant leakage, since 76 percent of woman-headed households were not in the poorest segment of the population.

The study notes that headship as reported by respondents fails to account for important elements of the typical headship concept, which include identifying the individual with the most regular presence in the household, the individual with overriding authority in household decision-making, or the individual primarily responsible for the economic support of the household. To be relevant for policy-making, the definition of headship must be relevant to the policy issue at hand. In the case of poverty targeting, the concept of headship should be defined to identify the primary source of economic support of the household.

Instead of using reported headship, researchers constructed an indicator of "working headship" based on the total proportion of hours worked in the labour market and on the production of home goods (not including housework). Woman-headship defined in this way is more likely to identify poor households, given a range of evidence from across developing countries:

  1. women's work outside the home tends to increase with the level of poverty;
  2. in the poorest households, women tend to work longer hours than men;
  3. in poorer countries, women spend more time in income-generating activities than in countries where poverty is less of a problem.

Under the definition of working headship, 29 percent of households in Peru were determined to be woman-headed. However, among the poorest quintile of the population, 34 percent were woman-headed. In such cases, the prospect of targeting on the basis of working headship, while better than relying on reported headship, is still likely to result in high levels of undercoverage and leakage. The study concludes that, for targeting purposes, more direct indicators of poverty status may be more useful than reliance solely on gender of household head.
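The undercoverage and leakage figures in the Peru study follow directly from the reported shares. The short arithmetic check below uses only the percentages quoted above (17 percent of households woman-headed; 20 percent of the poorest quintile woman-headed):

```python
# Arithmetic check of the Peru figures above, expressed as shares of all
# households. Only the percentages quoted in the study are assumed.

woman_headed = 0.17                 # share of all households reported woman-headed
quintile = 0.20                     # poorest quintile = 20% of all households
wh_in_quintile = 0.20 * quintile    # 20% of the poorest quintile are woman-headed

# Undercoverage: share of poorest-quintile households NOT woman-headed.
undercoverage = 1 - wh_in_quintile / quintile
# Leakage: share of woman-headed households outside the poorest quintile.
leakage = (woman_headed - wh_in_quintile) / woman_headed

print(round(undercoverage, 2), round(leakage, 2))  # 0.8 0.76
```

The calculation reproduces both reported figures: 80 percent undercoverage and roughly 76 percent leakage.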

Selecting eligibility criteria

Eligibility criteria are the most important element in establishing and implementing a targeting scheme, and should be directly related to the programme's objectives. For example, the relevant criteria for iron deficiency anaemia prevention and control programmes will be different from those for the prevention of protein-energy malnutrition (PEM) among children. The selected criteria should be well understood by both programme staff and programme beneficiaries, and should be applied correctly by programme staff.

A well-defined programme objective will enable the identification of appropriate eligibility criteria through addressing the specified and intermediate outcomes of a programme rather than the broad and final outcomes. For example, while the broad goal of an activity may be the reduction of child malnutrition, its specific objective could be to improve mothers' nutritional practices, such as starting to use weaning foods at the age of six months, through nutrition education. Similarly, while the overall objective of a microcredit project might be to reduce poverty, being poor might not be the best, or at least the only, eligibility criterion. Appropriate additional criteria may also include the household or individual characteristics of a specific segment of the poor population that completely lacks access to credit, but for whom some access to credit might result in an increase in income.

Objective and operationally defined criteria, expressed as indicators, are needed in order to identify the target population clearly. For example, when improved food security is the objective of a food and nutrition programme, the term "food security" includes different aspects related to food availability, access and utilization, thus the term itself is not specific enough to be operationally useful for establishing relevant targeting criteria, or to identify food-insecure population groups. In this case, identifying increased food production as one of the programme's specific objectives, for example, and then selecting the segment of the population directly involved in food production is more appropriate. Similarly, specific objectives related to improving food access, such as physical access to markets or greater purchasing power, may aid the development of relevant and objective targeting criteria.

If the programme objective is to prevent malnutrition, the relevant targeting population should include all those who are at risk of future malnutrition while, if the objective is to improve the nutritional status of malnourished children, current nutritional status is the appropriate selection criterion.

Identifying and screening programme beneficiaries

When targeting indicators are being used, the target population has to be identified. Once this has been done, members of the identified target population have to be screened or certified for participation, and their eligibility for participation must be monitored over time in programmes that are administratively targeted. The use of targeting indicators can help to rank the population according to relative levels of need; it can separate the needy from the non-needy by applying a determined cut-off point as a criterion; and it can measure individuals' severity of need in such terms as the degree to which each needy individual falls below a certain cut-off point. For example, several poverty measures provide information not only on the percentage of the population that falls below a given poverty line, but also on how the poor population is distributed among the different degrees of poverty (distances from the poverty line). Another example is the classification of stunted children under five years of age as being "slightly" (between -1 and -2 SD), "moderately" (-2 to -3 SD) or "severely" (more than -3 SD) stunted.
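The severity bands just described map directly onto a classification rule. A minimal sketch, using the SD thresholds quoted above and a few invented height-for-age z-scores:

```python
# Sketch: classify children by height-for-age z-score into the severity
# bands given in the text (slightly / moderately / severely stunted).

def stunting_class(haz):
    """haz: height-for-age z-score relative to the reference median."""
    if haz < -3:
        return "severe"
    if haz < -2:
        return "moderate"
    if haz < -1:
        return "slight"
    return "not stunted"

for z in (-3.4, -2.5, -1.2, 0.3):
    print(z, stunting_class(z))
```

Grading severity in this way, rather than applying a single pass/fail cut-off, lets a programme both select participants and prioritize the most severely affected among them.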

At the household or individual level, three basic issues need to be addressed when implementing targeting:

  1. how to determine eligibility for programme participation;
  2. how frequently to obtain information in order to monitor the eligibility of programme participants;
  3. how to allocate programme resources to eligible individuals or households on the basis of that screening information.


Potential participants can be screened either by passive identification, in which potential beneficiaries must present themselves at programme facilities to have their eligibility evaluated, or active identification, in which programme staff seek out potential beneficiaries through home visits during which they obtain the necessary information for establishing eligibility.

Passive identification is a relatively low-cost method of obtaining eligibility information, since it demands less staff time and little logistical support. However, some self-selection bias may be involved, because passive identification requires that candidates be willing to bear the time costs of travelling to programme facilities and of waiting for evaluation and services. More than one visit may be required if candidates do not bring the documentation needed to establish eligibility the first time, which further increases the time costs to them. Passive identification may also limit the ability of programme staff to verify household socio-economic information that could otherwise be observed through home visits. Passive identification is quite common in clinic-based growth monitoring.

Active identification eliminates the problem of self-selection in the gathering of targeting information, although it often involves significant additional cost in terms of staff time, transportation and data processing. Active identification requires near-census coverage of the population defined as being at risk across the entire intended programme area; it is not possible to use sampling methods, since doing so would exclude some of the eligible households or individuals from the screening process. This is where community-based targeting in combination with administrative targeting may provide a cost advantage, because community informants can help identify the households or individuals most likely to be eligible for participation. Such an approach may eliminate the need for a community census. Active identification methods also provide an opportunity for programme staff to verify information on socio-economic status subjectively, based on their direct observations of living conditions. Such methods are more appropriate when the programme area is relatively small, when the at-risk population is more concentrated geographically and/or when programme benefits are sufficiently large to warrant higher data collection costs.


The choice of screening frequency can have important implications for targeting efficiency and programme impact. Much depends on how volatile or temporary the food and nutrition insecurity addressed by the programme is, and on how able programme participants are to achieve or restore food and nutrition security, with or without programme assistance. For example, in a natural disaster many households may be affected, but some will be able to restore their previous food and nutrition security rapidly, others more slowly and others not at all. When chronic or structural factors produce food and nutrition insecurity, changes will be slow. In the aftermath of a natural disaster, infrequent screening is therefore likely to lead to large inclusion and, possibly, exclusion errors over time, while in the case of chronic or structural factors less frequent screening is probably sufficient. Frequent screening and eligibility monitoring raise information costs, and these have to be balanced against the programme costs resulting from inclusion errors and the social costs caused by exclusion errors. Eligibility screening should also include the establishment of clear exit rules, perhaps with the sustainability of programme effects in mind.
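The trade-off between information costs and error costs can be illustrated with a deliberately simple model. All figures are hypothetical, as is the assumption that error costs grow linearly with the average age of the eligibility data between screenings.

```python
def annual_cost(n_screenings, cost_per_screening, error_cost_per_month_stale):
    """Total annual cost of a screening schedule: information costs
    rise with screening frequency, while inclusion/exclusion error
    costs rise with the average staleness of eligibility data."""
    interval = 12 / n_screenings       # months between screenings
    avg_staleness = interval / 2       # average age of eligibility data
    return (n_screenings * cost_per_screening
            + 12 * avg_staleness * error_cost_per_month_stale)

# With these (hypothetical) cost figures, an intermediate frequency
# minimizes total cost; cheaper errors would favour rarer screening.
best = min(range(1, 13), key=lambda n: annual_cost(n, 500, 100))
print(best)  # 4
```

In a volatile, post-disaster setting the error cost per month of stale information is high, pushing the optimum toward frequent screening; under chronic, slowly changing conditions it is low, and infrequent screening suffices, which matches the reasoning above.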


The identification of eligible households or individuals through screening is closely linked to the allocation of programme food and/or services to participants. Screening information will, however, seldom lend itself to the direct determination of the proper level of benefits to be delivered to each individual or household. Some type of technical or administrative allocation rule is required in order to link screening information to the actual distribution of goods and services. Such allocation may be fixed on a per capita basis, such as an appropriate size or composition of food rations, or it may be graduated across broad categories that define various levels of need.
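A graduated allocation rule of the kind just described might be sketched as follows; the need categories and ration sizes are hypothetical placeholders, not figures from any actual programme.

```python
# Hypothetical graduated allocation rule: screening places each
# household in a broad need category, and the rule maps that
# category to a monthly per capita ration, scaled by household size.
RATION_KG_PER_PERSON = {"severe": 12.0, "moderate": 8.0, "mild": 4.0}

def monthly_ration_kg(need_category, household_size):
    """Link screening information to the actual distribution of goods;
    households screened as ineligible receive nothing."""
    return RATION_KG_PER_PERSON.get(need_category, 0.0) * household_size

print(monthly_ration_kg("moderate", 5))  # 40.0
```

A flat per capita rule is the special case in which every eligible category maps to the same ration size.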

The method and frequency of screening may also influence the allocation of programme benefits. When information is collected on an on-going basis, such as through clinic-based growth monitoring, benefits are often provided to eligible participants on a first-come-first-served basis. When eligibility information is collected periodically, additional rules for establishing priorities in the distribution of programme benefits can be implemented.

In the Jamaica food stamp programme, social workers conduct home visits in two-month cycles, collecting information from all eligible households during the first month of the cycle and distributing food stamps during the second. The ordered collection of information prior to the distribution of benefits provides some scope for prioritization when resources prove insufficient to meet the needs of all those who satisfy the eligibility criteria. In India's ICDS Programme, which provides supplementary rations on the basis of on-going, clinic-based growth monitoring information, rations are distributed on a first-come-first-served basis. In this case, prioritization occurs only informally, when supplies at centres are limited and staff reserve rations for only the most severe cases of malnutrition.

TABLE 1 Options for obtaining information through household- or individual-level screening

Collecting information on-site, with no verification
- Advantages: simple and low-cost
- Disadvantages: undercoverage may be high; prone to inaccuracies and false responses by applicants
- Costs: minimal staff time
- Most appropriate when: benefits are one-time and small and need immediate decisions, such as hospital fee waivers; the interviewer is based within the community and knows applicants well enough to detect false responses

Collecting information on-site, with direct measurement by programme staff
- Advantages: more accurate and still low-cost; objective verification and opportunity for immediate intervention
- Disadvantages: undercoverage may be high; not always appropriate
- Costs: minimal staff time; cost of equipment and staff training
- Most appropriate when: small to moderate benefits require multiple screenings; limited to activities where the biophysical attributes of candidates are used as targeting criteria

Collecting information on-site, with required verifying documents
- Advantages: more accurate and still low-cost; verification burden falls on the applicant, with minimal additional staff time
- Disadvantages: undercoverage may be high; not always appropriate
- Costs: minimal staff time; information system to track verification documents
- Most appropriate when: benefits are large; the applicant pool is literate and part of the formal sector, and thus likely to have access to verification documents

Household visits for screening
- Advantages: allows subjective verification of living standards and other information
- Costs: significant staff time; transport and logistics costs
- Most appropriate when: benefits are on-going or large; programme staff do not know applicants; written verification is not possible

Outreach to identify those not participating
- Advantages: improves accuracy of targeting; lowers undercoverage rate; improves aggregate social benefit of activity
- Costs: significant staff time; additional budgetary resources to provide benefits to eligible candidates
- Most appropriate when: the budget is adequate; likely candidates are clustered geographically or readily identifiable through their use of specific social services
