PC 82/4

Programme Committee

Eighty-second Session

Rome, 13-17 September 1999

Evaluation in the Context of the Strategic Framework and the New Programme Model



1. When the Conference decided in 1997 to proceed with the introduction of a revised process for planning, programming and budgeting as outlined in document JM 97/1, and called for a Strategic Framework to be prepared for FAO, it was foreseen that the evaluation regime in the Organization would also need to change significantly. The Programme Committee thus requested that the Secretariat make proposals on the nature and scope of a revised evaluation regime in the context of the Strategic Framework and the new programme model. This paper responds to that request. The proposals outlined below are put forward for discussion by the Programme Committee on the shape of the future evaluation approach and on the reporting required by the Governing Bodies.

2. FAO was one of the earliest among the UN agencies to institutionalize evaluation, with an evaluation unit established in 1968, initially for evaluation of field projects. In 1979, the mandate of the Evaluation Service was extended to cover both the Field and Regular Programmes. Evaluation results have systematically been reported to the Governing Bodies, initially through separate reviews of the Field and Regular Programmes and since 1993 through the Programme Evaluation Report (PER).

3. Today evaluation is one part of the overall oversight regime in FAO and is located in the Office of Programme, Budget and Evaluation. The other components of the oversight regime are external audit, internal audit, inspection and investigation. While the External Auditor, as appointed by the Council, is responsible for External Audit, the remaining oversight functions are the responsibility of the Inspector-General with whom the Evaluation Service works in close cooperation.

Evaluation - Purpose and Definition

4. Evaluation is both a management and an accountability tool. It provides managers at all levels with an improved knowledge base and critical assessment for their programmes. Its main purposes are (i) to catalyse improvements in overall planning and in the selection and design of programmes with respect to their usefulness, efficiency, effectiveness and impact; (ii) to support management decision-making for in-course corrections and improved execution; (iii) to provide input to decisions concerning the continuation of programmes at the end of their implementation period; (iv) to promote organizational learning on critical issues; and (v) to contribute to an overall increase in management accountability and in transparency of reporting to the Governing Bodies and other stakeholders.

5. Quality evaluation, which responds to the concerns expressed by management and the Governing Bodies, must be focused on results in terms of benefits to target users of FAO's services, rigorous in analysis, independent in assessment, consultative in process and transparent in reporting. Evaluation thus entails a critical analysis and assessment of the performance and achievements of the Organization, its programmes and projects. The criteria for assessing programmes include:

    1. conformity to organizational strategic objectives;
    2. relevance vis-à-vis the needs of countries, the international community and other target users of FAO's services;
    3. quality and coherence of programme approach and design;
    4. overall performance, particularly vis-à-vis qualitative and quantitative targets for outputs and objectives;
    5. efficiency and cost-effectiveness including the impact of administrative processes;
    6. the extent to which the benefits and improvements realized are likely to be sustained in future;
    7. factors which have contributed positively or negatively to the programme's achievement, and why; and
    8. the uptake and effectiveness of action on key thematic issues such as gender.

On the basis of this analysis, important issues and lessons are drawn for the future.

6. Techniques employed in evaluation include analysis against specific criteria as elaborated above; comparative benchmarking against similar programmes; and various methods of consultation with and feedback from stakeholders, including questionnaires, sample surveys of results, and focus group and key informant interviews. In all cases, successful evaluation is based on informed and expert judgement and the application of rigorous means-ends analysis linking cause and effect.


The New Programming Approach

7. The new programme model is intended to achieve a more strategic and coherent structure for FAO programmes. Within the Strategic Framework, the six-year Medium-term Plan (MTP) will provide a rolling plan to be updated biennially. This should contain a result-oriented, consolidated plan for all FAO programmes, with time-bound achievement targets and indicators for the medium term in pursuit of the objectives in the Strategic Framework. The programmes will be elaborated in terms of individual programme entities: technical projects, continuing programme activities and technical service agreements. The Programme of Work and Budget (PWB) will present the biennial workplan for programme implementation and related resources. As a result, programme execution is expected to focus clearly on the results to be achieved in line with the Strategic Framework and the MTP. This will entail rigorous planning, systematic monitoring and review for in-course correction, and biennial updating.

Implications of the New Programming Model for Evaluation

8. Within the general definition of evaluation given in paragraph 4 above, it will be essential in the next few years to establish an appropriate evaluation regime that supports management in making an effective transition to the new programme model.

9. In order to assess overall progress towards Strategic Framework objectives, the new evaluation regime will need to examine the results of programmes, or clusters of programme entities, contributing towards those objectives as well as the performance of individual programme entities. It will also need to assess implementation of the strategies to address cross-organizational issues. The main frame of reference will be the MTP, using a number of different approaches:

    1. assessment of the performance of individual programme entities particularly at the end of implementation periods;
    2. assessment of the performance of clusters of programme entities in terms of their contribution to strategic objectives;
    3. analysis of the effectiveness and validity of cross-organizational strategies, such as ensuring excellence, enhancing inter-disciplinarity and broadening partnerships and alliances; and
    4. studies aggregating information on overall progress towards strategic objectives.

10. The approach implies a need for evaluation to become more comprehensive and systematic in coverage, involving all FAO programmes, and requires the strengthening of the organization-wide system for monitoring, review and evaluation. However, not all programmes can be evaluated at the same time; rather, the evaluation cycle will tend to follow the rolling six-year medium-term planning cycle and, in particular, concentrate on completed programme entities or those nearing completion. Comprehensive coverage for monitoring, review and self-appraisal at the divisional and departmental levels would therefore have to play a critical role. The following will need to be put in place:

    1. a monitoring and review process (auto-evaluation) as a built-in element of programme management in the Organization, particularly at the divisional and departmental levels. This process would support programme managers in making in-course correction during implementation and in deciding the future of the programmes at the end of their implementation period;
    2. methodologies and mechanisms for the new evaluation regime, including those leading to cost-effective assessment of programme results, strengthened analytical techniques and increased objectivity;
    3. a system of reporting to ensure enhanced transparency, usefulness and accessibility to both internal and external users, including the development of internal and external websites; and
    4. an organization-wide learning process, to strengthen a culture of self-critical improvement and accountability and the basis for planning, in particular for formulation of the rolling Medium-term Plan.

Elements for Possible Inclusion in an Improved Evaluation Regime

11. If the evaluation regime is to be successfully strengthened, continuing evolution and improvement will be needed, particularly during the coming transitional years until the new programming approach is firmly established. As not all the areas for improvement can be achieved within existing resources, dialogue is sought with the Programme Committee on areas of priority. The main components of an optimal regime could comprise:

    1. Field project evaluations: The existing approach involving independent evaluations of projects operated by FAO would continue to form a mandatory part of the process for both extra-budgetary and RP field projects. Individual evaluations will, as now, be organized by operating units with Evaluation Service guidance, and be carried out by teams of external consultants representing countries, donors and FAO. The regime would be further consolidated to ensure evaluation's cost-effective contribution to in-course improvements, learning of lessons and accountability. In addition to the thematic evaluation of the Technical Cooperation Programme, provision will be made to evaluate other small projects1. For the Field Programme as a whole, the emphasis at the corporate level (primarily the responsibility of the Evaluation Service) would be on: (i) more in-depth synthesis of field experience in priority areas; (ii) dissemination of evaluation findings through training and the Intranet internally and the Internet externally; and (iii) a greater use of ex-post and impact evaluations, especially through joint exercises with interested donors and other partners;
    2. Auto-evaluation: This would be a systematic process of evaluation by managers of all operations, focused on the achievement of their programmes and undertaken in accordance with an appropriate schedule to ensure that the exercise is cost-effective. The basis for this is greatly strengthened by the inclusion of effectiveness indicators and success criteria in the design of the new programme entities. As a minimum, auto-evaluation would be based on the systematic monitoring of these indicators and periodic formal review. This process at the divisional and departmental levels would be supported by the Evaluation Service, which would provide programme managers with methodological guidelines and comments on internal assessments, and make periodic synthesis reports to management. In reintroducing auto-evaluation, care will be taken to ensure that it does not result in unproductive additional workload. The Evaluation Service would also verify the quality of the process through detailed sampling. Collaboration with the Office of the Inspector-General could be important to gain insight into financial and management aspects;
    3. Programme evaluations: These would assess in depth the relevance, effectiveness (including impact) and efficiency of individual or, more likely, a cluster of related programme entities (technical projects, etc.). Evaluation in terms of the strategic objectives (in line with the Strategic Framework) would be achieved through clustering of contributing programme entities in line with the MTP. Such evaluation would consider the normative and field programmes as a whole and give due attention to programme and management processes in assessing performance against objectives for individual entities and with respect to the higher level corporate strategies. Programme evaluations would be carried out primarily by the Evaluation Service, with judicious use of external expertise, and making use of the assessments and reviews produced in the auto-evaluation process;
    4. Thematic evaluations: While these are similar in many respects to programme evaluations, they would focus on a group of programme entities, field and other activities under selected thematic topics that cut across programmes and individual strategic objectives (e.g. participatory approaches, gender mainstreaming, etc.). These could also cover process-oriented themes, such as the implementation of the cross-organizational strategies in the Strategic Framework, the results of organizational reforms, as well as service-oriented work of the Organization (like publications). These would be carried out primarily by the Evaluation Service, as appropriate, in collaboration with the Office of the Inspector-General or with the support of external independent expert inputs; and
    5. Periodic syntheses of evaluations: These are not evaluations per se but build on the results of individual evaluations, involving analysis and assessment at a more aggregate level (e.g. a computerized database now includes findings from over 1 000 field project evaluations). Syntheses distil lessons and issues of corporate interest, in terms of strategic objectives, technical subjects and corporate management strategies. As in the past, these would be produced primarily by the Evaluation Service, with additional effort to disseminate the main findings widely within and outside FAO, including through training and the use of the Intranet and Internet.

Modalities of Evaluation

12. As is apparent from the components of the new evaluation regime described above, several modalities are envisaged:

    1. Programme staff - the bulk of monitoring and evaluation will be carried out by programme managers themselves. This has the benefit of ensuring optimal feedback of lessons learned to the programmes, but may suffer from a lack of independence. This latter weakness can be addressed, at least in part, through rigorous planning standards requiring clear statement of the effectiveness criteria and targets to be met, and through establishment of standard review and evaluation methodologies for application across the Organization;
    2. Evaluation Service staff - the management of the new regime will be the responsibility of the Evaluation Service. While this service will tend to concentrate on ensuring quality and coherence of the overall regime and on corporate level evaluations, it would also be involved in individual project and programme evaluations to the extent that resources allow;
    3. Office of the Inspector-General (AUD) - AUD has responsibility for internal audit, inspection and investigation with a view to ensuring the effectiveness of the Organization's system of internal control, financial management and use of assets. In developing their respective workplans, AUD and the Evaluation Service consult with a view to avoiding duplication and identifying areas where joint efforts might result in fruitful cooperation. In the context of this paper, it is noted that AUD might often take the lead in addressing the effectiveness of implementation of certain of the cross-organizational strategies (e.g. improving the management process, leveraging resources);
    4. External evaluations - internal evaluation could usefully be complemented by independent external evaluations. These would be conducted by teams of external consultants, supported by the Evaluation Service. The decision by management to conduct an external evaluation would consider such criteria as: (i) the particular importance of independence and objectivity for the evaluation; (ii) the availability of resources; (iii) the presence or otherwise of adequate expertise in the Organization; (iv) a request for such an evaluation by the Governing Bodies or perhaps the funding donor on a particular subject; and (v) partnership with an interested agency and/or donor willing to share the cost; and
    5. Other external inputs to evaluation - internal evaluation could also draw more on alternative forms of external input, through greater use of consultants and more structured peer review, including expert panels and systematic questionnaires to obtain peer reviewers' reactions to reports. Some of these techniques can be particularly cost-effective and will be employed by the Evaluation Service whenever feasible.

Constraints on Achievement of Desired Improvements

13. The regime outlined above, if accepted, would have to be developed and refined over the coming years, and be underpinned by significant changes in the process of programme planning and management, as well as by adequate resources. Methodological problems would need to be overcome and a balance maintained between the desire for more and better evaluation and the cost-effectiveness of evaluation's contribution to greater impact from the Organization. Issues include:

    1. Integration of the results of evaluation into the programming process at all levels of management while maintaining the independence of evaluation. This will require: (i) institutionalization of corporate review and appraisal at all levels, including internal peer review cutting across the management structure in line with the structure of corporate strategies and strategic objectives; and (ii) further strengthening of the corporate culture of openness and ongoing learning, including the incorporation of lessons into staff training;
    2. Impact assessment: While FAO can form a view of what its members consider useful and can establish whether outputs have the potential to contribute to a successful development process, assessing a programme's final impact entails:
      1. monitoring systematically that its key outputs (manuals, guidelines, new skills and knowledge shared at meetings, etc.) are received and applied by the users in the way envisaged by the programme;
      2. determining that these users are achieving the expected benefits and other desired improvements in line with the programme's objectives; and
      3. then ensuring that this contributes to a process, in which there are many other players involved, of improvement in human well-being, the environment, etc.
      This requires a set of criteria for ascertaining if a specific result has been achieved and whether, and the extent to which, such an achievement is due to the programme's actions and not to other factors. However, the number of links in the chain and other inputs into the process which permit the ultimate benefits to occur will normally render actual verification either extremely costly or impossible. What can be done is to review cost-effective indicators of user uptake and satisfaction with programme results, assess the use made of outputs, and verify likely lines of causality in achieving ultimate benefits through implementation. In short, while potential impact can be identified, attempts to verify actual final impact are prohibitively costly and seldom provide reliable answers;
    3. Aggregation for assessment of progress at the Strategic Objective level presents a methodological problem. While it is relatively easy to ascertain the proportion of the Organization's output contributing to a particular strategic objective, the aggregate effectiveness of all those outputs cannot easily be evaluated. Methodologies will therefore need to be developed for verification through a combination of the results of management monitoring and selective in-depth evaluation;
    4. Prerequisites for improved evaluation: Realism will be required on the rate at which improvements can be achieved in application of the new programme model for programme entities and in auto-evaluation and monitoring by managers. Evaluation can only assess progress towards targets and results for beneficiaries where these are clearly identified with workable indicators. Major demands are being made on staff for introduction of the new programme model and the response has generally been positive, but programming skills and time availability limit the rate of change. Similarly, evaluation will rely heavily on monitoring by managers of the use made of outputs and of objective indicators, which cannot be achieved without substantial resource commitment. It also needs to be recognized that results from programme entities formulated in line with the Strategic Framework and the new programme model will only start to emerge after a further two biennia;
    5. Resources for evaluation: Evaluation is information-intensive, and is thus generally costly, especially for impact assessment. As with all management tools, evaluation must be cost-effective. Also, a substantial proportion of Evaluation Service time needs to be devoted to servicing internal management improvement, including that for the Field Programme, as distinct from accountability reporting to the Governing Bodies. The pool of resources for evaluation may be enlarged through the judicious use of external partnerships and joint evaluations with UN and other agencies, especially for impact and ex-post evaluations. However, most such action in the past has been on topics additional to FAO priorities. It is envisaged that the mandatory requirement for auto-evaluation can be met, at least in part, through the allocation of the necessary resources under each programme entity, as is the de facto situation for field programmes.


Reporting on Evaluation to the Governing Bodies

14. Reporting on evaluation, as part of accountability reporting from the Secretariat, provides an in-depth, analytical basis for Governing Body decisions on programmes. Thus, reporting should cater to the needs of the Governing Bodies in the biennial planning and budget cycle, in which the Programme Committee and the Council, with advice also from the technical committees, perform the key preparatory work for final decision by the Conference. To serve the needs of the Governing Bodies, suggestions are made below as to: (i) the level of information required by the Governing Bodies; (ii) the content and presentation of reports; and (iii) the timing of reporting.

15. Level of information: More detailed information is normally required at committee level than in the plenary. The Programme Committee and the technical committees of the Council have the first line of responsibility for programme review. It is the Programme Committee which has, in practice, become the primary target audience for the PER, devoting more time to its review than any other body. It would thus make good sense to target evaluation reporting clearly at the Programme Committee. At the same time, both the Council and Conference still need to receive a report on programme evaluation for accountability purposes and for oversight of the evaluation function. This could, however, be presented in a more synthetic form.

16. Coverage of Reporting to the Programme Committee: The Committee may wish to propose changes in the way it approaches programme review in the light of the Strategic Framework and the new programme model. Programmes selected for in-depth evaluation could, to the extent possible, concentrate on programmes falling within the Committee's review cycle.

17. In-depth evaluation reporting of FAO programmes continues to be selective, given both the limited resources available for evaluation and the time available to the Programme Committee. There is a trade-off between depth of analysis, especially as regards impact assessment, and breadth of coverage (see the discussion of impact above). The Committee would continue to be consulted on the selection of subjects to be covered by evaluations for the forthcoming biennium. In line with the discussion of the evaluation regime above, products to the Programme Committee could include:

    1. assessment of overall progress towards a selected strategic objective;
    2. assessment of selected programmes (i.e., cluster of programme entities) and/or thematic topics;
    3. reviews of programme management aspects, including those identified in cross-organizational strategies; and
    4. reporting on results of evaluation of the Field Programme.

18. The implementation period to be covered would vary according to the types of evaluation. In general, however, in the case of programme or thematic evaluation, a sufficiently long period (some three biennia) would be needed to permit assessment of effects and impact.

19. Given the current resources of the Evaluation Service, the maximum number of biennial evaluation reports to the Programme Committee cannot exceed a total of four to five in various combinations of the above. It is likely that where a review would require a very substantial investigative effort such as in "a) assessment of overall progress towards a selected strategic objective", the number of other topics covered would need to be reduced to two or three.

20. Report content and presentation: While each evaluation report could be provided to the Programme Committee in summary form (5-8 pages), the supporting working papers could be made available to interested members of the Committee for consultation. Programme Committee evaluation reports could also be posted on the Internet for the information of all members. Although presentation would vary between field programme, thematic and programme reports, the current framework of analysis would be retained, i.e. assessment against the criteria outlined in paragraph 5 of this document.

21. As well as the reports themselves, the Committee would receive, as at present, a management response to the findings and recommendations and, where the report was prepared primarily by the Evaluation Service without significant external independent input, the comments of external peer reviewers.

22. Evaluation reporting to the Governing Bodies: This could thus comprise the following:

    1. Programme Committee - a set of programme and thematic evaluations on selected programmes of special interest to the Governing Bodies, especially in relation to the Programme Committee's biennial programme review process;
    2. Technical committees of the Council - those reports considered by the Programme Committee, which fall within the mandates of the individual technical committees and their agendas; and
    3. Council and Conference - a new form of Programme Evaluation Report (PER) with a biennial synthesis of evaluation reports based on those examined by the Programme Committee containing:

· a four- to five-page summary of each evaluation;
· a one-page summary of the external peer reviewers' comments;
· extracts from management's response;
· sections of the Programme Committee report relating to evaluations;
· comments of technical committees of the Council on individual evaluations, as appropriate.

23. Timing of Reporting: From the above discussion it would follow that:

    1. if the Programme Committee is to optimize feedback from evaluation into programme review, the main reporting should be made in parallel with the Committee's programme reviews. This would imply tabling individual evaluation reports at the two sessions in the first year of the biennium;
    2. reporting to the Council and Conference in the PER would then take place in the second year of the biennium.

24. Transition to the new arrangements: As evaluation reporting will not be able to reflect the results of the new programming arrangements and the approved Strategic Framework until at least two biennia of work have been completed, i.e. in 2004, the Programme Committee may wish to consider a reduced intensity of evaluation reporting in the interim period. This is particularly the case as the Evaluation Service is being called upon to support introduction of the improved programme model and to ensure that it provides a future base for both organizational learning and accountability. In the meantime, evaluation reports would include a mix of evaluations by technical discipline and theme. For the forthcoming biennium 2000-2001, it is suggested that the Programme Committee review evaluation reports at its spring and autumn sessions, which would then form the basis for the 2001 PER. Possible subjects are presented to the Programme Committee in a separate paper.

25. Programme Implementation Report (PIR) - In addition to the above proposed evaluation reporting, a biennial PIR on the previous biennium could continue to be produced. Given the inherently selective nature of evaluation reporting which is focused on effects and impact, the PIR would need to maintain its comprehensive coverage so as to form an integral part of accountability reporting. However, in the light of the new approach to programming and in view of some reservations expressed in the Governing Bodies on the content of the PIR, further efforts would be made to evolve a more satisfactory format for this report in consultation with the Programme Committee. As the new format of this report cannot be implemented until the completion of the first full cycle (i.e. 2006 at the earliest), the Secretariat intends to put further proposals on this subject to the Committee at a later date.


26. This paper is submitted to the Committee as part of an ongoing process of dialogue. To facilitate this dialogue, certain of the issues identified in the paper have been highlighted below so as to obtain the Committee's views on:

    1. the acceptability of the assessment criteria outlined in paragraph 5;
    2. the appropriateness of the proposed new evaluation regime, as described in paragraph 11, including proposals for field project evaluation, auto-evaluation, programme and thematic evaluation and synthesis of evaluations;
    3. the recognition of the constraints faced (paragraph 13), including for integration of the results of evaluation into the programming process, impact assessment, prerequisites for evaluation and the possible pace of improvement, and resources for evaluation and the choices in emphasis for work that this necessitates; and
    4. the proposed reporting arrangements as detailed in paragraphs 15 to 25, including the level of information required and content, timing of documentation for the Programme Committee and Conference and transitional arrangements.

1 At its Eightieth Session in September 1998, the Programme Committee took note of "the move towards smaller-size and shorter-duration projects than in the past and the consequent need to develop appropriate cost-effective evaluation methodologies" (paragraph 34 of CL 115/8). Annex 1 addresses this issue.




Annex 1

1. At its Eightieth Session in September 1998, the Programme Committee took note of "the move towards smaller-size and shorter-duration projects than in the past and the consequent need to develop appropriate cost-effective evaluation methodologies" (paragraph 34 of CL 115/8).

2. In principle, all Trust Fund and UNDP projects with a project budget in excess of US$ 1 million are subject to formal evaluation. However, as is generally accepted, individual evaluations are not a cost-effective solution for small projects under US$ 1 million. Costs could easily exceed 5 percent of the project value and, for very small projects of, say, US$ 100 000, a standard evaluation as applied to larger projects, even if somewhat scaled down, could cost 15 percent of the project budget.
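The cost argument above can be illustrated with a simple calculation. The fixed cost of roughly US$ 15 000 for a scaled-down standard evaluation is an assumed figure for illustration only, chosen to be consistent with the 15 percent share cited for a US$ 100 000 project:

```python
# Illustrative sketch: evaluation cost as a share of project budget.
# The flat cost of ~US$ 15 000 for a scaled-down standard evaluation is an
# assumption for illustration, not a figure stated in the paper itself.

EVALUATION_COST = 15_000  # assumed cost of a scaled-down standard evaluation (US$)

def cost_share(project_budget: float, evaluation_cost: float = EVALUATION_COST) -> float:
    """Return the evaluation cost as a percentage of the project budget."""
    return 100 * evaluation_cost / project_budget

for budget in (1_000_000, 300_000, 100_000):
    print(f"US$ {budget:>9,}: evaluation = {cost_share(budget):.1f}% of budget")
```

Because the evaluation cost is largely fixed, its share rises from 1.5 percent on a US$ 1 million project to 15 percent on a US$ 100 000 project, which is why thematic evaluation of groups of small projects is proposed instead of individual evaluations.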

3. It is difficult to classify small projects (budget less than US$ 1 million). By source of funding for 1996-97, the breakdown by number of projects was: 45 percent FAO-TCP; 40 percent trust funds (of which 8 percent UTF); 14 percent UN funds (principally UNDP); and 1 percent other.

FAO - Technical Cooperation Programme

4. Since the PWB 1996-97, selected TCP projects have been evaluated on a thematic basis covering both ongoing and completed projects. The normal sequence is:

5. Both the preliminary desk studies and the field missions utilize standard checklists to ensure all major points are covered (see paragraph 5 in the main document). Field missions give particular attention to assessing the use made of the project outputs and thus their effect and impact. The inclusion of a considerable number of completed projects facilitates this analysis.

6. The three TCP thematic evaluations initiated to date review some 80 projects, of which 65 are subject to field visits. This represents about 5 percent of the total TCP portfolio in the period 1992-98, a proportion which will steadily increase as more thematic evaluations are undertaken. Funding for the evaluation work has been facilitated by the inclusion, in the standard TCP project document and budget, of a provision of US$ 500 as a contribution towards the cost of evaluation.

UN Funds - Principally UNDP

7. In UNDP small projects, the FAO work generally constitutes an input to a larger project (this is also often the case for unilateral trust funds). Technical inputs to larger projects are normally reviewed in the context of the evaluation of the full project. FAO seeks to ensure that terms of reference cover assessment of the appropriateness, quality and results from the technical inputs and to be represented as much as possible on evaluation missions, so as to maximize feed-back. Where FAO is represented on the missions, the standard evaluation questionnaire is completed. This mechanism has proved adequate to date but the extent of project coverage by evaluation missions will be kept under review, as funding agencies may not always appreciate the need for FAO involvement in evaluation or the value of specifically reviewing the technical inputs.

Trust Funds

8. Among small trust fund projects, emergency projects accounted for 13 percent and normative-related projects for 27 percent by number. The remaining projects were a mix of activities, but policy inputs, pilot and feasibility studies were significant. In pilot and feasibility studies, in-depth review forms an essential part of the process of formulation and appraisal for possible follow-up. Also, the normative projects tend to be evaluated as part of the evaluation regime for the Regular Programme, which is the subject of the main text of this paper.

9. It is further suggested that all small trust fund projects, including emergencies, should be subject to thematic evaluation in the same way as is being accomplished for FAO-TCP, and that a small direct charge should be included in the budget as standard practice.