C 2003/4


May 2003

Thirty-second Session of the Conference
29 November - 10 December 2003

Programme Evaluation Report



Food and Agriculture Organization of the United Nations
Rome, 2003



Table of Contents

Director-General’s Foreword


Chapter One: Evaluation of the Animal Health Component of Programme 2.1.3





Chapter Two: Programme Evaluation of EMPRES-Desert Locust

Chapter Three: Independent External Evaluation of the Special Programme for Food Security

Chapter Four: Thematic Evaluation of Strategy A3

Chapter Five: Summary of the Report of the Evaluation of Codex and other FAO/WHO Food Standards Work

I. Introduction


The role of food standards
The role of Codex
Expert scientific advice and independent risk assessment
Capacity building
The basic problems to be addressed

III. Recommendations of the Evaluation

Expert advice and scientific risk assessment
Capacity building
In conclusion




Chapter Six: Evaluation of Programme 2.2.2 (Food and Agricultural Information) – Activities related to agricultural statistics in the context of FAOSTAT




International commitment to statistical quality
Strengthening relations with member countries
Internal commitment to statistical quality

Report of the External Review Panel


Management Response

Report of the Programme Committee



Director-General’s Foreword

I have pleasure in submitting this sixth edition of the FAO Programme Evaluation Report to the Council and Conference. It comes at a time when the Organization’s evaluation systems are being further reinforced in accordance with our commitment to strengthen FAO as a learning organization and centre of excellence, as articulated in the cross-organizational strategy for Ensuring Excellence.

In November 2001, I issued a Director-General’s Bulletin (DGB) on Strengthening the FAO Evaluation System. Following this DGB, and in line with Governing Body recommendations, we are working to improve annual monitoring and assessment by programme managers and to introduce systematic auto-evaluation of all programme entities. Extra-budgetary funds, both in support of FAO’s work in the field and for normative programmes, are subject to equally stringent monitoring and evaluation procedures. This further enhances results-based programme planning and budgeting, which is in line with the strategy for Continuing to Improve the Management Process as set out in the Strategic Framework.

As is now the established practice, I have sought and accepted the advice of the Programme Committee in selecting the areas of work to be evaluated during the biennium. Similarly, the practice of submitting the evaluation reports to the Programme Committee, together with the management response to the findings and recommendations, has been continued and has permitted in-depth discussion in the Committee of what is valid and feasible.

All the evaluations have been rigorous and independent, focusing on programme results, their benefit to members and cost-efficiency. They have all considered technical cooperation and normative work in an integrated manner.

Two major initiatives agreed with the Conference upon my first taking office, i.e. the Special Programme for Food Security and EMPRES1 (covering both desert locusts and animal health), have undergone this evaluation process. These evaluations were timely, in that the programmes are now well established and have become important to the Low-Income Food-Deficit Countries. The programmes have evolved since their inception in line with experience and with the observations of the evaluations, which have been extremely useful in further optimizing the programmes to benefit members. It was very encouraging that the Programme Committee welcomed the flexible and people-centred approach which we are applying in the SPFS.

The Programme Evaluation Report 2003 also reflects the move, supported by the Programme Committee, to increasingly evaluate our work against the Strategic Objectives of the Organization as agreed in the Strategic Framework 2000-15. This report includes a summary of the evaluation of work on Strategic Objective A3 “Preparedness for, and effective and sustainable response to, food and agricultural emergencies”. It is an unfortunate characteristic of the state of the world and of food insecurity that assisting the vulnerable to resume agriculture after emergencies is becoming an increasing part of FAO’s work. The evaluation has been useful in identifying lessons for us to become more effective in this role.

Chapter 5 of this report is a summary of the evaluation of the Codex Alimentarius and other FAO and WHO food standards work, which was carried out jointly with our partners in WHO. This has always been a key programme for the Organization and its importance for all countries was underlined both by the evaluation itself and the attention given to it by all sections of the membership, with the report considered at a Special Session of the Codex Alimentarius Commission. As you will see from the management response, we from our side are taking steps with WHO to implement many of the recommendations, but much will also depend upon the willingness of the membership of Codex to streamline their ways of working and priorities, while also being more inclusive of the developing countries and their needs.

The evaluation of statistical activities in FAO (Chapter 6 of this report) addressed one of the core functions of the Organization. The evaluation confirms the importance and utility of statistical functions, but highlights difficulties faced by many developing countries in maintaining and improving their national systems, on which depends the quality of FAO-produced statistics. The evaluation also underlines the need to review priorities in order to better meet the core information requirements among the expanding diversity of demands. Management intends to use the recommendations in making further progress in this important normative work.

I have noted that, in each case, the evaluations have found that the output of FAO should be expanded in particular ways. This is very gratifying to the Organization and demonstrates the value of our efforts. We are working with you, the membership, and with our other partners to overcome resource constraints which make these recommendations difficult to address. However, in our continuous drive for greater cost-effectiveness, I have also asked the Evaluation Service to ensure that future evaluations present alternatives for improvement within existing resources as well as identifying priorities for additional funding.

The role of evaluation in FAO reflects my drive for full transparency and accountability not just on resource use but also on results. The Programme Committee has on several occasions welcomed the candid and critical examination of impacts and issues in evaluation reports. I share this perception even if sometimes it leads management to an equally frank rebuttal of observations and recommendations which we do not consider viable. In this way, I find the evaluation work of increasing value in both internal debate and in promoting dialogue with the membership on how we can provide the optimum service.

It is in this spirit that I look forward to the comments of the Council and Conference on the conclusions and issues raised in this report.


Jacques Diouf   





1. Following the approval by the Programme Committee at its 82nd Session (September 1999), the policy and principles underlying the new system of evaluation were promulgated in Director-General’s Bulletin No. 2001/33 (Strengthening the FAO Evaluation System). Since then, the new evaluation system has been under implementation in order to improve the relevance, efficiency and effectiveness of the Organization’s work in the context of the Strategic Framework and the results-oriented approach. In particular, this has meant: (a) the establishment of a new, comprehensive evaluation system, comprising annual assessment and auto-evaluation by programme managers and programme evaluation at the corporate level by the Evaluation Service (PBEE); (b) further strengthening of programme evaluation in terms of the quality and feedback to programme planning and the implementation process; and (c) facilitating the active participation of the Governing Bodies in the evaluation process, particularly the Programme Committee.

2. The implementation of pre-evaluation monitoring, annual assessment and periodic auto-evaluation is under way in 2003, beginning with an annual assessment of implementation progress by programme managers and a first group of auto-evaluations of selected programme entities under the technical and economic programmes. The implementation is supported by PBE as a whole for the annual assessment, and by PBEE for auto-evaluation, with respect to the provision of, and training in, methodological and procedural tools. Annual assessments are expected to improve the monitoring of programme outputs and their results, whereas the auto-evaluation process is expected to help programme managers learn directly from implementation experience, leading to more realistic planning and improved management. The latter should also provide a better basis for, and hence strengthen, programme evaluations. The first synthesis report on the results of auto-evaluation will be made to the Governing Bodies through the Programme Committee in 2005.

3. As regards programme evaluations managed by PBEE, efforts have continued to enhance their quality and usefulness for improvement and learning in the strategic management context as well as their credibility. These evaluations are increasingly framed in relation to specific objectives and approaches set out in the Strategic Framework and the Medium-term Plan. The main aim is to assess the overall relevance, coherence and effectiveness of the ongoing activities linked to the specific strategic objectives so as to provide feedback for improved planning and management of the activities, both individually and collectively. This approach, applied for the first time in the thematic evaluation of activities in support of Strategic Objective A3 (Preparedness for and effective and sustainable response to food and agricultural emergencies), has been well received, both by programme managers and the Programme Committee, and will be further refined. At the same time, the approach is also conducive to improved assessment of programme results, providing a clearer framework and criteria for assessing progress in achieving the planned results. Similarly, more systematic field visits to selected countries and questionnaire surveys of partners are employed to assess the results at the field level and the level of satisfaction amongst key partners. Further improvements in this respect, however, will require strengthening the monitoring of results by programme managers as envisaged under the new system of annual assessments referred to above.

4. Selective use of external inputs to the evaluation process continues to be expanded to reinforce the technical competencies available to programme evaluations or to lend greater credibility to evaluation results. The mechanism of an external peer review panel has a special role to play in that it enriches the range of expertise and experience applied to the process, provides an independent validation of the substance and quality of evaluations and, as a consequence, adds to the credibility of the results. Greater use of external expertise is also being made in conducting evaluations, including the use of teams of external consultants in selected cases. For example, the Special Programme for Food Security was evaluated by a team of external consultants, and the Codex Alimentarius and other FAO/WHO food standards work was evaluated by a team led by a senior consultant and comprising a mix of consultants and FAO and WHO evaluation staff. This, accompanied by the practice of a separate management response, has been helpful in enhancing the sense of transparency, independence and credibility of individual evaluations.

5. The Programme Committee has continued to play a very active role in guiding evaluation work. The Committee advises the Director-General in selecting topics to be included in the rolling biennial plans on programme evaluations and, as the primary recipient of evaluation reports, reviews and comments in depth on individual evaluations. It also seeks and receives progress reports on follow-up actions regarding key evaluation recommendations. The latter has proved to be an effective way of promoting feedback from evaluation to programme planning and implementation, which can be a serious weakness in many evaluation systems.

6. As indicated in the Director-General’s Foreword, this edition of the Programme Evaluation Report contains the six evaluations considered by the Programme Committee during 2002-03. Given the favourable reception to the format of the last report, it has been used in this report as well: each chapter contains a summary of the evaluation report, the report of the external peer review panel, the management response and the Programme Committee’s report.

7. In looking at the substance, the Programme Committee found that the activities evaluated were largely relevant to the needs of the FAO member countries and provided them with useful services. It discussed weaknesses and remedial action with the concerned managers in the light of the evaluation recommendations and the management responses. The Committee also expressed overall satisfaction with the way in which the evaluation function has progressed in FAO. In particular, it has appreciated evaluation’s strategic and forward-looking orientation and agreed that increasing emphasis be given to thematic evaluations in the context of the Strategic Framework and the Medium-term Plan. In recent meetings, the Committee also:

    1. appreciated the increasing utilization of external experts and, in particular, the use of external peer review panels to validate evaluations;
    2. in commenting on the external evaluation of the SPFS, suggested that the additional cost could be warranted either by the need for special competencies or to enhance the credibility of the evaluation;
    3. welcomed positive and proactive management responses;
    4. welcomed the increasingly candid evaluations, with their greater degree of “frankness” and “constructive criticism”;
    5. welcomed precise and operational recommendations but also warned against excessive detail and lack of prioritization in recommendations;
    6. highlighted the importance of follow-up to evaluations, including timely action on those recommendations accepted by management;
    7. appreciated efforts to enhance the dissemination of evaluation results in the interests of sharing experience and increasing transparency; and
    8. recognized the cost implications of comprehensive evaluations but emphasized the importance of quality and integrity rather than quantity of evaluations.

8. In considering the last Programme Evaluation Report 2001, the Conference “... expressed general satisfaction with the progress being made in evolving an appropriate evaluation system in the context of a more strategically-oriented planning, programming and budgeting approach recently introduced...” and noted “that it would be appropriate to consider the creation of an independent evaluation service”.2

9. The expression of general satisfaction by the Programme Committee and the Council provides strong encouragement to sustain the momentum for strengthening the new evaluation system. The effort will continue. Apart from developing and refining more appropriate methodologies and approaches to evaluation, the focus will be on enhancing evaluation feedback to the programme process as well as on promoting organizational learning from various forms of evaluation, including auto-evaluation.

10. On the question of the possible creation of an independent evaluation service, the Joint Meeting of the Programme and Finance Committees considered the issue in May 2003 and concluded that the independence of evaluation is both important and complex and that the issue would be addressed again at another Joint Meeting of the Committees in September 2003.

11. Nevertheless, one point bears highlighting. This concerns the cost of evaluation in two ways. The direct cost of evaluations is increasing, reflecting a greater use of external inputs, both as part of the evaluation teams and for external peer review panels. Field visits and questionnaire surveys also add to the cost. More indirectly, assessing results requires more systematic monitoring by the programme staff, particularly for such results as outcomes and impact of programmes through the auto-evaluation process. This makes a demand on programme resources in terms of staff time and funds for collecting and reviewing the evidence of such results. In the context of general resource constraints faced by the Organization, this presents a real challenge, requiring a strategy for cost-effective use of evaluation as a management tool.


1 EMPRES – Emergency Prevention System for Transboundary Animal and Plant Pests and Diseases.

2 Report of the Conference of FAO, C 2001/REP, paragraph 82.

