4. All FAO programmes and activities, whether financed from the regular budget of the Organization (mandatory assessed contributions) or from voluntarily contributed extra-budgetary resources, are subject to evaluation. The policies for the evaluation of these programmes have been set by member countries in the Governing Bodies and by the Director-General. Evaluation is designed for:
5. The emphasis on providing evidence-based lessons on the technical content of FAO work, encompassing its validity, relevance and scope for future improvement, distinguishes the focus of evaluation from that of audit and ensures complementarity between the two.
6. In establishing evaluation policies, the Council takes advice primarily from the Programme Committee. The Director-General is advised by the internal Evaluation Committee chaired by the Deputy Director-General, which was established in 2004. Three documents have been particularly important in codifying evaluation policy:
7. A further important development was the approval in April 2005 of a set of norms and standards for evaluation in the UN system4 by the United Nations Evaluation Group (which is composed of the heads of evaluation from throughout the UN system). These norms and standards are largely in line with the standards of the OECD-DAC and their purpose is captured in a preambular statement: “Towards a UN system better serving the peoples of the world; overcoming weaknesses and building on strengths from a strong evidence base”. They now provide a baseline against which all organizations and programmes of the UN system can gauge their performance.
8. FAO evaluations currently fall into the following major categories, which are complementary:
9. The Programme Committee is the recipient of major evaluation reports for the Governing Bodies. Its functions with respect to evaluation are to advise the Council on overall policies and procedures for evaluation and to:
10. The Evaluation Service is responsible for ensuring the relevance, quality and independence of evaluation in FAO. It is located for administrative purposes in the Office of Programme, Budget and Evaluation, which forms part of the Office of the Director-General5. The Service receives guidance from the Programme Committee and the Evaluation Committee (internal). It is solely responsible for the conduct of major evaluations for the Governing Bodies and other major evaluations, including the selection of evaluators and the evaluation terms of reference. It thus enjoys a high degree of independence within the Organization. In addition to its responsibilities for the conduct of evaluations, the Service also:
11. Unlike the evaluation units in some other UN organizations, the Evaluation Service supports auto-evaluation by managers but has no wider responsibilities in results-based management, in order to assure a higher degree of independence in its evaluations. The Service is also not involved in evaluation capacity building in member countries. For staff training, it provides comments on training requirements to the Human Resources Division.
12. For the current biennium 2004-05, a 27 percent real increase was made in the budget for evaluation, which stands at approximately US$ 4.6 million6 for the biennium (in total approximately 0.5 percent of resources available for the Regular Programme of work). Resources for evaluation of extra-budgetary work are currently of the order of US$ 1.3 million per biennium (approximately 0.25 percent of trust fund expenditure). The translation and reproduction of evaluation documents for the Governing Bodies, and certain indirect costs of evaluation such as office space, are covered outside the evaluation budget.
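The budget shares quoted above can be cross-checked with a few lines of arithmetic. This is an illustrative sketch only: the back-calculated biennium totals are implied by the report's own percentages rather than drawn from FAO accounts, and `implied_total` is a hypothetical helper name introduced here for clarity.

```python
# Illustrative arithmetic only: the totals computed below are implied by the
# percentages stated in the report, not figures taken from FAO accounts.

def implied_total(evaluation_budget_usd_m: float, share_percent: float) -> float:
    """Back out the total resource base (US$ million) implied by an
    evaluation budget and its stated share of that base."""
    return evaluation_budget_usd_m / (share_percent / 100.0)

# Regular Programme: US$4.6 million is ~0.5 percent of available resources,
# implying roughly US$920 million per biennium.
regular_programme = implied_total(4.6, 0.5)

# Trust funds: US$1.3 million is ~0.25 percent of expenditure,
# implying roughly US$520 million per biennium.
trust_funds = implied_total(1.3, 0.25)

print(round(regular_programme), round(trust_funds))
```

The helper simply inverts the percentage relationship; since the report rounds both the budgets and the shares, the resulting totals are order-of-magnitude estimates, not exact figures.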
13. Evaluations for the Governing Bodies normally cover a strategic objective or cross-organizational strategy as defined in the FAO Strategic Framework, a programme, or an organizational unit. In recent years, these evaluations have tended to deal with large blocks of work at the programme or strategy level in order to maximise their usefulness to the Governing Bodies and senior management.
14. Selection of evaluation topics: In proposing subjects for evaluation to the Programme Committee in the rolling evaluation plan, the Evaluation Service takes account of interests expressed in the Governing Bodies and by FAO managers. The intention is that evaluation should focus on those areas where the Governing Bodies and management have the greatest need for evidence-based information on processes, institutional arrangements, outcomes and impacts. In order to achieve balanced and progressive coverage of the Organization’s strategies and programmes, key factors in deciding on the proposals to be made include: a) the coverage of evaluations over the past six years; and b) the coverage of auto-evaluations and other studies. Criteria also include: the size of the programme or area of work; the demand from member countries; and areas of work being considered for expansion because of their perceived relevance and usefulness, or for elimination or downsizing. When management and the Governing Bodies already agree that particular work is not of continued priority, evaluation is unusual, as it can provide accountability but is not likely to deliver forward-looking lessons. The Programme Committee decides on priorities for evaluation from a list of possible topics and may introduce additional topics it considers to be of importance. It can also, and does, request timely evaluations outside the regular evaluation cycle, and the plan is adjusted to accommodate these.
15. Terms of reference for the evaluation: An approach paper for each evaluation is developed by the Evaluation Service in discussion with the units most closely involved in implementing the strategy or programme. Increasingly, an evaluation team leader is then selected and participates in the finalization of terms of reference.
16. The evaluation team: In the past, most evaluations were led by Evaluation Service staff, but over the last two years the pattern has increasingly been for evaluations to be led by external consultants, with teams composed of external consultants and the Evaluation Service providing technical support and quality assurance. Evaluation consultants7 are selected on the basis of competence, with attention also to regional and gender balance. Evaluation team leaders are consulted where possible on the composition of the remainder of the team. The size of the teams is related to both the scale and complexity of the evaluation, 3-4 lead consultants being typical.
17. Evaluation scope and methodology: The methods used are tailored to the individual evaluations. Certain features are common. The ultimate determinant of the value of a programme, strategy or process is the benefit it delivers to FAO member country governments and their peoples. Key issues for evaluations include:
18. Evaluations are forward looking. The central concern is thus to identify strengths and weaknesses in FAO’s programmes, approaches and structures with relevance for the future. In examining the effectiveness and impact of programmes, it has generally been found most productive to examine the outcomes and impacts of work completed and ongoing over the last four to six years (over longer periods, both detailed information and the lines of causality for impact become difficult to trace). For many institutional issues, the evaluations are essentially concerned with the efficiency and effectiveness of current, rather than historical, practice, as well as the likely benefits of ongoing reforms.
19. Preliminary desk review and SWOT analysis8 have been found essential in designing evaluations and determining issues for in-depth study. The introduction in 2000 of an enhanced results-based planning model for FAO has made it easier to identify the outcomes and impacts (objectives) towards which programmes are working, but it usually remains essential to clarify the programme logic as an early step in the evaluation process and define appropriate verification indicators for use in the evaluation.
20. Evaluations review the work of other institutions comparable to FAO, especially in the multilateral system. This is important for benchmarking on processes, quality of work, etc. As the performance of FAO cannot be judged in isolation from that of its partners and competitors, it is also essential to make judgements on FAO’s areas of comparative strength and weakness.
21. Also with respect to methodology:
22. The evaluation report: The methodology requires the evaluation team to consult with stakeholders, including FAO management, but the team is solely responsible for its report, including the findings and recommendations. The role of the Evaluation Service is to assure adherence to the terms of reference and timeliness, and to provide technical support to the evaluation, but the Service has no final say on findings and recommendations. Increasingly, external independent evaluation team leaders are present when the reports are discussed in the Programme Committee.
23. All evaluation reports are public documents made available in all languages of the Organization and posted on the evaluation website. The report is required to present evaluation recommendations in an operational form and to include recommendations for improvement with no budget increase (as it was observed that evaluation teams almost invariably made proposals for both expanded work and budget in the area under evaluation and this was not always realistic).
24. Follow-up: The Programme Committee requests management to provide a response to each evaluation, indicating which findings and recommendations it accepts, which it rejects, and why. It also requests management, as part of its response, to provide an operational plan on how it intends to follow up. This is an area where there has been considerable progress in the last few years, and the Programme Committee has emphasised that it would like to see responses in more operational terms. The Programme Committee also requests a follow-up report on the progress made in implementation after two years.
25. The policy of the Organization is that all programmes are subject to evaluation, including those funded from extra-budgetary resources, and a generally accepted rule of thumb is that programmes should devote 1-2 percent of total resources to independent evaluation9. The Governing Bodies have also indicated that they do not wish the Regular Programme to subsidise evaluation of extra-budgetary activities. Sixty-one percent of bilaterally funded trust fund field projects over US$ 2 million (excluding emergency) that were completed in the years 2000-2004 were evaluated. The corresponding figure for UNDP projects was 27 percent (because UNDP is now generally evaluating country programmes rather than individual projects), and none of the 14 unilateral trust fund projects paid for by countries themselves were evaluated. As might be expected, the figures are considerably lower for projects in the budget range of US$ 1-2 million. Of 53 emergency projects over US$ 1 million, only two were separately evaluated.
26. Extra-budgetary projects have been subject to evaluation by tripartite evaluation missions, normally as the project drew towards completion and follow-up action was under consideration. Such missions consisted of three or four independent evaluators nominated respectively by the funding source, the benefiting country(ies) and FAO. However, a number of developments in the last decade have reduced the applicability of this model. There has been a growth in the number of relatively small field projects, for which a large mission of this type would not be a cost-effective use of resources. There has also been a major increase in emergency response and rehabilitation programmes, where resources from various donors are handled in an overall package to respond to the crisis. The growth in non-traditional projects, which support headquarters work or an integrated mix of normative support and field work at country level, has also been important.
27. Country and regional development projects of US$ 2 million or more: The policy now is that field development projects of US$ 2 million or more should continue to be subject to individual project evaluation by an independent team. This modality continues to include a country visit by the evaluation team and evaluation is timed in relation to when it can make the maximum contribution to the work being assisted under the project. The Evaluation Service clears the terms of reference and team composition and also validates that the evaluation report meets essential quality standards.
28. Smaller field development projects: For smaller field development projects, talks are now being initiated with individual donors to explore their willingness to set aside a small amount under each project and place it in an evaluation trust fund specific to each donor. In consultation with the donor, groups of projects would then be evaluated by independent teams, in some cases as part of wider country or thematic evaluations10. Evaluation of FAO’s Technical Cooperation Programme projects is already handled in this way, and it facilitates ex-post evaluation as well as the evaluation of ongoing projects.
29. Major emergencies: For major emergencies, FAO needs to evaluate in an integrated way the relevance, efficiency and sustainable benefit of its response to the totality of the emergency. To date, ad hoc funding has been used for this11, and a management decision has now been taken to introduce an evaluation component in project budgets from which resources can be pooled to evaluate FAO’s response both during the provision of assistance and ex-post. Although the first evaluations of major emergencies were not carried out in full consultation with donors or the affected countries, it is intended that, wherever possible, there will be fuller partnership, and the evaluations will continue to be managed by the Evaluation Service. The independent multilateral evaluation of the 2003-05 Desert Locust Campaign, which is looking at the response of FAO, national programmes and donors, and has a steering committee of all partners to oversee its work, may yield some useful lessons in this respect.
30. For extra-budgetary programme funding which supports areas of the Regular Programme, or a mix of normative and field development work, evaluation mechanisms appropriate to each of the individual programmes are being flexibly developed in discussion with the individual donors.
31. Management response and follow-up on the implementation of recommendations is required for evaluations of extra-budgetary programmes, as for Regular Programme work, but this is generally acknowledged to be an area of weakness. The actual use made of the findings and recommendations depends heavily on the extent to which the various partners to the evaluation become convinced of their validity and thus put them into effect.
32. Evaluation teams: For all evaluation of extra-budgetary work, independent teams are utilised. They are required to be consultative in their approach in order to maximise access to information and to facilitate both realism and ownership by partners to the evaluation, but they have full responsibility for their report findings and recommendations. In the past, team members for evaluations of extra-budgetary funded work were nominated separately by funding sources, beneficiary countries and FAO. The preference now is for all parties to agree on the evaluation team membership without the individuals acting as their particular representatives.
33. With financial support from UK-DFID, auto-evaluation was introduced in FAO in 2003 as part of results-based management. Auto-evaluations are conducted by programme managers with the use of external consultants, while basic principles and guidelines, quality assurance and technical support are provided to managers by the Evaluation Service12. The manager responsible for the programme entity takes final responsibility for the auto-evaluation report. A formal response to the auto-evaluation is then required from the more senior manager to whom they report, normally a Division Director or Assistant Director-General (although this last stage of the process has not been fully effective). Auto-evaluations are now to be reported in summary form through the Programme Implementation Report, and summaries are available on the FAO evaluation website.
34. From 2005, auto-evaluation has been extended to Priority Areas for Inter-disciplinary Action (PAIAs) and on a pilot basis to the administrative areas. Nineteen auto-evaluations were undertaken in 2004, covering 28 programme entities. For 2005, 16 auto-evaluations have been agreed for the technical programmes. One PAIA and one non-technical programme auto-evaluation are being undertaken as pilot studies.
35. The principle is that all programme entities should be subject to either independent external evaluation or auto-evaluation during the course of their lives and that programme entities with fixed duration should normally be evaluated towards their completion in order to assist planning for the future. However, it has become clear that due to changes in priorities and the continuing budget constraints, questions arise about whether to expand certain areas of work and cut others irrespective of the planned lifetime of the programme entities. Considerable emphasis is thus being placed on selecting work for auto-evaluation when major changes in direction are being considered, be they either for expansion or for contraction.
36. The Evaluation Service has analysed the experience with the first round of auto-evaluations, including circulating a questionnaire to those involved. Chart 2 summarises the responses on the perceived usefulness of auto-evaluation. It can be seen that Assistant Directors-General all found auto-evaluation helpful. At the Director level, 33 percent of respondents found it very helpful, 58 percent helpful and only 8 percent found no significant benefit. Overall, managers at all levels found the process either helpful or very helpful.
Chart 2: Usefulness of auto-evaluation as perceived by managers
37. As auto-evaluation is the responsibility of the managers directly concerned, there can be only limited expectation that they will be strongly critical. It was also evident that useful conclusions were internalised by those responsible for programme entities during auto-evaluation, although these were not necessarily reflected in the reports. Auto-evaluation reports do, however, demonstrate whether the entities were able to contribute significantly to outcomes and impacts, which in itself is valuable information for senior managers. Where partners and users were directly consulted in auto-evaluation processes, more criticism, and sometimes verification of benefits, came to light than when consultation processes were largely internal. It was also found that the use of external consultants and/or external peer reviewers strengthened both the objectivity and the critical questioning in the process.
38. It is too early in the process to analyse the impact that auto-evaluation has had on the actual planning of programme entities, and it will always be difficult to disaggregate the extent to which managers have made changes as a result of auto-evaluation, rather than the auto-evaluations reflecting changes which had been decided upon independently. A large number of the concerned managers concluded in their auto-evaluations that programme entity design, and its actual execution, needed to be improved by clarifying what outcomes were expected for which target beneficiaries. Many found that a greater proportion of resources needed to be used for disseminating outputs, particularly publications, rather than just producing them. In addition, there was considerable concern about the need to improve accessibility through FAO’s website.
39. The Evaluation Service is now completing the report of its own auto-evaluation of evaluation in FAO, including in particular the performance of the Service itself. In this auto-evaluation, a review was undertaken by peers from other organizations of the UN system and from bilateral development agencies. Structured interviews were held with groups of internal stakeholders and some government representatives. A representative sample of evaluation reports was sent to peers for review against their own criteria. Major findings to emerge from the auto-evaluation process were that:
5 The Service staffing is composed of a Chief, eight professionals (including one provided from extra-budgetary resources) and three support staff.
6 US$ 4.1 million under the Regular Programme allocation to the Evaluation Service and approximately US$ 0.5 million for evaluation of the Technical Cooperation Programme (TCP).
7 Evaluation consultancies are now advertised on the web.
8 Analysis of Strengths, Weaknesses, Opportunities and Threats.
9 Technical cooperation and analytical work justifies the upper end of this range in view of its relative complexity and high potential for benefit.
10 For which donor specific reports would always be prepared as well as an overall report on FAO’s work.
11 Mozambique floods, Balkans, Afghanistan, southern Africa and Tsunami-affected areas.
12 The Evaluation Service also provides matching funds to support the evaluations.
13 It should be noted in this regard that the organizational location of the Evaluation Service was considered by the Programme and Finance Committees at their joint meeting in September 2003. They “agreed that the independence of the Evaluation Service, within the existing location in PBE, should be further enhanced....”