JM 03.1/3


Rome, 7 May 2003

The Independence and Location of the Evaluation Service

Table of Contents

General Principles of Evaluation and Their Application in FAO
Proposed Merger of Evaluation and Internal Oversight Functions
Rationale for the Recommendation
Pros and Cons of Merging AUD and PBEE
Enhancing Independence of Evaluation in FAO
Current Status of Evaluation Arrangements – Independence Aspects
Possible Options in Changing Organizational Location of PBEE – Preliminary Analysis


1. The Finance Committee considered at its Hundredth Session (September 2002) the External Auditor’s report entitled Review of the Organization’s Internal Controls, containing his recommendation that “the Organization examine the possibility of combining the AUD (Office of the Inspector-General) and PBEE (Evaluation Service) with a view to creating a single office or division for oversight”.1 With regard to this recommendation, the Secretariat undertook to submit a paper containing detailed proposals on the subject for consideration by the Joint Session of the Finance and Programme Committees in May 2003. The Finance Committee stressed that the response to be prepared by the Secretariat should take fully into account the Committee’s concerns regarding the independence of the evaluation function.

2. The present paper follows up the Secretariat’s undertaking referred to above, providing its detailed response to the External Auditor’s recommendation, as well as to the wider issue of the independence of the evaluation function, a concern also underlined by the Conference at its last session (Thirty-first Session, November 2001). In considering the Programme Evaluation Report 2001, the Conference stated that “it would be appropriate to consider the creation of an independent evaluation service” (para. 82 of the report). In short, addressing these various suggestions entails an examination of the independence and institutional arrangements of the evaluation functions in the Organization.

General Principles of Evaluation and Their Application in FAO

3. Evaluation has been mainstreamed among the UN agencies and international development organizations during the last three decades, and there is by now a broad consensus on the core principles and practices. For the present purpose of examining key aspects of evaluation functions in the Organization, the principles set forth by the OECD DAC serve as an appropriate benchmark since they provide the most comprehensive and authoritative framework2 of best practice among international organizations. This is particularly the case regarding the issue of independence of evaluation.

4. Evaluation is defined as “an assessment, as systematic and objective as possible, of an ongoing or completed project, programme or policy, its design, implementation and results. The aim is to determine the relevance and fulfilment of objectives, developmental efficiency, effectiveness, impact and sustainability. An evaluation should provide information that is credible and useful, enabling the incorporation of lessons learned into the decision-making process of both recipients and donors”.3 The main purposes of evaluation are (a) “to improve future aid policy, programmes and projects through feedback of lessons learned” and (b) “to provide a basis for accountability, including the provision of information to the public”. In particular, the DAC principles note that “the accountability notion of evaluation referred to here relates to the development results and impact of development assistance. It is distinct from accountability for the use of public funds in the accounting and legal sense, responsibility for the latter usually being assigned to an audit institution”.4

5. The DAC evaluation principles include: (a) the establishment of a clear corporate evaluation policy with definition of its role and responsibilities and its institutional place; (b) impartiality and independence of evaluation from the processes concerned with policy-making and delivery and management of development aid; (c) transparency of the evaluation process, including wide public dissemination of evaluation results; (d) application and use of evaluation results through feedback to policy making and operational processes; (e) partnership and participation of the stakeholders involved in the use of evaluation results; and (f) integration of evaluation into the planning process.

6. The current practices in the Organization can be summarized as follows in reference to these principles:

    1. Corporate evaluation policy – this is embodied in the Director-General’s Bulletin 2001/33 (Strengthening the FAO evaluation system – copy attached in the annex), which updates a similar DGB issued in 1984 and reflects the outcome of the review by the Programme Committee in 1999 of the Secretariat’s paper entitled Evaluation in the Context of the Strategic Framework and the New Programme Model (PC 82/4). The DGB defines the key functions of evaluation as part of corporate efforts for strengthening strategic and result-oriented management. It sets forth the objectives and main functions of evaluation in FAO, with the Evaluation Service (PBEE) as the central evaluation unit, as well as the institutional process, including reporting arrangements to the Director-General and the Governing Bodies through the Programme Committee. It stresses, inter alia, independence and objectivity in evaluation work as well as integration with the programme planning and management process with a view to catalyzing improvements and organizational learning. It is noteworthy that the evaluation system comprises a combination of self-assessment and auto-evaluation at the programme management level and independent evaluations managed by PBEE;
    2. Impartiality and independence – evaluations are carried out, under specific terms of reference, by independent teams not directly involved in the management and execution of programmes and projects. In particular, PBEE has a mandate for independent evaluations, notably for programme and thematic evaluations which it executes or where it supervises the execution by others – it also ensures independence and objectivity of evaluations carried out by teams of consultants on field and other extra-budgetary supported projects. Programme and thematic evaluations are characterized by measures to ensure their independence and transparency, sometimes by using external expertise in the evaluation team but more frequently through independent assessment of evaluation reports by external peer review panels. The evaluation team’s independence is further emphasized by separately reporting their findings and recommendations from management’s response to them. In the case of auto-evaluations, the implementation of which the Service is overseeing during this year, their independence and objectivity will be maximized by the use of external expertise in the process, either by incorporating such expertise in the review teams or in having peer review panels. Clear methodologies and procedures to ensure objectivity and analytical rigor, including terms of reference for individual evaluations, add to the independence of the process;
    3. Transparency of process, including the dissemination of results – many of the measures mentioned above also directly contribute to enhancing transparency, which goes hand in hand with independence. Furthermore, the stakeholders are involved in key stages of evaluation, from planning, including the preparation of terms of reference, to reviewing and commenting on the draft evaluation report. All programme and thematic evaluation reports as well as the biennial Programme Evaluation Reports are available on the Evaluation website, which is in the public domain. Similarly, for field project evaluations, summaries of individual evaluation reports are published on the same site;
    4. Use of evaluation results – this principle is built into the process of various evaluations. For programme and thematic evaluations, the consultative process provides dialogue between the concerned programme managers, staff and evaluation teams, culminating in the management response to the evaluation results. Further, the review of these evaluations by the Programme Committee, which routinely asks for progress reports on action taken by the Secretariat on the evaluation recommendations, reinforces the feedback process. For the field project evaluations, individual reports are reviewed and acted on by the recipient governments, donors and FAO through project management processes, often at the country level. Furthermore, key issues and lessons at the aggregate levels are disseminated through periodic syntheses of evaluations produced by PBEE. For example, the Programme Committee has been monitoring corrective actions taken by the Secretariat to improve project design quality, an issue highlighted in the synthesis of project evaluations examined by the Committee in May 2000;
    5. Partnership and participation of the main stakeholders – this is reflected in the preceding paragraph. Such partnership also extends to the follow-up of evaluation results where programme managers and staff proactively address issues arising from evaluations, often with the participation of PBEE staff. Illustrative examples during the recent years include: (i) taking up of issues identified by the thematic evaluation on participatory approaches by the informal, inter-divisional working group on participation; (ii) application of the recommendations of the programme evaluation of agricultural support services in the restructuring of the AGS division; and (iii) follow-up actions by the inter-departmental task force for training on the recommendations of the thematic review of training activities; and
    6. Integration with the planning process – as shown in the preceding paragraphs, evaluation is linked with planning and management processes. In particular, programme and thematic evaluations are increasingly framed in the context of the Strategic Framework and MTP, and evaluation results are fed at the corporate level into the programme planning process for the MTP and PWB. The location of the Service in the Office of Programme, Budget and Evaluation (PBE) facilitates such planning-evaluation synergy at the corporate level. In this respect, lessons arising from evaluation have provided a key basis for the programme planning formats for the MTP process, particularly the result-oriented New Programme Model.

7. Thus, it can be concluded that the FAO evaluation practices are generally consistent with the core evaluation principles.

8. However, it is worth noting here that the two major considerations, i.e. independence, especially in terms of organizational arrangements, and effectiveness of evaluation in influencing decisions, especially through learning, encounter some contradictions in reality, with some trade-offs between the two. Certain types of organizational arrangements tend to strengthen one aspect at the expense of the other. Thus, it is generally recognized that there is no single, universal solution to this issue – rather “an optimal solution should be sought to balance all of these requirements”.5 In this connection, two principal points are emphasized regardless of the particular arrangements used: “every effort should be made to avoid compromising the evaluation process”, and “the organizational arrangements and procedures should facilitate the linking of evaluation findings to programming and policy making”.

Proposed Merger of Evaluation and Internal Oversight Functions

Rationale for the Recommendation

9. The External Auditor’s recommendation seems to be based on his conclusions that:

      1. “there is an overlap between AUD and PBEE for one of the oversight functions”, meaning “evaluation” (para. 17 of the Report);
      2. the necessary liaison between the two units is not framed in the organizational structure but is conducted only in an informal manner, which in any case should be strengthened (paras. 18 and 19);
      3. the general trend in the UN system is to have one single office for all the main oversight functions (monitoring, internal audit, inspection, investigation and evaluation) as exemplified by the UN Office for Internal Oversight Services (OIOS) (paras. 20 and 21).

10. Each of the three conclusions is examined below.

(i) Overlap between AUD and PBEE in the evaluation function:

11. While this conclusion seems to be a major motive for the recommendation, it is based on a rather superficial and narrow appreciation of audit and evaluation, both in concept and in operational function. The External Auditor’s Report bases its case, in paragraph 17, on the inclusion of the word “evaluation” among the functions defined in Article 1 of the Charter of the Office of the Inspector-General. It quotes among its functions the reference to “monitoring and evaluating the adequacy and effectiveness of the Organization’s system of internal controls, financial management and use of assets” (italics added). Thus, “evaluation” in this context carries a rather narrow meaning, with a focus on management and use of resources, a point acknowledged by the External Auditor, who says that “the evaluation responsibility entrusted to AUD is rather limited in scope while that of PBEE is more focused on the assessment of the Regular and Field Programme results.” This indeed points to some very fundamental differences between the work of AUD and PBEE.

12. While evaluation forms part of the oversight mechanisms, its functions are quite distinct from those of other oversight instruments, especially those focused on internal control and financial management. As quoted above (para. 4), the generally accepted definition of evaluation underlines a clear distinction between evaluation and audit functions, the unique and major function of evaluation being learning and improvement. The differences extend also to the assessment criteria and related methodologies that are applied. The distinction extends to the reporting arrangements: while both units report to the Director-General, audit matters are handled by the Finance Committee and evaluation by the Programme Committee. In short, it is concluded that there is no significant overlap between the functions of AUD and PBEE and that, in fact, the relationship between the two is one of complementarity.

(ii) Liaison between AUD and PBEE:

13. The current institutional arrangements reflect these basic differences in the oversight functions of the two units. Nevertheless, liaison between the two units is essential in order to minimize potential adverse effects of uncoordinated coverage of topics as well as to ensure synergy from the complementarity of two functions on the same or similar topics.

14. This is recognized in the functional statements of both units which include the requirement: “Liaises with the Inspector-General [or “Chief, Evaluation Service, PBE” as applicable], to avoid possible duplication and ensure the complementarity of work programmes, in particular those dealing with the assessment of effectiveness of FAO’s substantive field and HQ programmes.”

15. Consequently, the two units consult frequently on their respective work plans, often draw on each other’s work on related subjects, and share staff training opportunities. However, there is always room for improving cooperation between the two units, including periodic sharing of lessons and issues on thematic topics of common concern. The main practical obstacle here is time constraints, rather than the organizational location of the two units.

(iii) Trend for unified organizational structure for oversight units:

16. As indicated in the External Auditor’s Report, several UN agencies have adopted a unified institutional structure for their internal oversight functions, much like that of the UN/OIOS (e.g. UNESCO, IAEA, UNEP, UNFPA, ICAO, IMO and WIPO); while WFP had this structure at the time of the External Auditor’s report, the evaluation unit has recently been separated from the office dealing with internal audit and inspection. At the same time, however, many other UN agencies have opted for alternative arrangements: in some, the evaluation unit is an autonomous office, usually reporting to the head of the agency (IFAD6, UNDP, UNICEF, UNCHS/Habitat and UNAIDS); in others, like in FAO, the evaluation unit is part of a larger office, often dealing with planning and programme functions (ILO, WHO, UNHCR, UNCTAD, OCHA, ITU and UNV); and in still other agencies, evaluation is co-located in units dealing with technical cooperation activities (e.g. UNIDO, UNDCP and WTO). The World Bank and IMF, as well as most of the regional development banks, have maintained separate evaluation offices reporting directly to their respective Governing Bodies. Similarly, the most common pattern among the bilateral aid agencies is to have a separate evaluation unit, including US/AID and UK/DFID. Thus, there is no clear trend towards creating a unified structure combining evaluation and other oversight functions.

Pros and Cons of Merging AUD and PBEE

17. While the reasons given in the External Auditor’s Report are not very convincing, the implications of the recommendation could also be examined from the corporate perspective. The apparent advantages of merging the two units could be that:

      1. it would give a seemingly coherent structure for internal oversight functions;
      2. it could lead to greater synergy between evaluation and AUD’s functions; and
      3. possibly it might help reduce the total resources dedicated to the two units.

18. On the first point, coherence is an elusive property, the benefit of which is hard to quantify in this particular case. The most salient argument might be the second point. However, achieving such synergy would depend, in the final analysis, on the core purposes and issues on which AUD and PBEE are expected to focus, as well as on how their functions are organized and carried out. It is also questionable whether the same purpose could not be achieved by further strengthening the liaison between AUD and PBEE. On the third point, it is noted that where evaluation as a function is included in an overall office of internal oversight, it generally retains its separate identity, primarily because of the difference in functions and in the qualifications and expertise of the staff carrying out these functions. Thus, assuming their output levels are to be maintained, there is little scope for savings.

19. On the other hand, there are several clear disadvantages and difficulties:

      1. The first difficulty arises from the intrinsically different purpose and nature of functions of the two units, with real risks of jeopardizing effective achievement of the most important functions of each. The primary focus of evaluation is to provide constructive suggestions for improvements and to promote learning by the programme managers and staff. It is essential that evaluation work is seen by the FAO managers and staff as a constructive and helpful function, based on objective and rigorous analysis and assessment, buttressed by the evaluator’s credibility in terms of ability to appreciate the substance of programmes and projects involved. These are difficult to reconcile with the purpose and functions, both real and perceived, of audit, inspection and investigation, which are more focused on financial compliance and operational aspects;
      2. Secondly, the proposed merger would remove the very advantage that exists in the current structure for effective feedback to programme planning and management. The synergy between programme planning and evaluation has been, and continues to be, important to the Organization’s successful effort to strengthen strategic planning and the quality of programmes and operations;
      3. Thirdly, the current separate reporting to the Council (through the Programme Committee for evaluation reports and the Finance Committee for the reports of the Inspector-General) reflects appropriately the difference in role and functions of two units and the respective Committees. However, the merging of AUD and PBEE would make the logistics of simultaneous reporting by the one office to both Committees problematic.

20. In conclusion, Management sees no practical value added in the recommended merging of AUD and PBEE. On the contrary, it is concerned about several risks that could jeopardize the progress made so far in the Organization’s oversight regime. It considers that improved coordination between the two functions could be pursued in more practical ways.

Enhancing Independence of Evaluation in FAO

21. Management appreciates that the issue of independence is of basic importance in further strengthening the evaluation function in the Organization, as highlighted in the DGB 2001/33. By its very nature, evaluation entails making value judgements by carefully assessing the available information and weighing the opinions of various stakeholders. In this context, independence and objectivity are critical in establishing a sense of legitimacy and credibility, especially in the eyes of those evaluated, by reducing potential conflicts of interest and possible biases. Thus, independence in this sense also contributes to enhancing the effectiveness of evaluation because it provides credibility for the function and promotes more pro-active learning and initiatives for improvements.

22. The most systematic examination of the issue of the independence of evaluation is found in a DAC report7. It highlights, apart from the corporate policy statement, the following as main considerations: (a) organizational structure separating evaluation from line functions, with the evaluation unit reporting directly to the organization’s senior manager or, preferably, to the head or a governing body; (b) access to and control over resources for evaluation work (budget and staff decisions); (c) authority over the planning and conduct of evaluation (selection of evaluation topics and terms of reference) as well as over the process (reviews and revisions of evaluation reports); and (d) linking evaluation findings to programme and policy making. These considerations are used below as the basic criteria for further reviewing the issue of independence of evaluation in the Organization.

Current Status of Evaluation Arrangements – Independence Aspects

23. Organizational structure and reporting arrangements. The central concern is to separate the evaluation function from line management in order to minimize potential conflicts of interest and other biases, implying for FAO that PBEE reports directly to the Director-General or to the Council (through the Programme Committee). PBEE is located in the Office of Programme, Budget and Evaluation (PBE) within the Office of the Director-General. The Chief of PBEE reports to the Director-General (ODG) through the Director, PBE, and through the Director-General to the Council and the Conference via the Programme Committee, which is the prime interlocutor for the Council on evaluation matters. As observed in paragraph 16 above, this internal location of PBEE represents a common arrangement among the UN agencies where central evaluation units are internal.

24. The following may be observed in this regard:

    1. While PBEE is subject to the normal lines of authority within the Secretariat, it enjoys a large degree of autonomy in carrying out its evaluation function, including reporting its evaluation findings and recommendations (see (c) below). The reporting arrangement through the Director, PBE ensures the normal administrative and management line of authority. However, as far as the substantive work of evaluation is concerned, the main role of this channel is to ensure an appropriate interface between evaluation work and corporate-level programme planning and management on the one hand, and to facilitate communication with the senior managers on the other. In particular, while the Director of PBE reviews draft evaluation reports, this does not function as control over the opinions and judgement of evaluation teams – in fact his views are expressed as part of the Management Response to evaluation reports. More generally, new measures introduced in recent years, such as external peer review and separate management response, have served to establish more clearly the separation of the evaluation function from line management within the Secretariat;
    2. As noted above, PBEE already interacts with the Programme Committee on evaluation matters through Management, including the preparation of biennial evaluation plans, individual evaluations and methodologies. In fact, this interaction, together with the Committee’s pro-active interest in evaluation, including in follow-up actions by the Secretariat on evaluation recommendations, has contributed to enhancing the status of PBEE as well as the independence and transparency of the evaluation process;
    3. Thus, the present arrangements provide PBEE with a satisfactory degree of independence in managing evaluations. Its location in PBE also provides a facility for linking evaluation to programme planning, monitoring and coordination at the corporate level, including feedback of lessons and issues for improvements.

25. Access to and control over resources. This is indeed a key factor for facilitating independence in the planning and conduct of evaluation work. With the increasing trend for programme and thematic evaluations, coupled with the need for greater use of external expertise, the cost of evaluation exercises has been increasing: major evaluations carried out in recent biennia have required large budgets (US$ 200,000-US$ 500,000 each). Yet, PBEE’s regular budget has not kept pace, particularly in covering the costs of external consultancy inputs to the evaluation work itself as well as to the external peer review process. Thus, additional resources have to be mobilized on an ad hoc basis, and while the needs have been met so far, this is not conducive to orderly planning of evaluations, including those requested by the Governing Bodies. In short, the resource issue remains an area for improvement, not only from the standpoint of independence but also for the quality and transparency of evaluation work. As far as the authority of PBEE for recruiting its staff and consultants is concerned, there is no impediment.

26. Authority over planning and conduct of evaluation, including the evaluation process. The DAC criterion implies that PBEE should have authority and freedom to select topics for evaluation and to determine the ways to carry out evaluation. PBEE’s work programme on evaluation is subject to the approval of the Director-General and its reports are subject to the normal clearance procedures. However, some clarifications are in order:

    1. In reality PBEE has much greater freedom than this situation implies. While the biennial rolling evaluation plans are subject to approval by Management, it is PBEE that initiates the process in consultation with departmental managers, taking into account interest expressed by senior management and/or the Governing Bodies. The PBEE-initiated plans are routinely cleared, without changes, by the Director-General, who consults the Programme Committee on the final selection of topics to be included in the evaluation plan;
    2. Similarly, PBEE is able to formulate its approaches and terms of reference for individual evaluations, in consultation with the departmental and divisional managers concerned. Draft evaluation reports are commented on, often over several versions, by the line managers. While a certain amount of negotiation takes place in the process, it does not reach the point of compromising the integrity of independent and objective evaluation. PBEE retains the right to decide how comments are reflected in its reports. Line managers and senior management express their reaction in the management response, which is published as a separate part of the evaluation report;
    3. However, because many of these steps described above have evolved during the recent past, there is now a need to institutionalize more systematically the evaluation process by consolidating the practices into formal guidelines.

27. Linking evaluation findings to programme and policy decisions. This criterion implies that PBEE should have adequate authority and status so that it can express its own judgement and has access to the decision-making process in order to influence follow-up actions on evaluation results. This is a real challenge in any evaluation, going beyond the sole issue of independence. However, it can be said that PBEE’s evaluation work is generally respected and accepted within the Organization, even when its reports are critical. In this respect, the Programme Committee has welcomed a trend towards more frank criticism in recent evaluation reports. Such a sense of authority comes from the perceived transparency and quality of evaluation, including rigour and objectivity in analysis, balanced judgement and relevance of findings and recommendations. Regarding access to decision-making for follow-up, the practice of management response provides a good opportunity for this at the Secretariat level. Similarly, the review of evaluation reports together with the management response by the Programme Committee facilitates having an influence on follow-up. In fact, PBEE is often involved in planning the detailed follow-up actions by the programme managers.

28. The foregoing analysis indicates that the evaluation process in FAO, despite PBEE’s organizational location, can be considered basically independent in terms of the principal criteria, although the issue of resources remains. The Council, for example, recognized “the significant improvements made by the Organization in the evaluation area, not only in terms of the volume and quality of reports, but also in terms of a more constructive approach to recommendations…”8. Thus, it seems that organizational independence per se may not be the single most decisive factor in the development of a sound, functional evaluation system.

29. In fact, the experience in FAO indicates that a functional evaluation system depends much on factors other than organizational independence per se. Such factors include (i) institutional arrangements for transparency, (ii) credibility and quality of evaluation work with objective analysis and balanced judgement, (iii) quality of the evaluation team, commanding respect and trust, and (iv) the overall management attitude towards improvement and learning. In this respect, it is important to recognize the advantages offered by PBEE’s location. As noted above, it facilitates interface with corporate-level programme planning and management for feedback. Similarly, as an internal unit, PBEE staff are familiar with the Organization’s work and its staff, not only through evaluation but through collaboration with other staff members on common issues. This helps create confidence in, and acceptance of, PBEE staff and their work, facilitating a consultative process with various stakeholders while leaving PBEE free to maintain its conclusions and recommendations. These factors together contribute to pro-active learning from evaluation, perhaps more so than a situation where evaluation results are perceived to be imposed without sufficient trust between the evaluators and programme managers.

Possible Options in Changing Organizational Location of PBEE – Preliminary Analysis

30. Nevertheless, in order to respond to the concerns expressed by the Governing Bodies, possible options are explored here for achieving a greater degree of organizational independence by changing the location of PBEE. In particular, two possible scenarios are considered below, with a summary assessment of likely advantages and disadvantages:

    1. PBEE located within the Secretariat but reporting directly to the Director-General. This would obviate the additional layer of reporting through the Director, PBE, providing greater organizational independence. It might also give a higher profile to PBEE and evaluation work in the Organization. At the same time, the following considerations arise:
      1. the presently available facility for feedback from evaluation to the corporate planning and programming would be diminished and would need to be replaced through another mechanism. Further, the degree of contribution from evaluation expertise and experience to the development of the planning and programming process, as occurred during recent years, would be weakened;
      2. this arrangement would entail some costs in several forms, including an upgrading of PBEE’s status and its administrative autonomy. It would also require an interlocutor within the Office of the Director-General with adequate capacity for supervising and coordinating evaluation work;
    2. PBEE located outside the Secretariat, reporting directly to the Governing Body (Programme Committee). This arrangement would represent the most extreme case of organizational independence, as in the World Bank. PBEE would be independent of the Secretariat, working directly with the Programme Committee on all matters relating to evaluation. This would also have the following consequences:
      1. This arrangement would separate PBEE from the Secretariat, and feedback from evaluation to the programme planning and management process would rely increasingly on the Governing Bodies’ directive authority rather than through consultation with and ownership by the line managers. This is likely to have negative effects on the effectiveness of evaluation and organizational learning.
      2. As in the first scenario, PBEE would need to be upgraded in its administrative and financial autonomy, probably significantly beyond its current Service status. It would also require a semi-permanent capacity within the Programme Committee to supervise evaluation work in its entirety, including on-going interaction with PBEE throughout its work cycle. For example, the World Bank’s Executive Board, which meets very frequently (twice or more per month) and is supported by a large staff, is able to exercise such supervision adequately.

31. The above analysis indicates that measures aimed at greater organizational independence could have significant costs and likely adverse consequences for the effectiveness of evaluation. Apart from the costs linked to organizational changes, the real concern is the risk of impairing the effectiveness of evaluation in promoting changes and improvements based on organizational learning. In fact, opinions among practitioners tend to warn against excessive emphasis on the independence aspect: they recognize that there is an important trade-off between independence and effectiveness in lesson learning and feedback to improvement. The 1998 DAC review of the use of its Principles underlined: “The principle of independence can be overplayed. As the users of evaluations have pointed out, too much independence, in practice, can be self-defeating with the result that recommendations and lessons of evaluations are not taken seriously. … Balancing impartiality and independence with the importance of promoting ownership is an art that can not be prescribed …”.9

32. To conclude this discussion of independence, the following could be noted:

    1. The principle of independence is taken seriously by the Management, and considerable progress has recently been made to enhance the independence of evaluation functions within the current organizational arrangement. The active interest of the Governing Bodies, especially the Programme Committee, as well as the recognition of such a need by senior management, has contributed to the improvements made so far;
    2. In considering the possibility of changing the current organizational location of PBEE, the trade-off between the need for organizational independence of the Evaluation Service and the need for optimum feedback and learning from the evaluation exercise must be taken into account. In light of the current status of the Organization’s evaluation system and the implications of making changes, it should be considered whether the current organizational arrangements provide a satisfactory basis for further enhancing the independence of evaluation functions, or whether a change in the organizational location of PBEE is required.


33. The current evaluation system is broadly consistent with the principles and practices observed by UN agencies and international organizations. The FAO evaluation system has evolved towards greater independence and overall improvement, including better feedback from evaluation, as has been recognized by the Governing Bodies. In this process, the pro-active interest of the Governing Bodies, particularly that of the Programme Committee, has also been an important factor.

34. The following are recommended for the Committees’ consideration:

    1. Based on the analysis presented above, the Committees may wish to take a decision on the proposal made by the External Auditor on a possible merger of AUD and PBEE;
    2. If the Committees decide not to take a final decision in this regard at this time, efforts could be made to reinforce the independence and effectiveness of evaluation within the current structure through improvements in areas indicated below:
      1. resources for evaluation work – the principle should be that adequate resources, especially for major thematic and programme evaluations, are available in a predictable manner. This applies particularly to the resources needed for adequate levels of external expertise in major evaluations, including independent reviews of evaluation reports by external peer groups. For this purpose, resource requirements should be identified at the time of formulating the evaluation plan and included in the MTP and PWB;
      2. systematization of the evaluation procedures – the processes and practices that have evolved in recent years should be summarized in guidelines for thematic and programme evaluations. These materials would be posted on the evaluation website and also discussed at internal workshops with FAO staff. This would add to the transparency of the evaluation process for all the main stakeholders and communicate the Organization’s practices in these evaluations more effectively;
      3. greater sharing and dissemination of key lessons and issues arising from evaluation – while some efforts have been made in this area, primarily for field project evaluations, more concerted efforts should be made to identify and synthesize lessons, particularly for thematic and programme evaluations. This also has implications for the resources PBEE needs for this work.


1 Report submitted to the Director-General under cover letter of 12 July 2002.

2 While the DAC principles address evaluation in the context of development aid, these principles are considered relevant to evaluation in general, and represent the most comprehensive statement of guiding principles on the use of evaluation for managing development activities.

3 Para. 5, Principles for Evaluation of Development Assistance, Development Assistance Committee (DAC), OECD, 1991.

4 Para. 8, Ibid.

5 Para. 466, Ibid.

6 At the time of writing of this paper, it seems that the reporting arrangement in IFAD is likely to change to reporting directly to the Executive Board independently of management.

7 DAC report entitled Review of the DAC Principles for Evaluation of Development Assistance, OECD, 1998.

8 Report of the Council of FAO, 123rd session, 2002 (CL 123/REP), paragraph 76.

9 DAC Report on 1998 Review, Ibid.