JM 03.1/3
JOINT MEETING OF THE PROGRAMME AND FINANCE COMMITTEES
Rome, 7 May 2003
The Independence and Location of the Evaluation Service
Rationale for the Recommendation
Pros and Cons of Merging AUD and PBEE
Current Status of Evaluation Arrangements – Independence Aspects
Possible Options in Changing Organizational Location of PBEE - Preliminary Analysis
1. The Finance Committee considered at its Hundredth Session (September 2002) the External Auditor’s report entitled Review of the Organization’s Internal Controls, containing his recommendation that “the Organization examine the possibility of combining the AUD (Office of the Inspector-General) and PBEE (Evaluation Service) with a view to creating a single office or division for oversight”.1 With regard to this recommendation, the Secretariat undertook to submit a paper containing detailed proposals on the subject for consideration by the Joint Session of the Finance and Programme Committees in May 2003. The Finance Committee stressed that the response to be prepared by the Secretariat should take fully into account the Committee’s concerns regarding the independence of the evaluation function.
2. The present paper follows up on the Secretariat’s undertaking referred to above, providing its detailed response to the External Auditor’s recommendation and addressing the wider issue of the independence of the evaluation function, a concern also underlined by the Conference at its last session (Thirty-first Session, November 2001), when, in considering the Programme Evaluation Report 2001, it stated that “it would be appropriate to consider the creation of an independent evaluation service” (para. 82 of the report). In short, addressing these various suggestions entails an examination of the independence and institutional arrangements of the evaluation function in the Organization.
3. Evaluation has been mainstreamed among the UN agencies and international development organizations during the last three decades, and there is by now a broad consensus on its core principles and practices. For the present purpose of examining key aspects of evaluation functions in the Organization, the principles set forth by the OECD DAC serve as an appropriate benchmark, since they provide the most comprehensive and authoritative framework2 of best practice among international organizations. This is particularly the case regarding the issue of independence of evaluation.
4. Evaluation is defined as “an assessment, as systematic and objective as possible, of an ongoing or completed project, programme or policy, its design, implementation and results. The aim is to determine the relevance and fulfilment of objectives, developmental efficiency, effectiveness, impact and sustainability. An evaluation should provide information that is credible and useful, enabling the incorporation of lessons learned into the decision-making process of both recipients and donors”.3 The main purposes of evaluation are (a) “to improve future aid policy, programmes and projects through feedback of lessons learned” and (b) “to provide a basis for accountability, including the provision of information to the public”. In particular, the DAC document notes that “the accountability notion of evaluation referred to here relates to the development results and impact of development assistance. It is distinct from accountability for the use of public funds in the accounting and legal sense, responsibility for the latter usually being assigned to an audit institution”.4
5. The DAC evaluation principles include: (a) the establishment of a clear corporate evaluation policy with definition of its role and responsibilities and its institutional place; (b) impartiality and independence of evaluation from the processes concerned with policy-making and delivery and management of development aid; (c) transparency of the evaluation process, including wide public dissemination of evaluation results; (d) application and use of evaluation results through feedback to policy making and operational processes; (e) partnership and participation of the stakeholders involved in the use of evaluation results; and (f) integration of evaluation into the planning process.
6. The current practices in the Organization can be summarized as follows with reference to these principles:
7. Thus, it can be concluded that the FAO evaluation practices are generally consistent with the core evaluation principles.
8. However, it is worth noting here that the two major considerations, i.e. independence, especially in terms of organizational arrangements, and effectiveness of evaluation in influencing decisions, especially through learning, can conflict in practice, with some trade-offs between the two. Certain types of organizational arrangements tend to strengthen one aspect at the expense of the other. Thus, it is generally recognized that there is no single, universal solution to this issue – rather, “an optimal solution should be sought to balance all of these requirements”.5 In this connection, two principal points are emphasized regardless of the particular arrangements used: “every effort should be made to avoid compromising the evaluation process”, and “the organizational arrangements and procedures should facilitate the linking of evaluation findings to programming and policy making”.
9. The External Auditor’s recommendation seems to be based on his conclusions that:
10. Each of the three conclusions is examined below.
(i) Overlap between AUD and PBEE in the evaluation function:
11. While this conclusion seems to be a major motive for the recommendation, it is based on a rather superficial and narrow appreciation of audit and evaluation, in both concept and operational function. The External Auditor’s Report bases its case, in paragraph 17, on the inclusion of the word “evaluation” among the functions defined in Article 1 of the Charter of the Office of the Inspector-General. It quotes among its functions the reference to “monitoring and evaluating the adequacy and effectiveness of the Organization’s system of internal controls, financial management and use of assets” (italics added). Thus, “evaluation” in this context carries a rather narrow meaning with a focus on management and use of resources, a point acknowledged by the External Auditor, who says that “the evaluation responsibility entrusted to AUD is rather limited in scope while that of PBEE is more focused on the assessment of the Regular and Field Programme results.” This indeed points to some very fundamental differences between the work of AUD and PBEE.
12. While evaluation forms part of oversight mechanisms, its functions are very distinct from those of other oversight instruments, especially those focused on internal control and financial management. As quoted above (para. 4), the generally accepted definition of evaluation underlines a clear distinction between evaluation and audit functions, the unique and major function of evaluation being learning and improvement. The differences extend also to the assessment criteria and related methodologies that are applied. The distinction extends to the reporting arrangements: while both report to the Director-General, audit matters are handled by the Finance Committee and evaluation by the Programme Committee. In short, it is concluded that there is no significant overlap between the functions of AUD and PBEE and that, in fact, the relationship between the two is one of complementarity.
(ii) Liaison between AUD and PBEE:
13. The current institutional arrangements reflect these basic differences in the oversight functions of the two units. Nevertheless, liaison between the two units is essential in order to minimize potential adverse effects of uncoordinated coverage of topics as well as to ensure synergy from the complementarity of two functions on the same or similar topics.
14. This is recognized in the functional statements of both units which include the requirement: “Liaises with the Inspector-General [or “Chief, Evaluation Service, PBE” as applicable], to avoid possible duplication and ensure the complementarity of work programmes, in particular those dealing with the assessment of effectiveness of FAO’s substantive field and HQ programmes.”
15. Consequently, the two units consult frequently on their respective work plans, draw often on each other’s work on related subjects, and share staff training opportunities. However, there is always room for improving cooperation between the two units, including periodic sharing of lessons and issues on thematic topics of common concern. The main practical obstacle here is time constraints, rather than the organizational location of the two units.
(iii) Trend for unified organizational structure for oversight units:
16. As indicated in the External Auditor’s Report, several UN agencies have adopted a unified institutional structure for their internal oversight functions, much like that of the UN/OIOS (e.g. UNESCO, IAEA, UNEP, UNFPA, ICAO, IMO and WIPO): while WFP had this structure at the time of the External Auditor’s report, the evaluation unit has recently been separated from the office dealing with internal audit and inspection. At the same time, however, many other UN agencies have opted for alternative arrangements: in some, the evaluation unit is an autonomous office, usually reporting to the head of the agency (IFAD6, UNDP, UNICEF, UNCHS/Habitat and UNAIDS); in others, as in FAO, the evaluation unit is part of a larger office, often dealing with planning and programme functions (ILO, WHO, UNHCR, UNCTAD, OCHA, ITU and UNV); and in still others, evaluation is co-located in units dealing with technical cooperation activities (e.g. UNIDO, UNDCP and WTO). The World Bank and IMF, as well as most of the regional development banks, have maintained separate evaluation offices reporting directly to their respective Governing Bodies. Similarly, the most common pattern among the bilateral aid agencies, including US/AID and UK/DFID, is to have a separate evaluation unit. Thus, there is no clear trend towards a unified structure combining evaluation and other oversight functions.
17. While the reasons given in the External Auditor’s Report are not very convincing, the implications of the recommendation could also be examined from the corporate perspective. The apparent advantages of merging the two units could be that:
18. On the first point, coherence is an elusive property, the benefit of which is hard to quantify in this particular case. The most salient argument might be the second point. However, achieving such a synergy would depend, in the final analysis, on the core purposes and issues on which AUD and PBEE are expected to focus, as well as how their functions are organized and carried out. It is also questionable whether the same purpose could not be achieved by further strengthening the liaison between AUD and PBEE. On the third point, it is noted that where evaluation as a function is included in an overall office of internal oversight, it generally retains its separate identity, primarily because of the difference in functions and in the qualifications and expertise of the staff carrying out these functions. Thus, assuming their output levels are to be maintained, there is little scope for savings.
19. On the other hand, there are several clear disadvantages and difficulties:
20. In conclusion, Management sees no practical value added in the recommended merging of AUD and PBEE. On the contrary, it is concerned about several risks that could jeopardize the progress made so far in the Organization’s oversight regime. It is considered that improved coordination between the two functions could be pursued in more practical ways.
21. Management appreciates that the issue of independence is of basic importance in further strengthening the evaluation function in the Organization, as highlighted in the DGB 2001/33. By its very nature, evaluation entails making value judgements by carefully assessing the available information and weighing the opinions of various stakeholders. In this context, independence and objectivity are critical in establishing a sense of legitimacy and credibility, especially in the eyes of those evaluated, by reducing potential conflicts of interest and possible biases. Thus, independence in this sense also contributes to enhancing the effectiveness of evaluation, because it provides credibility for the function and promotes more pro-active learning and initiatives for improvement.
22. The most systematic examination of the issue of independence of evaluation is found in a DAC report7. It highlights, apart from the corporate policy statement, the following as main considerations: (a) organizational structure separating evaluation from line functions, with the evaluation unit reporting directly to the organization’s senior manager or preferably to the head or a governing body; (b) access to and control over resources for evaluation work (budget and staff decisions); (c) authority over the planning and conduct of evaluation (selection of evaluation topics and terms of reference) as well as the process (reviews and revisions of evaluation reports); and (d) linking evaluation findings to programme and policy making. These considerations are used below as the basic criteria for further reviewing the issue of independence of evaluation in the Organization.
23. Organizational structure and reporting arrangements. The central concern is to separate the evaluation function from line management in order to minimize potential conflicts of interest and other biases, implying for FAO that PBEE reports directly to the Director-General or to the Council (through the Programme Committee). PBEE is located in the Office of Programme, Budget and Evaluation (PBE) within the Office of the Director-General. The Chief of PBEE reports to the Director-General (ODG) through the Director, PBE, and through the Director-General to the Council and the Conference via the Programme Committee, which is the prime interlocutor for the Council on evaluation matters. As observed in paragraph 16 above, this internal location of PBEE represents a common arrangement among the UN agencies where central evaluation units are internal.
24. The following may be observed in this regard:
25. Access to and control over resources. This is indeed a key factor for facilitating independence in the planning and conduct of evaluation work. With the increasing trend towards programme and thematic evaluations, coupled with the need for greater use of external expertise, the cost of evaluation exercises has been increasing: major evaluations carried out in the recent biennia have required large budgets (US$ 200,000-US$ 500,000 each). Yet, PBEE’s regular budget has not kept pace, particularly with respect to the costs of external consultancy inputs to the evaluation work itself and to the external peer review process. Thus, additional resources have to be mobilized on an ad hoc basis, and while the needs have been met so far, this is not conducive to orderly planning of evaluations, including those requested by the Governing Bodies. In short, the resource issue remains an area for improvement, not only for the consideration of independence but also for the quality and transparency of evaluation work. As far as the authority of PBEE to recruit its staff and consultants is concerned, there is no impediment.
26. Authority over planning and conduct of evaluation, including the evaluation process. The DAC criterion implies that PBEE should have authority and freedom to select topics for evaluation and to determine the ways to carry out evaluation. PBEE’s work programme on evaluation is subject to the approval of the Director-General and its reports are subject to the normal clearance procedures. However, some clarifications are in order:
27. Linking evaluation findings to programme and policy decisions. This criterion implies that PBEE should have adequate authority and status so that it can express its own judgement and has access to the decision-making process in order to influence follow-up actions on evaluation results. This is a real challenge in any evaluation, going beyond the sole issue of independence. However, it can be said that PBEE’s evaluation work is generally respected and accepted in-house, even when its reports are critical. In this respect, the Programme Committee has welcomed the trend towards more frank criticism in recent evaluation reports. Such a sense of authority comes from the perceived transparency and quality of evaluation, including rigour and objectivity in analysis, balanced judgement, and relevance of findings and recommendations. Regarding access to decision-making for follow-up, the practice of management response provides a good opportunity for this at the Secretariat level. Similarly, the review of evaluation reports together with the management response by the Programme Committee facilitates influence on follow-up. In fact, PBEE is often involved in planning the detailed follow-up actions by the programme managers.
28. The foregoing analysis indicates that the evaluation process in FAO, despite PBEE’s organizational location, can be considered basically independent in terms of the principal criteria, although the issue of resources remains. The Council, for example, recognized “the significant improvements made by the Organization in the evaluation area, not only in terms of the volume and quality of reports, but also in terms of a more constructive approach to recommendations…”8. Thus, it seems that organizational independence per se may not necessarily be the single most decisive factor in the development of a sound, functional evaluation system.
29. In fact, the experience in FAO indicates that a functional evaluation system depends greatly on factors other than organizational independence per se. Such factors include (i) institutional arrangements for transparency, (ii) credibility and quality of evaluation work, with objective analysis and balanced judgement, (iii) the quality of an evaluation team commanding respect and trust, and (iv) the overall management attitude towards improvement and learning. In this respect, it is important to recognize the advantages offered by PBEE’s location. As noted above, it facilitates interface with corporate-level programme planning and management for feedback. Similarly, as an internal unit, PBEE staff are familiar with the Organization’s work and its staff, not only through evaluation but also through collaboration with other staff members on common issues. This helps create confidence in, and acceptance of, PBEE staff and their work, facilitating a consultative process with the various stakeholders while preserving PBEE’s freedom to maintain its conclusions and recommendations. These factors together contribute to pro-active learning from evaluation, perhaps more so than a situation in which evaluation results are perceived to be imposed without sufficient trust between the evaluators and programme managers.
30. Nevertheless, in order to respond to the concerns expressed by the Governing Bodies, possible options are explored here for achieving a greater degree of organizational independence by changing the location of PBEE. In particular, two possible scenarios are considered below, with a summary assessment of likely advantages and disadvantages:
31. The above analysis indicates that measures aimed at greater organizational independence could have significant costs and likely adverse consequences for the effectiveness of evaluation. Apart from the costs linked to organizational changes, the real concern would be the likely risks of impairing the effectiveness of evaluation in promoting changes and improvements based on organizational learning. In fact, opinions among practitioners tend to warn against excessive emphasis on the independence aspect: they recognize that there is an important trade-off between independence and effectiveness in lesson learning and feedback to improvement. The 1998 DAC review of the use of its Principles underlined: “The principle of independence can be overplayed. As the users of evaluations have pointed out, too much independence, in practice, can be self-defeating with the result that recommendations and lessons of evaluations are not taken seriously. …Balancing impartiality and independence with the importance of promoting ownership is an art that cannot be prescribed…”.9
32. To conclude this discussion of independence, the following could be noted:
33. The current evaluation system is broadly consistent with the principles and practices observed by UN agencies and international organizations. The FAO evaluation system has evolved towards greater independence and overall improvement, including in feedback from evaluation, as has been recognized by the Governing Bodies. In this process, the pro-active interest of the Governing Bodies, particularly that of the Programme Committee, has also been an important factor.
34. The following are recommended for the Committees’ consideration:
1 Report submitted to the Director-General under cover letter of 12 July 2002.
2 While the DAC principles address evaluation in the context of development aid, these principles are considered relevant to evaluation in general, and represent the most comprehensive statement of guiding principles on the use of evaluation for managing development activities.
3 Para. 5, Principles for Evaluation of Development Assistance, Development Assistance Committee (DAC), OECD, 1991.
4 Para. 8, ibid.
5 Para. 466, ibid.
6 At the time of writing this paper, it seems that the reporting arrangement in IFAD is likely to change to reporting directly to the Executive Board, independently of management.
7 DAC report entitled Review of the DAC Principles for Evaluation of Development Assistance, OECD, 1998.
8 Report of the Council of FAO, 123rd session, 2002 (CL 123/REP), paragraph 76.
9 DAC report on the 1998 review, ibid.