PC 93/4 a)
Rome, 9 – 13 May 2005
Auto-evaluation in the Context of Priority Setting -
The Contribution of Auto-Evaluation to
Programme Improvement and Priority Setting in 2004
1. The Committee initiated its discussion of Priority Setting in the Context of Programme Planning at its 89th session in May 2003. At this meeting it agreed that in order to strengthen priority setting there was a need for:
2. In September 2003, the Committee concentrated its discussions on b) above, i.e. how the Committee could enhance its own input to the process. It concluded that this would be strengthened by an analysis of the balance of resource allocations between strategic objectives; an analysis of evolving external factors that might necessitate a change in that balance; and also a summary of preferences as to specific priority areas expressed by Members in FAO fora, including reference to programmes on which no priority has been expressed. In May 2004, following a review of the Secretariat’s analysis1, the Committee noted that in general it was more appropriate that it address priorities at an aggregate level, although this did not preclude assessment at programme entity (PE) level, as necessary.
3. At its last session in September 2004, the Committee concluded that application of the criteria detailed in the Medium Term Plan (MTP) 2006-11 in analysis of the priority for programme entities helped provide assurance to Members that the priorities included in the MTP did, in fact, reflect the needs of Members at large. The Committee agreed on the basic soundness of the three criteria used by FAO management to assist in establishing relative priorities between programme entities:
4. On the other hand, it noted that:
Important issues in the discussion of priority setting were whether resources were too thinly spread across various activities and whether current entities had sufficient critical mass to lead to effective results.
5. A simple proxy for these factors might be the absolute level of resources allocated to a programme entity. If an allocation is low, all other things being equal, it is less likely to provide the level of resources necessary to have a material impact than a larger amount. The definition of what is a “sufficient critical mass” is problematic in that it depends upon the relationship between the amount of resources (i.e. human and other) and a number of variables, including the size and nature of the problem being addressed and the role of the Organization vis-à-vis the contributions being made by other partners. In addition, because there is a fair degree of flexibility in entity design, there is diversity in approach. Thus, for example, for a given set of tasks one manager may design an entity with three major outputs whereas another might create three entities. In conclusion on this point, it may be futile to try to measure critical mass in an over-simplified manner, and doing so may even provoke adverse behavioural reactions on the part of programme managers.
6. Because of the complexity of the issue, and in an effort to avoid unforeseen consequences, the Secretariat proposes to study the issue of critical mass, identifying the factors that need to be taken into account in quantifying what is meant by the term, and developing the modalities for applying such a criterion at the entity design stage. It is planned to present the outcome of the study to the September 2005 session of the Programme Committee.
7. At its session in September 2004, the Committee also noted that the discussion on priority-setting would be facilitated by the consideration of a report on the first batch of auto-evaluations, which would also seek to explain the related process. It emphasised the potentially important role of auto-evaluation in deciding on future work as existing programme entities finished their programmed life or came up for review on a six year cycle.
8. The new programme model with its focus on results in terms of benefits to Members in the context of the FAO Strategic Framework was first applied across all the technical programmes of the Organization in the 2000-01 biennium. Since that time, the system has been steadily improved in the light of experience and staff and managers have become more familiar with the system and its results-based orientation. As described in the MTP 2006-11, for the 2006-07 biennium, the application of the programme model, appropriately adapted, will be extended to the non-technical and technical cooperation programmes. In doing this, an important enhancement was the introduction of service level standards and service improvement measures, identified in part through SWOT (strengths, weaknesses, opportunities and threats) analysis.
9. The whole system is facilitated by the Medium Term Planning Module of the computerised system PIRES2. This contains the programme entity documents, which are very similar to project documents. For the technical and technical cooperation programmes these include definition of the intended outcomes at the level of primary beneficiaries and how this is expected to contribute to sustainable benefits for member countries.
10. Within this context, this paper discusses the potential contribution of auto-evaluation to the priority setting and programme improvement process in the light of the first year of experience of the application of auto-evaluation as an integral part of results-based budgeting and management in the technical programmes of the Organization.
11. A summary of the auto-evaluations carried out in 2004 is provided in the companion document to this (PC 93/4 b). Auto-evaluation was introduced at the end of 2001 as part of the strengthened approach to Results Based Management (Director-General’s Bulletin 2001/33). The first full year of operation has been completed, with financial support from the UK Department for International Development (DFID). Nineteen auto-evaluations were undertaken in 2004, covering 28 programme entities. For 2005, 25 auto-evaluations have been agreed for the technical programmes. Two PAIAs3 and three to five non-technical programme auto-evaluations will be undertaken as pilot studies, thus gradually extending auto-evaluation to the entire Regular Programme of the Organization. The intention is that all programme entities and PAIAs of the Organization will be covered through either auto-evaluation or independent evaluation by the Evaluation Service at least once in every six years. Technical project PEs should normally be auto-evaluated towards their completion, and in general auto-evaluation should coincide with consideration of changes for the future.
12. The Evaluation Service has analysed the experience with the first round of auto-evaluation, including circulating a questionnaire to the managers directly responsible for, and engaged in, auto-evaluation and to the responsible senior managers (Assistant Directors-General and division directors). These questionnaires were designed to gain feedback on the perceived usefulness of auto-evaluation and to determine to what extent it was being used in decision-making.
13. Chart 1 summarises the responses on usefulness from the questionnaires. It can be seen that all responding Assistant Directors-General found auto-evaluation helpful. At the director level, 33% of respondents found it very helpful, 58% helpful and only 8% found no significant benefit. Overall, managers at all levels found the process either helpful or very helpful. All found it either very helpful or helpful for improving programme entity planning and for identifying areas for improvement.
14. It is too early in the process to analyse the impact that auto-evaluation has had on the actual planning of programme entities, and it will always be difficult to determine the extent to which managers have made changes as a result of auto-evaluation, rather than the auto-evaluations simply reflecting changes which had been decided upon independently.
15. Some examples of the types of changes made following auto-evaluation include:
16. It was found that a large number of the concerned managers concluded in their auto-evaluations that programme entity design and its actual execution needed to be improved by clarifying what outcomes were expected for which target beneficiaries. Many concluded that a greater proportion of resources needed to be used for disseminating outputs, particularly publications, rather than just producing them. In addition, there was considerable concern about the need to improve accessibility through FAO’s Web site.
17. The extent to which it was found possible to base the auto-evaluation assessment on the programme entity design as included in the Medium Term Plan was constrained by the rather limited usefulness of the indicators, which were often found difficult to measure. This was exacerbated by the fact that there were very few cases where units had been collecting the necessary data on them prior to the auto-evaluation. With a few exceptions (such as the numbers accessing Web-based information), relevant indicators are best assessed in special studies. The design of indicators and their means of verification in the Regular Programme thus remains an area which will require further refinement.
18. As auto-evaluation is the responsibility of the managers directly concerned, there can be only limited expectation that they will be strongly critical. Experience so far has been that, generally, those managers who felt confident in the usefulness of the programme entities and their performance were more ready to be self-critical in their findings and recommendations than those whose programmes were weaker. However, it should be clear from auto-evaluation reports whether the entities were able to contribute significantly to outcomes and impacts. This in itself is valuable information for senior managers. It was also evident that those responsible for programme entities internalised useful conclusions during auto-evaluation, although these were not necessarily reflected in the reports.
19. Auto-evaluation is a learning process which contributes directly to the improvement of the work carried out by the programme entity managers. The challenge is to maximise the benefit to senior managers from auto-evaluation in their decision-making on overall direction and priorities, without losing the internalised learning by junior managers, and the resulting improvements, which take place during the auto-evaluation process. The questionnaires found that there are a number of ways in which auto-evaluation can be improved, but most staff directly involved, including Service Chiefs, felt that the present intensity of the process is about right. This finding, together with the findings on the usefulness of auto-evaluation, leads us to conclude that future auto-evaluations should be conducted at the same level of intensity as at present.
20. The first year’s experience has also indicated the following lessons, which will be further reviewed as more experience with auto-evaluation is gained:
21. In addition, the complementarity between auto-evaluation and independent evaluation will be strengthened. It is generally agreed that entities which have been subject to a recent programme evaluation (i.e. independently run by the Evaluation Service) do not need to also be auto-evaluated.
22. Because auto-evaluation, in general, deals with individual programme entities or small groups of programme entities, it cannot satisfy the requirement of the governing bodies for aggregate-level information in setting central priorities; rather, it strengthens the internal decision-making process. There is also a relative lack of objectivity in auto-evaluations, as they are carried out directly by programme entity managers. The value of auto-evaluation is thus in reinforcing the internal basis for both in-course lesson-learning and decision-making by senior managers. In addition, if governing bodies were to choose to use the findings of auto-evaluations as a basis for resource allocation decisions, there would be a risk that the programme manager’s objectivity might be further threatened, thus undermining the essential lesson-learning benefits of this process.
23. The requirement for reports on the implementation of auto-evaluation and the presentation of auto-evaluation summaries to the Programme Committee introduces a discipline which would not be present if this were purely an internal Secretariat process. It also provides an incentive for managers to showcase the outcomes and contribution to objectives of their work, as presentation of summaries to the Programme Committee places them in the public domain.
24. The Committee may thus consider whether it wishes to endorse the role of auto-evaluation in the FAO results-based management system and the formal integration of auto-evaluation into the Organization’s Regular Programme of Work and Budget4.
25. It is suggested that the Committee may wish to consider at its September 2005 session, when it discusses the formats and coverage of the Programme Implementation and Programme Evaluation Reports, the form in which it wishes to continue receiving information on auto-evaluation.
26. Finally, the Committee may wish to confirm its interest in pursuing further the issue of priority setting and, in particular, how the concept of critical mass might be included in the criteria for priority setting.
1 PC 91/7 Priority Setting in the Context of Programme Planning
2 Programme Planning, Implementation Reporting and Evaluation Support System
3 Priority Areas for Inter-disciplinary Action
4 CL 128/3 Summary Programme of Work and Budget 2006-07 (para. 175 of the English version)