Investment Learning Platform (ILP)

Evaluate and Capitalize

The Evaluate and Capitalize phase is about understanding the results achieved by an investment, ensuring that successful practices are continued and institutionalized and that positive impacts last, and building on insights from experience to inform future action. Evaluation after completion aims to assess whether a project or programme achieved its intended results, how those results were achieved and whether more could have been achieved, or achieved more efficiently, and to highlight insights from implementation to inform future action. In this way, evaluation plays a dual role: promoting accountability on the one hand and enhancing knowledge and learning on the other. Different methods of evaluation, using both quantitative and qualitative approaches, can provide different and complementary insights. Investing in good evaluation from the start of implementation is important not just to review performance but also to identify good practices that should be scaled up and pitfalls to avoid. Involving key stakeholders in the evaluation process, from the field to the policy level, will ensure that a full picture of the project emerges and that lessons are reflected in future practice. Evaluation marks the end of the investment cycle, but it should also feed directly into the next planning phase.

Handover and Exit

In order to achieve lasting impact, realize the full potential of the investment made and ensure that relevant activities continue, project or programme closure should be accompanied by a clear exit and handover strategy. Ideally this strategy should be envisaged at the design stage and prepared throughout implementation. In some cases the investment will have created sufficient capacity for lasting impact without additional financial or technical support. In many cases, however, a need remains for continuation of the activities or services provided, or for follow-up support.

Effective handover requires that the various institutions expected to sustain outcomes or continue activities have the capacity to do so or to provide complementary follow-up support. Equipment, methodologies, approaches and relevant skills should be transferred to the relevant partner where this has not already occurred during implementation.

It is important that institutions have relevant and recognized mandates, appropriate operational systems and procedures, adequate staffing and skills, sufficient financial resources and access to relevant information and communication channels.

A conducive policy context is an important factor for continued success. While the investment should have contributed to creating this context, it may not be possible to ensure it fully by the time of project completion. Remaining constraints should therefore be highlighted and strategies for addressing them discussed, to help key institutions operate effectively [see Scaling up].

Evaluation – Purpose and Focus

The purpose of evaluation is to make “an assessment, as systematic and objective as possible, of an ongoing or completed project, programme or policy, its design, implementation and results. The aim is to determine the relevance and fulfillment of objectives, developmental efficiency, effectiveness, impact and sustainability. An evaluation should provide information that is credible and useful, enabling the incorporation of lessons learned into the decision-making process of both recipients and donors.”1

At completion, the focus of evaluation is on results rather than individual activities. However, it is important to assess which activities contributed to the results and which were most effective and efficient. This helps to inform the design of other projects and programmes.

Impact on development goals, such as poverty levels, can generally only be measured at some point after the end of a project and usually an individual project can only contribute towards these goals rather than being solely responsible [see Impact Evaluation]. However, intermediate evaluation and end-of-project evaluation can measure direct results in terms of project outputs and contributions towards project outcomes as defined in the Results Framework. Where activities have been phased – for instance, in different geographic areas – impacts can be evaluated, in some cases, from the first batches of activities.

Key questions for end-of-project evaluation – focusing on major achievements and shortcomings – are:

  • How successful was the undertaking? To what extent were outputs resulting from the activities delivered as expected? If to a lesser (or greater) extent than planned, why was this the case? Were the objectives achieved within the anticipated time frame and within the budget?

  • What changes have occurred as a result of the outputs? Are these in line with expected outcomes? Have there been other unanticipated outcomes? If so, why did they occur? To what extent are these likely to affect the desired project impact?

  • Can the project activities and outputs be continued after project funding ends? How? Will the achievements of project activities endure without further additional resources or support? If further support is needed, has this been arranged?
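
The first question above – the extent to which outputs were delivered as expected – often reduces to comparing planned against actual indicator values from the results framework. The sketch below is purely illustrative; the indicator names and figures are hypothetical, not drawn from any actual project:

```python
# Hypothetical planned vs. actual output indicators from a results framework
planned = {"wells_rehabilitated": 120, "farmers_trained": 2000, "ha_irrigated": 450}
actual  = {"wells_rehabilitated": 110, "farmers_trained": 2300, "ha_irrigated": 300}

# Achievement rate per indicator: above 100% means the target was exceeded
for indicator, target in planned.items():
    rate = actual[indicator] / target
    print(f"{indicator}: {rate:.0%} of target")
```

Rates well below or above 100% flag exactly the follow-up question in the list: why was delivery less (or greater) than planned?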

Specifically, evaluation needs to conduct the following analyses:

  • Targeting and social impact [see Social Analysis and Social Safeguards] – How well has the intended target group been reached? What impacts have the project outputs had on them to date? What level of women’s involvement has there been? Has a specific gender analysis been conducted?

  • Technical analysis – Has the technology used been appropriate and as effective as planned? If not, why not? How sustainable is the technology? What is the uptake to date and how is this likely to change after this project ends?

  • Environmental impact assessment [see Environmental Safeguards] – What effects has the project had on the environment? Are they in line with the assessment carried out at preparation, and have impacts been mitigated effectively through the environmental management plan? Are there likely to be long-term effects?

  • Financial management, procurement performance, general project management performance – Have the rules and regulations been respected? Has implementation adhered to agreed-upon schedules?

  • Economic analysis – Do impacts and effects justify the costs? What can be concluded from cost/benefit analysis or an assessment of the rate of return on investment?
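
The cost/benefit question above is conventionally answered with a net present value (NPV) and internal rate of return (IRR) calculation over the project's cash flows. A minimal sketch in plain Python, using entirely hypothetical cash flows; real appraisals would follow the financing institution's own discounting conventions:

```python
def npv(rate, cash_flows):
    """Net present value, where cash_flows[t] occurs at the end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return via bisection.

    Assumes NPV is positive at `lo`, negative at `hi`, and crosses zero
    once in between (true for a single initial outlay followed by benefits).
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid  # root lies above mid
        else:
            hi = mid  # root lies at or below mid
    return (lo + hi) / 2

# Hypothetical project: 1,000 invested in year 0, benefits of 300/year for 5 years
flows = [-1000, 300, 300, 300, 300, 300]
print(round(npv(0.10, flows), 2))  # → 137.24 (NPV at a 10% discount rate)
print(round(irr(flows), 4))        # → 0.1524 (rate at which NPV = 0)
```

A positive NPV at the chosen discount rate, or an IRR above the opportunity cost of capital, indicates that the benefits justify the costs in purely financial terms; the qualitative analyses above complete the picture.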

Evaluation should also ascertain whether cross-cutting aspects such as gender, climate change, or nutrition are addressed appropriately [see Gender, Climate Change and Nutrition].

How should evaluation be carried out?

An independent perspective in evaluation is important to ensure impartiality and credibility. Evaluators should be external experts. The lead implementing agency should organize the project evaluation and support the independent evaluators. Representatives of all major stakeholders should be involved in the evaluation to ensure that a broad range of information and perspectives is taken into consideration.

Evaluation needs to build upon direct interaction with key stakeholders in the field and in relevant organizations, in addition to careful review of the regular project reports and analysis of monitoring data. Formal quantitative and qualitative surveys in critical areas may be required to assess outcomes such as uptake of technologies and attitudinal change among target groups [see M&E and Impact Evaluation].

Evaluation findings and recommendations should be shared in relevant formats with the main stakeholders involved and with decision-makers to ensure that the results of evaluation can inform future action.

Capitalizing on Project Evaluation

Evaluation is not just about accountability; it also plays a critical role in building good practice by providing evidence about what works and what does not, indicating pitfalls to avoid and identifying levers or mechanisms that have been shown to overcome obstacles to good implementation.

Although evaluation reports generally include a section on lessons learned, many of the same mistakes continue to be made in other projects.

“History repeats itself. Has to. No one listens.”
Steve Turner

If lessons are not learned from project evaluation, investing in evaluation may strengthen accountability but not good implementation practice. Project evaluation is not just the final phase of the project cycle but also the first phase of the next one: planning needs to begin by looking at the successes and failures of previous projects and programmes. Lessons learned from one project or programme should be considered for all other ongoing and planned projects and programmes. Some lessons should be applied to existing projects, others may be considered in the next planning cycle, and some projects or aspects of projects may be considered for scaling up.

Scaling up good practice will contribute to achieving impact beyond the confined scope of a particular project. However, this is not just a matter of replicating and multiplying successful small interventions; it requires identifying suitable conditions and creating an enabling environment that ensures it is the results that reach scale, rather than a particular mechanism for implementation [see Scaling up].


1 OECD/DAC, 1998. Review of the DAC Principles for Evaluation of Development Assistance.

Key Resources

A guide for project M&E (IFAD) (2002)

Overall guide to facilitate the development and use of effective, participatory M&E systems as tools for impact-oriented management (RBM), shared learning and accountability.

Aid Delivery Methods - Volume 1 (EU, 2004)

Project design and management tools to enhance effectiveness of programmes and projects supported with EC funds.

Scaling up the fight against rural poverty. An institutional review of IFAD’s approach (IFAD/Brookings, 2010)

Overview of IFAD’s “scaling up” approach, including successful interventions, operational policies, processes, instruments, resources and incentives.