
7. Monitoring and evaluation of communication programs in natural resources management in agriculture (NRMA)

Ma. Theresa H. Velasco

Upon completion of this module, readers should be able to:

1. Differentiate between monitoring and evaluation;
2. Define key concepts in monitoring and evaluation;
3. Explain the steps in drawing up a monitoring and evaluation plan; and
4. Differentiate between conventional and participatory monitoring and evaluation.

Monitoring and evaluation (M&E) are two activities that are more often than not taken together.

Each one feeds into the other to complete a vital aspect of strategic communication planning.

This chapter is organized around a series of guiding questions.


Two questions best represent the differences between monitoring and evaluation (Piotrow et al. 1997): How is the program being implemented? And what difference, or impact, did it make?

Monitoring literally means watching, observing, checking, or keeping track of a process for a special purpose. The output of monitoring consists of observation and description of how the communication project/program is being conducted.

Evaluation, on the other hand, means determining the value, significance, or worth of something through careful appraisal and study. It looks at the interpretation of data about the communication program's results or changes or impact over time.

The importance of monitoring and evaluation was impressed upon the Communication Team at the very start of the project. The questions posed in this module were also posed to, and answered with, them.

Project or program monitoring looks at how the communication project or program is being implemented, specifically in terms of coverage and delivery. Data about program inputs, activities, and results are collected periodically or at specified times during implementation.

For example, are project activities carried out on time? Is money spent on specifically allocated items at the time it is needed? Is the project achieving positive results on specific activities?

If the answer to each of these questions is no, then what factors are causing the difficulties? Likewise, are there some positive notes that the project could capitalize on for further gains?

Monitoring is especially useful in three areas of program implementation:

Monitoring for management makes it possible for managers of communication programs to keep track of how the program is being implemented.

This is particularly important at the early stages of the program so that whatever feedback is obtained could be used to make the necessary changes.

A manager who fails to monitor a program to gather coverage and process information misses out on the opportunity to start desirable activities, change directions when necessary, and stop doing unproductive activities (Piotrow et al. 1997).

Monitoring for evaluation contributes to the accurate interpretation of final evaluation results. It ensures that the correct parameters are monitored and measured. Is the appropriate information being given to the right users?

If a program is monitored carefully, problems concerning what to keep track of and how to gather data that would contribute to evaluation results will be identified early on.

Monitoring for evaluation also contributes significantly to program diffusion and expansion. Keeping track of the essential features of the communication program enables program implementers to describe them in detail for possible replication later.

Monitoring for accountability is carried out as an expression of program implementers' responsibility to those who are contributing to the undertaking. These include governments, donors, boards of trustees, pressure groups, and taxpayers themselves.

Careful monitoring shows these groups that the communication program's scarce resources are being watched closely.

Evaluation is generally looked at as an investigation designed to determine the effectiveness of the communication program in terms of meeting its objectives (Torres and Velasco 2005).

Evaluation entails the following:


Together, monitoring and evaluation contribute the following to program implementation:


1. Make sure that M&E are part of the program cycle from the very beginning.

Too often, development programs lack monitoring and evaluation data because M&E was treated as an afterthought. Planners realize too late that they should have tracked the progress of activities at important points during implementation, such as quarterly or every two months. By the time they see the need for objectively verifiable indicators, most activities have already been done. One reminder is worth remembering: if it is not written down, it did not happen.

Process documentation is an even more rigorous way of doing M&E. The beauty of process documentation lies in the fact that it records the process as it unfolds. It looks not only at the success standards and indicators but also at how the different activities in the project/ program are carried out towards achieving the objectives set at the start.

2. Allocate project resources. These include provisions for time, money, and personnel.

When planners make sure that M&E are part of the project/program cycle right from the very beginning, they must also ensure that resources are properly allocated for these important activities. It is important to decide at what particular times M&E should be undertaken.

Should M&E be carried out every two or three months or should process documentation be adopted? Budget is likewise an indispensable component.

Money is important for the following expense items:

Who will do M&E is a critical question that needs to be answered early on in the project cycle.

When project/program personnel are not sure about who will take care of specific M&E needs, the project/program loses valuable time and opportunity to look at the conduct of important activities.

It solves a lot of problems when several questions pertaining to personnel responsibilities are addressed right at the start. M&E could proceed more smoothly if team members know exactly what their assignments are.

3. Set the standards based on objectives.

Standards are preset target levels of performance against which a project should be evaluated as "success or failure."

Standards should be based on clear objectives set by the project/program at the beginning.

Sample communication objective: By the end of the six-month communication campaign,
50 per cent of the 100 fishermen in Chong Khneas can list their responsibilities in community fishery.

The standard here is 50 per cent, the proportion of community fishermen that the project/program hopes to reach within the duration of the activity. There are two ways of interpreting results vis-à-vis the standard: dichotomous and continuum.

Dichotomous:
30% and above - success
29% and below - failure

Continuum:
30% and above - high
21% - 29% - moderate
20% and below - low
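As a rough sketch, the two interpretation schemes can be coded as simple classification rules. The 50 per cent threshold below comes from the sample objective; the continuum bands are illustrative assumptions for this sketch, not figures from the module.

```python
def dichotomous(achieved_pct, standard_pct=50):
    """Pass/fail reading: the project either met the standard or it did not."""
    return "success" if achieved_pct >= standard_pct else "failure"

def continuum(achieved_pct):
    """Graded reading: illustrative bands assumed for this sketch."""
    if achieved_pct >= 50:
        return "high"
    elif achieved_pct >= 30:
        return "moderate"
    return "low"

print(dichotomous(62))  # success
print(dichotomous(41))  # failure
print(continuum(41))    # moderate
```

The dichotomous reading answers only "was the standard met?", while the continuum reading preserves information about how far short of (or beyond) the standard the program fell.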

4. Identify indicator/s for each standard.

An indicator is a variable that measures one aspect of a project/program. An indicator gauges the change that occurred, expressed in units that are meaningful to project implementers.

In the sample objective in Step 3, one indicator could be number of responsibilities the fishermen from Chong Khneas could identify. The table below shows the possible standards and indicators that M&E implementers could look at in terms of the given objective (Step 3).




Objective: By the sixth month, 50% of fishermen in Chong Khneas can list their responsibilities in Community Fishery.
Standard: At least 50% of fishermen in Chong Khneas can give at least three responsibilities pertaining to community fishery.
Indicators:
- Number of responsibilities given
- Quality of information given (correct vs. incorrect)

Objective: By the sixth month, 50% of fishermen in Chong Khneas can explain why Community Fishery is important.
Standard: At least 50% of fishermen in Chong Khneas can give at least three reasons why Community Fishery is important.
Indicators:
- Number of reasons given
- Quality of information given (correct vs. incorrect)
- Order of importance of reasons given
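A hypothetical check of the first standard can be sketched as follows: each respondent's count of listed responsibilities is compared with the three-responsibility threshold, and the standard is met when at least 50% of respondents reach it. The survey data here are invented for illustration.

```python
# Number of community-fishery responsibilities each respondent listed
# (hypothetical monitoring data, not actual project results).
responses = [4, 3, 1, 5, 0, 3, 2, 3, 4, 1]

meeting_indicator = sum(1 for n in responses if n >= 3)  # gave at least three
pct = 100 * meeting_indicator / len(responses)

standard_met = pct >= 50
print(f"{pct:.0f}% met the indicator; standard met: {standard_met}")
```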

5. Determine data sources and data gathering methods.

There is a variety of data sources and data gathering methods that could be useful to M&E implementers.

Data Sources

Data Gathering Methods

M&E implementers must remember three criteria in deciding on what data gathering method/s to use. The method/s must be simple, practical, and manageable.

The most commonly used methods of data gathering for M&E are the survey and participatory rural appraisal methods, mainly key informant interviews and focus group discussions.

Survey is a method that enables M&E implementers to gather empirical data or data based on observation or experience. Its main advantage is its wide and inclusive coverage.

Many social scientists favor the use of survey method because it is cost-effective in the sense that it can produce a mountain of M&E data in a short time for a fairly low cost. The survey can reach so many more stakeholders than, say, the key informant interview or the focus group discussion.

Survey data also lend credibility to generalized statements made on the basis of large volumes of quantitative M&E data, mainly because survey data allow for appropriate statistical tests to be performed.

Participatory Rural Appraisal (PRA) methods refer to a family of methods that allow M&E implementers to gather qualitative data within a short period.

Through these methods, researchers can look at issues with more depth because of the opportunity to talk to the people or the stakeholders themselves.

The most commonly used PRA methods are: 1) key informant interviews; and 2) focus group discussions.

a) Key informant interview (KII) involves gathering information directly from an individual engaged in the project/program. With a simple question or topic guide, an interviewer can elicit information from a person who is considered to be most knowledgeable about the issue being investigated.

KII is useful because it allows the person doing M&E to probe or ask further questions until he or she gets the necessary information. This method demands certain skills from the interviewer, particularly attention to detail and the ability to recall and record responses.

b) Focus group discussion (FGD) gathers about 8-15 individuals to talk about a specific aspect of the project/program that would be useful to M&E.

A facilitator initiates discussion using a topic guide and encourages each participant to share his/her idea or opinion about the topic. FGD allows greater opportunity for group interaction and richer responses.

Since there is interaction/exchange of information among the participants, new and valuable insights on the topic emerge from the discussion.

Moreover, the exchange and dynamics provide first-hand insights into people's perception of the M&E issue.

6. Identify who will gather what (data) and when.

Drawing up a plan (Gantt chart) for M&E, complete with activities, timeline, budget, and persons responsible for every activity, is a good way to organize M&E activities.
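One way to organize such a plan is as structured records covering the elements listed above (activity, timeline, budget, person responsible). All activity names, months, budgets, and roles below are invented for illustration.

```python
# Hypothetical M&E plan entries; every value here is illustrative.
me_plan = [
    {"activity": "Baseline KAP survey", "month": 1, "budget_usd": 800,
     "responsible": "Field coordinator"},
    {"activity": "Mid-term focus group discussions", "month": 3, "budget_usd": 300,
     "responsible": "Communication officer"},
    {"activity": "End-line KAP survey", "month": 6, "budget_usd": 800,
     "responsible": "Field coordinator"},
]

# Print the plan in timeline order, Gantt-style.
for row in sorted(me_plan, key=lambda r: r["month"]):
    print(f"Month {row['month']}: {row['activity']} "
          f"({row['responsible']}, USD {row['budget_usd']})")
```

Keeping the plan in one structure makes it easy to answer the questions raised earlier: when each activity happens, who is responsible, and how much of the budget it consumes.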

7. Collect data on program implementation and outcome.

Compare program outcomes with prior or expected outcomes. This is where baseline data become truly important.

The strength of the communication strategy may be gauged by comparing, for example, baseline data on stakeholders' KAP (knowledge, attitudes, practices) about a technology with their KAP levels taken within a reasonable time after the project/program.

An increase in KAP levels would indicate a certain degree of success in the communication strategy adopted.
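Such a before-and-after comparison can be sketched as follows; the KAP percentages are hypothetical, not actual project data.

```python
# Hypothetical share of stakeholders meeting each KAP criterion (percentages).
baseline = {"knowledge": 35, "attitude": 40, "practice": 20}  # before the campaign
endline  = {"knowledge": 70, "attitude": 55, "practice": 45}  # after the campaign

for dimension in baseline:
    change = endline[dimension] - baseline[dimension]
    print(f"{dimension}: {baseline[dimension]}% -> {endline[dimension]}% "
          f"(change: {change:+d} percentage points)")
```

Positive changes across the three dimensions would, as the text notes, indicate a certain degree of success in the communication strategy adopted.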

8. Assist in making policy and management decisions.

Monitoring data will be useless if they are not used to improve program implementation.

For example, if data from interviews reveal that communication materials are not reaching the area, these should be fed back right away to program managers so that logistics could be improved.

Below is a sample M&E plan showing the important aspects. The timeline could be drawn up in a separate plan.

[Table: sample M&E plan, with columns including Data Source and Data Gathering Method]

The two qualifiers of monitoring and evaluation - conventional and participatory - seem to represent two opposing ends of a continuum.

Conventional implies something ordinary or commonplace or of traditional design. Participatory, on the other hand, is a term of the '90s that has come to be associated with current thinking, with a new paradigm that has shaped ways of doing things away from the usual top-down approach.

Conventional and participatory M&E, however, are not mutually exclusive.

While each one has its unique characteristics, they are not incompatible. One is not meant to supplant the other but the two should complement each other.

Thus it is best to understand the nature of both conventional and participatory M&E, including their strengths and weaknesses, towards the development of a workable M&E plan suited to specific groups in specific environments (Torres and Velasco 2005).


Conventional M&E, as the term suggests, has been practiced in program implementation much longer than participatory M&E.

The latter is a product of the last two decades' emphasis on people's participation in the conceptualization and implementation of development projects that directly affect the stakeholders themselves.

M&E in participatory development communication, for instance, do not constitute an isolated step but an integral part of a process whose "specific steps are not primarily about applying techniques, but about building mutual understanding and collaboration, facilitating participation and accompanying a development process" (Bessette 2003).

Conventional M&E can be differentiated from participatory M&E in terms of three parameters:

A. Involvement of stakeholders
B. Focus of data gathering
C. Methods and Instruments

Involvement of Stakeholders. In conventional M&E, the stakeholders are usually expected to be respondents in a survey or testing procedure.

The criteria for these data gathering methods have been developed by outsiders, normally by consultants who are considered experts on M&E and/or experts in the field under study.

The stakeholders' participation is rather limited in the sense that they are not involved in planning the M&E mechanism and content; neither are they involved in the processing and interpretation of results.

The very essence of participatory M&E is involvement of stakeholders in critical steps of the program cycle, even as early as the planning stage.

Because they are the ones who have a stake in the whole process or those who have something to gain or to lose by being involved in the program, they are consulted on the following areas of concern:

Focus of Data Gathering. Conventional M&E attempts to achieve breadth of information while participatory M&E focuses on depth of information.

Breadth of M&E information or data means that the data gathered should be wide in scope, and the coverage should be comprehensive enough to be representative of a large sample of stakeholders affected by the program.

Through conventional M&E, data on specific indicators are gathered from as many people as possible so that the resulting figures can be taken as holding true for a bigger population. Conventional M&E is usually dubbed as quantitative research.

Methods and Instruments. Depth of M&E information, in contrast, is achieved through qualitative research. While quantitative M&E deals more with numbers and statistics (percentages, means, modes, averages), qualitative M&E treats and presents data in the form of words, sentences, and paragraphs.

Qualitative data are gathered by documenting real events, recording what people say (with words, gestures, and tone), observing specific behaviors, studying written documents, or examining visual images (Neuman, 1997).

The differences between conventional and participatory M&E are presented below using a simple question format: Who does M&E? What indicators are used? How is M&E done? When is M&E done? Why is M&E done?




Who does M&E?
  Conventional: External experts
  Participatory: The stakeholders themselves

What indicators are used?
  Conventional: Predetermined indicators
  Participatory: Participatory identification of indicators

How is M&E done?
  Conventional: Scientific objectivity
  Participatory: Open, adapted to local culture

When is M&E done?
  Conventional: Mid-term, completion
  Participatory: Throughout the program cycle

Why is M&E done?
  Conventional: Accountability, % accomplishment
  Participatory: Empower stakeholders


Triangulation is the use of two or more different types of measures and data collection methods/techniques in gathering M&E data. Just like in surveying land, triangulation in research means viewing a project/program from different angles.

It involves gathering data from a number of informants and sources, in the process comparing and contrasting accounts. Data gathering and measurement improve when diverse indicators and different techniques are used to compare and contrast findings.

Triangulation allows conventional and participatory methods of gathering M&E data to complement each other. It makes possible the best combination of quantitative and qualitative data.

For example, a KAP survey may be conducted, after which focus group discussions and key informant interviews involving key stakeholders and project/program personnel may be conducted to shed in-depth explanations on the statistics generated by the survey.
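This sequencing can be sketched as a minimal structure that pairs each survey statistic with the qualitative findings gathered to explain it. The indicator name, figure, and notes below are all invented for illustration.

```python
# Hypothetical triangulated evidence for one M&E indicator.
triangulated = {
    "aware_of_community_fishery": {
        "survey_pct": 62,  # quantitative: from the KAP survey
        "fgd_theme": "radio spots were the most recalled channel",  # from FGDs
        "kii_note": "village leaders reinforced the message",  # from KIIs
    },
}

for indicator, evidence in triangulated.items():
    print(f"{indicator}: {evidence['survey_pct']}% (survey); "
          f"FGD: {evidence['fgd_theme']}; KII: {evidence['kii_note']}")
```

Keeping the quantitative figure and the qualitative explanations side by side makes it easier to compare and contrast accounts, which is the point of triangulation.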


One thing M&E implementers should be careful about is confusing first impressions with evaluation.

First impressions are usually initial judgments based on limited evidence, without reference to a standard or criterion. Impressions in general are highly subjective, and giving them much credence may adversely affect the evaluation process.

Evaluation, on the other hand, is a deliberate process of systematic data collection and analysis with conclusions based on evidence. A good evaluation process has the following characteristics:


Anyaegbunam, C., P. Mefalopulos, and T. Moetsabi. 1998. Participatory Rural Communication Appraisal: Starting with the People. Harare: SADC Centre of Communication for Development and Rome: Food and Agriculture Organisation of the United Nations.

Bessette, Guy. 2003. Isang Bagsak: A Capacity Building and Networking Program in Participatory Development Communication. Canada: International Development Research Centre.

Kouzes, James M. and Barry Z. Posner. 1995. The Leadership Challenge. San Francisco, California: Jossey-Bass, Inc.

Piotrow, P.T., D.L. Kincaid, J.G. Rimon, and W. Rinehart. 1997. Health Communication: Lessons from Family Planning and Reproductive Health. Johns Hopkins School of Public Health, Center for Communication Programs: Praeger Publishers.

Senge, Peter M. 1994. The Fifth Discipline. New York: Currency Doubleday.

Torres, Cleofe S. and Ma. Theresa H. Velasco. 2005. Participatory Monitoring and Evaluation. Manila: Brotherhood of Asian Trade Unionists-ASEAN Sub-Region and College of Development Communication, University of the Philippines Los Baños.
