E-Agriculture

Question 4 (opens 4 Dec.) What are appropriate targets/data to monitor our progress in “e-agriculture”, to demonstrate that post-2015 goals (or other development goals) are being met? Who should be responsible for these?

[NOTE: As one example of ICT4D monitoring, see the "Measuring the Information Society" report from ITU.]
Megan Mayzelle, University of California Davis International Programs Office, United States of America

Steph, once again thanks for a very thoughtful post.  I believe you are correct that publicly accessible data would enable an effective macro-scale approach to measuring impact.  That approach, however, is not without obstacles:

1. Establishing practice. Perhaps the quickest way to encourage the practice of sharing data would be for large donors to require it.  Over time such a convention could also come about via large projects setting the example, the "good reputation" such transparency may offer a project, and other public-perception benefits.

2. Business practices. If the project is a self-sustaining business, it may have valid concerns about making its data publicly available.

3. Project competition. Unfortunately, projects tend to be competitive rather than collaborative, which discourages them from making their data available to their "competitors".

4. Privacy concerns. How much information can you share about a user before you begin to invade their right to privacy and anonymity?  Especially in a bottom-up, user-generated system, and/or a system that uses information about the user (e.g. location, crops produced, season, consultation history) to narrow the options presented when they connect, the knowledge that data is publicly available may deter users from taking advantage of what the system has to offer.

5. Situational factors. Oftentimes the keys to project success lie in how well the project understands and accommodates the local culture (gender disparities, indigenous knowledge/practices, etc.).  These factors are unique to each community and difficult to compare across projects.  Using data alone, it may be hard to account for such facets of a project and their role in its impact.

6. Consistency. Even if every project shares its data, if there is no standardized way to measure impact, those data will not be comparable.

Thoughts?

Stephane Boyera, SBC4D, France

Hi Megan,

Concerning your first point, you are totally right. However, enforcing the requirement in grant agreements can, imho, be only half of the strategy. It must also be easy for practitioners to comply. So I think there is a need for a first set of global steps:
*developing tools, and extensions of existing tools, that make sharing data easy for non-technical people;
*developing an architecture with a (set of) portals where it is easy to publish data and make it available to the community.

Concerning all the other points, I think they are critical too, but not specific to this domain. All initiatives in the open data world face these issues.
For instance, privacy/anonymization is not only important but legally required in most countries. However, there are now processes in place that allow, for example, governments to publish all their data in anonymized form.
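
To make that concrete, here is a minimal sketch (in Python) of the kind of anonymization step a project could run before publishing its data. The field names and records are hypothetical, not taken from any particular system.

    import hashlib

    def anonymize(record, salt="project-specific-secret"):
        """Return a copy of a user record that is safer to publish."""
        out = dict(record)
        # Replace the direct identifier with a salted one-way hash.
        out["user_id"] = hashlib.sha256(
            (salt + str(record["user_id"])).encode()).hexdigest()[:12]
        # Drop fields that identify a person directly.
        out.pop("name", None)
        out.pop("phone", None)
        # Coarsen coordinates so individual farms cannot be pinpointed.
        out["latitude"] = round(record["latitude"], 1)
        out["longitude"] = round(record["longitude"], 1)
        return out

    raw_records = [
        {"user_id": 101, "name": "A. Farmer", "phone": "000-0000",
         "latitude": -3.3761, "longitude": 29.3599, "crop": "banana"},
    ]
    published = [anonymize(r) for r in raw_records]

Hashing and rounding alone are not a complete anonymization scheme (quasi-identifiers can still re-identify people); the sketch only shows the shape of the step that the formal processes mentioned above would make routine.
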
About business practices, there are different levels. You may well have data that are critical to your business model, while their anonymized version has no real commercial value but is still useful for measurement.
About competition, you are right again: competition is happening, and this will not change. But it is essential for the domain to define the right boundaries of competition, to ensure that while competition exists, it is not detrimental to the domain, or more specifically to development outcomes on the ground. This is where enforcing the requirement to share in grant agreements is imho the best way to define those boundaries.
About situational factors and consistency, this is again fairly common in the open data world: you may have lots of data that are very specific to the culture, communities and people you are working with, and in that case those data might not be helpful for people working in other regions. However, it is likely that those data are helpful for other organizations working within the same context but in different domains (health, etc.). 
About consistency, I'm not a big fan of standardization here, and in practice this is not how government portals, for example, work. You publish data, and as long as your datasets are correctly documented, anyone can reuse them and mash them up with other sources. However, it would help if, for a core set of information, such standardization of vocabulary and semantics happened, on the model of e.g. IATI for aid spending (http://www.aidtransparency.net/).
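
As a small illustration of what "correctly documented" might look like in practice, here is a hypothetical metadata record for a published dataset; the field names are generic assumptions and do not follow IATI or any other specific standard.

    dataset_metadata = {
        "title": "Anonymized advisory-service usage, weekly counts",
        "description": "Weekly number of advisory requests by district and topic.",
        "publisher": "Example ICT4Ag project",   # hypothetical publisher
        "license": "CC-BY-4.0",
        "temporal_coverage": "2012-01 to 2013-03",
        "spatial_coverage": "Example region",
        "fields": [
            {"name": "week", "type": "date", "description": "ISO week start"},
            {"name": "district", "type": "string", "description": "Administrative district"},
            {"name": "topic", "type": "string", "description": "Advisory topic requested"},
            {"name": "requests", "type": "integer", "description": "Number of requests"},
        ],
    }

The exact schema matters less than the fact that someone who was not involved in the project can understand, reuse and combine the data.
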

All in all, this discussion is bigger than just measurement. I think there is a need for a holistic approach in which publication of data is only one part of the equation. The other part is about increasing impact by reusing other organizations' data. It is about taking advantage of what others have done rather than starting again from scratch. In my experience, it is easier to convince people to publish their data when they understand the value of, and themselves use, the data of others.
Moreover, it is important to note that because this open data revolution is already happening at government and international-organization level, there are already tons of information available for free. Therefore, part of the ecosystem is already in place, and the opportunities are already here.

cheers
steph

Megan Mayzelle, University of California Davis International Programs Office, United States of America

Steph, Thanks as always for a thought-provoking post.

The concept of making data sharing easy and common practice is a good one.  I'd be interested in hearing others' thoughts on the best approach to this/lessons from existing platforms.

*Implementer -- would the ideal implementer be a business? A think tank? A national organization? A government entity?

*Scope -- would national systems be most appropriate, or would a regional or global system for data sharing have the most impact?

*Motivation -- what would motivate projects to add their data to the platform?

*Design -- how could the platform be designed to ensure that all data types can be accommodated? Would this data collection project also serve as a source of suggested indicators/impact assessment tools?

*Unity -- would copycat data-sharing systems detract from the concept? If so, how could this be avoided?

Megan Mayzelle, University of California Davis International Programs Office, United States of America

Is customer satisfaction the ultimate mark of a successful initiative?

My own project uses outputs (e.g. access frequency, bounce rate, number of topics visited) as "suggestions" of success -- the logic being that if the product were not useful, no one would be using it!  But ultimately user feedback is the greatest measure of impact; there is nothing we value more than a user's comments on what they found useful and how the product could be improved.
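
For what it is worth, these output measures usually fall straight out of the interaction logs. A minimal sketch, assuming a hypothetical log of (user, session, topic) records:

    from collections import defaultdict

    # Assumed log format: one row per topic viewed within a session.
    log = [
        ("u1", "s1", "maize_pests"), ("u1", "s1", "irrigation"),
        ("u1", "s2", "maize_pests"),
        ("u2", "s3", "market_prices"),   # single-view session -> counts as a bounce
    ]

    sessions = defaultdict(list)
    for user, session, topic in log:
        sessions[(user, session)].append(topic)

    access_frequency = defaultdict(int)   # sessions per user
    topics_per_user = defaultdict(set)
    for (user, _sess), topics in sessions.items():
        access_frequency[user] += 1
        topics_per_user[user].update(topics)

    bounce_rate = sum(1 for t in sessions.values() if len(t) == 1) / len(sessions)

    print(dict(access_frequency))                            # {'u1': 2, 'u2': 1}
    print({u: len(t) for u, t in topics_per_user.items()})   # {'u1': 2, 'u2': 1}
    print(round(bounce_rate, 2))                             # 0.67
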

Surveys capture the cultural appropriateness of the project and its approach.  Especially given the short time frames of many projects, surveys also better capture what may not yet be visible in the numbers.

For example: there has been a drought since the project was implemented; harvest numbers haven't increased; yet families are farming together, youth and women feel included in information access, and female farmers state that their knowledge of good agricultural practices (GAPs) has substantially increased and that they feel confident they can increase their yields.

Did the project meet its goals?  

These sorts of outcomes could only be captured by eliciting user feedback.

As with all approaches, surveys have disadvantages: they are time-expensive, and it can be difficult to extract honest opinions.  However, their advantages may make them a worthwhile component of measuring progress.

In Spanish we often use the aphorism “Lo mejor es enemigo de lo bueno”. It can be translated as “the best is the enemy of the good”, “perfect is the enemy of the good”, or simply “keep it simple, stupid!”

We know that we are dealing with a complex subject that can be affected by multiple factors that are constantly changing. So we have to decide whether to try to monitor something that can't be completely monitored, or to choose a reasonable number of indicators that can give us a fair idea of what is happening and what may be the causes of that situation.

In that sense technology, especially mobile, can be a powerful partner or a big obstacle to simplicity. I would say that mobile technology has the ability to connect us directly to the field and to process and report data faster and better, which translates into faster and better decision-making. So why measure thousands of indicators when we can concentrate on the most relevant ones and then choose when to do deeper research on a particular subject, group, etc.?

To keep this post simple and concrete, I would recommend the following to anyone trying to monitor progress in “e-agriculture”:

1.    Include simple socio-demographic indicators that can give you a good idea of the household (HH) poverty level. There are many tools available, such as the Multidimensional Poverty Index, the NBI (“Necesidades Básicas Insatisfechas”, or unsatisfied basic needs), the Progress out of Poverty Index (PPI), etc. Many governments have studies that identify the most common characteristics of poor families. Choose indicators that reflect the basic needs a HH should be able to access, especially the ones your intervention can affect.

2.    Include a poverty measurement tool that is simple and accurate, and whose results can be benchmarked against other regions, countries, etc. My suggestions are the PPI and the PAT (a minimal sketch of how such a scorecard works follows this list).

3.    Ask farmers about their attitude towards their livelihood. Understand their struggles and pain points.

4.    Understand the farmer's attitude towards the services different ag providers are delivering (credit, supplies, TA, etc.).

5.    Concentrate your farm diagnostic tools on outcome indicators and on the best agricultural practices you think farmers need to adopt in order to achieve them. Understand your theory of change; measure whether farmers are adopting best practices and whether those best practices are producing the desired outcomes.

6.    Benchmark your results with all possible data available from trusted institutions.

7.    Go to the field! Try to see whether what your data is saying is reflected in what you see and hear from farmers.
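
For readers less familiar with how scorecard tools such as the PPI or PAT work, the sketch below shows the general mechanic: a handful of easy-to-observe questions are scored, and the total maps to an estimated likelihood that the household is below a poverty line. All questions, point values and likelihoods here are invented for illustration only; they are NOT actual PPI or PAT values for any country.

    # Illustrative scorecard: answer -> points (made-up values).
    SCORECARD = {
        "roof_material": {"thatch": 0, "metal_sheet": 6, "tile_or_concrete": 11},
        "children_in_school": {"none": 0, "some": 4, "all": 9},
        "owns_mobile_phone": {"no": 0, "yes": 7},
    }

    # Hypothetical lookup: minimum score -> likelihood household is below the poverty line.
    POVERTY_LIKELIHOOD = [(0, 0.80), (10, 0.55), (20, 0.30), (30, 0.10)]

    def scorecard_score(answers):
        return sum(SCORECARD[q][a] for q, a in answers.items())

    def poverty_likelihood(score):
        likelihood = POVERTY_LIKELIHOOD[0][1]
        for threshold, value in POVERTY_LIKELIHOOD:
            if score >= threshold:
                likelihood = value
        return likelihood

    household = {"roof_material": "metal_sheet",
                 "children_in_school": "some",
                 "owns_mobile_phone": "yes"}
    score = scorecard_score(household)
    print(score, poverty_likelihood(score))   # 17 0.55

The value of this kind of tool for monitoring is that the same short questionnaire can be repeated over time and benchmarked across projects and regions.
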

Whitney Gantt, Grameen Foundation, Colombia

As other participants have pointed out, technology, and especially mobile devices, opens up enormous opportunities for capturing data at low cost. While on its face this is good news, the ease with which mobile devices can capture numerous indicators and vast quantities of data has often been a detriment to good M&E design in the ICT4Ag field. Interventions often capture far more data than they can effectively analyze, and sometimes confuse capturing data with interpreting it in ways that improve results in the field. In that regard, the previous post on keeping it simple offers some easy-to-implement techniques for avoiding the trap of measuring everything.

Another temptation ICT4Ag projects seem to fall into, as noted in other posts, is the tendency to over-rely on output measures, especially absolute numbers reached, and then to equate large scale and lower cost per interaction with impact. Because technology is a means of achieving low-cost scale, it is not surprising that many ICT4Ag initiatives quickly reach a scale that other Ag4D projects cannot. However, not enough effort has been put into measuring and communicating outcome-related indicators, or proxies, associated with the interactions generated by ICT.

Lastly, ICT4Ag initiatives often jump directly to measuring impact before getting the operations right. While it is important to build in outcome measures from the start, understanding what is working, what is not, and where users see the most value should be a high priority during early-stage piloting and scaling.

The previous post suggests measuring user satisfaction, which can be gauged via repeat usage; this is a solid approach for capturing the value that users perceive. It is also one that can "fall out of the data" if the service is designed well, since most technology products capture each interaction and associate it with a unique user; this is particularly true when users are paying for the service. Disaggregating usage by poverty level, gender and other characteristics, which can be captured via mobile registrations, can help practitioners better design and target services. Analyzing usage by product or content area (e.g. cattle diseases vs. chilling hub services) can help programs zero in on where to focus new content and services and when to course-correct. These types of insights should serve as the basis for deciding what to scale, but unfortunately they are frequently overlooked in the rush to scale.
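
As a rough sketch of that disaggregation, assuming the usage records have already been joined with registration data (all column names and figures below are hypothetical):

    import pandas as pd

    usage = pd.DataFrame({
        "user_id":      ["u1", "u2", "u3", "u4"],
        "gender":       ["F", "M", "F", "M"],
        "ppi_band":     ["<$1.25/day", "<$2.50/day", "<$1.25/day", ">$2.50/day"],
        "content_area": ["cattle_diseases", "chilling_hub", "cattle_diseases", "chilling_hub"],
        "interactions": [14, 3, 9, 1],
    })

    # Who is being reached, and how intensively?
    print(usage.groupby(["ppi_band", "gender"])["interactions"].sum())

    # Which content areas are getting traction?
    print(usage.groupby("content_area")["interactions"].agg(["sum", "mean"]))
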

Similarly, capturing adoption of best practices that have proven ranges of productivity and/or quality increases can be another way to get at outcomes in a short period while awaiting phase II, more costly randomized control trials and other impact assessment approaches.

Finally, in our own projects, the most effective indicators have been those that value chain players already measure. By monitoring changes in key indicators that are important to value chain players and that are measured regularly as part of doing business (for example, the number of boxes of bananas rejected each week for ripeness issues), we can assess how particular ICT4Ag efforts (for example, sending SMS on when to harvest bananas) are affecting results, and tie those results back to farmer returns (e.g. a decrease in the number of rejected boxes is tied to an increase in weekly payments to the farmer, both of which are already measured by the commercializer). Linking these types of indicators back to a cost-benefit analysis that captures the cost of introducing technology into business processes can also make the business case for ICT4Ag initiatives, by not only measuring impact but also demonstrating value that commercial players will be willing to pay for.
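
A back-of-the-envelope version of that cost-benefit logic, with all figures invented for illustration:

    boxes_rejected_before = 40    # boxes rejected per week before the SMS alerts
    boxes_rejected_after = 25     # boxes rejected per week after
    price_per_box = 6.0           # USD paid to the farmer per accepted box
    sms_cost_per_week = 12.0      # cost of sending harvest-timing SMS to the group

    weekly_gain = (boxes_rejected_before - boxes_rejected_after) * price_per_box
    net_benefit = weekly_gain - sms_cost_per_week
    print(weekly_gain, net_benefit)   # 90.0 78.0

If the net benefit is consistently positive in the value chain player's own numbers, the business case largely makes itself.
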

Antoine Kantiza, Promotion de l'Education à Distance / Promotion of Education and Learning in Distance (PLEAD), Burundi

I think that establishing common indicators is the beginning of a common language for building the digital databases needed to monitor the impacts of ICT in agriculture, and a forum like this is the best way to reach a common understanding of those indicators. Nevertheless, it has been shown that each sector of the Millennium Development Goals has its own specific indicators. In my understanding, increases in productivity and income for small farmers remain the main factors to be monitored after mobile applications are introduced in agriculture. At the same time, mobile applications cut horizontally across many sectors of development: agriculture has its specific indicators, while business has to be measured with other indicators, as shown by the indicators agreed by the World Bank (http://data.worldbank.org/indicator), and of course the best farmer is not necessarily the best trader.
Indeed, I think the impact of mobile applications in agriculture is already well evidenced, as mobile and interactive communication is increasingly becoming the new way of living, even for the poorest in the world. The exponential progress of mobile applications bears no comparison with the slow progress of agriculture, but many relevant studies have shown a causal link between the penetration of mobile applications among a population and the income earned by farmers in agriculture and livestock: http://econ.duke.edu/uploads/media_items/houghton-daniel.original.pdf
Also, it is worth mentioning that this subject has been discussed in previous forums. I remember writing that mobile applications in agriculture should not be seen as a silver bullet for small farmers, and that manipulation can happen at any point in a communication channel such as a mobile application, which is why we must go to the field to verify what remote sensors and other ICT-based reports have evaluated remotely. I also posted that "The right information is not shared among farmers and traders": http://www.e-agriculture.org/.../question-1-market-information-users-mobile-technology
Besides, I believe that building and exploiting digital databases on the achievements of ICT4Ag is costly and is not an emergency today for smallholders who need the minimum to survive. By the way, I recall my post entitled "The robust data collection is needed to boost farmers not dbases", available at the link below:
http://www.e-agriculture.org/forumtopics/question-1-icts-collecting-agri...economic-or-me-data-open-11-june
Prof Antoine Kantiza

Juan Forero, Colombia

Dear all,
I just wanted to share a few resources that could be interesting for the discussions and for further conversations.

Reports and Case Studies on the PPI: on this site (http://www.progressoutofpoverty.org/case-studies-reports) you can find all our case studies in order of construction. There is a short description of each one so you can choose which you find interesting. I recommend the "Gratia Plena Social Action Center and the PPI®" case study, which is one of the first.
 
Please find below some extracts from a report we prepared on the use of the PPI by organizations. As you will see, most of them are at a very early stage. I am including some that are working in the agricultural sector (you will find renowned players such as COSA, SFL and CIAT):

a. ASOCATI and Fundauniban
ASOCATI and Fundauniban are two organizations that have been using the PPI as part of the Community Knowledge Worker (CKW) pilots that Grameen Foundation is conducting in Colombia. Fundauniban began using the PPI in August 2012, although they ran a small pilot in July 2011; ASOCATI began using the PPI in June 2012. Because both organizations are part of the CKW pilots, the purpose of using the PPI is the same: first, to understand the poverty profile of their associates or beneficiaries and to ensure that the project is focusing on the poor population. The second objective was to build a sample that could serve as a baseline for tracking the progress of the communities over time.
The PPI was administered by CKWs, who used smartphones to capture responses from beneficiaries. Alongside the PPI, other socio-demographic, agronomic, financial, and technology-access indicators were captured. The idea is to analyze PPI data together with the data from the other indicators, characterize different groups within the baseline, and track the relationships between these different indicators, including food security. In the future they want to improve the targeting of beneficiaries and identify the possible causes of improvements or deteriorations in their initial economic situations, with the help of more in-depth qualitative studies.
ASOCATI has collected 500 surveys and Fundauniban has collected 411. The collection methodology used by both is a census of all smallholder farmers (SHF). Currently Uniban has five CKWs in the field and ASOCATI has five CKWs and seven technicians. All the CKWs from both organizations have been trained in PPI use, and management is familiar with interpreting PPI data.

b. Centro Internacional de Agricultura Tropical (CIAT)
CIAT was hired by CRS to build a baseline for an impact study that will evaluate the results of a project on the border between Colombia and Ecuador called "The Borderlands Coffee Project." It aims to help 3,200 smallholder farmers in conflict-affected communities expand high-value market opportunities and reduce their vulnerability to hunger and environmental degradation. CRS and its local partners will be working with 1,600 smallholder farmers in the highlands of Nariño in Colombia, and 1,600 family farmers in the Amazon provinces of Orellana and Sucumbíos in Ecuador. The project's objective is to help farmers increase coffee productivity and quality, as well as their income. In addition, it works to expand non-coffee livelihood alternatives, reduce vulnerability to hunger, and assist with adapting to climate change.
CIAT built a survey[1] comprising more than one thousand questions that captured data on the farmers' socioeconomic characteristics, the farms' characteristics, the different forms of farm ownership, production, commercialization, associativity and general agricultural practices. They also captured farm geo-positions and applied two other tools for measuring household food access: the Household Dietary Diversity Score (HDDS) and the Months of Adequate Household Food Provisioning (MAHFP), both by FANTA[2]. (to be continued)

Juan Forero, Colombia

(continued)
The main objective for building the baseline was to test if the control and treatment groups were homogeneous in order to check at the end of the project if the differences between them are due to the intervention. Another objective was to have an “initial photograph” of the current status of the farmers in the municipalities involved. The survey provided information on the socio-economic situation of producers, their connectivity and access to information, property characterization, production and coffee marketing, and access to capital and division of labor (gender component), among others.
CIAT designed a stratified sample taken from the population of beneficiaries (1,600 for Colombia and 1,600 for Ecuador) and made a geographical dispersion map based on the number of smallholder coffee farmers in the project's municipalities for selecting the control group. In total they conducted 510 surveys in Colombia (228 for the treatment group and 282 for the control group) and 519 in Ecuador (235 for the treatment group and 284 for the control group). They began collecting the PPI in April 2012 and spent one and a half months completing the process, using 12 surveyors in total for Colombia and 23 for Ecuador. The agronomic part of the survey was administered on the farms and the socio-economic part at the beneficiaries' households.
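
As an aside, here is a minimal sketch of the kind of baseline homogeneity check described above, using simulated data rather than the actual CIAT survey; the variable, the simulated values and the choice of a Welch t-test are only one possible illustration.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Hypothetical baseline variable, e.g. coffee yield, for the two groups.
    treatment_yield = rng.normal(loc=10.0, scale=2.0, size=228)
    control_yield   = rng.normal(loc=10.2, scale=2.0, size=282)

    t_stat, p_value = stats.ttest_ind(treatment_yield, control_yield, equal_var=False)
    print(round(t_stat, 2), round(p_value, 3))
    # A large p-value is consistent with the groups being similar on this variable;
    # in practice the same check is repeated for each baseline characteristic.
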
c. Sustainable Food Labs (SFL)
The main objective of this project is to gain a better understanding of the realities faced by small-scale sugar cane farmers. To accomplish this, SFL is collecting farm level data from certified fair trade, organic smallholder farmers in Paraguay.
This project is designed to accomplish two goals:
1.     Provide greater insight on the livelihoods and challenges of cane farmers; and
2.     Test the concept of a lightweight, cost effective set of core metrics for smallholders.
An initial baseline survey of 45 farmers in 3 organic, fair trade certified cooperatives was completed in April 2012 by SFL and local experts in collaboration with Fair Trade International’s Paraguay staff.
Additional surveys have been completed since early 2013 with the support of the Ford Foundation, which also funded the new PPI for Paraguay released in December 2012. These surveys have reached a larger number of farmers and include interviews with hired cane workers to understand the livelihood profile of this group.
In March 2013 SFL started using the PPI in Paraguay, where they work with sugarcane farmers, in order to complement their pre-existing performance measurement work. The aim of implementing the PPI alongside their larger original survey is to arrive at a straightforward assessment of their impact on the livelihoods of small-scale producers. The questions they aim to answer with their original survey and the PPI are: "Who are you reaching?" "Are you reaching the poor with this value chain?" "Is poverty decreasing over time?"
An essential part of SFL's measurement of livelihoods is household income; however, this is very difficult to calculate quickly. In this approach, assets are the best measurement of income and income potential; but those assets must be assigned a value related to their given context, which can be hard to determine and very time-consuming. SFL thought the PPI could be a shortcut to circumvent this costly process of household income analysis and easily determine whether value chains were reaching the poor and whether producers are becoming less poor over time.
The surveys were conducted in the field by technical assistants from six producer organizations and administered to a random sample of sugarcane farmers, all of whom were fair trade certified cooperative members. These technical assistants also provided technical support to farmers during other visits.
As of now, SFL has collected 300 PPI surveys in Paraguay; this data is just coming in and being prepared for reporting. Besides SFL, other organizations such as Unilever, SABMiller, and cegenta are still at the stage of exploring PPI use; Root Capital has actually tested it.

Juan Forero, Colombia

(continued)

d. The Committee on Sustainability Assessment (COSA™)
COSA works with cacao farming in Nicaragua in IFC-funded projects aimed at increasing the acreage planted with cacao. They have been working with Lutheran World Relief and Catholic Health Services, who have likewise been using the PPI in Central America. The PPI is also being used in an evaluation of a technical assistance certification program in Veracruz.
COSA's data collection has been taking place in rural communities with producers of cacao and coffee in Central America (Honduras, Nicaragua, El Salvador, and Veracruz, Mexico) and in Colombia. Usually the interview takes place at the producer's home or farm. COSA's surveyors talk to the farmers on their farms or at their homes, and at the end of the interview the PPI is administered as a separate segment. Depending on who the client is and what they are looking for, the entire survey can be as long as 100 questions.

[1] http://coffeelands.crs.org/wp-content/uploads/2012/04/Borderlands-Colomb...
[2] http://www.fantaproject.org/.

Please let us know if you have any questions or need more information on our projects, and we will be more than happy to tell you more.