E-Agriculture

Simon Wilkinson


And just for some fun: A drone with an onboard video camera transmitting the live view back to a pilot wearing virtual reality goggles. Interesting to see how calmly the wildlife react to the machine as well.

https://www.youtube.com/watch?v=-1PHN1R454M

UAV imagery will typically be much more detailed (depending on camera quality), but it also covers a much smaller area, because it is usually captured at a fairly low altitude, perhaps a few hundred meters or lower. Satellite imagery covers vast areas, but the image resolution is much more limited.

One practical way that farmer (or other data provider) rights can be protected without encumbering research with red tape is through the use of open licenses (e.g. the various Creative Commons licenses). These licenses essentially i) allow creators to assert copyright/ownership over their data, while ii) permitting it to be freely used in certain ways.

For example, almost everything we publish goes out under a Creative Commons attribution license. Anyone is free to redistribute, copy or build on our work; the only restriction is that they must acknowledge the source. The software we write goes out under the GNU General Public License, which is quite similar but applies to code.

There are different types of Creative Commons license, and restrictions such as "non-commercial" use or "share alike" can be applied, depending on the wishes of the creator. If farmers wished to prevent application developers from analysing their data and selling it back to them, they could probably achieve that with such a license.

Open licenses are not without problems though. One of the main issues is that variations in conditions often make different "open" licenses incompatible with one another, so it becomes difficult or impossible to combine code or data that have been distributed under different licenses.

The "open data" benefits vs risks discussion has some parallels to the debate about "access and benefit sharing" of genetic resources. Both have the potential to impact farmers and/or traditional owners in positive or negative ways and have the potential to create conflicts with external agencies.

Some time ago I took a look at how governments have tried to deal with this issue, in particular the consequences of introducing intellectual property rights and other measures to "protect" the rights of traditional owners, farmers and others. It's a couple of pages long, so I don't want to post the whole thing here, but you can download the article if you wish.

On balance it seemed to me that these measures taken in the name of access and benefit sharing have done more harm than good. Why? Because they typically involve restricting access to data, thereby stifling the very research required to develop benefits in the first place. The fisheries example in the article is a personal experience.

To my mind the potential benefits of open data vastly outweigh the potential risks. Farmers also tend to be naturally sceptical and conservative about sharing information with governments and other external agents, so I do not see them as powerless in this particular sense.

The ranking system is a useful reminder that a lot of capacity still needs to be built in the 1-3 star range before many stakeholders can move on to address stars 4 and 5. I suggest that the strategy should include training and mentoring activities to help organisations at this level progress, e.g. those that are just taking their first steps in web publishing or setting up their first OAI repository.

Regarding groups ready to tackle more advanced issues such as LOD/RDF, it would be useful to develop documentation to guide people that are developing for or implementing these technologies. For example:

* Real-world case studies of LOD/RDF/vocabulary usage in agricultural information systems, or in an agricultural context more broadly.

* Practical guidelines on LOD/RDF/vocabulary implementation and consumption in agricultural information systems, or in an agricultural context more broadly.

The Open Archives Initiative did a great job with its OAI-PMH implementation guidelines; it would be great to have similar guidance available for these technologies too.

 

Maybe I should offer an example. We have been doing some work to add support for Dublin Core and OAI-PMH to the content management system we use, ImpressCMS.

The first module we have released is a 'podcasting' tool for publishing audio and video recordings. As you might expect, recordings can be browsed online, downloaded, streamed or accessed with podcasting clients via RSS feed with enclosures, and shared via social media.
 
Installation is a simple two-click process. The data entry form uses unqualified Dublin Core fields to capture metadata in a convenient form. The module also supports a zero-configuration implementation of OAI-PMH.
 
The key point is that the operator doesn't *need* to know the details of Dublin Core or understand OAI-PMH in order to establish an OAI repository with this module. It 'just works' out of the box and can be used by *non-specialists*. 
 
A live demo of the Podcast module is available here (info about the archive, including the base URL, is available here). A second, more general purpose "library" module is currently in beta. Both are distributed under the GPL v2.
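To give a feel for what "zero-configuration OAI-PMH" exposes, here is a minimal harvesting sketch in Python. The base URL is a placeholder (a real repository advertises its own), and the sample response is hand-written for illustration, but the verbs, parameters and oai_dc namespaces are those defined by the protocol:

```python
import urllib.parse
import xml.etree.ElementTree as ET

# Placeholder base URL -- substitute the repository's advertised endpoint.
BASE_URL = "https://example.org/oai"

def harvest_url(verb, **params):
    """Build an OAI-PMH request URL for the given protocol verb."""
    query = {"verb": verb, **params}
    return BASE_URL + "?" + urllib.parse.urlencode(query)

# e.g. request all records as unqualified Dublin Core
url = harvest_url("ListRecords", metadataPrefix="oai_dc")

# A hand-written fragment of the kind of XML a repository returns.
SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Better management practices for tilapia culture</dc:title>
          <dc:creator>NACA</dc:creator>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

# Extract the Dublin Core titles from the harvested records.
NS = {"dc": "http://purl.org/dc/elements/1.1/"}
root = ET.fromstring(SAMPLE)
titles = [t.text for t in root.findall(".//dc:title", NS)]
print(url)
print(titles)
```

A real harvester would fetch `url` over HTTP and follow `resumptionToken` elements for paged results, but the point stands: from the operator's side there is nothing to configure.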

I feel compelled to appeal for consideration of an "appropriate level of technology". There is a huge gap in the capacity of institutions to deal with a lot of this technology.

A case in point: Most of the research centres in our network do not have an IT department. When you get down into the state/provincial level institutions they may not have any IT staff or information specialists at all. Whatever systems they have tend to be run by enthusiasts from some other discipline rather than professionals in this area.

This group needs simple tools that will allow them to share their data in a meaningful way (perhaps with larger systems operated by better-resourced institutions that can archive it). They need things that are easy to set up, use and maintain.

If I may labour the point, tools that require specialised server environments and complex configuration to set up and a programmer or engineer to maintain are beyond their capacity. Such resources are just not available to them.

There's a place for both 'high end' tools, complex and requiring the resources of a large institution to run, and 'low end' tools that don't. I guess I'm saying that the low end should not be neglected, as doing so would help perpetuate the digital divide.

I agree that interoperability looks to be inevitable. The only questions are how quickly it will happen and what we can do to make it happen faster.

Education is one thing. Although it may seem obvious to the likes of us, there is still a lot of basic awareness raising needed about the benefits of standards and sharing.
 
Building standards and mechanisms for data exchange into the tools that people use in their everyday life is another, so that the data is captured in an interoperable form from the start.
 
Standards compliance should 'just happen' without people having to think about it.

Anyone mind if I start?

Hi, Simon Wilkinson from NACA, a regional inter-governmental organisation that facilitates cooperation in aquaculture R&D between member states.

NACA is a highly distributed organisation with a limited budget, so we have a great interest in sharing information and experience electronically, although most of the research centres in the network still have limited capacity to make use of IT. Aquaculture is lagging a long way behind terrestrial agriculture in this regard.

The main thing we share in electronic form is aquaculture publications. Since 2002 we have had a policy of making all publications available for free download from our website (www.enaca.org), mainly in PDF. More recently we have also started publishing audio recordings of technical presentations in MP3 for download/streaming and podcast feeds.

The NACA website is produced with a conventional content management system (CMS). Like most such tools, it was designed to build websites to view on screen. It is Google-friendly, ties into various social media and is great for presenting information to people. However, it doesn't follow any accepted metadata standard and does not offer any structured way to share data with machines.

We would like to share/federate our publication metadata with other digital libraries. So we have been doing some work to develop publication modules for the CMS that i) use standard Dublin Core metadata fields and ii) support the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). These modules are released as open source projects.
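As a rough sketch of what "standard Dublin Core metadata fields" means in practice, the fragment below assembles an unqualified Dublin Core record of the kind such a module would serve up over OAI-PMH. The field values are invented for illustration; the element names and namespace are those of the Dublin Core Metadata Element Set:

```python
import xml.etree.ElementTree as ET

# The namespace for the 15 unqualified Dublin Core elements.
DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

def dc_record(fields):
    """Build an unqualified Dublin Core record from a field->value mapping."""
    record = ET.Element("record")
    for name, value in fields.items():
        el = ET.SubElement(record, f"{{{DC_NS}}}{name}")
        el.text = value
    return record

# Invented example values for a hypothetical publication.
record = dc_record({
    "title": "Aquaculture Asia Magazine",
    "creator": "Network of Aquaculture Centres in Asia-Pacific",
    "type": "Text",
    "format": "application/pdf",
})
xml_out = ET.tostring(record, encoding="unicode")
print(xml_out)
```

Because the fields are a flat name/value list, they map naturally onto an ordinary CMS data entry form, which is what makes the approach workable for non-specialist operators.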

Regards

Simon Wilkinson
Communications Manager
Network of Aquaculture Centres in Asia-Pacific