

CHAPTER 4 - PREPARING DATA FOR INPUT TO A MARINE
FISHERIES RESOURCES GIS

4.1 Introduction

The primary and secondary data which has been collected, and which forms the essential input to the marine fisheries GIS, will be in numerous formats. Though some of it may already be suitable for immediate input to the GIS, the majority of it will require conversion to a suitable digital form. Once converted this data can be stored ready for input, either on computer compatible tapes (CCT's), floppy disks or CD-ROM's. This chapter is concerned with the various methods of undertaking this data preparation or transformation. It is important to note that the discussion will be in general terms only, i.e. particular GIS software packages may require that the digital data be encoded in a specific format or using a particular database, and the GIS user will need to check the requirements of individual software systems. It is also important to point out that much of the content of this chapter, and of Chapters 5 and 6, may not be particularly oriented towards marine fisheries per se.

Three concepts which are of fundamental importance to data preparation for a GIS, and over which there has been some confusion, are those of data models, data structuring and databases. Various sources tend to use the terms “data models” and “data structuring” interchangeably. We will briefly discuss data models in this introduction and our interpretation of data structuring will be considered in Section 4.2. A background to databases for GIS will be given in both Sections 4.2 and 4.3, but the management of databases will be left until Chapter 6, i.e. since this usually forms part of the GIS operating process. A diagram which illustrates the necessary steps from the real world to the output of maps from the GIS is shown as Figure 4.1. At each stage it is necessary for the data models to be structured in an organised way.

“Data models” (or data modelling) refers to the way in which spatial information is organised from the conceptual viewpoint. Thus, when we are working on a GIS task, what we are essentially doing is manipulating models of the real world. Because we are concerned with geography, or with analysing space, the real world has usually been changed to the model we know of as a map. As in the real world, the map will consist of a large number of different objects, e.g. roads, houses, forests, coastlines, each of which is depicted in its own way using some form of symbolism, and each of which we should be able to recognise (perhaps with the help of a key). Since the map is a simplification of the real world, it is important to recall that whatever is put onto the map is a transformation of the real world situation. Thus, in order to generalise for mapping purposes, aggregations may have been made of any type of spatial phenomena, and in many cases classifications will also have been made, e.g. complex classes of coastal zone land use may be combined under a single definition. A fisheries example might be the need to consider which categories different fishing vessels are to be subdivided into, and perhaps even what counts as a fishing boat. We point these facts out since it is important to show that any mapping and GIS exercise starts off from a position of using very much simplified (modelled) information. It will have been essentially the cartographer's task to have transformed the real world into a mapped world with the minimum of information loss and distortion.

Figure 4.1 Schematic Diagram Showing How the Real World is Transformed to Obtain GIS Output (from Bernhardsen, 1992)

As a further introductory note, we need to emphasize strongly from the beginning the importance of data quality. The data capture of both primary and secondary information can be a costly and time consuming part of setting up a GIS. Having completed this capturing task, the data itself may well have a life which far exceeds that of the hardware or software being used. In view of this input of effort, it is vital that the data secured has a high degree of utility, and this will only be achieved if the data is of the highest quality. There are many ways to ensure data accuracy and quality, and we recommend a study of Goodchild and Gopal (1989) or Thapa and Bossler (1992) for overviews. It is also important to note that attempts are being made by GIS communities in many countries to define and adopt certain standards for the collection and transference of digital data. Standards have already been drawn up in many countries and these will have an enormous impact, not only in helping to ensure data quality, but also in providing cost savings by avoiding the data conversions which would be necessary if data were transferred in a variety of formats. Thus, in establishing any databases, it is essential that consideration be given as to whether or not there are certain criteria which should be followed if it is intended that the data should either be available for transfer to other databases or GIS's, or if it is to be utilisable by specific GIS's. Further details on standards in GIS data can be found in Sowton (1992) and Cassettari (1993) and, recognising the importance of this issue, the whole issue of the journal “Cartography and Geographic Information Systems” for July, 1994 (Vol.21, No.3) was devoted to the subject of standards.

4.2 The Structure of Spatial Data

Data structuring is the actual method adopted for organising and storing data in a database. Essentially this means that, to be of value to a GIS, data must be processed and arranged in a database in such a way as to be meaningful and consistent to a wide range of users both in the present and in the future. For the GIS to function it requires the input, in digital form, of two sorts of information: (i) where an object is located on the map, and (ii) what an object on the map represents. These two sets of information make up the two main kinds of databases which must necessarily be built up as part of a GIS - they are commonly referred to as graphical (containing geometric or spatial information) and non-graphical (containing attribute or feature code) databases. Although in practice some GIS's store the graphical and the attribute data together in the same database, in this section we shall only be considering the structuring of the spatial data; the structuring of non-spatial data is examined in Section 4.3. Further information on the importance of data structuring is given in Burrough (1986), Star and Estes (1990), Martin (1991), Bernhardsen (1992) and other basic GIS texts. Rackham and Gower (1993) discuss an important model which describes all the characteristics (or specifications) which data should adhere to if it is going to be of optimum use in a GIS, i.e. in terms of data content, classification methods, data structure, data capture and update, data presentation and data quality. A data inventory method has recently been proposed for the development of fisheries GIS applications, i.e. within a West African marine fisheries GIS being developed by the FAO (Taconet and Bensch, 1995).

From our brief discussion above on modelling, it is clear that the GIS must have some way of recognising where any data is referring to. An easy way to conceive of how this is accomplished is to first understand that in essence all geographic or spatial entities, i.e. the symbolism that has been created on the map, may be subdivided into only three categories:

i)   Points. These can be imagined as dots on a map. The dot may represent anything which is located in one place, e.g. a port, a dock, a hatchery, etc. It will immediately be noted that whether or not a feature can be represented as a point depends on the scale of the map on which it may be shown. Thus, on a small scale map, e.g. 1:1 000 000, a town will be shown as a point, but the same town would not be a point on a 1:10 000 map.

ii)   Lines. These may represent any linear feature such as a road, railway, stream or a fence. At most mapping scales these features will retain their linear form, though, as with points, the degree of detail and generalisation will vary with scale. In GIS terminology, the point at which two lines (also called segments, arcs, chains or links) intersect is called a node.

iii)   Areas. These represent 2-D unitary blocks of surface space. In GIS or spatial analysis jargon they are referred to as polygons. They may be EEZ's, lagoons, mangroves, lakes, etc. When shown on maps at a very small scale these features may also eventually become points.

Obviously the mapped objects themselves will also have another dimension, that of height, and indeed data on this is frequently compiled and stored. But for GIS and mapping purposes, physical output from the system can only be in two dimensions, although it is possible to display 3-D looking images on the computer screen.

A GIS functions by having an internal coding system which utilizes various ways of allocating location to these three types of spatial entity. Thus points may be readily located by using a co-ordinate referencing system; linear features may be represented by breaking their “stretches” down into straight sections and then allocating co-ordinates to the nodes at the ends of each stretch, and areas may utilise co-ordinates to locate the boundaries (corners) of enclosed polygons. When geo-referencing, it is important to the GIS software that a unique identifier is assigned to all data, i.e. an identifier which can be recognised by its pre-programmed label or co-ordinate, and which can be linked to the data in an attribute database. We can now give some examples of geo-references which are acceptable and commonly used.

a)   Topographic grid references. All official Ordnance Survey (topographic) maps have a system of grid referencing. This is often based on the Universal Transverse Mercator (UTM) system, but other map projection systems may also be used. So for any point feature, or intersections of linear features, or the corners of areas, etc., a co-ordinate can be given. The accuracy of this point reference will be a function of both the mapping scale and the task being carried out. A commonly used variation of grid referencing is the use of latitude and longitude co-ordinates.

b)   Post-codes or ZIP codes. These identifiers usually refer to an area as delimited for postal servicing (Figure 4.2). The hierarchical nature of many post codes makes it possible to analyse spatial data at differing resolutions.

c)   Street names. These are easily allocated and identifiable, but data can only be aggregated to street level, which may sometimes be inconvenient for mapping purposes. Obviously other linear features may also have names, such as motorway numbers or river names.

Figure 4.2 Example of the National Zip Code System Used in the USA (after Robinson et al, 1995)

d)   Other named areal units. Areal unit names may vary progressively in size from villages, towns and cities to states and countries, or they can obviously refer to other non-urban features such as lakes, forests, seas, etc. In many cases information may relate to areas which have been created (and labelled) for special purposes such as conservation areas, natural vegetation zones, sales force areas, divisions of the sea, fishery statistics areas, etc.

e)   Census division names. Many countries have divided their area, for the purposes of gathering census and other social or economic data, into a hierarchy of recognised ward or district areas.

Having established that there are the three types of spatial data (points, lines and polygons), and that these can be georeferenced in various ways, it is now important to show how these features can best be structured in ways that the GIS software will understand.
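By way of illustration, the sketch below (in Python, with all identifiers and co-ordinates hypothetical) shows how each of the three spatial entity types might be held as a geo-referenced record carrying a unique identifier:

    # A hypothetical representation of the three spatial entity types, each
    # carrying a unique identifier which an attribute database can also use.
    port = {"id": 4100, "type": "point",
            "coords": (31864, 68943)}              # a single co-ordinate pair

    river = {"id": 4200, "type": "line",           # a string of co-ordinates
             "coords": [(31864, 68943), (35120, 70412), (42795, 98052)]}

    lagoon = {"id": 4300, "type": "polygon",       # boundary closes on itself
              "coords": [(10000, 10000), (20000, 10000),
                         (20000, 20000), (10000, 10000)]}

    for feature in (port, river, lagoon):
        print(feature["id"], feature["type"])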

The most successful data structures are those that offer the greatest flexibility in the various manipulations and analyses which will be described in Chapter 6. There are two basic data organisational structures (also called modes, models or formats) which GIS programmes use for spatial data: (i) the vector mode and (ii) the raster mode, though Cassettari (1993) gives information on other ways of structuring data. A simple comparison between the vector model (A) and the raster model (B) is shown in Figure 4.3.

Figure 4.3 Comparison Between the Vector Model (A) and the Raster Data Model (B)

4.2.1 The Vector Data Structure

In the vector data structure points, lines and polygons are all recorded in terms of their geographic x and y co-ordinates. The vector data structure is thus concerned with boundaries. In practical terms, this means that the person operating the GIS will, for any particular GIS task, have instructed the software as to the possible extremes of the map, i.e. a geo-reference for the top right hand corner and bottom left hand corner. The detail of the geo-reference in terms of its measurement precision, e.g. grid references to four decimal places, will determine the accuracy of the finished map. When the particular mapping details (co-ordinates) are then entered into the graphical database, they must have geo-references which are within the extremes of the map as programmed. Obviously, for the drawing of any particular map, many thousands of co-ordinates may be necessary. Together with the co-ordinate data, instructions are entered on which points along a line are connected or unconnected. These instructions will trigger “pen up” or “pen down” functions during map drawing. To obtain perfect curves or parabolas which, if drawn in detail, could involve the entry of hundreds of co-ordinates, it is possible simply to enter any curve equation which the GIS recognises, or spline functions can be used (see Burrough, 1986 or Laurini and Thompson, 1992, for details).

The actual vector graphical database which is built up to store the locational information on the mapped entities (points, lines or polygons) will be in a coded digital format. The allocation of unique identifiers to mapped objects will often be based on a hierarchical coding system as shown in Figure 4.4. Here we can see that there are major object groups and within each of these there may be various sub-category levels. The database itself then might consist of a string of numerals, one portion of which might read:

-541004831864689434279598052-5

This would be intelligible to the GIS software as:

-5    = start of the drawing sequence
4100  = unique identifier
48    = line thickness to be drawn
31864 = first Easting co-ordinate
68943 = first Northing co-ordinate
42795 = last Easting co-ordinate
98052 = last Northing co-ordinate
-5    = end of line sequence - start of next sequence

There would usually be a separate string of numerals for each object type on the map and all object types would need to be registered to a similar geo-referencing system. This particular type of numeric string could clearly be used to produce a mapped image which was composed of lines, points and polygon boundaries. It is normal for all the co-ordinates in one database to refer to only one class of objects, e.g. coastal boundaries, or roads, or rivers, etc. Each of the databases can then be used to produce a single mapped “layer” and several “layers” may be superimposed to produce a single vector map (Figure 4.5).
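To make the coding just described concrete, the short sketch below (in Python; a simplified decoder of our own devising, not the internal format of any particular GIS package) splits the numeral string into the fields listed above:

    # Decode the fixed-width numeral string shown in the text.
    record = "-541004831864689434279598052-5"

    assert record.startswith("-5") and record.endswith("-5")
    body = record[2:-2]                      # strip the start and end markers

    identifier = body[0:4]                   # 4100  = unique identifier
    thickness  = body[4:6]                   # 48    = line thickness
    easting_1  = body[6:11]                  # 31864 = first Easting
    northing_1 = body[11:16]                 # 68943 = first Northing
    easting_2  = body[16:21]                 # 42795 = last Easting
    northing_2 = body[21:26]                 # 98052 = last Northing

    print(identifier, thickness,
          (easting_1, northing_1), (easting_2, northing_2))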

Figure 4.4 Numerical Coding Allocated to Vector Graphical Data in Order to Identify Different Object Categories

Figure 4.5 A Model Showing the Layers Used to Construct a Basic Vector Coastal Zone Map

The vector data structure can be subdivided into two models. When vector data is in a crude form it might simply consist of strings of lines which follow the co-ordinates which have been entered into the database. The lines might be unconnected, they might cross one another, common boundaries might have been registered twice and there is generally no logical structure to the data. This is called “spaghetti” data. Although it can be efficiently presented on the screen or printed out, i.e. because its co-ordinates are recorded in an attribute table, it may be of little use to a GIS because the relationship between points, lines and polygons is unknown. For geographical search or analysis, the second vector model must be used, i.e. the topological model (or “intelligent vectors”). A vector topological data structure is shown in Figure 4.6. This structure is concerned with establishing the location of objects with respect to each other, by defining connectivity, adjacency and containment. In topology, end points and the intersection of lines are recorded as nodes, the lines are links between the nodes, and the enclosed areas defined by a chain of links are recorded as polygons. The topological structure allows spatial relationships to be explicitly defined in an attribute database as per the topological encoding shown in Figure 4.6. Once topology has been established for any mapped layer it is then easier to update the data, and it means that information storage requirements are less since data on shared lines or nodes does not need to be entered twice. It also means that many true GIS manipulation or analytical functions can then be performed.
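The essence of the topological model can be illustrated with a minimal sketch (the names and co-ordinates are assumed; real topological tables, such as those in Figure 4.6, hold more detail). Each arc records its end nodes and the polygons to its left and right, so that adjacency and connectivity can be queried directly:

    # Arc-node topology: nodes, arcs (links) and the polygons they bound.
    nodes = {1: (0, 0), 2: (10, 0), 3: (10, 10), 4: (0, 10)}

    arcs = {
        "a1": {"from": 1, "to": 2, "left": "outside", "right": "P1"},
        "a2": {"from": 2, "to": 3, "left": "outside", "right": "P1"},
        "a3": {"from": 3, "to": 4, "left": "outside", "right": "P1"},
        "a4": {"from": 4, "to": 1, "left": "outside", "right": "P1"},
    }

    def neighbours(polygon):
        """Return the polygons which share an arc with the given polygon."""
        out = set()
        for arc in arcs.values():
            if polygon in (arc["left"], arc["right"]):
                out.add(arc["left"] if arc["right"] == polygon else arc["right"])
        return out - {polygon}

    print(neighbours("P1"))                  # -> {'outside'}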

4.2.2 The Raster Data Structure

This data structure is concerned not with boundaries but with the space between boundaries, i.e. all areas of the map must be allocated an attribute or a value for this attribute. Boundaries are only implied as lying where value changes occur. The basic raster structure, covering the same “mapped” area as shown in Figure 4.6, is shown in Figure 4.7. It is sometimes called the grid model because data is stored in a matrix of cells (which themselves may be called pixels). These cells are usually square but they may be rectangular, triangular or hexagonal, or indeed any regular shape which is capable of tessellation. The raster data structure is favoured for mapping where the main objects dealt with are spatially extensive areal units, i.e. rather than linear or point units.

It is clear that the resolution of objects depicted using this data structure will depend entirely upon the size of cells used. A comparison of Figures 4.6 and 4.7 makes it clear that use of the raster format may produce maps which have a very crude resolution, but which require little computer storage space. Also, since each cell can only record one value per map theme, it is important to remember that this value will either be the “average” or the modal value of all variations within a cell, or it may be the value occurring at the central point of the cell. The importance of selecting an appropriate cell size into which to divide any mapped area has been outlined by Valenzuela and Baumgardner (1990), and this size will be a reflection of several factors such as the accuracy of the original data, the purposes of the mapping exercise, and the availability of computer disk storage space.

Geo-referencing, and hence location, in the raster structure is by means of simple alphanumeric codings for vertical and horizontal cell columns and rows. Each code will usually correspond to an object on the map, although it can also correspond to a different colour shading or reflectance value, i.e. as per the pixel dot shadings which make up a newspaper picture or the reflectance values which are obtained via a satellite RS image. Like the vector model, raster maps are also built up in thematic layers. However, in the raster structure it may be necessary to have far more layers - this is because there is only one value per pixel and therefore this value cannot contribute to a detailed attribute table.

Using the raster structure it is essential to record some value for every pixel (cell). This can lead to very large data storage requirements, especially if a fine pixel resolution is used. For instance, a single Landsat Thematic Mapper image contains about 35 million pixel values. In order to overcome this problem, two main data compression techniques have been devised:

(a) Run length encoding. Figure 4.7 showed how run-length encoding has been applied to the map shown. Since many adjacent cells share the same value, for each row it is only necessary to specify a cell value and the column numbers where that value begins and ends (a short coding sketch is given after item (b) below).

Figure 4.6 A Simple Vector Map having a Point, Lines and a Polygon With Their Associated Topological Encoding

Figure 4.7 A Simple Raster Map Plus the Run-Length Encoding Structure Used for Data Storage

(b) Quadtrees. This technique is based on the successive subdivision of the area under study into smaller and smaller quadrants, i.e. according to whether or not areas of a similar value wholly fill a quadrant. The lowest level of subdivision is a single pixel. Figure 4.8 (b) shows the successive subdivision of the stippled area in 4.8 (a). This subdivision is typically represented by a quadtree, as in Figure 4.8 (c). Here it can be seen that there are links and nodes in a directed tree starting from the root node. From each node there are four “edges” representing the four quadrants of N.W., N.E., S.W. and S.E. Sub-quadrants only emanate from nodes which are shown as being sub-divided in the raster map, i.e. data only has to be saved for cells which have the “value” of the layer being coded. Savings of at least 50% in data volume are possible via the use of quadtrees. Bonham-Carter (1994) provides an interesting comparison between a raster and vector map in terms of the amount of digital storage space required (Figure 4.9). Thus map A is a raster image and, given that 1 byte equals 1 pixel, it requires 709 042 bytes of storage. The run-length encoded version requires 21 903 bytes, and the quadtree version 19 473 bytes. Map B, the vector version, requires 17 890 bytes.
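A minimal sketch of the run-length encoding described in (a) above, storing each run as a (value, first column, last column) triple:

    # Run-length encode and decode one raster row.
    def encode_row(row):
        runs, start = [], 0
        for col in range(1, len(row) + 1):
            if col == len(row) or row[col] != row[start]:
                runs.append((row[start], start, col - 1))
                start = col
        return runs

    def decode_row(runs):
        return [value for value, first, last in runs
                      for _ in range(first, last + 1)]

    row = [0, 0, 0, 7, 7, 7, 7, 0, 0]      # a hypothetical row of pixel values
    print(encode_row(row))                 # [(0, 0, 2), (7, 3, 6), (0, 7, 8)]
    assert decode_row(encode_row(row)) == row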

Figure 4.8 Three Stages in the Construction of a Quadtree Data Encoding Structure (from Burrough, 1986)

Figure 4.9 Comparison of a Raster Map (A) With a Vector Map (B) (see text for details)

The advantages and disadvantages of each data structure are shown in Table 4.1. Developments in GIS are now at a stage where all sophisticated GIS packages are able to handle both raster and vector processing, and it is likely that the vast majority of data inputs in the future will be in raster format.

Table 4.1 A Comparison of Raster and Vector Data Models

RASTER MODEL

Advantages

1. It has a simple data structure.
2. Overlay operations are easily and efficiently implemented.
3. Scanning technologies can supply huge quantities of data cheaply.
4. Image processing techniques produce data for integration to GIS in a raster format.
5. Area and polygon analysis is simple.
6. Overlaying and merging are easily performed.
7. The technology is cheap and in the future it will have greater cost advantages.
8. It is well suited to subdividing spatially continuous variables.

Disadvantages

1. The sheer volume of data to be stored and handled can be very high.
2. There can be a serious loss of detail with larger pixel sizes (poor resolution).
3. Final maps can be fairly crude, especially those produced on cheaper GIS software.
4. Linear type analysis is more difficult.
5. Topological relationships are difficult to represent.

VECTOR MODEL

Advantages

1. It has a relatively compact data structure so storage requirements are less.
2. Features can be accurately located.
3. The topology can be completely described with network linkages.
4. Very small features can be shown and all features can be accurately drawn.
5. Data about individual features can easily be retrieved for updating or correction.
6. Linear type analyses are easily performed.

Disadvantages

1. It has a more complex data structure.
2. Overlay operations are difficult to implement.
3. The representation of high spatial variability is inefficient.
4. Manipulation and enhancement of digital images cannot be effectively performed.
5. Data capture can be very slow.
6. Area or polygon analyses are difficult.
7. This is generally a more expensive data structure in terms of data capture and software purchase.

We should conclude this discussion on the structuring of spatial data by mentioning the third dimension. To build up a structured database which contains “altitude”, or for marine applications “depth”, triangulated irregular networks (TIN's) are used. These consist of a series of non-overlapping triangles, each defining a flat surface, which completely cover the topographic surface (Figure 4.10). Each vertex of a triangle is encoded with its location and has a height associated with it. Given this information, the TIN can be reproduced as a digital elevation model in 2.5-D, the detailed resolution of which depends upon the accuracy of the original TIN. To model volumes in a structured way, a geo-referencing system is needed which encodes in x, y and z axes. A way of doing this is to extend the raster concept of using an array of cellular pixels so as to model the added dimension, i.e. a 2-D square becomes a 3-D cube. These cubes have been called voxels (volume elements). Geo-referencing and attributes can be assigned as in raster data structuring. Gargantini (1989) describes work on “octrees” which, just as quadtrees compress pixel data, provide a way of structuring data held as voxels. Clearly, the use of voxels will eventually be essential in much marine fisheries GIS work, but to date they have mostly been used in the geological GIS field.
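The two tables which a TIN requires can be sketched as follows (vertex positions and depths are hypothetical); note that shared vertices are stored only once:

    # A vertex table (x, y and depth) and a triangle table whose rows
    # reference three vertex identifiers.
    vertices = {
        1: (0.0, 0.0, -12.5),
        2: (10.0, 0.0, -14.0),
        3: (10.0, 10.0, -9.5),
        4: (0.0, 10.0, -11.0),
    }
    triangles = [(1, 2, 3), (1, 3, 4)]     # two flat facets covering the area

    def facet_mean_depth(tri):
        return sum(vertices[v][2] for v in tri) / 3.0

    for tri in triangles:
        print(tri, round(facet_mean_depth(tri), 2))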

4.3 Building Non-Graphical GIS Databases

It is important to commence this section by having a clear idea of what a non-graphical database is. A database may be defined as “a comprehensive collection of related data stored in logical files and collectively processed, usually in tabular form” (Bernhardsen, 1992, p4). From the GIS perspective there are two types of non-graphical databases:

(i) Those databases which may have been compiled as a tabular listing of almost anything.

(ii) Those databases which hold the attribute information relating to specific graphical databases.

We can examine them together since there are no fundamental differences in their structures. A database itself forms the top of a structured hierarchy which consists of fields, records, files and databases. To exemplify this, it can be envisaged that a database could be established on fish landings at a specific port. The database could be organised so that each row (or line) would be a record of landings from each fishing vessel. For these records there would have to be a large number of columns, each of which represented different fields. These fields would each need a title such as “date”, “name of vessel”, “type of vessel”, “fish species landed” (with perhaps a column for each species), etc. At the end of the recording of one day's landings at the port, a file will have been created. All the files for perhaps a year would constitute a “fish landings database” for the given port. Presumably all the other fishing ports in a country or region would be keeping similarly structured records. Figure 4.11 shows an actual file sheet for recording ichthyoplankton. Fifteen different “fields” are shown and there is room on the file for 13 records.
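The fields/records/files hierarchy just described might be set up as in the sketch below (an illustration only, using Python's built-in sqlite3 module; all field names are assumptions):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE landings (
        landing_id  INTEGER PRIMARY KEY,   -- unique identifier per record
        date        TEXT,
        vessel_name TEXT,
        vessel_type TEXT,
        species     TEXT,
        tonnes      REAL)""")

    # Each row is one record; a day's rows form a file; all the files for
    # one port over a year form that port's fish landings database.
    con.execute("INSERT INTO landings VALUES (1, '1995-06-01', 'Stella Maris',"
                " 'trawler', 'hake', 3.2)")

    for row in con.execute("SELECT vessel_name, species, tonnes FROM landings"):
        print(row)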

The construction of non-graphical databases can be the most expensive, problematic and time consuming aspect of implementing a GIS. Since the data in these databases represents the non-spatial characteristics of objects which could be mapped, such as the name, age, function, height, etc. of the object, the data will be largely composed of alphanumeric text. Much of it may exist as stored data which has been collected by the various primary data collection methods, e.g. questionnaires, surveys, etc., or which has been collected from various secondary sources. Most of the attribute data which is directly tied to the graphical data will have been entered during the digitising process (see Section 4.5.1). The range of non-graphical data is almost infinite. At a very local scale there might be a database covering local retail fish markets, kept and maintained by a local council, whereas, at a different scale, a world database might list all the fisheries statistics for each country, as kept by the FAO.

Figure 4.10 A Typical Triangulated Irregular Network (TIN) and the Type of Topological Storage Tables Required

Figure 4.11 An Ichthyoplankton Data Recording Sheet (File)

Non-graphical data can be entered via a keyboard into a conventional database, e.g. an EXCEL spreadsheet or a commercial database package such as Oracle or dBase, or in some cases the GIS software package will contain its own internal database system. It is important that the input format used throughout a single database is identical, e.g. the same number of fields, widths of fields, the same alphanumeric coding structure within each field, etc. Again there would be a hierarchical attribute coding structure so as to establish levels of object definition, as was shown in Figure 4.4. An alternative method of entering non-graphical (usually tabular) data into any database which is readily accessed by a GIS is via the use of Document Image Processing (DIP). Here paper based documents are scanned (see section 4.5.2) and the data obtained is stored on some form of electronic storage media (disks or CD-ROM), from which it can readily be retrieved when desired.

It is of fundamental importance that each record in the attribute database contains a unique identifier, and that this identifier is one that can be recognised by each entry in the graphical database. This will be the only method by which the GIS can link any two sets of data. Frequently, the attribute database will have been compiled with no intentions of it being used in a GIS. In this case it will invariably be essential to add a new “unique identifier” field to the database. The name, grid reference, or perhaps postal code used will need to be in accordance with those existing on any of the graphical data which is to be used. Attribute data may be analysed or queried by conventional methods which do not require reference to location, and attribute databases can be linked to one another by a common unique identifier which is not necessarily the geo-reference. If any useful data is found which is not held in a convenient tabular form, then data conversion packages can often be used to make the data accessible to a GIS.
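This linkage via a unique identifier can be shown with a minimal sketch (all identifiers and values hypothetical):

    # The graphical database holds geometry; the attribute database holds
    # descriptions; the shared identifier is the only link between them.
    graphical = {4100: {"type": "point", "coords": (31864, 68943)}}

    attributes = [
        {"id": 4100, "name": "Port A", "annual_landings_t": 12500},
    ]

    for record in attributes:
        feature = graphical.get(record["id"])   # the join on the identifier
        if feature is not None:
            print(record["name"], "at", feature["coords"],
                  "-", record["annual_landings_t"], "tonnes per year")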

To conclude this section it might prove useful to mention an actual example of a general fisheries database, i.e. one that is not a specific GIS database. The NAN-SIS software is basically a menu-driven database package which is integrated with various routines, giving it the capacity to perform a range of analyses relevant to trawl fisheries surveys (Stromme, 1992). The routines include retrieval, listing, sorting, querying and statistical analyses at different levels. The package has been developed by Norwegian scientists over the past 18 years, as part of the Dr. FRIDTJOF NANSEN research programme. Figure 4.12 illustrates the main section of the database. Note that the data held may be both numeric and alpha-numeric. From the GIS viewpoint it is important that this database contains geo-referenced information giving the location of the trawl start point and the length of haul in nautical miles, i.e. allowing any data collected to be subsequently mapped. Do Chi (1994) points out that NAN-SIS can be linked to many other specialist packages for the importing or exporting of data. These packages include several major GIS's including ARC/INFO and IDRISI (see section 5.4.2).

4.4 Metadata

With the potential development of a huge array of both graphical and non-graphical databases, there will increasingly be a need to keep some kind of record of these. This record will be a means by which background information can be kept on the nature of the data, the sources from which it was derived and the overall quality of the data and data sources. It is all this information which constitutes metadata or meta-information. Table 4.2 shows the types of information which need to be stored in a meta-database (Cassettari, 1993).

Table 4.2 Information Which Needs to be Stored in a Meta Database

* Sources of spatial information.
* Accuracy, quality and currency statements for data sources.
* Explanations of how source data were used to compile the dataset.
* Definitions of attributes and the rules by which they have been coded.
* Rules and procedures adopted for data capture.
* Results from geometric accuracy tests.
* Types of analysis procedures performed on the original data and the constraints and quality of the results.
* Rules for the display and cartographic representation of data.
* Spatial and temporal coverage of any data.
* Methods by which further data can be transferred.

It is likely that some of this data will in fact be stored in an attribute database, e.g. a dataset giving results of a fishery survey would almost certainly give the date when the survey was performed. However, most of the information shown in Table 4.2 is distinctive. The existence of a meta-database will greatly help in managing the data side of GIS activities. For instance, by consulting the metadata, a programme of re-surveys can be instigated or more up-to-date map sources can be sought and obtained. Instructions can also be passed on to data gatherers as to the format for gathering additional information.
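One possible shape for a single meta-database record, covering some of the headings in Table 4.2, is sketched below (the field names are assumptions of our own, not a recognised standard):

    metadata = {
        "dataset":         "port_landings_1995",
        "source":          "port authority returns, paper forms",
        "accuracy":        "landings recorded to the nearest 0.1 tonne",
        "currency":        "compiled 1995-12-31",
        "capture_method":  "keyboard entry, double-checked",
        "attribute_rules": "species coded by a hierarchical species list",
        "coverage":        {"spatial": "single port", "temporal": "1995"},
        "transfer":        "flat file export on request",
    }
    for key, value in metadata.items():
        print(f"{key:15}: {value}")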

Figure 4.12 The Structure of the Main File in the NAN-SIS Database

4.5 Capturing Mapped Data

Much of the spatially related data which will have been assembled for GIS input, i.e. the maps, photographs or other images, will not be in digital format. Conversion to this format is performed in one of two major ways: (i) digitising and (ii) scanning. Both of these processes are essentially automatic ways of structuring the mapped data (as we have described in section 4.2) so that it can be “read” by the GIS software. Though the rest of this section describes these two capture methods in some detail, it is important to note that it is increasingly the case that pre-digitised mapped data can be purchased, either as ready made data sets or as a customer specified mapping task performed by a specialist digital mapping bureau. The advantages and disadvantages of “in house” versus “agency” data capture are discussed in Clegg (1993).

4.5.1 Digitising

This is the task of converting hardcopy map or aerial photograph outlines into a digital format so that the outlines can be displayed on the screen. A common, and comparatively inexpensive, way of doing this makes use of a digitiser. Digitising as a method of outline capture is useful if the purpose is to capture user defined mapping features, or if you wish to add linear features to an existing map. A digitiser consists of a flat, non-conducting surface into which is embedded a cartesian grid of fine wires. These wires are energised and they allow wire intersections to be resolved to a typical accuracy of 0.01 mm. Attached to the digitising table (or smaller digitising tablet) is a tracking pen (or cursor or puck) featuring a magnifying glass fitted with cross hairs, which can be moved around the digitiser, recording its position relative to the embedded wire grid (Figure 4.13). A computer is required for viewing digitised work and for recording the digital data captured. Any map or photograph can be fixed to the digitiser surface, and the operator simply selects an outline to follow with the cross hairs on the cursor. As the line is followed, one of the buttons on the cursor is continuously clicked so as to record the shape of the line (or the position of a point) as a string of spatial co-ordinates. The cursor may be equipped with up to 16 buttons, which allow for other inputs or operations such as adding identifiers (to link the object being digitised to the attribute data) or for working in stream mode (where the cursor cross hair position is automatically recorded at a pre-set time or distance interval).

Digitisers vary in size from small “tablets” at A4 size, costing about US$500, to larger than A0 size, i.e. 1.0 × 1.5 metres, at US$4 000 plus. Some have background illumination to make line following easier. Most can be linked directly to the computer monitor. This function is useful in that error detection via the zoom facility is easy, and obviously it is useful to know which lines have actually been recorded! The digital information obtained, which is typically vector boundary or network lines (or sometimes point data) with associated attribute names or other information, is either recorded to a floppy disk, or directly on to a hard disk if the digitiser forms part of an integrated GIS.

Figure 4.13 Simplified Diagram Showing the Principles of a Digitiser

Digitising is usefully organised so that data is captured in “layers”, i.e. with each layer representing a different type of feature, or theme, having its own file name. This allows for easy spatial analysis. Digitising itself can be a time consuming job if done accurately, especially in view of the fact that, to produce a satisfactory final map, it is usually necessary to do a great deal of error correction and other editing. An idea of some of the errors which can occur when digitising, and for which editing procedures must be applied, is given in Figure 6.1. Digitising costs can therefore represent a high proportion of a GIS operation. Additional problems are that huge data files can be created from a single map sheet, and manual digitising can only be performed for about four hours daily if accuracy is to be maintained. The degree of detail to which digitising is carried out will reflect the accuracy of the source data, the time and cost budgets available and the purpose for which the task is being carried out (Figure 4.14).

Over the last few years there has been a considerable move towards commercial digitising, whereby specialist firms either provide digitising services or provide ready made digital datasets, e.g. the major atlas makers, government mapping agencies and automobile associations are all selling such products. It is also possible to purchase a data maintenance contract whereby a specialist digitising bureau will agree to keep a firm's digital records up to date. A summary of the advantages and disadvantages of digitising as a data capture method is given in Devereux (1992a).

4.5.2 Scanning

Scanning is also a method for converting hardcopy mapped (or graphics or textual) outlines into a digital form, but unlike digitising, which relies on human judgement to select desired mapped features, scanning captures all of the information on a given sheet very quickly. A typical large topographical map measuring 1200 mm × 1200 mm, scanned at a resolution of 400 dpi, would take 30 minutes to scan in colour mode and produce an uncompressed file of about 340 MBytes. Scanners are ideal if the purpose is to capture all the information from a paper image or map, or if you wish to add information to an existing base map. Scanning is also a useful method of archiving existing maps.
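The storage figure quoted above can be checked with a little arithmetic, assuming one byte (one 8-bit value) per pixel:

    # 1200 mm square sheet scanned at 400 dots per inch, 1 byte per pixel.
    mm_per_inch = 25.4
    pixels_per_side = 1200 / mm_per_inch * 400      # about 18 900 pixels
    size_bytes = pixels_per_side ** 2               # one byte per pixel
    print(f"{size_bytes / 2**20:.0f} MBytes")       # -> 341, i.e. "about 340"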

Figure 4.14 Increasing Digital Simplification of the Coastline and Hydrology of Sardinia (from Robinson, 1995)

There is a range of scanners, all of which use photosensitive pick-up devices to register different grey or colour tones from the surface of a map (or any document). Hence they convert an analogue image into a digital image. Data is stored as a raster image via pixel values which vary with differences in light intensity. Most scanners are capable of 8-bit resolution, or 256 brightness levels. There are software programs which convert the raster image into a vector format. This may be necessary if topology is to be added to the scanned data. Some of the main scanning devices and methods available include:

i)   Flat-bed scanners having a photosensitive pick-up which is mounted on a beam allowing the pick-up to traverse the map along an x axis. Meanwhile the whole beam is able to move slowly along a track in the y axis direction (Figure 4.15 a). Resolution is typically 600 dots per inch (dpi). Prices start from about US$1 000 for an A4 mono version.

ii)   A drum scanner uses the same process as the flat bed scanner except that the map is mounted on a drum which slowly revolves beneath a fixed track upon which the pick-up rides backwards and forwards in the x direction (Figure 4.15 b). On both flat-bed and drum scanners the step size between each row controls the cell or pixel size, which is typically 0.025 to 0.050 mm. These scanners are at the top end of the scanner price range, and have resolutions of 400-800 dpi.

iii)  The continuous feed scanner (Figure 4.16) is a further scanner which operates on similar principles. Here the map or document to be scanned is fed past a fixed row of sensors each of which records an image intensity. Accuracy is not yet as good as on flat-bed or drum scanners, hence they are sometimes referred to as “document scanners”. They can produce mono or colour output at resolutions up to 400 dpi. Prices for A4 models again start at about US$1 000.

iv)  Small hand-held, or desk-top, scanners are now available. Some of them can produce true 24 bit colour at resolutions of 400 dpi. However, despite claims of sets of rollers for straight scanning and warning lights regarding scanning speed, care should be taken when using them for GIS work since distortions are almost inevitable given some “hand wobble” and variations in scanning speed. Prices start from just over US$100.

v)  Scanning is also possible via electronic video-digitising. Here a normal video camera captures images as collections of up to 512 × 512 pixels each containing a level of light intensity. This is a cheap form of data capture but the resolution is poor.

vi)  An automatic line scanner can be employed to do the job of a digitiser. Here a narrow laser beam automatically follows a line on the map until the line ends or a junction is reached when the operator must intervene to redirect the process. Greater accuracy can be obtained than from manual digitising and it can be faster, but it is user/time intensive compared with other scanning methods.

vii)  Drummond (1992) describes the process of “on-screen digitising”. Here the scanned image is displayed on a screen and the operator follows any desired lines on this image by the use of a refined screen cursor which can digitise in point or stream mode. Obviously the vector lines produced can be made “intelligent” by also capturing any attribute data.

viii)  Lever (1993) reports the impending use of imaging technology. This essentially means that scanners will be able to be pre-set so as to only capture data from a map which is of a given reflectance value, i.e. allowing separate layers to be built up based on the different colours of an analogue map.

There are a number of drawbacks to scanning. Scanners can be said to produce an “unintelligent” image since they have no means of differentiating between different types of map features, i.e. unlike digitising, they cannot add attributes. This problem can sometimes be overcome by scanning from the original colour print masters. Scanners also sense everything which is on the original map, including labels, stains, wrinkles, etc. This can mean that extensive editing is necessary, i.e. unless the map has been purposefully drawn. Obviously, scanning also produces very high data quantities which need expensive storage. Many commercial scanning companies now operate which can offer expertise and, given the cost of good scanners, these are frequently a sensible source for obtaining digital outlines. A summary of scanning devices and methods is given in Devereux (1992b) and Dickinson and Reid (1992).

Figure 4.15 Diagrammatic Representation of Flat-Bed Scanning (A) and Drum Scanning (B)

4.6 The Integration and Processing of Other Digital Data

In Chapters 2 and 3 it was made clear that a large amount of data was available which was already in a digital format. Most of this data was from RS or hydro-acoustic sources. Here we will be concerned with some of the main processes which might need to be performed on this raw data in order that it can be readily utilised by a GIS. These operations might be called post capture processing or more usually image processing. Some of the processes performed on aerial photographs were briefly mentioned in section 2.3.4.3 and Cassettari (1992) gives further details. Much of the processing can actually be performed as part of the existing software routines within the more sophisticated GIS packages, otherwise there will be specific software available which carries them out.

Since the use of existing digital data from sources such as RS, digital photogrammetry, hydro-acoustic surveys and aerial videophotography is likely to greatly increase over the next decade, integrated image based systems which link the necessary image processing with GIS are likely to develop. Edwards et al (1990) and Faust et al (1991) outline many of the benefits to be obtained, and Cassettari (1993) suggests several ways in which this integration might progress, e.g.

(a)  GIS software packages might include all the necessary functionality, although the majority of GIS users are likely to find many of the routines too complex.

(b)  Individual GIS software developers could take the modular approach, so that buyers would only purchase the functionality that they needed or understood. The modules would be transparently integrated with one another.

(c)  Present GIS software producers could enter arrangements with image processing software houses to ensure that their packages were fully compatible.

(d)  An open systems environment where multi-tasking can be operated (Figure 4.17). Here, through the use of windows, several displays could be made together, e.g. an RS image, a map and a photograph. The user can choose the data to be displayed and comprehensive menus help with any necessary processing.

Figure 4.16 An A0 Size Continuous Feed Scanner

Despite these positive approaches to the integration of RS digital data into a GIS, there are undoubtedly still many problems to be overcome before the requisite tasks can be routinely performed. For readers wishing to know more, the problems have been clearly set out by Estes (1992).

Figure 4.17 Integrated Spatial Information System Based on a Multi-Tasking Environment (from Cassettari, 1993)

The degree to which raw RS derived reflectance values need to be processed depends on the ultimate use of the digital data. Thus, for instance, if the images are simply to be used as a “back-cloth” or drape behind a vector foreground map, then minimal image processing is necessary, i.e. basic ortho-corrected imagery is required and for this the basic software is readily available. For those interested, Cassettari (1992) and Havercroft and Fox (1993) suggest ways of accomplishing this. But for other purposes it might be necessary to refine the image to an extent whereby a great amount of accurate detail can be extracted. Figure 4.18 gives a fisheries related example of the type of hardcopy output which can be obtained once processing has been performed. These images were produced by the Atlantic Centre for Remote Sensing, using water temperature data obtained from the NOAA Advanced Very High Resolution Radiometer (AVHRR) satellite sensor. The following pre-processing functions are not described in great detail since in practice they are best accomplished by image processing experts.

4.6.1 Radiometric Correction

The reflectance values captured by any RS system will suffer changes or distortions over time for a number of reasons, e.g. the presence of aerosols in the atmosphere, the effect of variable relief on reflectance, changes in the attitude of the sensing platform, changes in the angle of the sun, or slow changes in detector sensitivity. There are a number of radiometric correction measures which may be applied, each being based upon a different assumption or model, some of which are described in Butler et al (1988). The aim of the measures is to achieve uniformity of pixel value representations across the image.

Figure 4.18 Optimal and Sub-Optimal Habitat Conditions for Atlantic Salmon in the North West Atlantic Ocean

4.6.2 Geometric Correction

This pre-processing can also be necessary for a number of reasons, e.g. changes in the velocity, altitude or depth of the sensing platform, any oblique angles used in collecting the data, the earth's curvature and rotation, atmospheric refraction, satellite instability, etc. To effect geometric correction it is necessary to follow a logical sequence of processes which gradually ensure that the registration of the image is accurate. The level of accuracy can often be verified using a sufficient number of ground control points which are readily identifiable on both the digital image and a map. Geometric correction of a large surface area will need to be performed using different map projections, e.g. Mercator, Peters, Lambert Conformal, etc.
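One common step is the fitting of an affine transformation from image (pixel) space to map space using the ground control points; a minimal sketch using a least squares fit is given below (the co-ordinates are hypothetical):

    import numpy as np

    # Ground control points: (column, row) positions in the image...
    image_xy = np.array([[100, 100], [900, 120], [880, 860], [120, 840]])
    # ...and the corresponding map Eastings/Northings.
    map_xy = np.array([[31000, 69000], [39000, 69200],
                       [38800, 76600], [31200, 76400]])

    # Solve map = [col, row, 1] @ coeffs for the six affine coefficients.
    design = np.hstack([image_xy, np.ones((4, 1))])
    coeffs, *_ = np.linalg.lstsq(design, map_xy, rcond=None)

    residuals = design @ coeffs - map_xy            # the fit at the GCPs
    print("RMS error (map units):", np.sqrt((residuals ** 2).mean()))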

4.6.3 Image Display and Enhancement

There are a large number of functions which can be performed on digital data in order to meet particular requirements. These functions will enable the image to acquire specific characteristics which may be necessary to fulfil a special task. Table 4.3 briefly describes some of the main methods used to enhance satellite RS imagery. Further details can be found in Serra (1982), Niblack (1986), Jensen (1986), Gonzalez and Wintz (1987), Harrison and Jupp (1990) and Cracknell and Hayes (1991).

Table 4.3 Some Image Enhancement Processes Which May be Applied to Raw Satellite Sensed Digital Data

Contrast stretching: Reflectance values are changed, using a linear, uniform, gaussian or other stretch, so as to occupy the full value range.

Edge enhancement: Known or perceived boundaries, which do not show up clearly on the image, can be more clearly displayed.

Density slicing: The computer software assigns thresholds and colours to different classes of pixel values.

Spatial filtering: Low pass or high pass filters can suppress or enhance pixel values in an image.

Principal component analysis: This compresses multi-spectral data sets as an aid to image classification and pattern recognition.

Image classification: Here the software is “trained” to detect spectral signatures for one area, which have been ground truthed and are therefore known to be true, and then applies them across the image.

Change detection: Data sets collected at different dates or times can be registered and compared pixel by pixel to work out rates of change.

Digital mosaics: This is simply the “patchworking” of different images to produce one clear image of a large area.
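Two of the enhancements in Table 4.3, contrast stretching and density slicing, can be sketched very simply (here on a small hypothetical 8-bit band held as a NumPy array):

    import numpy as np

    band = np.array([[60, 62, 70], [80, 90, 95], [100, 110, 120]], dtype=float)

    # Linear contrast stretch: spread the values over the full 0-255 range.
    lo, hi = band.min(), band.max()
    stretched = ((band - lo) / (hi - lo) * 255).astype(np.uint8)

    # Density slicing: assign class numbers at chosen thresholds.
    sliced = np.digitize(band, bins=[75, 100])   # 0: <75, 1: 75-99, 2: >=100

    print(stretched)
    print(sliced)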

