

SECTION 4

4. SENSOR SYSTEMS

A sensor is a device that detects emitted or reflected electromagnetic radiation (EMR) and converts it into a signal that can be recorded and processed. Sensor systems may be divided into two major categories:

i) global acquisition systems, e.g., photographic or TV cameras, which record an entire scene instantaneously;
ii) sequential acquisition systems, e.g., radiometers, radars, lidars and sonars, which acquire line-by-line information of the scene. This second category is generally divided into:
 a) passive sensors, which record reflected or emitted EMR from natural sources;
 b) active sensors, which illuminate an object with their own source of radiation and record the “echo”.

4.1 Global Acquisition Systems

The global acquisition sensors commonly used for oceanographic studies are aerial, vidicon and underwater cameras. Underwater cameras are not considered in this manual.

4.1.1 Aerial cameras

These are one of the simplest forms of imaging system used in fisheries and ocean studies (refer to Figure 4.1). The detector is a photographic emulsion (film) which is sensitive to the visible or near-infrared parts of the electromagnetic spectrum. Cameras, films and photo interpretation are discussed in more detail in a companion training manual “Marine Resource Mapping: an Introductory Manual” (FAO Technical Paper 274).

Cameras have been in use for a long time and a great deal of knowledge has accumulated regarding techniques of image recording, image interpretation and data extraction. Cameras are less expensive and less cumbersome than other sensors and photographic materials are available world-wide. In addition, photography still produces superior resolution in comparison to electronic sensors.

One of the major disadvantages in the use of aerial cameras is the constraint imposed by adverse weather conditions. Also, photography is only operative within a narrow band of EMR (0.38 to 1.3 micrometres).

The quality of a photograph will depend on several interrelated factors: focal length; angle of view; scale; contrast; resolution; and film speed.

4.1.1.1 Focal length (f):

The distance between the centre of the lens and the focus is called the focal length and determines the size of the camera and the scale of the photograph. The focus of a convex lens is the point through which all refracted rays will pass. The image of a distant object is formed at the focus; the film has to be placed at that location, which is referred to as the focal plane.

4.1.1.2 Angle of view (d):

The angle of view of a lens is the angle between the rays that go to opposite corners of the film. The angle of view is also referred to as view angle, angle of field and covering power. The lens collects the light rays from an angle of view ranging from 45° to 60° (the normal range of angle of view) and projects them onto the film within the same angle.

Figure 4.1

Figure 4.1 Principal components of a single-lens frame camera. (After T.M. Lillesand and R.W. Kiefer, 1979)

4.1.1.3 Scale (s):

The scale of an aerial photograph, or of a map, can be expressed as the ratio of the distance between two points, as seen in the photograph, to the actual distance between those points on the ground. Because distances are difficult to measure accurately on a photograph, however, the scale of an aerial photograph is usually expressed as the ratio of focal length to the height of the camera above the ground. The scale (s) is equal to the focal length (f) divided by the height of the camera above the ground; the latter is determined by subtracting the height of the terrain above sea level (h) from the altitude of the aircraft (camera) above sea level (H). Hence,

s = f / (H - h)
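The scale relation s = f / (H - h) can be sketched numerically. The focal length, aircraft altitude and terrain height below are hypothetical illustrative values, not figures from the manual:

```python
def photo_scale(f_m, H_m, h_m):
    """Photo scale s = f / (H - h): focal length over height above ground.
    All arguments in metres."""
    return f_m / (H_m - h_m)

# Hypothetical example: 152 mm lens, aircraft at 3,200 m above sea level,
# terrain at 160 m above sea level -> camera is 3,040 m above the ground.
s = photo_scale(0.152, 3200.0, 160.0)
print(f"scale = 1:{round(1 / s)}")  # scale = 1:20000
```

The same function applies to any camera: a longer focal length or a lower flying height both yield a larger (finer) scale.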

4.1.1.4 Contrast:

Contrast is a measure of the relationship between the lowest and highest brightness levels (Emin and Emax respectively) of the photograph. The contrast of a photographic image may be referred to qualitatively as being “high” or “low”, or in terms of its contrast ratio (Emax/Emin). Images with low contrast are often referred to as “washed out” with monotonous, nearly uniform tones of grey. Low contrast may result from the following causes:

i) the object and its background having nearly uniform electromagnetic response;

ii) scattering of EMR by the atmosphere. This effect is more pronounced in the shorter wavelength (violet) portion of visible light;

iii) the camera or film lacking sufficient sensitivity to record the contrast of the scene.
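The contrast ratio defined above (Emax/Emin) is a simple quotient; a minimal sketch with hypothetical brightness readings:

```python
def contrast_ratio(e_min, e_max):
    """Contrast ratio Emax/Emin between the darkest and brightest
    brightness levels of a photograph."""
    return e_max / e_min

# Hypothetical brightness levels (arbitrary units):
print(contrast_ratio(10.0, 50.0))  # 5.0  - a reasonably contrasty scene
print(contrast_ratio(40.0, 50.0))  # 1.25 - "washed out", low contrast
```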

4.1.1.5 Resolution:

The quality of an aerial photograph is in part described in terms of its resolution or resolving power. This may be defined as the ability of an optical or photographic system to reproduce fine detail, expressed in terms of the greatest number of lines or cycles per mm which can be separated visually in an image or recorded on photographic material. The resolving power or resolution of a photograph is dependent on several factors:

i) lens resolution (optical quality): the ability of the lens to separate visually the greatest number of lines or cycles per mm;

ii) film resolution: the ability of the film to separate visually the greatest number of lines or cycles per mm;

iii) film flatness: the degree of flatness of the film held at the focal plane of the lens;

iv) rotational movement of platform: yaw, pitch and roll of the platform (i.e. the steadier the platform the higher the resolution);

v) optical quality of filters: parallelism of the filter surface, cleanliness of the filter.

The resolving power is usually measured by imaging a standard target pattern and by determining the spatial frequency in lines per unit length at which the image is no longer distinguishable.

4.1.1.6 Film speed:

Film speed is directly related to the light sensitivity of the film. It may be defined as being inversely proportional to the exposure required to produce some desired response. Speed values can be quoted in terms of photometric or radiometric units. Some of the standards used in measuring speed values are American Standards Association (ASA); Deutsche Industrie Norm (DIN); British Standards Institute (BSI).
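The ASA and DIN scales mentioned above measure the same sensitivity on arithmetic and logarithmic scales respectively; the standard conversion is DIN = 10·log10(ASA) + 1. A short sketch:

```python
import math

def asa_to_din(asa):
    """Convert an arithmetic ASA film speed to logarithmic DIN degrees
    using the standard relation DIN = 10*log10(ASA) + 1."""
    return round(10 * math.log10(asa) + 1)

print(asa_to_din(100))  # 21  (ASA 100 = 21 DIN)
print(asa_to_din(400))  # 27  (each doubling of ASA adds 3 DIN degrees)
```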

4.1.2 The Return Beam Vidicon camera

As a remote sensor, the vidicon camera stands second best to the aerial camera, which serves as the standard against which all global acquisition systems are compared. The Return Beam Vidicon (RBV) camera is able to produce synoptic images (i.e. all parts of the scanned area are seen by the detector at the same time) as does an aerial camera. The basic operating principle of a vidicon camera is similar to that of a television camera. A distant object is focused by a converging lens onto a photoconductive target. The image produced at the photoconductive target is held electrically until an electron beam scans the “frozen” image to read off the elements line by line. Having scanned the image, the target is cleared to receive a new image.

This type of sensor was present on the earlier LANDSAT satellites. It recorded an image of 185 km square every 25 seconds using three cameras on LANDSAT-1 and 2 and an image of 90 km square using two cameras on LANDSAT-3.

The RBV camera produces images with a very high spatial resolution compared to those of other electronic sensors and the image data can be transmitted to earth via radio signals.

4.2 Sequential Acquisition Systems

4.2.1 Passive sensors

The sensors in this category are called radiometers. They receive and record, line-by-line, the EMR reflected or emitted by the earth and the atmosphere (refer to Figure 4.2). Depending on the type of detector used, passive sensors can record different parts of the EMR within the ultraviolet to microwave wavelengths.

The following radiometers are commonly used in fisheries and ocean studies. They are described in Section 5.

Sensor   Name                                       Platform
MSS      MultiSpectral Scanner                      LANDSAT
TM       Thematic Mapper                            LANDSAT
AVHRR    Advanced Very High Resolution Radiometer   NOAA
HCMR     Heat Capacity Mapping Radiometer           HCMM
CZCS     Coastal Zone Colour Scanner                NIMBUS-7
HRV      Haute Résolution Visible                   SPOT

Figure 4.2

Figure 4.2 Components of a passive microwave signal. (After T.M. Lillesand and R.W. Kiefer, 1979)

Some categories of passive sensors include:

i)       Scanning radiometers: These sensors collect a single line of data by using a rotating mirror to “scan” the view perpendicular to the line of flight, e.g., a thermal IR sensor and the MultiSpectral Scanner (MSS) of LANDSAT (refer to Figures 4.3 and 4.4 respectively). The forward movement of the satellite or aircraft produces subsequent lines of data.

ii)       Push-broom radiometers: This type of sensor, e.g., the HRV of SPOT, has one or several arrays of detectors. A line of acquisition is seen instantaneously without any mechanical motion, which is a significant improvement on scanning radiometers (refer to Figure 4.5).

4.2.1.1 Spatial characteristics of passive sensors:

Passive sensors have two major spatial characteristics:

i)     Instantaneous Field of View (IFOV) (refer to Figure 4.6): This is defined as the angle (radians or degrees) over which the detector is sensitive to radiation. The MSS of LANDSAT-2 has an IFOV of 0.086 milliradians. By knowing the altitude of the satellite to be 920 km, it can be calculated that the picture element (pixel) represents an area on the earth surface which has dimensions of 80 m by 80 m (ground resolution cell) at the nadir point.

ii)    Swath width (refer to Figure 4.6): This is defined as the linear ground distance covered in the across-track direction. For a scanning radiometer this depends on the angular field of view (AFOV) or scanning angle; for example, the scanning angle of the LANDSAT-2 MSS is equal to 11.52° and, at an altitude of 920 km, results in a swath width of 185 km. For the push-broom radiometer, the field of observation is related to the size of the array; for example, the 6000 detectors of SPOT's HRV cover an angle of 4.13° and, at an altitude of 832 km, result in a swath width of 60 km.
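The two spatial characteristics above reduce to simple trigonometry: the ground resolution cell is the IFOV (in radians) times the altitude, and the swath is twice the altitude times the tangent of half the scanning angle. A sketch using the figures quoted for LANDSAT-2 and SPOT:

```python
import math

def ground_resolution(ifov_mrad, altitude_km):
    """Ground cell size (m) at nadir: IFOV (in radians) times altitude (in m)."""
    return ifov_mrad * 1e-3 * altitude_km * 1e3

def swath_width(afov_deg, altitude_km):
    """Across-track swath (km): 2 * H * tan(AFOV / 2)."""
    return 2 * altitude_km * math.tan(math.radians(afov_deg) / 2)

# LANDSAT-2 MSS: IFOV 0.086 mrad, scan angle 11.52 deg, altitude 920 km.
print(round(ground_resolution(0.086, 920)))  # 79 m (quoted as 80 m)
print(round(swath_width(11.52, 920)))        # ~186 km (quoted as 185 km)

# SPOT HRV push-broom: 4.13 deg array coverage at 832 km altitude.
print(round(swath_width(4.13, 832)))         # 60 km
```

The small discrepancies against the quoted values come from rounding in the published IFOV and scan-angle figures.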

Figure 4.3

Figure 4.3 Thermal IR scanning system. (After F.F. Sabins, Jr., 1979)

Figure 4.4

Figure 4.4 LANDSAT MSS orientation

Figure 4.5

Figure 4.5 General characteristics of a push-broom radiometer. (After T.E. Avery and G.L. Berlin, 1985)

Figure 4.6

Figure 4.6 The concept of Angular Field of View (AFOV) or scanning angle and Instantaneous Field of View (IFOV). (After T.E. Avery and G.L. Berlin, 1985)

4.2.1.2 Spectral and radiometric characteristics of passive sensors:

The spectral resolution of a sensor is its ability to differentiate the wavelengths of the electromagnetic spectrum. The radiometric resolution is its ability to distinguish different levels of intensity of EMR in a given spectral band.
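Radiometric resolution is usually expressed as the number of quantization bits: an n-bit detector distinguishes 2^n intensity levels. The MSS and TM bit depths below are the commonly quoted figures, given here as illustrative assumptions:

```python
def quantization_levels(bits):
    """Number of distinguishable intensity levels for an n-bit detector."""
    return 2 ** bits

# Commonly quoted figures (assumed here for illustration):
print(quantization_levels(6))  # 64  grey levels, e.g., LANDSAT MSS (6-bit)
print(quantization_levels(8))  # 256 grey levels, e.g., LANDSAT TM (8-bit)
```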

4.2.2 Active sensors

Active sensors (e.g., radar, sonar) are able to illuminate an object with their own source of radiation. The illumination will either induce an object to emit radiation (fluorescence) or cause it to reflect the sensor-produced radiation. Active sensors often are used when natural radiation in a particular band of the spectrum is not sufficient to adequately illuminate the target, i.e. the natural radiation is below the signal-to-noise threshold.

4.2.2.1 Echo-sounders and sonars:

Echo-sounders and sonars (Sound Navigation and Ranging) are based on the principle of directing acoustic waves at a target and receiving the reflected echo. The echo sounder transmits a fixed, vertical beam of sound whereas the sonar beam can be orientated. The main components of an echo-sounder and a sonar are: transmitter, transducer, receiver and display unit.

The function of the transmitter is to produce energy in the form of pulses of electrical oscillations. In the transducer, this electrical energy is converted to sound energy in the water and, conversely, the sound waves of the returning echoes are converted back to electrical energy. The receiver amplifies the weak electrical oscillations produced in the transducer by the echo so that they can be recorded on paper, displayed on a CRT (Cathode Ray Tube) or broadcast as an audible signal.
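Echo-sounding reduces to timing the returning echo: depth is half the round-trip time multiplied by the speed of sound in water. The 1500 m/s figure below is a typical assumed value, not a constant:

```python
def echo_depth(round_trip_s, sound_speed_ms=1500.0):
    """Depth (m) from the two-way travel time of an acoustic pulse.
    1500 m/s is a typical speed of sound in seawater (an assumption;
    the true value varies with temperature, salinity and pressure)."""
    return sound_speed_ms * round_trip_s / 2

print(echo_depth(0.1))  # 75.0 m for a 100 ms round trip
```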

4.2.2.2 Radars:

A radar is an active microwave sensor which uses radio waves to detect the presence of objects and to determine their range (position). This process entails transmitting short bursts or pulses of microwave energy in the direction of interest and recording the strength and origin of “echoes” or “reflections” received from objects within the system's field of view. The resolving power of the radar (its ability to differentiate between targets) is determined by the wavelength transmitted by the radar. Active microwave sensing is done in several wave bands which are designated by letters of the alphabet as indicated in Table 4.1. The transparency of the atmosphere to the microwaves or hyper-frequencies (refer to Section 2) allows a radar to acquire data regardless of the weather conditions. Microwaves penetrate clouds and are not scattered by haze or rain.
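Ranging with a radar works on the same timing principle as echo-sounding, with the pulse travelling at the speed of light rather than the speed of sound. A minimal sketch:

```python
def radar_range(round_trip_s, c=3.0e8):
    """Range (m) to a target from the two-way travel time of a microwave
    pulse propagating at the speed of light c (m/s)."""
    return c * round_trip_s / 2

print(radar_range(1.0e-4))  # 15000.0 m: a 100 microsecond echo delay
```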

Radars can be either imaging or non-imaging:

i)      imaging radars: Imaging radars display the radar backscatter characteristics of the earth's surface in the form of a strip map or a picture of a selected area. An example of an imaging radar is the Side Looking Airborne Radar (SLAR), which is carried on aircraft. This sensor scans an area not directly below the aircraft but at an angle to the vertical, hence the term “Side Looking” (refer to Figure 4.7a, b and c).

Figure 4.7a

Figure 4.7a Principle of operation of side looking radar. (After J.A. Richards, 1986)

Figure 4.7b

Figure 4.7b Propagation of one radar pulse (indicating the wavefront location at time intervals 1-17). (After T.M. Lillesand and R.W. Kiefer, 1979)

Figure 4.7c

Figure 4.7c Resulting antenna return. (After T.M. Lillesand and R.W. Kiefer, 1979)

TABLE 4.1

RADAR WAVELENGTHS AND FREQUENCIES USED IN REMOTE SENSING (After F.F. Sabins, Jr., 1978)

Band Designation      Wavelength (cm)   Frequency (megahertz; 10^6 cycles sec^-1)
Ka (0.86 cm*)         0.8 to 1.1        40,000 to 26,500
K                     1.1 to 1.7        26,500 to 18,000
Ku                    1.7 to 2.4        18,000 to 12,500
X (3 and 3.2 cm*)     2.4 to 3.8        12,500 to 8,000
C                     3.8 to 7.5        8,000 to 4,000
S                     7.5 to 15.0       4,000 to 2,000
L (25 cm*)            15.0 to 30.0      2,000 to 1,000
P                     30.0 to 100.0     1,000 to 300

* Indicates wavelengths commonly used in imaging radars.

To achieve a useful spatial resolution in an image of the ground from the altitude of a satellite would require an antenna with a length of several kilometres. Synthetic Aperture Radar (SAR) was developed to overcome this problem. SAR takes successive signals transmitted and received by a small real antenna and uses them to reconstruct (synthesize) the signal which would have been received had the antenna been several kilometres in length (refer to Figure 4.8). In addition, the reconstruction of the image of a moving object from SAR data involves the consideration of the Doppler effect (refer to Glossary of Terms).

Finer details of a target (i.e. greater resolution) can be seen in an image produced with microwaves of shorter wavelengths. For example, sea surface which appears smooth with L-band may not be smooth when sensed with X-band. In common with other EMR, microwaves are polarized into vertical and horizontal components.

At present, imaging radars have few applications in oceanographic studies, although intensive research has been carried out to measure the length and direction of ocean waves.

ii)     non-imaging radars: Unlike imaging radars, the non-imaging radars record a specific physical parameter. Examples of non-imaging radars include the radar scatterometer and the radar altimeter. A radar scatterometer measures the roughness of the sea surface, icebergs, etc., in a broad swath on either side of the spacecraft (refer to Figure 4.9). Measurements yield the amplitude of short surface waves that are approximately in equilibrium with the local wind and from which the surface wind velocity can be estimated. The radar altimeter uses a pencil-beam microwave that measures the vertical distance between the spacecraft and the earth. Measurements yield the topography and roughness of the sea surface from which the ocean geoid, surface current and average wave height can be estimated.

Figure 4.8

Figure 4.8 A synthetic aperture radar system. (After T.E. Avery and G.L. Berlin, 1985)

Figure 4.9

Figure 4.9 Scatterometer output from iceberg as a function of time for different angles of incidence. (After D. Harper, 1983)

4.2.2.3 Lidars (Laser-radars):

A lidar is an active sensor emitting and receiving light in the visible and near-infrared wavelengths. A laser (acronym for light amplification by stimulated emission of radiation) is a device for producing light by emission of energy stored in a molecular or atomic system when stimulated by an input signal. A lidar uses a laser to generate short, high-power light pulses. As the pulse passes through the atmosphere, back-scattered light is detected by an optical system and is electronically analyzed to provide a measurement of the intensity of light back-scattered by target constituents as a function of the distance from the sensor.

Due to physical limitations it has not yet been possible to include such sensors in a satellite payload. They are therefore limited to airborne missions. Two kinds of lidars have interesting applications in oceanographic studies: the bathymetric lidar and the fluorescence lidar.

i)      bathymetric lidar: This lidar, which is used for bathymetric studies, generates a blue-green signal and a near-infrared signal simultaneously. The near-infrared signal does not penetrate the water and is directly reflected by the sea surface and recorded by the sensor. The blue-green signal, in contrast, penetrates the water, is reflected by the bottom and reaches the sensor at a later time. The difference in time is a direct function of the water depth (refer to Figure 4.10).

ii)     fluorescence lidar: This lidar records the emitted fluorescent light induced by the interaction of the lidar transmitted blue- green light with the target. The fluorescence of the target is often unique and therefore provides a means of recognition. This instrument has been used to identify and quantify chlorophyll in water and also to identify and measure the thickness of marine oil slicks (refer to Figure 4.11).
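For the bathymetric lidar described above, the depth follows from the delay between the surface (near-infrared) and bottom (blue-green) returns, with the light travelling at c/n in water. This sketch assumes vertical incidence and a refractive index of about 1.33 for seawater:

```python
def lidar_depth(delay_s, c=3.0e8, n_water=1.33):
    """Water depth (m) from the delay between the near-infrared surface
    return and the blue-green bottom return. Light travels at c/n in
    water; n = 1.33 is an assumed refractive index, and the beam is
    assumed to travel vertically."""
    return (c / n_water) * delay_s / 2

print(round(lidar_depth(1.0e-7), 1))  # ~11.3 m for a 100 ns delay
```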

Figure 4.10

Figure 4.10 Principle of operation of airborne lidar bathymetric system. (After D. Harper, 1983)

Figure 4.11

Figure 4.11 Principle of operation of airborne fluorescence lidar. (After D. Harper, 1983)

