

Chapter 5. Water conservation


Need for water conservation measures

Water conservation measures are the first-line option for the control and management of subsurface drainage water. Conservation measures reduce the amount of drainage water and include: source reduction through sound irrigation water management; shallow water table management; groundwater management; and land retirement. These measures affect other options such as the reuse and disposal of drainage water. In general, conservation measures aim both to reduce the quantity of drainage effluent and to reduce the mass emission of constituents into receiving water. Water quality impacts on water users (agriculture, fisheries, etc.) are expressed in terms of concentration, whereas the control of drainage from irrigated lands is expressed in terms of water volume and mass discharge of constituents.

The case studies from the United States of America, India, Pakistan and the Aral Sea Basin presented in Part II illustrate the need for source reduction. In the northern third of the San Joaquin Valley, limited drainage water disposal into the San Joaquin River is permitted in order to protect water quality for downstream water users. However, in the southern two-thirds of this valley there are no opportunities for any drainage into the river and the vadose zone has filled up with deep percolation. Subsurface drainage is practised with disposal into evaporation ponds. For the southern part of this valley, options such as deep-well injection, desalination, and water treatment to remove selenium are either generally too expensive for the farmers to bear the entire cost or technically not feasible. Therefore, source reduction plays a major role in dealing with problems caused by the shallow, saline groundwater in the San Joaquin Valley. In countries such as India and Pakistan, conservation measures are required as large tracts of land in need of drainage occur in inland basins without adequate disposal facilities. In Pakistan, under the umbrella of the Fourth Salinity Control and Reclamation Project, evaporation ponds have been constructed to relieve the disposal problem, though adverse impacts have been encountered in surrounding lands. To prevent serious environmental degradation and to sustain irrigated agriculture, parallel conservation measures need to be implemented.

When water is used conservatively, the concentration of salts and trace elements in drainage water will rise, but the mass emission rates will decrease because the volume of water discharged is smaller. As water is used conservatively and the leaching fraction is reduced, salts tend to accumulate in the rootzone. Under such conditions, the major concern is that rootzone salinity should not exceed crop salt tolerance. In the Nile Delta, Egypt, increasing soil and water salinity is anticipated as water is used more conservatively (Box 2). In smaller irrigated areas with relatively good natural drainage, water tables might drop sufficiently low, as a result of improved irrigation efficiency, to minimize capillary rise into the rootzone. Under these conditions, downward leaching of salts will result in an overall improved salt situation (Christen and Skehan, 2000).

Box 2: Need to increase water use efficiency as result of water scarcity - An example from Egypt

Estimated water balance for Egypt

Water source         Annual availability      Water use       Annual demand ('95)
                     (million m3)                             (million m3)
Nile water quota     55 500                   Irrigation      51 500
Renewable GW          6 000                   Municipal        2 900
Maximum reusable      7 000                   Industrial       5 900
Total available      68 500                   Total demand    60 300

The irrigation demand is projected to increase to 61 500 million m3 in 2025. Holding municipal and industrial demand at their 1995 levels, the total projected water demand for 2025 is 70 300 million m3 per year. As the water resources are not projected to increase, irrigation efficiency needs to increase in order to sustain all valuable water uses. For this purpose, the Government of Egypt promotes the expansion of irrigation improvement projects. Reducing irrigation losses reduces the drainage effluent generated and is likely to increase its salinity. This has considerable consequences for reuse and disposal (DRI, 1995; and EPIQ Water Policy Team, 1998).
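The arithmetic of Box 2 can be checked with a short sketch; the figures are those tabulated above, and the variable names are illustrative.

```python
# Egyptian water balance from Box 2 (all figures in million m3 per year).
supply = {"Nile water quota": 55_500, "Renewable groundwater": 6_000,
          "Maximum reusable drainage": 7_000}
demand_1995 = {"Irrigation": 51_500, "Municipal": 2_900, "Industrial": 5_900}

total_supply = sum(supply.values())            # 68 500
total_demand_1995 = sum(demand_1995.values())  # 60 300

# 2025 projection: irrigation rises to 61 500; municipal/industrial held constant.
demand_2025 = dict(demand_1995, Irrigation=61_500)
total_demand_2025 = sum(demand_2025.values())  # 70 300

print(total_supply, total_demand_1995, total_demand_2025)
print("2025 shortfall:", total_demand_2025 - total_supply)  # 1 800 million m3/yr
```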

Hydrologic balance

Drainage water management in general and water conservation measures in particular require a comprehensive knowledge and database of the hydrologic balance in irrigated agriculture. The control volume may be the vadose zone including the crop rootzone, the saturated zone for the groundwater basin, or a combination of vadose and saturated zones. The vadose and saturated zones may be viewed as the subsystem components and the combination of these two as the overall system or global perspective. Figure 15 depicts the hydrologic balance for these three control volumes. Water inputs and outputs vary among the subsystem components and the whole system. For example, pumped groundwater and drainage water reuse are vadose zone inputs but not considered in the combined vadose and groundwater system because they are internal flows that do not cut across the boundaries of the control volume.

Figure 15. Hydrologic balance in the vadose and saturated zones, and in a combination of vadose and saturated zones

The horizontal boundaries are normally easier to define because they often correspond to an irrigation district or a unit within an irrigation district. However, these boundaries are not always the most practical for defining the hydrological water balance, as groundwater flow into and out of the district may be very difficult to estimate and measure. Defining the vertical boundaries is normally more difficult. The vertical components of the control volume include the crop rootzone and the deeper vadose zone (the rootzone is often taken as part of the vadose zone), through which deep percolation passes before recharging the saturated or groundwater zone. Measurements of collected subsurface drainage are possible, but deep percolation below the rootzone is extremely difficult to estimate and is typically obtained as the closure term in the hydrologic balance of the vadose zone. The presence of a high groundwater table further complicates the situation, as upward fluxes, i.e. capillary rise and water uptake from the shallow groundwater by roots, may also occur.
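As a minimal illustration of obtaining deep percolation as the closure term of the vadose-zone balance, the sketch below uses invented seasonal values (mm); all numbers and names are assumptions, not measurements from the source.

```python
# Hypothetical vadose-zone water balance (mm over one season). Deep percolation
# is obtained as the closure term, since direct measurement is impractical.
irrigation, rainfall, capillary_rise = 850.0, 120.0, 60.0  # inflows (mm)
evapotranspiration, runoff = 780.0, 90.0                   # outflows (mm)
delta_storage = -20.0                                      # change in soil moisture (mm)

# Closure: inflows minus outflows minus the change in storage.
deep_percolation = (irrigation + rainfall + capillary_rise
                    - evapotranspiration - runoff - delta_storage)
print(deep_percolation)  # 180.0 mm
```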

When viewing a series of interconnected irrigation and drainage districts as in a river basin, water use becomes quite complex. Although water may be used rather inefficiently upstream, the overall efficiency may be quite high for the entire river basin because of extensive reuse of water not consumed upslope or upstream, i.e. the irrigation return flow (Solomon and Davidoff, 1999). However, in terms of water quality, there is a progressive degradation because irrigation return flows pick up impurities (Chapter 4). Thus, it is necessary to be able to quantify the performance of irrigation and drainage systems not only for the design of alternative measures but also for the management and operation of irrigation and drainage systems.

Irrigation performance indicators

Numerous indicators, usually called efficiencies, have been developed to assess irrigation performance. The definitions of efficiencies as formulated by the ICID/ILRI (Bos and Nugteren, 1990) are widely used. According to the ICID/ILRI, the movement of water through an irrigation system involves three separate operations: conveyance, distribution, and field application. Correspondingly, the main efficiencies are defined for water use in each of these three operations. The conveyance efficiency (ec) is the efficiency of the canal or conduit networks from the source to the offtakes of the distribution system. The ec can be expressed as the volume of water delivered to the distribution system plus the non-irrigation deliveries from the conveyance system divided by the volume diverted or pumped from a source of water plus inflow from other sources. The distribution efficiency (ed) is the efficiency of the water distribution (tertiary and quaternary) canals and conduits supplying water to individual fields. The ed can be expressed as the volume of water furnished to the fields plus non-irrigation deliveries divided by the volume of water delivered to the distribution system. Finally, the water application efficiency (ea) is the relationship between the quantity of water applied at the field inlet and the quantity of water needed for evapotranspiration by the crops to avoid undesirable water stress in the plants. Unlike the ASCE Task Committee on Describing Irrigation Efficiency and Uniformity, the ICID/ILRI do not make a further distinction between consumptive, beneficial and reasonable water use.
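The three ICID/ILRI efficiencies can be sketched as ratios of the volumes just defined; the quantities below are assumed for illustration and are not from the source.

```python
# Illustrative volumes (million m3 over a season); all values are assumptions.
diverted = 100.0            # diverted or pumped from the source
delivered_to_dist = 70.0    # delivered to the distribution system
non_irr_conveyance = 5.0    # non-irrigation deliveries from the conveyance system
furnished_to_fields = 55.0  # furnished to individual fields
non_irr_dist = 2.0          # non-irrigation deliveries in the distribution system
crop_need = 40.0            # water needed for crop evapotranspiration

ec = (delivered_to_dist + non_irr_conveyance) / diverted       # conveyance efficiency
ed = (furnished_to_fields + non_irr_dist) / delivered_to_dist  # distribution efficiency
ea = crop_need / furnished_to_fields                           # application efficiency
print(round(ec, 2), round(ed, 2), round(ea, 2))  # 0.75 0.81 0.73
```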

Source reduction aims to reduce the volume of drainage water by improving the irrigation performance. In addition to the efficiencies defined by the ICID/ILRI, this report uses a number of performance indicators defined by the ASCE Task Committee on Describing Irrigation Efficiency and Uniformity (Burt et al., 1997) to assess the appropriateness of water use in an irrigation system and its contribution to the production of drainage water.

The ASCE Task Committee performance indicators are defined by the water entering and leaving the boundaries of a system. To judge the performance of an irrigation system, the ASCE Task Committee groups the fractions of water leaving the system boundaries in a specified time period into categories of use: consumptive versus non-consumptive use; beneficial versus non-beneficial use; and reasonable versus unreasonable use. This analysis requires an accurate description of the components of the hydrologic balance within defined boundaries of a control volume over a specific period of time. The time period taken is a cropping season rather than a single irrigation event.

Figure 16 shows that the division between consumptive use and non-consumptive use lies in the delineation between water that is considered irrecoverable and water that can be re-applied elsewhere, though perhaps degraded in quality. Consumptive uses include water that finds its way into the atmosphere through evaporation and transpiration, and water that leaves the boundaries in harvested plant tissues. Non-consumptive use is water that leaves the boundaries in the form of runoff, deep percolation and canal spills. The partitioning of applied water into beneficial and non-beneficial uses is a distinction between water consumed in order to achieve agronomic objectives and water that does not contribute to this objective. Beneficial use of water supports the production of crops and includes evapotranspiration, water needed for improving or maintaining soil productivity, seed bed preparation and seed germination, frost prevention, etc. Non-beneficial uses include deep percolation in excess of the leaching requirement, tailwater, evapotranspiration by weeds, canal seepage, spills, etc.

Figure 16. Consumptive versus non-consumptive and beneficial versus non-beneficial uses

Source: Clemmens and Burt, 1997.

Not all non-beneficial uses can be avoided at all times. Due to physical, economic or managerial constraints and various environmental requirements, some degree of non-beneficial use is reasonable. For example, reasonable but non-beneficial deep percolation can occur because of uncertainties that farmers face when deciding how much to irrigate to replenish the available soil moisture. Unreasonable uses include those that do not have any economical, practical or environmental justification. Figure 17 shows the division between beneficial and non-beneficial and reasonable and unreasonable uses.

Figure 17. Beneficial and non-beneficial and reasonable and unreasonable uses

Source: Clemmens and Burt, 1997.

Once quantified, the water balance components can be used to make rational decisions about the appropriateness of water uses and about whether those uses have a positive or negative impact on crop production, the economy, regional hydrology, and the amount of drainage water.

Three of the performance indicators proposed by the ASCE Task Committee are important in relation to the planning and design of conservation measures. The first indicator is the irrigation consumptive use coefficient (ICUC) which is expressed as follows:

ICUC = 100 × (volume of irrigation water consumptively used, beneficially and non-beneficially) / (volume of irrigation water applied - change in storage of irrigation water) (6)

The ICUC deals with the fraction of water that is actually consumed in the system beneficially and non-beneficially. The denominator contains 'change in storage of irrigation water' or that part of the water applied that has not left the control volume and is thus usable during succeeding periods. The quantity (100-ICUC) represents the percentage of non-consumptive use of the applied water. Thus, the ICUC deals with the fraction of water that is irrecoverable and therefore cannot be re-applied elsewhere.

The second indicator, irrigation efficiency (IE) deals with water used beneficially for crop production. IE can be expressed as:

IE = 100 × (volume of irrigation water beneficially used) / (volume of irrigation water applied - change in storage of irrigation water) (7)

The quantity (100-IE) represents the percentage of non-beneficial uses of applied water.

The ASCE Task Committee suggests a third performance indicator because not all water losses are avoidable. The irrigation sagacity (IS), first introduced by Solomon (Burt et al., 1997), is now defined by the ASCE Task Committee as:

IS = 100 × (volume of irrigation water used beneficially and/or reasonably) / (volume of irrigation water applied - change in storage of irrigation water) (8)

The ASCE Task Committee suggests the use of IS to complement IE.
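The three indicators can be sketched together as follows. The volumes and their split into beneficial, non-beneficial consumptive and reasonable uses are assumptions for illustration, and beneficial use is treated as fully consumptive here for simplicity.

```python
# Illustrative seasonal volumes (mm); all names and numbers are assumptions.
applied = 1000.0
delta_storage = 50.0               # irrigation water remaining in the control volume
beneficial = 650.0                 # crop ET, required leaching, etc. (taken as consumptive)
nonbeneficial_consumptive = 100.0  # e.g. weed ET, wet-soil evaporation
reasonable_nonbeneficial = 80.0    # unavoidable losses judged reasonable

net_applied = applied - delta_storage
ICUC = 100 * (beneficial + nonbeneficial_consumptive) / net_applied  # Eq. 6
IE = 100 * beneficial / net_applied                                  # Eq. 7
IS = 100 * (beneficial + reasonable_nonbeneficial) / net_applied     # Eq. 8
print(round(ICUC), round(IE), round(IS))  # 79 68 77
```

As the text notes, (100 - ICUC) is then the recoverable, non-consumptive share, and (100 - IE) the non-beneficial share.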

The numerical values of these three performance indicators provide an appraisal of the overall effectiveness of the irrigation system and its management, and of the contribution towards drainage water production. However, the use of these performance indicators relies on quantification of the various water uses and fate pathways of water. There are many methods for determining the volume associated with each water use. Some are direct measurements (e.g. totalizing water meters), some are indirect measurements (e.g. ET estimated from weather data and crop coefficients) and some are obtained as the closure term from a mass balance (e.g. deep percolation). Each method has errors associated with it that affect the accuracy of the performance indicator. Clemmens and Burt (1997) have shown that the confidence interval for the ICUC is about 7 percent. The accuracy of IE is in general less than that of the ICUC, as quantifying the beneficial water use is normally quite difficult. Therefore, they concluded that reporting more than two significant figures for performance indicators is inappropriate. Furthermore, they introduced an approach using component variances to determine the relative importance of the accuracy of the variables that contribute to the estimate of the performance indicator. The component variances can be used to determine which measured volumes need closer attention: improving the accuracy of the components with the highest variances will have the greatest impact on the accuracy of the performance measures.
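The component-variance idea can be illustrated with first-order (delta-method) error propagation on IE = 100·B/(A - dS); the volumes and standard deviations below are assumptions for illustration, not values from Clemmens and Burt.

```python
# First-order error propagation for IE = 100*B/(A - dS), in the spirit of the
# component-variance approach; all numbers are assumptions.
B, A, dS = 650.0, 1000.0, 50.0        # beneficial use, applied, storage change (mm)
sd_B, sd_A, sd_dS = 40.0, 30.0, 15.0  # standard deviations of each estimate (mm)

net = A - dS
# Partial derivatives of IE with respect to each component.
dIE_dB = 100 / net
dIE_dA = -100 * B / net**2
dIE_ddS = 100 * B / net**2

# Each component's contribution to the variance of IE.
components = {"B": (dIE_dB * sd_B)**2, "A": (dIE_dA * sd_A)**2,
              "dS": (dIE_ddS * sd_dS)**2}
var_IE = sum(components.values())
largest = max(components, key=components.get)
print(round(var_IE**0.5, 1), largest)  # improving this component pays off most
```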

Box 3: Non-beneficial and unreasonable uses

Non-beneficial uses

· Any overirrigation due to non-uniformity;
· Any uncollected tailwater;
· Deep percolation in excess of that needed for salt removal;
· Evaporation from wet soil outside the cropped area of a field;
· Spray drift beyond the field boundaries;
· Evaporation associated with excessively frequent irrigations;
· Weed or phreatophyte evapotranspiration;
· System operational losses;
· Leakage from canals;
· Seepage and evaporation losses from canals and storage reservoirs;
· Regulatory spills to meet wastewater discharge requirements that are based on concentrations.

Unreasonable uses

· Non-beneficial uses that are also without any economic, practical or environmental justification. Unreasonable uses cannot be defined scientifically as they are judgmental and may be site and time specific. However, they should not be beyond engineering, as engineering practice normally considers constraints, different objectives, economics, etc.

Source: Burt et al., 1997

Source reduction through sound irrigation management

Source reduction aims to reduce the drainage effluent through improving irrigation performance and thus improving IE. As certain losses are unavoidable, source reduction aims more precisely to increase IS by reducing non-beneficial, unreasonable uses (Box 3).

Reasonable losses

This section explores the various losses captured by subsurface field drains and open collector and main drains, and establishes estimates for reasonable losses.

Losses captured by subsurface field drains

The discharge of on-farm subsurface drainage in irrigated agriculture may be determined with the following water balance equation:

q = (R + Sc + Sl + Sv - Drn) / t (9)

where:

q = specific discharge (mm/d);
R = estimated deep percolation (mm);
Sc = seepage from canal (mm);
Sl = lateral seepage inflow to the area (mm);
Sv = vertical seepage inflow (mm);
Drn = natural groundwater drainage from the area (mm); and
t = time period of measurement or calculation (d).

Figure 18 depicts these losses. In certain instances, the subsurface drainage system intercepts only a portion of the deep percolation.

Figure 18. Losses captured by subsurface field drains
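Equation 9 can be evaluated directly once the balance terms are estimated; the values below are assumptions for illustration.

```python
# Sketch of Equation 9 with assumed seasonal values.
R, Sc, Sl, Sv, Drn = 120.0, 30.0, 10.0, 5.0, 25.0  # balance terms (mm)
t = 100.0                                          # period (days)

q = (R + Sc + Sl + Sv - Drn) / t                   # specific discharge (mm/d)
print(q)  # 1.4 mm/d
```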

Deep percolation losses

Figure 19 shows that R consists of rootzone drainage from non-uniform water application, overirrigation from excessive duration of irrigation, water applied to leach salts, and rainfall. Unreasonable deep percolation losses are any percolation losses in excess of those caused by the inherent non-uniformity of the irrigation application system and the salt leaching desired for the crop in question.

Figure 19. Deep percolation losses

Source: after Clemmens and Burt, 1997.

Table 9 shows that each irrigation method has a range of inherent distribution uniformity, ea, and deep percolation. The data reported are based on well-designed and well-managed systems on appropriate soil types. In Table 9, ea is defined as the ratio of the average amount of water stored in the rootzone to the average amount of water applied. Deep percolation, surface runoff, tailwater and evaporation losses make up the total losses. The ea reported by Tanji and Hanson (1990) is based on data estimated from irrigated agriculture in California, the United States of America, where there is a state mandate to conserve water and where most irrigators receive some training.

Table 9. Estimated reasonable deep percolation losses as related to irrigation methods

Application method    Distribution      Water application efficiency (%)    Estimated deep
                      uniformity (%)    Tanji and        SJVDIP, 1999f      percolation (%)
                                        Hanson, 1990
Sprinkler
 - periodic move      70-80             65-80            70-80              15-25
 - continuous move    70-90             75-85            80-90              10-15
 - solid set          90-95             85-90            70-80              5-10
Drip/trickle          80-90             75-90            80-90              5-20
Surface
 - furrow             80-90             60-90            70-85              5-25
 - border             70-85             65-80            70-85              10-20
 - basin              90-95             75-90            -                  5-20

Source: Tanji and Hanson, 1990; and SJVDIP, 1999e.

More recently, the Technical Committee on Source Reduction for the San Joaquin Valley Drainage Implementation Program (SJVDIP, 1999e) reported ea values that differ somewhat from the 1990 values. The 1999 values were based on an analysis of nearly 1 000 irrigation system evaluations and they represent updated practical potential maximum irrigation efficiencies. In contrast, previously published FAO application efficiencies for irrigation application methods (FAO, 1980) and ea values reported by the ICID/ILRI are substantially lower, representing a lower level of management and probably a less than optimal design. A properly designed system that is well managed can attain quite high efficiencies. On heavy soils, surface and drip irrigation can attain similar levels of efficiency.

Estimates for deep percolation have been made on the basis of the following assumptions: no surface runoff occurs under drip and sprinkler irrigation; during daytime sprinkler irrigation evaporation losses can be up to 10 percent and during night irrigation 5 percent; tailwater in furrow and border irrigation can be up to 10 percent and evaporation losses up to 5 percent; and no runoff occurs in basin irrigation and evaporation losses can be up to 5 percent.
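These assumptions amount to treating deep percolation as the remainder of the applied-water balance, which can be sketched as follows (the percentages are illustrative, drawn from the ranges above).

```python
# Rough partition of applied water (percent), following the assumptions in the
# text; the example values are illustrative, not measurements.
def deep_percolation(ea, evaporation=0.0, tailwater=0.0):
    """Deep percolation as the remainder of the applied-water balance (%)."""
    return 100.0 - ea - evaporation - tailwater

print(deep_percolation(ea=80, evaporation=10))               # daytime sprinkler: 10.0
print(deep_percolation(ea=75, evaporation=5, tailwater=10))  # furrow: 10.0
print(deep_percolation(ea=80, evaporation=5))                # basin: 15.0
```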

Thus, reasonable deep percolation losses may vary with the irrigation application method and water management employed.

Canal seepage

Canal seepage varies with: the nature of the canal lining; hydraulic conductivity; the hydraulic gradient between the canal and the surrounding land; resistance layer at the canal perimeter; water depth; flow velocity; and sediment load. The canal seepage can be calculated using empirically developed formulae or solutions derived from analytical approaches (FAO, 1977). Canal seepage might also be estimated on the basis of Table 10.

Table 10. Seepage losses in percentage of the canal flow

Type of canal                                       Seepage losses (%)
Unlined canals                                      20-30
Lined canals                                        15-20
Unlined large laterals                              15-20
Lined large laterals and unlined small laterals     10-15
Small lined laterals                                10
Pipelines                                           0

Source: USBR, 1978.

Excessive seepage can occur due to poor canal maintenance. Any seepage in excess of the aforementioned figures needs to be regarded as unreasonable.

Seepage inflow

Seepage inflow from outside areas is discussed in the section on land retirement. Discussion of seepage inflow from deep aquifers requires substantial geohydrological knowledge, which is beyond the scope of this publication.

Losses captured by collector and main drains

The drainage discharge in collector and main drains depends on the discharge of field drains and additional inputs such as irrigation runoff from farmers' fields, and operational and canal spills. The ILRI in collaboration with the ICID analysed irrigation efficiencies for 91 irrigated areas on the basis of questionnaires submitted by 29 national committees of the ICID (Bos and Nugteren, 1990). For the interpretation of the data, the basic climate and socio-economic conditions were taken as primary variables. On the basis of these variables, the irrigated areas were divided into four groups. Group I includes all areas with severe rainfall deficit, and the farms are generally small with cereals as the main crop. For Group II, the main crop is rice and the rainfall deficit is less than for Group I. Group III has a shorter irrigation season than the first two groups and the economic development is more advanced. In addition to cereals, the most important crops are fodder crops, fruits and vegetables. For Group IV, irrigation is supplementary as this group has a cool, temperate climate.

The operational or management losses in the conveyance system are related to: the size of the irrigation scheme; and the level of irrigation management, communication systems and control structures, i.e. manual versus automatic control. Figure 20 shows that there is a sharp increase in operational losses in irrigation schemes of less than 100 ha and larger than 10 000 ha. Management losses can be as high as 50 percent.

The size of the tertiary or rotational units also has a significant influence on the operational losses. Bos and Nugteren (1990) estimate that optimum efficiency can be attained if the size of the rotational unit lies between 70 and 300 ha. Where the rotational units are smaller, safety margins above the actual amounts of water required are introduced, as the system cannot cope with temporary deficits. Larger rotational units require a long filling time in relation to the periods that the canals are empty, as the canals are relatively long and of large dimensions. This requires organizational measures to correct timing, which is often difficult.

Figure 20. Management losses in relation to the size of the irrigation scheme

ec = water conveyance efficiency
Source: Bos and Nugteren, 1990.

In Egypt, canal tail losses are estimated to account for 25-50 percent of the total water losses in irrigation (EPIQ Water Policy Team, 1998). For Egypt, it is expected that operational losses can be reduced significantly when measures such as automatic controls and night storage are introduced.

Figure 21. Distribution losses in relation to farm size and soil type

Source: Bos and Nugteren, 1990.

Other losses reaching collector and main drains are from the distribution system. In addition to the seepage losses from the tertiary and quaternary canals, the method of water distribution, farm size, soil type and duration of the delivery period affect the ed. Figure 21 shows that the ed is a function of farm size and soil type. Farm units of less than 10 ha served by rotational supply have a lower efficiency than larger units. This is a result of the losses that occur at the beginning and end of each irrigation turn. Moreover, where farms are served by pipelines or are situated on less permeable soils, the ed will be higher than average. Most of these losses do not occur if farms receive a continuous water supply. Consequently, in this case, the ed is high irrespective of farm size.

When the delivery periods are increased, the ed rises markedly, probably because the losses incurred at the initial wetting of the canals become relatively smaller.

Management options for on-farm source reduction

Improving on-farm irrigation management

Improving on-farm irrigation management involves optimizing irrigation scheduling, i.e. determining when to irrigate and how much water to apply. This can be done by using real-time weather data to obtain the reference crop evapotranspiration (ETo) and taking the product of ETo and a crop coefficient to establish the depth of water to apply (SJVDIP, 1999e). Soil water balance methods are commonly used to determine the timing and depth of future irrigations. The California Irrigation Management Information System (CIMIS) and FAO's CROPWAT (FAO, 1992a) are based on these principles. Basic assumptions in these methods are that healthy high-yielding crops are grown and that the soil moisture depletion between two irrigations equals the crop evapotranspiration. The latter assumption may not be valid where plant roots extract water from shallow groundwater. Under such conditions, using CIMIS or CROPWAT will overestimate the depletion and result in more water being applied than is needed to replenish soil moisture. Field experiments have been conducted all over the world to evaluate the contribution of capillary rise from a shallow water table towards crop water requirements (e.g. Qureshi et al., 1997; DRI, 1997b; Minhas et al., 1988; and Rao et al., 1992). However, no simple calculation procedures have been developed yet. Ayars and Hutmacher (1994) propose a modified crop coefficient to incorporate the groundwater contribution to crop water use. A manual calculation procedure is proposed in the section on shallow water table management. Rough estimates of capillary rise might be used, especially where drought-resistant crops are grown or during periods when the crop is less sensitive to water stress. FAO (1986) has determined the yield response to water for a range of crops. Such information may be helpful in assessing the risks involved, in terms of yield losses, when rough estimates of capillary rise are used in optimizing irrigation scheduling.
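A minimal soil-water-balance scheduler in the spirit of CIMIS/CROPWAT can illustrate both the method and the overestimation that occurs when capillary rise is ignored. The crop coefficient, total available water and depletion trigger below are assumptions for illustration.

```python
# Minimal soil-water-balance irrigation scheduler; Kc, TAW and the 50% trigger
# are illustrative assumptions, not recommended values.
def schedule(eto_series, kc, taw=120.0, trigger=0.5, capillary_rise=0.0):
    """Return (day, depth) pairs at which irrigation is triggered.

    Depletion grows by crop ET (ETo*Kc) minus any shallow-groundwater
    contribution; irrigate back to field capacity at trigger*TAW depletion.
    """
    depletion, events = 0.0, []
    for day, eto in enumerate(eto_series, start=1):
        depletion = max(depletion + eto * kc - capillary_rise, 0.0)
        if depletion >= trigger * taw:
            events.append((day, depletion))  # refill to field capacity
            depletion = 0.0
    return events

eto = [6.0] * 30                                  # mm/d over a hot month
print(schedule(eto, kc=1.0))                      # ignoring groundwater: 3 irrigations
print(schedule(eto, kc=1.0, capillary_rise=2.0))  # shallow water table: fewer, later
```

Ignoring a 2 mm/d capillary contribution triggers irrigation on days 10, 20 and 30 instead of days 15 and 30, illustrating the over-application the text describes.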

Soil sampling and soil water sensing devices can provide valid estimates of soil moisture depletion. Provided these instruments are properly installed and calibrated, and the users adequately trained, irrigation scheduling can be based on the results.

Improving water application uniformity and efficiency

Uniformity and ea can be improved by upgrading the existing on-farm irrigation system or by converting to a technique with a potential for higher efficiency and uniformity (SJVDIP, 1999e). Surface irrigation by gravity flow is the most common irrigation technique as it does not involve the costs for the O&M of pressurized systems. For this reason, over the coming decades surface irrigation is likely to remain the predominant approach (FAO, 2000), although better uniformity and ea can be obtained with sprinkler and drip irrigation.

There are several ways of improving the performance of furrow irrigation. The first method is to reduce the furrow length. This measure is most effective in reducing deep percolation below the rootzone for field lengths exceeding 300 m. The set time has to be reduced at the same time, by an amount equal to the difference between the initial advance time and the new advance time. Failure to do so will greatly increase the surface runoff and subsurface drainage. A second option is to apply cutback irrigation, which means reducing the inflow rate of irrigated furrows after the completion of advance.

A third option is to use surge irrigation, the intermittent application of water to an irrigation furrow (Yonts et al., 1994). Initial infiltration rates in a dry furrow are high. As the water continues to run, the infiltration rate falls to a constant rate. If the water is shut off and allowed to infiltrate, surface soil particles consolidate and form a partial seal in the furrow, which substantially reduces the infiltration rate. When inflow into the furrow is re-introduced, the water advances faster over the previously wetted part and less infiltration takes place there. This process is repeated several times. As the previously wetted part of the furrow has a lower infiltration rate and a faster advance, the final result is a more uniform infiltration pattern (Figure 22).

Last, better land grading and compaction of the furrow can improve the uniformity and efficiency of furrow irrigation (SJVDIP, 1999e).

Improvements in basin irrigation consist mainly of adjusting the size of the basins in accordance with the land slope, the soil type and the available stream size. FAO (1990b) gives guidelines on how to estimate optimal basin sizes. To obtain a uniformly wetted rootzone, the surface of the basin must be level and the irrigation water must be applied rapidly.

Drip and sprinkler irrigation systems have the potential to be highly efficient. However, where the systems are not properly designed, operated or maintained, the efficiency can be as low as in surface irrigation systems. Improving the uniformity and thus the efficiency in drip and sprinkler irrigation involves reducing the hydraulic losses. Losses can be minimized by selecting the proper length of the laterals and pipeline diameter and by applying appropriate pressure regulation throughout the system. After the system has been properly designed and installed, good O&M of the system is crucial. Phocaides (2001) provides guidelines on the design and O&M of pressurized irrigation techniques.

Figure 22. Infiltration losses in furrow irrigation

Source: after Yonts et al., 1994.

For additional details on on-farm irrigation management, the reader may refer to publications by Hoffman et al. (1990), and Skaggs and Van Schilfgaarde (1999).

Options for source reduction at scheme level

At a scheme level, numerous options can be applied to reduce the conveyance, distribution and operational losses. To reduce seepage and leakage, canal rehabilitation or upgrading might be required. To reduce the operational losses, improvements in the irrigation infrastructure and communication system could be implemented. To increase the distribution efficiency, tertiary and quaternary canals might need to be upgraded. Moreover, an increase in delivery period might help to increase the distribution efficiency. Furthermore, a number of policy options are available. In many countries, driven largely by financial constraints, the water users now manage the irrigation systems. It is hoped that handing over the systems to the water users will raise efficiency and profitability. FAO (1999b) has developed guidelines for the transfer of irrigation management services.

Impact of source reduction on long-term rootzone salinity

The main objective of source reduction in the context of drainage water management is to reduce the amount of drainage water. For subsurface drainage, this means reducing the amount of water percolating below the rootzone by improving water application efficiency. In areas where salinization is a major concern, it is important to assess the feasibility and impact of source reduction on rootzone salinity.

The equilibrium rootzone salinity is a function of the salinity of the applied water that mixes with the soil solution and the fraction of water percolating from the soil solution (Annex 4). This can be expressed as:

ECfrR = ECSW = ECIWi/LFi (10)

where:

ECfrR = salinity of the percolation water which has been mixed with the soil solution (dS/m);
ECSW = salinity of the soil water (dS/m);
ECIWi = salinity of the infiltrated water that mixes with the soil solution (dS/m); and
LFi = leaching fraction of infiltrated water that mixes with the soil solution (-).

Therefore, it is important to evaluate whether the average net amount of percolation water under proposed irrigation practices satisfies the minimum leaching requirements to avoid soil salinization. In this evaluation, rainfall should be considered as it might supply adequate leaching. The leaching fraction of the infiltrated water that mixes with the soil solution can be expressed as:

LFi = frR*/(fiIi + Pe) (11)

where:

fr = leaching efficiency coefficient of the percolation water (-);

fi = leaching efficiency coefficient related to the incoming irrigation water that mixes with the soil solution (-);

Ii = irrigation water infiltrated, which is the total applied irrigation water minus the evaporation losses and surface runoff (mm);

Pe = effective precipitation (mm); and

R* = net deep percolation (mm).

In this equation, it is assumed that over the irrigation seasons a shallow zone of water is created below the rootzone with a salinity equivalent to that of the percolation water. Where this assumption does not hold, or where deficit irrigation is actively practised under shallow groundwater conditions, capillary rise and deep percolation have to be entered as separate entities in the rootzone salt balance. In this case, the soil water salinity in the rootzone is a function of the salinity of the infiltrated water, the capillary rise and the percolation water.

It is often assumed that the salinity of the deep percolation water is equivalent to the average rootzone salinity. However, due to irrigation and rootwater extraction patterns, the salinity is lower in the upper portion of the rootzone, where leaching fractions are higher (zone of salt leaching), and higher in the bottom portion, where leaching fractions are smaller (zone of salt accumulation). Under normal irrigation and root distribution, the typical extraction pattern for the rootzone is 40-30-20-10 percent water uptake from the upper to the lower quarter of the rootzone. Equation 10 can be used to calculate the rootzone salinity at five successive depths under this water uptake pattern and so obtain the average salinity in the rootzone (Figure 2 in Annex 4).

Maintaining a favourable salt balance under source reduction

The relationship between ECIWi and the average soil salinity of the saturated soil paste (ECe) can be calculated for each LFi and expressed as a concentration fraction (Table 1 in Annex 4). These concentration factors can be used to calculate the relationship between ECe and ECIWi (Figure 23). Where the salinity of the infiltrated water and the crop tolerance to salinity are known, the necessary LF to control soil salinity can be estimated from this figure. If the y-axis of the figure were the threshold salinity for the crop under consideration (ECts), then the diagonal lines would give a range of leaching requirements (LR) expressed as a LF.

Figure 23. Assessment of leaching fraction in relation to the salinity of the infiltrated water

Source: after FAO, 1985b.

Various researchers have developed other methods to calculate the LR to maintain a favourable salt balance in the rootzone. An empirical formula developed by Rhoades (1974) and Rhoades and Merrill (1976) is:

LR = ECIW/(5ECts - ECIW) (12)

where ECts is the threshold salinity for a crop above which the yield begins to decline (Annex 1) and ECIW is the salinity of the infiltrated water. The value 5 was obtained empirically (FAO, 1985b).
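Equation 12 is simple enough to evaluate directly. The sketch below (function name is illustrative) uses the threshold values for wheat and cotton given later in this chapter and canal water of 1 dS/m:

```python
def leaching_requirement(EC_iw, EC_ts):
    """Empirical LR after Rhoades (1974): LR = ECiw / (5*ECts - ECiw)."""
    return EC_iw / (5.0 * EC_ts - EC_iw)

# Wheat (ECts = 6 dS/m) irrigated with canal water of 1 dS/m:
lr_wheat = leaching_requirement(1.0, 6.0)    # ~0.034, i.e. under 5 percent
# Cotton (ECts = 7.7 dS/m) tolerates even less leaching:
lr_cotton = leaching_requirement(1.0, 7.7)
```

The more salt-tolerant the crop (higher ECts), the smaller the leaching requirement for the same irrigation water.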

The salt equilibrium equation (Equation 10) can be used to assess the feasibility and impact of source reduction on the rootzone salinity by calculating the LR expressed as a LF, and comparing the result with the expected percolation under improved irrigation practices. The amount of percolation water should cover the LR. A safety margin is advisable because irrigation uniformity is never complete in the field (FAO, 1980). Depending on the distribution uniformity of the application method, the actual LR should be 1.1-1.3 times the calculated LR. For example, for drip, sprinkler (solid set) and basin irrigation, smaller safety margins might be adopted, while for border irrigation a larger margin might be more appropriate (Table 9).

An illustrative calculation from Pakistan is given below. The example shows the impact of source reduction on the rootzone salinity.

Calculation example of the impact of source reduction on the salinity of the rootzone

General information

The drainage pilot study area is situated in the southeast of the Punjab, Pakistan. The area has been suffering from waterlogging and salinity problems for a long time and was therefore selected as a priority area in urgent need of drainage. The main causes of waterlogging are overirrigation and seepage inflow from surrounding irrigated areas and canals. The estimated average seepage inflow is 0.5 mm/d with a salinity of 5 dS/m. In 1998, a subsurface drainage system was installed on a pilot area of 110 ha. The design discharge of the system is 1.5 mm/d and the design water table depth is 1.4 m. The soils are predominantly silty (θfc = 0.36) with an estimated leaching efficiency coefficient (fi) of 0.9. The climate is semi-arid with an annual potential crop evapotranspiration of 1 303 mm and an average annual effective rainfall of 197 mm. The main crops are wheat in winter (December to April) and cotton in summer (June to October/November). Both crops are irrigated throughout the year, as rainfall is insufficient to meet crop water requirements (Table 11). Irrigation water is supplied through open canals and has a salinity of 1 dS/m. Wheat is irrigated through basin irrigation and cotton with furrow basins. The major application losses occur as deep percolation caused by poor field levelling (non-uniformity) and by uncertainties in rainfall and water distribution. The annual average ea is 64 percent ((ETcrop - Pe)/I), including 5 percent evaporation losses. The depth of the rootzone is assumed to be 1 m and the water uptake pattern is 40, 30, 20 and 10 percent from the first to the fourth quarter of the rootzone, respectively. It is assumed that no capillary rise occurs.

Long-term rootzone salinity

Long-term salinity in the rootzone under these conditions is calculated using Equations 1-11 presented in Annex 4 for the four-layer concept. The first step is to calculate ECIWi using Equation 10 (Annex 4):

ECIWi = fiIi/(fiIi + Pe) * ECI = (0.9 * 0.95 * 1 718)/(0.9 * 0.95 * 1 718 + 197) * 1 = 0.88 dS/m

The second step is to calculate the LFi values for the infiltrated water mixing with the soil solution for the successive quarters (Figure 2 in Annex 4):

LFi1 = (IWi - 0.4 * ETcrop)/IWi = (0.9 * 0.95 * 1 718 + 197 - 0.4 * 1 303)/(0.9 * 0.95 * 1 718 + 197) = 0.69

LFi2 = (IWi - (0.4 + 0.3) * ETcrop)/IWi = (1 666 - 0.7 * 1 303)/1 666 = 0.45

LFi3 = (IWi - (0.4 + 0.3 + 0.2) * ETcrop)/IWi = (1 666 - 0.9 * 1 303)/1 666 = 0.30

LFi4 = (IWi - (0.4 + 0.3 + 0.2 + 0.1) * ETcrop)/IWi = (1 666 - 1.0 * 1 303)/1 666 = 0.22

Table 11. Agroclimatic data for the drainage pilot study area

Period     Dec  Jan  Feb  Mar  Apr  May  Jun  Jul  Aug  Sep  Oct  Nov  Year
Crop       |----- wheat ------|         |--------- cotton ---------|
ET (mm)     43   55  101  110   76   61  145  154  219  195  114   30  1303
Pe (mm)      5   15   15    9    5   15   80   35   10    0    3    5   197
I (mm)     222  122  122  122    0  144  144  144  288  144  144  122  1718

The third step is to calculate the ECe values with Equation 7 (Annex 4). Considering that for most soils ECe ≈ 0.5 ECsw:

ECe1 = 0.5 ECsw1 = 0.5 * 0.88/0.69 = 0.64 dS/m
ECe2 = 0.5 ECsw2 = 0.5 * 0.88/0.45 = 0.98 dS/m
ECe3 = 0.5 ECsw3 = 0.5 * 0.88/0.30 = 1.47 dS/m
ECe4 = 0.5 ECsw4 = 0.5 * 0.88/0.22 = 2.00 dS/m

The average ECe of the rootzone, including the value for the infiltrated water at the top of the rootzone (0.5 ECIWi = 0.44 dS/m), is 1.11 dS/m. Figure 24 presents the results of these calculations.
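The four-layer calculation above can be scripted. The sketch below (function name and defaults are illustrative) reproduces the 1.11 dS/m result and accepts any rootwater extraction pattern, which also covers the n-layer extension:

```python
def rootzone_salinity(I, Pe, ETcrop, fi=0.9, evap=0.05, EC_I=1.0,
                      uptake=(0.4, 0.3, 0.2, 0.1)):
    """Average rootzone ECe for the multi-layer concept (Annex 4).

    uptake: fraction of ETcrop extracted from each successive layer.
    """
    Ii = (1.0 - evap) * I                 # infiltrated irrigation water (mm)
    IWi = fi * Ii + Pe                    # water mixing with the soil solution (mm)
    EC_IWi = fi * Ii / IWi * EC_I         # salinity of that water (dS/m)
    ECe = [0.5 * EC_IWi]                  # top of the rootzone
    cum = 0.0
    for frac in uptake:
        cum += frac
        LFi = (IWi - cum * ETcrop) / IWi  # leaching fraction below this layer
        ECe.append(0.5 * EC_IWi / LFi)    # ECe = 0.5 ECsw = 0.5 ECIWi/LFi
    return sum(ECe) / len(ECe)

avg = rootzone_salinity(I=1718, Pe=197, ETcrop=1303)   # ~1.11 dS/m
```

Calling the same function with I = 1374 mm reproduces the source-reduction scenario discussed later in this example.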

Figure 24. Calculation of average rootzone salinity for the drainage pilot study area

Source reduction and the impact on rootzone salinity

Disposal of drainage water in southern Punjab is a major problem. Only very limited volumes of drainage water can be disposed into main irrigation canals and rivers. Evaporation ponds have been constructed to relieve the disposal problem but adverse environmental effects are observed. To sustain irrigated agriculture and to prevent serious environmental degradation, parallel conservation measures have to be implemented. As the calculated rootzone salinity of 1.11 dS/m is far below the threshold salinity value of wheat and cotton, source reduction at field level should be considered. Source reduction might be attained through a combination of measures including: precision land levelling; the shaping of basin and basin furrows; improved irrigation water distribution; and the introduction of water fees.

The threshold ECe value for wheat is 6 dS/m and for cotton 7.7 dS/m (Annex 1). To assess the minimum leaching requirement, the threshold rootzone salinity of the most sensitive crop in the crop rotation will be used. Using Figure 23, theoretically, the leaching fraction of the percolation water mixing with the soil solution could be less than 5 percent. If the LFi value at the bottom of the rootzone is assumed to be 0.05, the total irrigation water applied can be calculated as:

LFi = (fiIi + Pe - ETcrop)/(fiIi + Pe)
0.05 = (0.95 * 0.9 * I + 197 - 1 303)/(0.95 * 0.9 * I + 197)
0.05 * (0.855 * I + 197) = (0.855 * I - 1 106)
I = 1 374 mm

In this case, the ea is 80 percent, which is within the range of reasonable deep percolation losses (Table 9). Higher efficiencies cannot be achieved: first, farmers need to cope with uncertainties in rainfall and, to a certain extent, in water supply; second, some losses always occur as a result of non-uniformity and farmers' inability to apply exact amounts of water. Equation 10 (Annex 4) can be used again to calculate the salinity of the infiltrated water mixed with the soil water:

ECIWi = (0.9 * 0.95 * 1 374)/(0.9 * 0.95 * 1 374 + 197) * 1 = 0.86 dS/m
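The algebra above can be checked by inverting the leaching-fraction expression for the required irrigation depth; a minimal sketch under the same assumptions (fi = 0.9, 5 percent evaporation losses):

```python
fi, evap = 0.9, 0.05
ETcrop, Pe, EC_I = 1303.0, 197.0, 1.0
LF_target = 0.05

a = fi * (1.0 - evap)        # fraction of applied water that infiltrates and mixes
# Invert LF = (a*I + Pe - ETcrop) / (a*I + Pe) for I:
I = (ETcrop - Pe * (1.0 - LF_target)) / (a * (1.0 - LF_target))  # ~1374 mm
ea = (ETcrop - Pe) / I                   # application efficiency, ~0.80
EC_IWi = a * I / (a * I + Pe) * EC_I     # ~0.86 dS/m
```

The closed-form inversion avoids the iteration that would otherwise be needed to find the irrigation depth matching a target leaching fraction.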

The LF values for the infiltrated water mixing with the soil solution and the ECe values for the successive quarters are calculated the same way as for Figure 24 and result in an average rootzone salinity of 2.77 dS/m (Figure 25) against the original value of 1.11 dS/m. Indeed, the salinity increased, but it is still acceptable for the common crops grown in the area.

The above calculations were based on dividing the rootzone into four equal parts (quarters). However, the model can be extended to any number of layers provided that the rootwater extraction pattern is known, e.g. 15-cm depth increments for a 90-cm rooting depth.

Figure 25. Calculation of average rootzone salinity under source reduction in the drainage pilot study area

Impact of source reduction on salt storage within the cropping season

In previous sections, long-term steady-state conditions were assumed to prevail. However, salinity levels are not stable during a cropping season; they change throughout the season. To study the impact of irrigation and drainage measures on crop performance, it is important to know the changes in rootzone salinity over successive periods within the season, which may range from a day to a month. The mass of salts at the start of a period and at the end of a period normally differs and can be expressed as:

Sstart = Send - ΔS (13)

where:

Sstart = quantity of salts in the rootzone at the start of the period (ECmm[2]);
Send = quantity of salts in the rootzone at the end of the period (ECmm); and
ΔS = change in salt storage in the rootzone (ECmm).

Equations 16 to 24 in Annex 4 provide a full description of the derivation of the salt storage equation. The salt storage equation is defined as:

(14)

where:

IW = infiltrated water (mm); and
Wfc = moisture content at field capacity (mm).

Equation 14 can be used to calculate changes in soil salinity within a cropping season. The salt storage equation can also be applied to the four-layer concept. Using the data from the previous example, the following calculation example assesses the impact of source reduction on the rootzone salinity on a monthly basis.

Calculation example of impact of source reduction on salt balance of the rootzone

To explore the changes in salinity over the growing season as a result of source reduction, the salt storage equation (Equation 14) is applied to the four-layer concept explained in Annex 4. For each layer, the change in salt storage is calculated using Equation 22 (Annex 4) for the first layer and Equation 23 for the three consecutive layers. The average ECe is based on the quantity of salts stored in the rootzone at the end of the calculation periods; Equation 19 in Annex 4 is used to convert S to ECe. Table 12 presents the results for the wheat-growing season.

Table 12. Salt balance in the root zone for the drainage pilot study area during wheat season

Figure 26 shows the changes in the average rootzone salinity over an entire year. The salinity increases during the summer months and reaches its peak towards the end of the cotton growing season. Before planting wheat in November-December, many farmers apply a rauni (pre-irrigation) to leach any accumulated salts.

Figure 26. Change in the average rootzone salinity over the year under source reduction

Impact of source reduction on salinity of drainage water

To estimate the impact of source reduction on the extent of reuse and disposal of drainage water, it is necessary to consider not only the water quantity but also the changes in water quality. Generally, the salinity of the drainage water increases as the drainage discharge decreases. Where all the deep percolation is intercepted by the subsurface drainage system, the amount of generated drainage water is:

Dra = (fiIi + Pe - ETcrop) + (1 - fi)Ii + Si (15)

where Dra is the depth of subsurface drainage water generated (mm), the first term is the percolation that mixed with the soil solution, the second term is the percolation that bypassed the soil solution, and Si is the seepage inflow (mm).

The salt load of the subsurface drainage water is the salinity of the intercepted water multiplied by the depth of drainage water, plus the salt load of the seepage inflow, minus the salt load of the natural drainage. If natural drainage can be ignored, the salt load of the subsurface drainage water can be calculated as follows:

DraECDra = (fiIi + Pe - ETcrop)ECfrR + (1 - fi)IiECI + SiECSi (16)

where:

ECDra = salinity of the subsurface drainage water (dS/m);
ECfrR = salinity of the percolation water that mixed with the soil solution (dS/m);
ECSi = salinity of the seepage inflow intercepted by the subsurface drains (dS/m); and
ECI = salinity of the irrigation water (dS/m).

The salinity of the subsurface drainage water (ECDra) is the salt load as calculated with Equation 16 divided by the depth of drainage water.

In the California studies, there appears to be some correlation between salinity, boron and selenium in terms of the waste discharge load, i.e. changes in flow result in similar changes in loads of salts, selenium and boron. This load-flow relationship exists because the shallow groundwater contains excessive levels of salinity, selenium and boron (Ayars and Tanji, 1999). Where this relationship exists in an area, regulating the salinity load would also regulate boron and selenium loads in drainage discharge.

The salinity of the subsurface drainage water is diluted in the main drainage system by surface runoff and canal spills. These losses will be drastically reduced under irrigation improvement projects. For example, in Egypt it is expected that through the introduction of night storage reservoirs, the canal tailwater losses will be reduced to almost zero (EPIQ Water Policy Team, 1998).

Calculation example of source reduction and the impact on drainage water generation and salinity

Source reduction influences not only the rootzone salinity but also the amount of drainage water generated and the salinity of the drainage water. This is of special interest when seeking options for reuse and disposal. Under normal conditions and source reduction, the amount of generated drainage water in the drainage pilot study area in Pakistan can be calculated using Equation 15:

Dra-normal = 363 + 163 + 182.5 = 708.5 mm
Dra-reduction = 69 + 131 + 182.5 = 382.5 mm

Thus, when source reduction is applied, 46 percent less drainage water is produced. A major reduction is obtained in the first months of the winter season and to a lesser extent during the summer season (Figure 27).

Figure 27. Depth of drainage water generated

Figure 28. Salinity of the generated drainage water

The salinity of the drainage water increases mainly during the first two months of the winter season when source reduction is applied in comparison to the normal irrigation practices (Figure 28). The increased rootzone salinity that develops over the summer season as a result of reduced leaching and the subsequent leaching at the start of the winter season are the main reasons for this increase in salinity of the drainage water.

The total salt load is normally of interest for disposal options. The salt load can be expressed as the depth of water multiplied by the salinity of the drainage water (ECmm). The annual salt load is calculated using Equation 16:

DraECDra-normal = 363 * 2 * 2.0 + 163 * 1 + 182.5 * 5 = 2 528 ECmm
DraECDra-reduction = 69 * 2 * 8.60 + 131 * 1 + 182.5 * 5 = 2 230 ECmm

This means that the annual salt load reduction achieved under source reduction is 11.8 percent. If the TDS (g/litre) is approximately 0.64 EC (dS/m), the annual reduction in salt load is 1.9 tonnes per hectare. Figure 29 shows the distribution of the salt load over the year.
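The volume and salt-load comparison can be verified in a few lines. The component depths and salinities below are taken directly from the example; the unit conversion assumes TDS (g/litre) ≈ 0.64 EC (dS/m) and 1 mm·g/litre over one hectare = 10 kg:

```python
# Drainage volumes (mm/yr): mixed percolation + bypass percolation + seepage
dra_normal    = 363 + 163 + 182.5           # 708.5 mm
dra_reduction = 69 + 131 + 182.5            # 382.5 mm
volume_cut = 1 - dra_reduction / dra_normal         # ~46 percent

# Salt loads (ECmm): percolation at ECfrR (= 2*ECe4), bypass at ECI, seepage at ECSi
load_normal    = 363 * 2 * 2.0  + 163 * 1 + 182.5 * 5   # ~2 528 ECmm
load_reduction = 69  * 2 * 8.60 + 131 * 1 + 182.5 * 5   # ~2 230 ECmm
load_cut = 1 - load_reduction / load_normal         # ~11.8 percent
tonnes_per_ha = (load_normal - load_reduction) * 0.64 * 0.01   # ~1.9 t/ha
```

Note how the load falls far less than the volume: the smaller drainage flow is much more saline, so reuse and disposal planning must consider both quantities.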

Shallow water table management

In the past, drainage systems were typically designed to remove deep percolation from irrigated land plus any seepage inflow. In saline groundwater areas, the depth and spacing were also chosen to minimize potential capillary rise of saline groundwater into the rootzone. The interaction between crop water use from shallow groundwater and irrigation management has since been used to demonstrate the potential impact on irrigation design and on the active management of drainage systems. However, there has been little research on the active management of drainage systems in arid and semi-arid areas (Ayars, 1996). The concept of controlled subsurface drainage was first developed for humid climates.

Figure 29. Salt load in the generated drainage water

Controlled subsurface drainage

The concept of controlled subsurface drainage can be applied as a means to reduce the quantity of drainage effluent. Figure 30 shows how a control structure at the drainage outlet or a weir placed in the open collector drain allows the water table to be artificially set at any level between the ground surface and the pipe drainage level, so promoting rootwater extraction. The size of the areas where the water table is controlled by one structure depends on the topography. During system operation, it is important that the water table be maintained at a relatively uniform depth. A detailed topographical map is essential for dividing an area into zones of control for the water management system (Fouss et al., 1999). Butterfly valves could be installed to restrict the flow from individual lateral lines, and manholes with weir structures could be installed along the collector (Ayars, 1996). In situations where the drain laterals run parallel to the surface slope, in-field controls might be necessary to maintain uniform groundwater depths (Christen and Ayars, 2001).

Figure 30. Controlled drainage

Source: Zucker and Brown, 1998.

Raising the outlet or closing the valves at predetermined times maintains the water table at a shallow depth. Withholding irrigation applications at the same time induces capillary rise into the rootzone. In this way, plants meet part of their evapotranspiration needs directly from soil water, replacing irrigation water with shallow groundwater that would otherwise have been evacuated by the drainage system. Without irrigation applications, the water table gradually drops in areas with no seepage inflow from outside, while a more or less constant water table can be maintained in areas with high seepage inflow.

Where shallow water table management through controlled drainage is practised, the drainage effluent and the salt load discharged are reduced. Controlled drainage also helps to reduce the loss of nutrients and other pollutants in subsurface drainage water (Skaggs, 1999; Zucker and Brown, 1998). Maintaining a high groundwater table significantly decreases the nitrate concentration in the drainage water. This decrease is a result of a load reduction in drainage discharge and an increase in denitrification.

Considerations in shallow water table management

Box 4: Contribution of capillary rise in India

In India, in a sandy loam soil with a water table at 1.7 m depth and a salinity of 8.7 dS/m, capillary rise contributed up to 50 percent of the crop water requirement. Similarly, at another site a shallow water table at 1.0 m depth with salinity in the range of 3.5-5.5 dS/m facilitated the achievement of the potential crop yields whilst reducing the irrigation application by 50 percent. In both cases, the accumulated salts were leached in the subsequent monsoon season (case study on India, Part II).

In arid to semi-arid areas the main purpose of drainage is the control of waterlogging and salinity. Maintaining the water table at a shallow depth and thus inducing capillary rise into the rootzone seems counterproductive to attaining this objective. However, projects in Pakistan and India have shown that the salts, which accumulated in the rootzone when shallow groundwater was used for evapotranspiration, were easily leached before the next cropping season. In monsoon-type climates, the accumulated salts are leached in the subsequent monsoon season (Box 4). This experience shows that shallow water table management is an important mechanism for reducing the drainage effluent and at the same time for saving water for other beneficial uses.

The extent to which shallow water table management can be used to reduce the drainage effluent discharge depends on the factors discussed below.

Capillary rise

Capillary flow depends on: soil type; soil moisture depletion in the rootzone; depth to the water table; and recharge. Evapotranspiration depletes the soil moisture content in the rootzone. Where no recharge through irrigation or rainfall takes place, the difference in potential induces capillary rise from the groundwater. In the unsaturated rootzone, the matric head, which is caused by the interaction between the soil matrix and water, is negative. At the water table, atmospheric pressure prevails and the matric head is therefore zero. Water moves from locations with a higher potential to locations with a lower potential, and capillary rise from the groundwater or underlying soil layers into the rootzone takes place under the influence of this head difference.

Darcy's Law (Annex 5) can be applied to calculate the capillary flow. However, the unsaturated hydraulic conductivity, K(θ), needs to be known. A difficulty is that K(θ) is a function of the moisture content, and the moisture content is in turn a function of the pressure head.

Although water movement in the unsaturated zone is in reality unsteady, calculations can be simplified by assuming steady-state flow during a certain period of time. The steady-state flow equation can be written as:

q = -K(h)[(h2 - h1)/(z2 - z1) + 1] (17)

where:

z1,2 = vertical coordinates (positive upward and z = 0 at groundwater table) (m);
h1,2 = soil pressure heads (m); and
K(h) = unsaturated hydraulic conductivity (m/d).

K(h) is evaluated at the average pressure head (h1 + h2)/2. With this equation, the soil pressure head profiles for stationary capillary rise fluxes can be calculated. From these pressure head profiles, the contribution of capillary rise under shallow groundwater table management can be estimated. Figure 31 shows the capillary fluxes for a silty soil where the reference level is the groundwater table (z = 0).

Box 5 provides an example of how Figure 31 can be used to estimate the maximum capillary rise.

Annex 5 provides more theoretical background and an example showing the calculation procedure used to derive the soil pressure head profiles for a given stationary capillary flux.

Figure 31. Pressure head profiles for a silty soil for stationary capillary rise fluxes

Instead of calculating the capillary rise manually, computer programs might be used. Programs are available based on both steady-state and non-steady-state models. An example of a steady-state model is CAPSEV (Wesseling, 1991). Non-steady-state models include SWAP (Kroes et al., 1999). SWAP is not specifically designed to calculate capillary rise but mainly to simulate water and solute transport in the unsaturated zone and plant growth.
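Between a full simulation model and manual calculation sits a simple numerical integration of Darcy's law upward from the water table. The sketch below assumes a Gardner-type conductivity function K(h) = Ks·exp(-c|h|); Ks, c and the function name are illustrative values and are not fitted to the silty soil of Figure 31:

```python
import math

def rise_height(q, h_limit, Ks=50.0, c=0.025, dh=0.1):
    """Height above the water table (cm) at which the pressure head
    reaches h_limit (cm, negative) for a steady upward flux q (cm/d).

    Integrates dz = -dh / (1 + q / K(h)) from h = 0 at the water table,
    which follows from q = -K(h) * (dh/dz + 1) with z positive upward.
    """
    z, h = 0.0, 0.0
    while h > h_limit:
        K = Ks * math.exp(-c * abs(h))   # Gardner-type K(h), cm/d
        z += dh / (1.0 + q / K)
        h -= dh
    return z

# As q -> 0 the profile tends to the hydrostatic equilibrium h = -z,
# so rise_height(0.0, -400) is close to 400 cm; larger fluxes give
# shorter profiles, i.e. the same suction is reached closer to the
# water table.
```

This mirrors how curves such as those in Figure 31 are constructed: each curve is the pressure-head profile belonging to one fixed stationary flux.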

Box 5: Capillary flux in the Drainage Pilot Study Area

In the drainage pilot study area (previous calculation examples), maximum crop evapotranspiration is 7.5 mm/d. If it is assumed that the soil moisture is readily available up to a soil pressure head of 400 cm, a water table depth up to 60 cm below the bottom of the rootzone can deliver 100 percent of the crop water requirements through capillary rise (Figure 31). If the water table is not recharged, the water table depth will increase and therefore the capillary flux will decrease. For example, at a depth of 160 cm below the rootzone the capillary rise into the rootzone is reduced to 1 mm/d.

Maintaining a favourable salt balance under shallow water table management

The salt balance and salt storage equations apply for situations where the downward flux dominates over the capillary rise. When shallow groundwater table management is implemented, capillary rise is induced and the upward flux dominates over the downward flux. Where no irrigation and percolation occurs during the calculation period, the change in salt storage in the rootzone equals:

ΔS = GECgw (18)

The change in salt storage is a function of the amount of capillary rise (as calculated in the previous section) and the salinity of the groundwater. The accumulated salts in the rootzone need to be leached before the next growing season. In arid and semi-arid climates where rainfall is absent during this period, it is important to calculate the amount of water required to leach the accumulated salts from the rootzone.

Assuming that the amount of evaporation can be ignored during the leaching process and that the soil moisture has been restored to field capacity so that the leaching efficiency coefficient fi equals fr, the depth of leaching water can be calculated if the leaching coefficients are known. Under the assumption that there is no rainfall, the following equation can be used to calculate the depth of leaching water required (Van Hoorn and Van Alphen, 1994):

fqt = Wfc ln[(ECe0 - ECeq)/(ECet - ECeq)] (19)

where:

f = leaching efficiency coefficient (-);
q = vertical flow rate (mm/d);
t = time required to leach the accumulated salts (d);
Wfc = moisture content of the rootzone at field capacity (mm);
ECe0 = salinity in the rootzone at the start of the calculation period (dS/m);
ECet = the desired salinity in the rootzone at the end of the period (dS/m); and
ECeq = equilibrium rootzone salinity of the leaching water, approximately 0.5 ECI (dS/m).

Calculation example of the impact of shallow water table management on salinity buildup and leaching requirement

In the drainage pilot study area, the water table is normally shallow in winter, crop evapotranspiration is low and irrigation water supplies are limited. It is therefore ideal to practise shallow groundwater table management in winter, when wheat is the main crop. Total crop evapotranspiration for wheat is 385 mm. The total seepage inflow during the wheat-growing season is 0.5 mm/d * 150 d = 75 mm. The salinity of the seepage inflow is 5 dS/m and the salinity of the irrigation water is 1.0 dS/m. The water table depth (WT) at the start of the season is 0.7 m. The readily available soil moisture for the silty soil, as a volume fraction (θ), is 0.191. The drainable pore space (μ) is 0.15. The soil moisture content in the rootzone at field capacity (θfc) is 0.36.

It is assumed that all farmers in the area will practise shallow groundwater table management and that no additional seepage into the area is induced. Under these assumptions, the following steps can be followed to calculate the change in groundwater table depth and the contribution of capillary rise (G = qt) towards crop evapotranspiration:

Step 1. Calculate z, which is WT minus half of the depth of the rootzone (Drz).

Step 2. Estimate the initial equilibrium θ (Table 3, Annex 5) for the calculated z. If the change in soil moisture content (ΔW) = 0, then θ at the start of the next calculation period is equivalent to the equilibrium θ for the calculated z. At equilibrium conditions h = z. Otherwise, θ = equilibrium θ - (ΔW/Drz). If θ drops below the readily available soil moisture, irrigation is required.

Step 3. Estimate the maximum q by using Figure 31.

Step 4. Calculate the maximum G, which is the maximum qt.

Step 5. Check whether the maximum G can cover the water uptake by crops. If maximum G minus ETcrop is positive, then the actual G is equivalent to ETcrop. Otherwise, the actual G is equivalent to the maximum G.

Step 6. Calculate the change in soil moisture content (ΔW) = (ETcrop - actual G).

Step 7. Calculate the drop in groundwater table (Δh) = ((actual G - Si)/μ).

Step 8. The WT in the succeeding month is the WT of the calculated month plus Δh.

Step 9. Repeat the previous steps for all the months in the calculation period.
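Steps 1-9 reduce to a short monthly loop. In the sketch below, the maximum G values are those read from Figure 31 for this example (the "> 30" months never limit uptake, so any value above the monthly ETcrop works for Step 5); μ = 0.15 and Si = 1.5 cm/month as given:

```python
mu, Si = 0.15, 1.5                      # drainable pore space (-), seepage (cm/month)
ETcrop = [4.3, 5.5, 10.1, 11.0, 7.6]    # Dec..Apr (cm)
maxG   = [30.0, 30.0, 22.5, 6.0, 3.0]   # capillary supply limit (cm), from Figure 31

WT, dW = [70.0], 0.0                    # water table depth (cm), moisture deficit (cm)
for et, gmax in zip(ETcrop, maxG):
    G = min(et, gmax)                   # Step 5: actual capillary contribution
    dW += et - G                        # Step 6: change in soil moisture content
    WT.append(WT[-1] + (G - Si) / mu)   # Steps 7-8: monthly water table drop

# WT ends near 213 cm; total ΔW is 9.6 cm, matching the summary matrix.
```

The loop omits Step 2's equilibrium-θ check, which in this example only confirms that irrigation is not needed.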

The matrix below presents a summary of the calculated changes in groundwater table depth and the contribution of the capillary flow towards crop evapotranspiration.


                   Dec    Jan    Feb    Mar    Apr    Total
WT (cm)             70     89    116    173    203    213
Drz (cm)            50     80    100    100    100
z (cm)              45     49     66    123    153
θ (for h = z)      0.42   0.42   0.41   0.36   0.31
maximum q (cm/d)  > 1.0  > 1.0   0.75   0.2    0.1
maximum G (cm)    > 30   > 30   22.5    6      3     > 91.5
Si (cm)            1.5    1.5    1.5    1.5    1.5    7.5
ETcrop (cm)        4.3    5.5   10.1   11.0    7.6   38.5
actual G (cm)      4.3    5.5   10.1    6.0    3.0   28.9
ΔW (cm)            0      0      0     -5.0   -4.6   -9.6
Δh (cm)            19     27     57     30     10    143

(The WT value in the Total column is the water table depth at the end of April.)

Figure 32. Change in water table during the winter season in the drainage pilot study area

The calculation example shows that the water table drops from 70 cm to 213 cm below the soil surface (Figure 32). The soil moisture content decreases towards the end of the growing season but does not fall below the readily available soil moisture. Irrigation is therefore not required.

Assuming that the shallow groundwater salinity is equivalent to the salinity of the seepage inflow into the area, the change in salt storage in the rootzone over the winter season is:

ΔS = 289 mm * 5 dS/m = 1 445 ECmm

If the salinity (ECe) at the start of the season was 2 dS/m, the salinity at the end of the season would be:

Send = Sstart + ΔS
Send = 2 * 2 * 360 + 1 445 = 2 885 ECmm
ECe end = 2 885/(2 * 360) = 4.0 dS/m
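This winter salt balance follows from Equation 18 in a few lines, with Wfc = θfc × rootzone depth = 0.36 × 1 000 mm and S = ECsw·Wfc, where ECsw = 2·ECe:

```python
G, EC_gw = 289.0, 5.0       # seasonal capillary rise (mm), groundwater salinity (dS/m)
Wfc = 0.36 * 1000.0         # moisture content of the 1 m rootzone at field capacity (mm)

dS = G * EC_gw              # Equation 18: salt gained from capillary rise (ECmm)
S_start = 2 * 2.0 * Wfc     # ECe = 2 dS/m at the start of the season
S_end = S_start + dS
ECe_end = S_end / (2 * Wfc) # back to ECe, ~4.0 dS/m
```

Doubling the season's capillary rise, or starting from a more saline profile, raises ECe_end proportionally, which in turn raises the leaching requirement computed next.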

Equation 19 can be used to calculate the leaching requirements. If it is assumed that the soil salinity should be lowered to 2 dS/m again before the sowing of cotton, the total leaching requirement is:

qt = 339 mm

This leaching might be achieved through normal irrigation in the following season and may not require any special leaching irrigation, unless the salts have actually accumulated in the seed zone.

Land retirement

Land retirement involves taking land out of production because of water shortage and/or soil and water quality considerations. Elevated concentrations of toxic trace elements, or extremely high salt concentrations in combination with waterlogging problems, may justify retirement. By retiring salt-affected land, the total salt or toxic trace element load to be disposed of is reduced. Moreover, land retirement conserves irrigation water for reallocation to other beneficial uses such as public water supply and wetlands. However, in California, the United States of America, water districts would generally prefer to retain the conserved water for application to other lands within the district, preferably on non-problem soils. In the water-short Coleambally area in Australia, there is a policy to retain a minimum amount of water (2 000 m3/ha) to keep retired lands from becoming salinized, so that retired lands do not become a discharge zone (Christen, E.W., personal communication, 2001). If problem soils are encountered in the planning stages of new irrigation projects, it might be decided to leave these uncultivated. Excluding land during the planning stages is comparable to land retirement in developed projects. Where large contiguous blocks of land are retired, native vegetation and wildlife can be sustained.

Hydrologic, soil and biologic considerations

Land retirement appears to be an attractive solution to the drainage problem. However, analysis of the hydrologic, biologic and soil consequences indicates that land retirement is complex. For example, in the western side of the San Joaquin Valley (SJVDIP, 1999c), the waterlogged areas with high-selenium shallow groundwater are located downslope, in the lower part of the alluvial fans and in the trough of the valley. The water table in upslope locations is deep, and lateral subsurface flow of deep percolation from upslope areas contributes to downslope drainage problems. Simulations with a two-dimensional hydrologic model (Purkey, 1998) ascertained the following:

A land retirement programme can lead to many wildlife benefits by reducing the load of toxic trace elements and salts discharged into the environment, especially if there is connectivity between retired land parcels for the restoration of native animals and plants. However, land retirement may also have negative effects, such as upwelling of saline shallow groundwater leading to excessive accumulation of trace elements and salts on the land surface, and the establishment of undesirable weed communities (SJVDIP, 1999c). Excessive salt accumulation might leave little or no vegetation cover, causing wind-blown salt and selenium problems downwind. Therefore, retired lands require land, vegetation and water management in order to deliver wildlife benefits.

Selection of lands to retire

The potential reductions in drainage water effluent, salinity and toxic trace elements are the main criteria for selecting lands to retire. For example, in the San Joaquin Valley, land areas generating selenium concentrations of more than 200 ppb are prime candidates for land retirement, and those generating more than 50 ppb are secondary candidates. Where reduction of drainage effluent is the major goal, retiring undrained upslope land would provide the greatest long-term relief, while retiring downslope waterlogged land would reduce the drainage volume to a greater extent in the short term.

One of the major concerns in land retirement is the accumulation of salts and trace elements at or near the soil surface destroying the vegetation cover and thus endangering its long-term sustainability. In this respect, it is important to make a distinction between recharge and discharge areas (Figure 33).

Figure 33. Recharge and discharge areas

It is relatively easy to differentiate between recharge and discharge areas in hilly and undulating terrain: recharge occurs in the uplands and discharge in the lowlands and valley bottoms. In irrigated lands, recharge and discharge areas are more difficult to identify. Before an irrigation event in waterlogged lands, the land acts as a discharge area under the influence of evapotranspiration, while just after irrigation it turns into a recharge area. For longer-term trends, it is desirable to ascertain the directions of groundwater flow pathways in order to identify problem sites. Retirement of irrigated land and re-establishment of natural vegetation cover in irrigated flat plains will result in the creation of localized discharge areas.

Experience from Australia (Heuperman, 2000) shows that re-vegetation with salt tolerant crops in recharge areas with high groundwater tables is sustainable from a salinity point of view. It also reduces the recharge to the groundwater, thereby relieving lateral inflow in discharge areas. The plants in these areas rely on surface water input for evapotranspiration, which is either rainfall or irrigation. Land retirement in discharge areas with water tables within the critical water table depth combined with the introduction of (natural) vegetation will lead to the accumulation of salts in the vegetation's rootzone. Land retirement under these conditions is not sustainable if accumulated salts are not flushed down occasionally. Adequate drainage is required for the removal of saline drainage effluent.

Management of retired lands

The management of retired lands differs for upland and lowland conditions. In the uplands, the water table is deep and the vegetation depends on surface water input to meet the crop water requirement. In lowland areas, a shallow saline groundwater table normally lies beneath the retired land. Maintenance of the water and salt balance for both uplands and lowlands depends on the water requirement and salt tolerance of the vegetation. Annex 6 presents salt tolerance data for trees and shrubs that are potentially suitable for retired lands.

It is sometimes suggested that the planting of salt-tolerant trees, shrubs and fodder crops may help to attain a positive salt balance. Chemical analysis of the dry matter of crops shows that 5 percent of a plant's dry weight consists of mineral constituents (Palladin, 1992; Pandey and Sinha, 1995). Kapoor (2000) assumes that these mineral constituents are derived from the soil solution. In this case, the annual biomass produced per hectare by salt-tolerant crops, multiplied by the percentage mineral content of the plant, gives the total annual salt extraction per hectare of salt-tolerant vegetation. The calculation example provided by Kapoor (2000) shows a favourable salt balance for an area where the only source of salt is imported surface water with a low salt concentration (86 mg/litre). However, studies carried out in California and Australia, where salinity levels are considerably higher, provide no real evidence that salt-tolerant crops remove substantial amounts of salts from the soil water solution (Heuperman, 2000; Chauhan, 2000). Therefore, the water and salt balance equations used for conventional agricultural crops also apply to maintaining favourable water and salt balances for the salt- and waterlogging-tolerant vegetation used on retired land.
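The salt-export reasoning can be illustrated with a rough calculation in the spirit of Kapoor (2000). The biomass yield and applied water depth below are assumed values for illustration only, not figures from the cited study; only the 5 percent mineral fraction and the 86 mg/litre supply salinity come from the text.

```python
# Illustrative comparison of salt exported in harvested biomass versus salt
# imported with irrigation water. Yield and water volume are assumptions.
dry_biomass = 20_000      # kg/ha per year, assumed yield of a salt-tolerant crop
mineral_fraction = 0.05   # 5 percent of dry weight is mineral constituents
salt_removed = dry_biomass * mineral_fraction   # 1000 kg/ha per year

irrigation = 10_000       # m3/ha per year, assumed applied water
tds = 86                  # mg/litre = g/m3, supply salinity from Kapoor's example
salt_imported = irrigation * tds / 1000         # 860 kg/ha per year

# With a low-salinity supply, plant uptake can exceed the imported load;
# at the far higher salinities reported for California and Australia,
# the imported load dwarfs what the vegetation can remove.
print(salt_removed, salt_imported)
```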


[1] High water table results from an imbalance in the water balance - water is being applied to the surface at a rate that exceeds the carrying capacity of the groundwater system, thereby raising groundwater levels. In groundwater management, groundwater pumping is increased in order to remove the excess groundwater and lower the water table (SJVDIP, 1999f). Application of this option needs substantial knowledge of groundwater systems and hydrology, which is beyond the scope of this publication.
[2] The quantity ECmm requires some explanation. The parameter S is the mass of salts obtained from the product of salt concentration and water volume per area. For the sake of convenience, Van Hoorn and Van Alphen (1994) chose to use EC instead of TDS in grams per litre. The unit millimetre equals litre per square metre. Thus, the parameter S corresponds with the amount of salt in grams per square metre.
