
RAINFALL AND FLOOD FREQUENCY ANALYSIS

PRAVEEN THAKUR
WRD, IIRS DEHRADUN

Hydrological processes are generally chance- and time-dependent processes.

Probabilistic modeling considers only the probability of occurrence of an event of a given magnitude and uses probability theory for decision making.

Probabilistic modeling, or frequency analysis, is one of the earliest and most frequently used applications of statistics in hydrology.

Early applications of frequency analysis were largely in the area of flood flow estimation, but today nearly every phase of hydrology is subjected to frequency analysis.

It involves identifying the specific probability distribution which the event is likely to follow and then evaluating the parameters of that distribution using the available data of the events to be modeled.

Information on flood magnitudes and their frequencies is needed for the design of hydraulic structures and for flood management purposes, such as:

Dams,
Spillways,
Road and railway bridges,
Culverts,
Urban drainage systems,
Flood plain zoning,
Economic evaluation of flood protection projects, etc.

CRITERIA FOR CLASSIFICATION OF DAMS BASED ON SIZE AND HYDRAULIC HEAD

Classification   Gross storage (million cubic meters)   Hydraulic head (meters)
Small            Between 0.5 and 10                     Between 7.5 and 12
Intermediate     Between 10 and 60                      Between 12 and 30
Large            Greater than 60                        Greater than 30

CRITERIA FOR CLASSIFICATION OF DAMS BASED ON SIZE AND HYDRAULIC HEAD

Classification   Design flood
Small            100-Year Flood
Intermediate     Standard Project Flood (SPF)
Large            Probable Maximum Flood (PMF)

The Broad Area of Flood Frequency Analysis Has Been Covered in the Light of the Following Topics:

Definitions
Assumptions and data requirement
Plotting positions
Commonly used distributions in flood frequency analysis
Parameter estimation techniques
Goodness of fit tests
Estimation of the T-year flood and confidence limits

DEFINITIONS

a) Peak Annual Discharge: The peak annual discharge is defined as the highest instantaneous volumetric rate of discharge during a year.
b) Annual Flood Series: The annual flood series is the sequence of the peak annual discharges for each year of the record.
c) Design Flood: The design flood is the maximum flood which a structure can safely pass. It is the flood adopted to control the design of the structure.

d) Recurrence Interval or Return Period: The return period is the time that elapses, on average, between two events that equal or exceed a particular level. For example, a T-year flood will be equaled or exceeded on average once in T years; equivalently, it has a probability of 1/T of being equaled or exceeded in any given year.
e) Partial Flood Series: The partial flood series consists of all recorded floods above a particular threshold, regardless of the number of such floods occurring each year.

f) Mean: The mean is a measure of central tendency; other measures of central tendency are the median and the mode. The arithmetic mean is the most commonly used measure of central tendency and is given by

$\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i$     (1)

where $x_i$ is the i-th variate and $N$ is the total number of observations.

g) Standard Deviation: An unbiased estimate of the standard deviation ($S_x$) is given by

$S_x = \left[\frac{1}{N-1}\sum_{i=1}^{N}(x_i - \bar{x})^2\right]^{0.5}$     (2)

The standard deviation is a measure of the variability of a data set. The standard deviation divided by the mean is called the coefficient of variation ($C_v$) and is generally used as a regionalization parameter.

h) Coefficient of Skewness ($C_s$): The coefficient of skewness measures the asymmetry of the frequency distribution of the data, and an unbiased estimate of $C_s$ is given by

$C_s = \frac{N\sum_{i=1}^{N}(x_i - \bar{x})^3}{(N-1)(N-2)\,S_x^3}$     (3)
i) Coefficient of Kurtosis ($C_k$): The coefficient of kurtosis measures the peakedness or flatness of the frequency distribution near its centre, and an unbiased estimate of it is given by

$C_k = \frac{N^2\sum_{i=1}^{N}(x_i - \bar{x})^4}{(N-1)(N-2)(N-3)\,S_x^4}$     (4)
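The statistics of Eqs. (1)-(4) can be computed directly from an annual flood series. Below is a minimal sketch in Python, assuming the unbiased estimators defined above; the short input series is purely illustrative.

```python
# Unbiased sample statistics used in flood frequency analysis (Eqs. 1-4).
def sample_stats(x):
    n = len(x)
    mean = sum(x) / n                                          # Eq. (1): arithmetic mean
    sx = (sum((xi - mean) ** 2 for xi in x) / (n - 1)) ** 0.5  # Eq. (2): standard deviation
    cs = (n * sum((xi - mean) ** 3 for xi in x)
          / ((n - 1) * (n - 2) * sx ** 3))                     # Eq. (3): coefficient of skewness
    ck = (n ** 2 * sum((xi - mean) ** 4 for xi in x)
          / ((n - 1) * (n - 2) * (n - 3) * sx ** 4))           # Eq. (4): coefficient of kurtosis
    return mean, sx, cs, ck

# Illustrative annual peak values only; any real series would be used instead.
peaks = [89.44, 84.54, 86.74, 87.24, 87.64, 88.20, 88.49]
mean, sx, cs, ck = sample_stats(peaks)
print(mean, sx, cs, ck)
```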

j) Probability Paper: A probability paper is a specially designed paper on which the ordinate represents the magnitude of the variable and the abscissa represents the probability of exceedance or non-exceedance. The probability of exceedance, $\Pr(X \ge x)$, the probability of non-exceedance, $\Pr(X < x)$, and the return period $T$ are related as

$\Pr(X \ge x) = 1 - \Pr(X < x)$
$\Pr(X \ge x) = 1/T$

Plotting position formulae are used to assign a probability of exceedance to a particular event.

ASSUMPTIONS AND DATA REQUIREMENT

Assumptions:
The following three assumptions are implicit in frequency analysis:
The data to be analyzed describe random events.
The natural process of the variable is stationary with respect to time.
The population parameters can be estimated from the sample data.

Data Requirement:
For flood frequency analysis either the annual flood series or the partial duration flood series may be used. The requirements with regard to data are that:
a) Data should be relevant,
b) Data should be adequate, and
c) Data should be accurate.

The term relevant means that the data must deal with the problem at hand. For example, if the problem concerns the duration of flooding, the data series should represent the duration of flows in excess of some critical value. If the problem concerns the interior drainage of an area, the data series must consist of the volume of water above a particular threshold.

The term adequate primarily refers to the length of the data. The required record length depends primarily on the variability of the data, and hence there is no firm guideline for the length of data to be used for frequency analysis. Generally, a length of 30-35 years is considered adequate for flood frequency analysis.

The term accurate refers primarily to the homogeneity of the data and the accuracy of the discharge figures. The data used for analysis should not be affected by man-made changes. Changes in the stage-discharge relationship may render stage records non-homogeneous and unsuitable for frequency analysis. It is therefore preferable to work with discharges; if stage frequencies are required, the most recent rating curve is used.

FLOOD FREQUENCY ANALYSIS METHODS:

PLOTTING POSITIONS
NORMAL DISTRIBUTION
LOG NORMAL DISTRIBUTION
WEIBULL DISTRIBUTION
EXPONENTIAL DISTRIBUTION
PEARSON TYPE-III DISTRIBUTION
LOG PEARSON TYPE-III DISTRIBUTION
GUMBEL'S DISTRIBUTION

FLOOD GAUGE DATA AT SISAPATHAR SITE

Year   Peak (m)   Flood level, descending order (m)   Rank (m)   Return period T (years)
1974   89.44      89.440                               1         22.000
1975   84.54      88.490                               2         11.000
1976   86.74      88.200                               3          7.333
1977   87.24      87.840                               4          5.500
1978   87.24      87.640                               5          4.400
1979   87.64      87.550                               6          3.667
1980   87.21      87.510                               7          3.143
1981   87.84      87.490                               8          2.750
1982   87.37      87.470                               9          2.444
1983   87.42      87.420                              10          2.200
1984   87.51      87.370                              11          2.000
1985   87.10      87.370                              12          1.833
1986   86.90      87.240                              13          1.692
1987   87.24      87.240                              14          1.571
1988   87.49      87.240                              15          1.467
1989   88.20      87.220                              16          1.375
1990   88.49      87.210                              17          1.294
1991   87.22      87.100                              18          1.222
1992   87.55      86.900                              19          1.158
1993   87.47      86.740                              20          1.100
1994   87.37      84.540                              21          1.048

Total  1835.220
Mean     87.391
S.D.      0.877

Plotting position: P = m/(N+1) and T = 1/P (the Weibull formula), where m is the rank and N = 21 is the number of years of record.
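As a cross-check, the ranking and return-period columns of the table above can be reproduced with the Weibull plotting-position formula P = m/(N+1), T = 1/P. A minimal sketch in Python, using the peak gauge levels listed above:

```python
# Weibull plotting positions for the Sisapathar annual flood series (peak gauge levels, m).
peaks = [89.44, 84.54, 86.74, 87.24, 87.24, 87.64, 87.21, 87.84, 87.37, 87.42, 87.51,
         87.10, 86.90, 87.24, 87.49, 88.20, 88.49, 87.22, 87.55, 87.47, 87.37]

ordered = sorted(peaks, reverse=True)      # flood levels in descending order
N = len(ordered)                           # N = 21 years of record
for m, level in enumerate(ordered, start=1):
    P = m / (N + 1)                        # probability of exceedance
    T = 1.0 / P                            # return period (years)
    print(f"{m:2d}  {level:7.3f}  {P:6.3f}  {T:7.3f}")
```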

[Figure: Rating curve at the Sisapathar gauge site, flood gauge level (m) versus discharge (cumecs)]

The extreme value distribution was introduced by Gumbel (1941) and is known as Gumbel's distribution. It is a widely used probability distribution for extreme values in hydrologic and meteorologic studies, for example for the prediction of flood peaks, maximum rainfalls, maximum wind speeds, etc.

According to his theory of extreme events, the probability of occurrence of an event equal to or larger than a value $x_0$ is

$P(X \ge x_0) = 1 - e^{-e^{-y}}$

or

$y_P = -\ln[-\ln(1 - P)]$

or

$y_T = -\ln\left[\ln\left(\frac{T}{T-1}\right)\right]$
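The reduced variate can be evaluated for any return period directly from the last relation, for example y_T is about 0.3665 for T = 2 years and about 4.60 for T = 100 years. A quick sketch in Python:

```python
# Gumbel reduced variate y_T = -ln[ln(T/(T-1))] for selected return periods.
import math

for T in (2, 10, 25, 50, 100):
    yT = -math.log(math.log(T / (T - 1)))
    print(f"T = {T:3d} years  ->  y_T = {yT:.4f}")
```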

The basic equation used in Gumbel's method is

$x_T = \bar{x} + K \cdot S_x$

where:
$x_T$ = value of the variate with a return period T
$\bar{x}$ = mean of the variate
$S_x$ = standard deviation of the sample
$K$ = frequency factor, expressed as $K = \dfrac{y_T - \bar{y}_n}{S_n}$
$y_T$ = reduced variate, expressed as $y_T = -\ln\left[\ln\left(\dfrac{T}{T-1}\right)\right]$
$T$ = return period
$\bar{y}_n$ = reduced mean, taken from tables
$S_n$ = reduced standard deviation, taken from tables.
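A minimal sketch of this computation in Python, using the mean and standard deviation of the Sisapathar series from the table above. The reduced mean and reduced standard deviation are read from Gumbel's tables for the record length N; the values below are approximate for N = 21 and are used here only for illustration.

```python
# Gumbel's method: x_T = mean + K * S_x, with K = (y_T - y_n) / S_n.
import math

mean_x, sd_x, N = 87.391, 0.877, 21   # Sisapathar gauge-level statistics (from the table)
yn, Sn = 0.5252, 1.0696               # reduced mean / reduced std. dev. (approx. table values, N = 21)

def gumbel_xT(T):
    yT = -math.log(math.log(T / (T - 1)))   # reduced variate for return period T
    K = (yT - yn) / Sn                      # frequency factor
    return mean_x + K * sd_x                # T-year flood level (m)

for T in (10, 50, 100):
    print(f"T = {T:3d} years  ->  x_T = {gumbel_xT(T):.2f} m")
```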

The basic equation used in the Log Pearson Type III method is

$z_T = \bar{z} + K_z \cdot S_z$

where:
$z$ = log-transformed variate, $z = \log x$
$K_z$ = frequency factor, taken from tables using the coefficient of skewness $C_s$ and the return period T
$S_z$ = standard deviation of the z-variate sample
$C_s$ = coefficient of skewness of the variate z,

$C_s = \dfrac{N \sum (z - \bar{z})^3}{(N-1)(N-2)\,S_z^3}$

$\bar{z}$ = mean of the z values
$N$ = sample size (number of years of record)

and $x_T = \text{antilog}(z_T)$.
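A minimal sketch of this method in Python. $K_z$ is normally read from published frequency-factor tables; here a common series approximation for the Pearson Type III frequency factor (as given, for example, in Chow et al.) is substituted, and the discharge series is hypothetical, for illustration only.

```python
# Log Pearson Type III: z = log10(x), z_T = z_mean + K_z * S_z, x_T = 10**z_T.
import math
from statistics import NormalDist

def lp3_xT(x, T):
    z = [math.log10(v) for v in x]
    n = len(z)
    zm = sum(z) / n                                                           # mean of z
    sz = (sum((zi - zm) ** 2 for zi in z) / (n - 1)) ** 0.5                   # std. dev. of z
    cs = n * sum((zi - zm) ** 3 for zi in z) / ((n - 1) * (n - 2) * sz ** 3)  # skew of z
    w = NormalDist().inv_cdf(1 - 1 / T)     # standard normal variate for exceedance prob. 1/T
    k = cs / 6.0
    Kz = (w + (w**2 - 1) * k + (w**3 - 6 * w) * k**2 / 3
          - (w**2 - 1) * k**3 + w * k**4 + k**5 / 3)                          # frequency factor (approx.)
    return 10 ** (zm + Kz * sz)             # antilog gives the T-year flood

# Hypothetical annual peak discharges (cumecs), for illustration only.
flows = [650, 480, 520, 710, 390, 560, 610, 450, 500, 580]
print(round(lp3_xT(flows, 100), 1))
```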

Source: Cees van Westen, ITC

THANKS
