
CURVE FITTING

This part describes techniques to fit curves to data in order to obtain intermediate estimates, and also to obtain a simplified version of a complicated function.
One way to do this is to compute values of the function at a number of discrete points along the range of interest. A simpler function may then be derived to fit these values. Both of these applications are known as curve fitting.
There are two general approaches to curve fitting: least-squares regression and interpolation.
The strategy of least-squares regression is to derive a single curve that represents the general trend of the data.
The basic approach of interpolation is to fit a curve, or a series of curves, that passes directly through each of the data points, and then to estimate values between the well-known discrete points.
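The interpolation strategy described above can be sketched with a minimal example; the data points, query values, and function name below are illustrative assumptions, not part of the original notes.

```python
# Minimal sketch of interpolation between known discrete points:
# estimate f(x) by connecting neighboring data points with straight lines.
def linear_interpolate(points, x):
    """Estimate y at x from sorted (x, y) pairs by linear interpolation."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            # Straight line through (x0, y0) and (x1, y1), evaluated at x
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x is outside the range of the data")

pts = [(1.0, 2.0), (2.0, 3.0), (4.0, 7.0)]
print(linear_interpolate(pts, 3.0))  # midway between (2, 3) and (4, 7) -> 5.0
```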

[Figure: plots of f(x) versus x for three approaches — (a) least-squares regression, (b) linear interpolation, (c) curvilinear interpolation]
CURVE FITTING AND ENGINEERING PRACTICE
Two types of applications are generally encountered when fitting experimental data: trend analysis and hypothesis testing.
Trend analysis represents the process of using the pattern of the data to make predictions for values of the dependent variable.
This can involve extrapolation beyond the limits of the observed data or interpolation within the range of the data.
Hypothesis testing
Here an existing mathematical model is compared with measured data to test the adequacy of the model.
Finally, curve-fitting techniques can be used to derive simple functions to approximate complicated functions.
Simple Statistics
The descriptive statistics most often selected represent:
1. The location of the center of the distribution of the data
2. The degree of spread of the data set
The most common location statistic is the mean, ȳ:

ȳ = (Σ y_i) / n , with the summation running from i = 1 to n.

The most common measure of spread for a sample is the standard deviation (s_y) about the mean:

s_y = sqrt( S_t / (n − 1) ) ; where S_t = Σ (y_i − ȳ)²

S_t is the total sum of the squares of the residuals between the data points and the mean.
The spread can also be represented by the square of the standard deviation, which is called the variance:

s_y² = S_t / (n − 1)
Coefficient of variation (C.V.)
: a final statistic that has utility in quantifying the spread of data
: the ratio of the standard deviation to the mean

C.V. = (s_y / ȳ) × 100%

Relative error
: the ratio of a measure of error to an estimate of the true value
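The statistics above can be computed directly from their definitions; this is a minimal sketch, and the sample values are hypothetical measurements chosen for illustration.

```python
import math

# Hypothetical sample of repeated measurements (illustrative values)
y = [6.395, 6.435, 6.485, 6.495, 6.505, 6.515, 6.555, 6.625]
n = len(y)

# Mean: y_bar = (sum of y_i) / n
y_bar = sum(y) / n

# Total sum of squares of the residuals about the mean: S_t = sum((y_i - y_bar)^2)
S_t = sum((yi - y_bar) ** 2 for yi in y)

# Sample standard deviation: s_y = sqrt(S_t / (n - 1))
s_y = math.sqrt(S_t / (n - 1))

# Variance: the square of the standard deviation
variance = s_y ** 2

# Coefficient of variation: ratio of standard deviation to mean, in percent
cv = s_y / y_bar * 100

print(y_bar, s_y, variance, cv)
```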



The Normal Distribution
The data distribution is the shape with which the data is spread around the mean.
If a quantity is normally distributed, the range defined by ȳ − s_y to ȳ + s_y will encompass approximately 68% of the measurements.
Similarly, the range defined by ȳ − 2s_y to ȳ + 2s_y will encompass approximately 95%.

SCOPE
1. LEAST-SQUARES REGRESSION
- Linear Regression
- Polynomial Regression
- Multiple Regression
2. INTERPOLATION
- Newton's Polynomials
- Lagrange Polynomials
- Splines
3. CASE STUDIES
-
-
I. Least-Squares Regression
[Figure: plots of y versus x for three cases — (a) data exhibiting significant error, (b) a polynomial fit oscillating beyond the range of the data, (c) a more satisfactory result using the least-squares fit]

1.1 Linear Regression
The simplest example of a least-squares approximation is fitting a straight line to a set of paired observations: (x1, y1), (x2, y2), ..., (xn, yn).
The mathematical expression for the straight line is

y = a0 + a1 x + E . . . (1)

where a0 and a1 are coefficients representing the intercept and the slope, respectively, and E is the error, or residual, between the model and the observations:

E = y − a0 − a1 x . . . (2)

So the residual is the discrepancy between the true value of y and the approximate value a0 + a1 x predicted by the linear equation.


1.1.1 Criteria for a "Best" Fit
One strategy for fitting a "best" line through the data would be to minimize the sum of the residual errors, as in

Σ E_i = Σ (y_i − a0 − a1 x_i) , i = 1, ..., n . . . (3)

Another criterion would be to minimize the sum of the absolute values of the discrepancies, as in

Σ |E_i| = Σ |y_i − a0 − a1 x_i| . . . (4)

A third strategy for fitting a best line is the minimax criterion, in which the line is chosen to minimize the maximum residual.
A strategy that overcomes the shortcomings of these approaches is to minimize the sum of the squares of the residuals, S_r:

S_r = Σ E_i² = Σ (y_i − a0 − a1 x_i)² . . . (5)

This criterion yields a unique line for a given set of data.
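Criterion (5) can be illustrated numerically. The data values below are hypothetical, and the closed-form expressions for a1 and a0 are the standard normal-equation solution for the least-squares line (typically derived in the section that follows); this is a sketch, not the notes' own derivation.

```python
# Illustrative paired observations (hypothetical data)
xs = [1, 2, 3, 4, 5, 6, 7]
ys = [0.5, 2.5, 2.0, 4.0, 3.5, 6.0, 5.5]
n = len(xs)

# Sums needed by the standard closed-form least-squares solution
sx, sy = sum(xs), sum(ys)
sxy = sum(x * y for x, y in zip(xs, ys))
sxx = sum(x * x for x in xs)

a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
a0 = sy / n - a1 * sx / n                       # intercept

def S_r(a0, a1):
    """Sum of the squares of the residuals, equation (5)."""
    return sum((y - a0 - a1 * x) ** 2 for x, y in zip(xs, ys))

best = S_r(a0, a1)
# Perturbing the coefficients in any direction increases S_r,
# consistent with the least-squares line being the unique minimizer.
print(a0, a1, best)
```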

You might also like