2. In numerical analysis, the secant method is a root-
finding algorithm that uses a succession of roots of
secant lines to better approximate a root of a function
f. The secant method can be thought of as a finite-
difference approximation of Newton's method.
However, the method was developed independently of
Newton's method and predates it by over 3000 years.
The secant method is considered one of the most
effective approaches for finding the root of a non-linear
function. It is generalized from the Newton-Raphson
method but does not require obtaining the derivatives
of the function, so it is generally used as an
alternative to the Newton-Raphson method.
3. No. of initial guesses – 2
Type – open bracket
Rate of convergence – faster
Convergence – superlinear
Accuracy – good
Approach – interpolation
Programming effort – tedious
4. 1. Start
2. Get values of x0, x1 and e. (Here x0 and x1 are the two
initial guesses; e is the stopping criterion, i.e. the absolute
error or the desired degree of accuracy.)
3. Compute f(x0) and f(x1)
4. Compute x2 = [x0*f(x1) − x1*f(x0)] / [f(x1) − f(x0)]
5. Test the accuracy of x2: if |(x2 − x1)/x2| > e (here | |
denotes absolute value), assign x0 = x1 and x1 = x2 and
go to step 4; else go to step 6
6. Display the required root as x2
7. Stop
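The steps above can be sketched in Python (a minimal sketch; the function name and the sample equation are illustrative, not from the source):

```python
def secant(f, x0, x1, e=1e-10, max_iter=100):
    """Secant method following the algorithm above.

    x0, x1 : the two initial guesses
    e      : stopping criterion on the relative error |(x2 - x1)/x2|
    """
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:                      # method fails when f(x0) == f(x1)
            raise ZeroDivisionError("f(x0) == f(x1); secant step undefined")
        x2 = (x0 * f1 - x1 * f0) / (f1 - f0)   # step 4
        if abs((x2 - x1) / x2) <= e:      # accuracy test from step 5
            return x2
        x0, x1 = x1, x2                   # shift the guesses and repeat
    raise RuntimeError("did not converge within max_iter iterations")

# Illustrative use: root of x^3 - x - 2 (root ≈ 1.5214)
root = secant(lambda x: x**3 - x - 2, 1.0, 2.0)
```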
6. Assume x0 and x1 to be the initial guess values, and
construct a secant line to the curve through the points
(x0, f(x0)) and (x1, f(x1)). The equation of this secant line is
given by
y − f(x1) = [(f(x1) − f(x0)) / (x1 − x0)] * (x − x1)
If x is the root of the given equation, it must satisfy
f(x) = 0, i.e. y = 0. Substituting y = 0 in the above
equation and solving for x, we get:
x = x1 − f(x1) * (x1 − x0) / [f(x1) − f(x0)]
7. Now, taking this new x as x2, and repeating the
same process to obtain x3, x4, . . . , we end up with the
following recurrence:
xn+1 = xn − f(xn) * (xn − xn-1) / [f(xn) − f(xn-1)]
This is the required formula, which can also be used in
MATLAB.
8. The iterates xn of the secant method converge to a root of f if
the initial values x0 and x1 are sufficiently close to the root. The
order of convergence is φ, the golden ratio:
φ = (1 + √5) / 2 ≈ 1.618
In particular, the convergence is superlinear, but not quite
quadratic. This result holds only under some technical
conditions, namely that f be twice continuously differentiable
and the root in question be simple. If the initial values are not
close enough to the root, there is no guarantee that the
secant method converges. For example, if f is differentiable on
that interval and there is a point where f′ = 0 on the interval,
then the algorithm may not converge.
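The superlinear behavior can be observed numerically (a sketch; the test function x² − 2 and the iteration count are illustrative): successive errors shrink by an ever-larger factor rather than a fixed linear rate.

```python
import math

def secant_iterates(f, x0, x1, n):
    """Return x0, x1 and the next n secant iterates."""
    xs = [x0, x1]
    for _ in range(n):
        f0, f1 = f(xs[-2]), f(xs[-1])
        xs.append(xs[-1] - f1 * (xs[-1] - xs[-2]) / (f1 - f0))
    return xs

# Root of x^2 - 2 is sqrt(2); the error ratio e_{n+1}/e_n shrinks toward 0,
# the signature of superlinear (order-phi) convergence.
root = math.sqrt(2.0)
xs = secant_iterates(lambda x: x * x - 2.0, 1.0, 2.0, 6)
errors = [abs(x - root) for x in xs]
```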
9. The secant method does not always converge. The
false position method (or regula falsi) uses the same
formula as the secant method. However, it does not
apply the formula to xn-1 and xn-2, like the secant
method, but to xn-1 and the last iterate xk such
that f(xk) and f(xn-1) have opposite signs. This means
that the false position method always converges. The
recurrence formula of the secant method can also be
derived from the formula for Newton's method.
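A minimal regula falsi sketch (illustrative; it assumes f changes sign on the starting interval, which is what guarantees convergence):

```python
import math

def false_position(f, a, b, tol=1e-8, max_iter=200):
    """False position: the same update formula as the secant method,
    but always applied to a bracket [a, b] with f(a), f(b) of opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # secant-style update
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:                     # root lies in [a, c]
            b, fb = c, fc
        else:                               # root lies in [c, b]
            a, fa = c, fc
    return c

# Illustrative use: root of cos(x) - x on [0, 1]
r = false_position(lambda x: math.cos(x) - x, 0.0, 1.0)
```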
10. If we compare Newton's method with the secant method,
we see that Newton's method converges faster (order 2
against φ ≈ 1.618). However, Newton's method requires the
evaluation of both f and its derivative f′ at every step, while
the secant method only requires the evaluation of f.
Therefore, the secant method may occasionally be faster in
practice.
For instance, if we assume that evaluating f takes as much
time as evaluating its derivative and we neglect all other
costs, we can do two steps of the secant method
(decreasing the logarithm of the error by a factor φ² ≈ 2.618)
for the same cost as one step of Newton's method
(decreasing the logarithm of the error by a factor 2), so the
secant method is faster. If, however, we consider parallel
processing for the evaluation of the derivative, Newton's
method proves its worth, being faster in time, though still
spending more steps.
11. BROYDEN'S METHOD is a generalization of the
secant method to more than one dimension. The
following graph shows the function f in red and the
last secant line in bold blue. In the graph, the
x-intercept of the secant line appears to be a good
approximation of the root of f.
12. [Figure: the function f in red with the last secant line in bold blue]
13. As an example of the secant method, suppose we wish
to find a root of the function
f(x) = cos(x) + 2 sin(x) + x²
using a numerical technique. We will use x0 = 0 and x1 = −0.1
as our initial approximations, let the two tolerances be
Δstep = 0.001 and Δabs = 0.001, and halt after a
maximum of N = 100 iterations. We will use four-decimal-digit
arithmetic to find a solution; the resulting iteration
is shown in Table 1.
Table 1. The secant method applied to
f(x) = cos(x) + 2 sin(x) + x².
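The iteration in this example can be reproduced with a short script (a sketch in full double precision rather than the four-digit arithmetic used for Table 1; the stopping logic is an illustrative reading of the two tolerances):

```python
import math

def f(x):
    return math.cos(x) + 2.0 * math.sin(x) + x * x

# Parameters from the example: x0 = 0, x1 = -0.1,
# step tolerance 0.001, residual tolerance 0.001, at most 100 iterations.
x0, x1 = 0.0, -0.1
step_tol, abs_tol, N = 0.001, 0.001, 100
for _ in range(N):
    x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))   # secant update
    x0, x1 = x1, x2
    if abs(x1 - x0) < step_tol and abs(f(x1)) < abs_tol:
        break
```

For this function the iteration settles near x ≈ −0.66.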
15. 1. It converges at a faster-than-linear rate, so it is
more rapidly convergent than the bisection
method.
2. It does not require use of the derivative of the
function, something that is not available in a
number of applications.
3. It requires only one function evaluation per
iteration, as compared with Newton's method,
which requires two.
16. 1. It may not converge.
2. There is no guaranteed error bound for the
computed iterates.
3. It is likely to have difficulty if f′(α) = 0. This
means the x-axis is tangent to the graph of y = f(x)
at x = α.
4. Newton's method generalizes more easily to new
methods for solving simultaneous systems of
nonlinear equations.
17. 1. The method fails to converge when f(xn) = f(xn-1),
since the denominator of the update formula vanishes.
2. If the x-axis is tangential to the curve, it may not
converge to the solution.
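The first failure mode can be guarded against explicitly (a sketch; the function name and tolerance are illustrative):

```python
def secant_step(f, x0, x1, denom_tol=1e-14):
    """One secant update with a guard for the f(xn) == f(xn-1) failure mode."""
    f0, f1 = f(x0), f(x1)
    if abs(f1 - f0) < denom_tol:
        # Denominator vanishes: the secant line is horizontal and the
        # update x2 = x1 - f(x1)*(x1 - x0)/(f(x1) - f(x0)) is undefined.
        raise ZeroDivisionError("f(x0) and f(x1) are (nearly) equal")
    return x1 - f1 * (x1 - x0) / (f1 - f0)

# A symmetric function with symmetric guesses triggers the failure:
try:
    secant_step(lambda x: x * x, -1.0, 1.0)   # f(-1) == f(1) == 1
    failed = False
except ZeroDivisionError:
    failed = True
```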