



Gaussian Integration

M. Reza Rahimi,
Sharif University of Technology,
Tehran, Iran.




    Outline
•   Introduction
•   Gaussian Integration
•   Legendre Polynomials
•   N-Point Gaussian Formula
•   Error Analysis for Gaussian Integration
•   Gaussian Integration for Improper Integrals
•   Legendre-Gaussian Integration Algorithms
•   Chebyshev-Gaussian Integration Algorithms
•   Examples, MATLAB Implementation and Results
•   Conclusion




  Introduction
• Newton-Cotes and Romberg integration usually use a
  table of function values.
• These methods are exact for polynomials of degree
  less than N.
• The general formula of these methods is:

  \int_a^b f(x)\, dx \approx \sum_{i=1}^{n} w_i f(x_i)

• In the Newton-Cotes methods the subintervals have the
  same length.
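As a minimal Python sketch (the talk's own code is MATLAB; the helper name here is illustrative), the general form above with the trapezoid choice of nodes and weights:

```python
def quadrature(f, xs, ws):
    """Generic quadrature rule: sum of w_i * f(x_i)."""
    return sum(w * f(x) for x, w in zip(xs, ws))

# Trapezoid rule on [0, 1]: nodes a, b with equal weights (b - a)/2.
a, b = 0.0, 1.0
approx = quadrature(lambda x: x * x, [a, b], [(b - a) / 2, (b - a) / 2])
print(approx)  # 0.5, versus the exact value 1/3
```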




• In Gaussian integration, by contrast, we have the exact
  formula of the function.
• The points and weights are specific to the chosen
  number of points N.




   Gaussian Integration
• For Newton-Cotes methods we have:

  1. \int_a^b f(x)\, dx \approx \frac{b-a}{2}\left[ f(a) + f(b) \right].

  2. \int_a^b f(x)\, dx \approx \frac{b-a}{6}\left[ f(a) + 4 f\!\left(\frac{a+b}{2}\right) + f(b) \right].

• And in general form:

  \int_a^b f(x)\, dx \approx \sum_{i=1}^{n} w_i f(x_i),   x_i = a + (i-1)h,   i \in \{1, 2, 3, \ldots, n\},

  w_i = \frac{b-a}{n-1} \int_1^n \prod_{j=1,\, j \neq i}^{n} \frac{t-j}{i-j}\, dt.




• But suppose the points are no longer equally spaced, and
  we choose both the w_i and the x_i so that the integration
  is exact for every polynomial of degree at most 2n-1:

  1. \sum_{i=1}^{n} w_i = \int_{-1}^{1} dx
  2. \sum_{i=1}^{n} x_i w_i = \int_{-1}^{1} x\, dx
  .............
  2n. \sum_{i=1}^{n} x_i^{2n-1} w_i = \int_{-1}^{1} x^{2n-1}\, dx




• Let's look at an example: n = 2, with unknowns w_1, w_2, x_1, x_2:

  1. w_1 + w_2 = 2
  2. x_1 w_1 + x_2 w_2 = 0
  3. x_1^2 w_1 + x_2^2 w_2 = 2/3
  4. x_1^3 w_1 + x_2^3 w_2 = 0

  From (2) and (4): x_1^2 = x_2^2; then (3) gives x_1^2 = 1/3, so

  x_1 = -x_2 = \frac{1}{\sqrt{3}},   w_1 = w_2 = 1.

• So the 2-point Gaussian formula is:

  \int_{-1}^{1} f(x)\, dx \approx f\!\left(\frac{1}{\sqrt{3}}\right) + f\!\left(\frac{-1}{\sqrt{3}}\right).
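A quick Python check (a sketch; the talk's software is MATLAB) that this 2-point rule really integrates every cubic exactly:

```python
import math

def gauss2(f):
    """2-point Gauss-Legendre rule on [-1, 1]: nodes ±1/sqrt(3), weights 1."""
    x = 1.0 / math.sqrt(3.0)
    return f(-x) + f(x)

# Exact for every polynomial of degree <= 2n - 1 = 3:
p = lambda x: 4 * x**3 + 3 * x**2 - 2 * x + 1   # exact integral over [-1,1] is 4
print(gauss2(p))  # ≈ 4 (up to rounding)
```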




  Legendre Polynomials
• Fortunately, the x_i are exactly the roots of the Legendre
  polynomial:

  P_n(x) = \frac{1}{2^n n!} \frac{d^n}{dx^n} (x^2 - 1)^n,   n = 0, 1, 2, \ldots

• We have the following properties for Legendre
  polynomials:

  1. P_n(x) has n zeros in the interval (-1, 1).
  2. (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x).
  3. \int_{-1}^{1} P_n(x) P_m(x)\, dx = \frac{2}{2n+1} \delta_{mn}
  4. \int_{-1}^{1} x^k P_n(x)\, dx = 0,   k = 0, 1, 2, \ldots, n-1
  5. \int_{-1}^{1} x^n P_n(x)\, dx = \frac{2^{n+1} (n!)^2}{(2n+1)!}
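Property 2 gives a direct way to evaluate P_n(x); a minimal Python sketch (the talk uses MATLAB):

```python
def legendre(n, x):
    """Evaluate P_n(x) with the three-term recurrence (property 2):
    (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x)."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x   # P_0, P_1
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

print(legendre(2, 0.5))  # P_2(0.5) = (3*0.25 - 1)/2 = -0.125
```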




• Legendre polynomials form an orthogonal basis on the
  interval (-1, 1).
• So to find the weights w_i we must solve the following
  equations:

  1. \sum_{i=1}^{n} w_i = \int_{-1}^{1} dx = 2
  2. \sum_{i=1}^{n} w_i x_i = \int_{-1}^{1} x\, dx = 0
  ....................
  n. \sum_{i=1}^{n} w_i x_i^{n-1} = \int_{-1}^{1} x^{n-1}\, dx = \frac{1}{n}\left(1 - (-1)^n\right)




• We have the following linear system, which has a unique
  solution:

  \begin{pmatrix}
  1 & x_1 & \cdots & x_1^{n-1} \\
  1 & x_2 & \cdots & x_2^{n-1} \\
  \vdots & \vdots & & \vdots \\
  1 & x_n & \cdots & x_n^{n-1}
  \end{pmatrix}^{T}
  \begin{pmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{pmatrix}
  =
  \begin{pmatrix} 2 \\ 0 \\ \vdots \\ \frac{1}{n}(1 - (-1)^n) \end{pmatrix}

• Theorem: if the x_i are the roots of the Legendre
  polynomial P_n and the w_i are obtained from the above
  system, then \int_{-1}^{1} P(x)\, dx is computed exactly
  for every P \in \Pi_{2n-1}.
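The theorem suggests an implementation: locate the roots of P_n numerically and attach the weights. A Python sketch (function names are illustrative) using Newton's method with the standard cosine initial guesses, and the classical closed form w_i = 2 / ((1 - x_i^2) [P_n'(x_i)]^2), which is equivalent to solving the linear system above:

```python
import math

def legendre_pair(n, x):
    """Return (P_n(x), P_n'(x)) via the three-term recurrence and the
    identity (x^2 - 1) P_n'(x) = n (x P_n(x) - P_{n-1}(x))."""
    p_prev, p = 1.0, x
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    dp = n * (x * p - p_prev) / (x * x - 1.0)
    return p, dp

def gauss_legendre(n):
    """n-point Gauss-Legendre nodes and weights on [-1, 1]."""
    nodes, weights = [], []
    for i in range(1, n + 1):
        x = math.cos(math.pi * (i - 0.25) / (n + 0.5))  # standard initial guess
        for _ in range(100):                            # Newton iteration on P_n
            p, dp = legendre_pair(n, x)
            step = p / dp
            x -= step
            if abs(step) < 1e-15:
                break
        _, dp = legendre_pair(n, x)
        nodes.append(x)
        weights.append(2.0 / ((1.0 - x * x) * dp * dp))
    return nodes, weights

xs, ws = gauss_legendre(3)
# Exact for degree <= 5; e.g. sum(w * x**4) should equal ∫ x^4 dx = 2/5.
```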




• Proof:

  p \in \Pi_{2n-1} \Rightarrow p(x) = q(x) P_n(x) + r(x), with

  q(x) = \sum_{j=0}^{n-1} q_j P_j(x),   r(x) = \sum_{j=0}^{n-1} r_j P_j(x).

  \int_{-1}^{1} p(x)\, dx = \int_{-1}^{1} \left( q(x) P_n(x) + r(x) \right) dx
  = \sum_{j=0}^{n-1} q_j \int_{-1}^{1} P_j(x) P_n(x)\, dx + \sum_{j=0}^{n-1} r_j \int_{-1}^{1} P_0(x) P_j(x)\, dx = 2 r_0.

  On the other hand, since the x_i are the roots of P_n:

  \sum_{i=1}^{n} w_i p(x_i) = \sum_{i=1}^{n} w_i \left( q(x_i) P_n(x_i) + r(x_i) \right) = \sum_{i=1}^{n} w_i r(x_i)
  = \sum_{i=1}^{n} w_i \sum_{j=0}^{n-1} r_j P_j(x_i) = \sum_{j=0}^{n-1} r_j \sum_{i=1}^{n} w_i P_j(x_i)
  = \sum_{j=0}^{n-1} r_j \int_{-1}^{1} P_j(x)\, dx = 2 r_0,

  where the last step uses that the w_i integrate polynomials of
  degree at most n-1 exactly.




Theorem:

  w_i = \int_{-1}^{1} [L_i(x)]^2\, dx,   L_i(x) = \prod_{j=1,\, j \neq i}^{n} \frac{x - x_j}{x_i - x_j}

Proof:

  [L_i(x)]^2 \in \Pi_{2n-2} \Rightarrow \int_{-1}^{1} [L_i(x)]^2\, dx = \sum_{j=1}^{n} w_j [L_i(x_j)]^2 = w_i.




                              Error Analysis for
                             Gaussian Integration
• The error of Gaussian integration can be derived using
  Hermite interpolation.

  Theorem: The error made by the n-point Gaussian rule in
  approximating the integral \int_a^b f(x)\, dx is:

  E_n(f) = \frac{(b-a)^{2n+1} (n!)^4}{(2n+1)((2n)!)^3} f^{(2n)}(\xi),   \xi \in [a, b].
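The bound can be evaluated directly once a bound on |f^(2n)| is known. A Python sketch (the helper name is illustrative), applied to ∫_0^π sin x dx where every derivative is bounded by 1:

```python
import math

def gauss_error_bound(a, b, n, deriv_bound):
    """Worst-case error of the n-point Gaussian rule from the theorem:
    (b-a)^(2n+1) (n!)^4 / ((2n+1) ((2n)!)^3) * max|f^(2n)|."""
    return ((b - a) ** (2 * n + 1) * math.factorial(n) ** 4
            / ((2 * n + 1) * math.factorial(2 * n) ** 3) * deriv_bound)

# For ∫_0^π sin x dx, |sin^(2n)(ξ)| ≤ 1 for every n:
for n in range(1, 6):
    print(n, gauss_error_bound(0.0, math.pi, n, 1.0))
```

The bound first drops below 5×10⁻⁴ at n = 4, which matches Example 3 later in the talk.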


      Gaussian Integration for Improper Integrals
• Suppose we want to compute the following integral:

  \int_{-1}^{1} \frac{f(x)}{\sqrt{1-x^2}}\, dx

• Newton-Cotes methods are not useful here, because they
  require function values at the endpoints.
• We would have to truncate the interval:

  \int_{-1}^{1} \frac{f(x)}{\sqrt{1-x^2}}\, dx \approx \int_{-1+\varepsilon}^{1-\varepsilon} \frac{f(x)}{\sqrt{1-x^2}}\, dx




• Gaussian integration, however, can be used, because it
  does not need the values at the endpoints.
• But the error term of Gaussian integration involves high
  derivatives of the whole integrand, which are unbounded
  at ±1, so plain Gaussian integration is also not suitable
  in this case.
• We need a better approach.

  Definition: The polynomial set {P_i} is orthogonal on (a, b) with
  respect to w(x) if:

  \int_a^b w(x) P_i(x) P_j(x)\, dx = 0   for i \neq j.

  Then we have the following approximation:

  \int_a^b w(x) f(x)\, dx \approx \sum_{i=1}^{n} w_i f(x_i)

  where the x_i are the roots of P_n and

  w_i = \int_a^b w(x) [L_i(x)]^2\, dx.

  This computes the integral exactly when f \in \Pi_{2n-1}.


   Definition: The Chebyshev polynomial T_n(x) is defined as:

   T_n(x) = \sum_{k=0}^{\lfloor n/2 \rfloor} \binom{n}{2k} x^{n-2k} (x^2 - 1)^k

   T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x),   n \geq 1,   T_0(x) = 1,   T_1(x) = x.

   If -1 \leq x \leq 1 then:

   T_n(x) = \cos(n \arccos x),   with roots   x_i = \cos\!\left(\frac{(2i-1)\pi}{2n}\right).

   \int_{-1}^{1} \frac{T_i(x) T_j(x)}{\sqrt{1-x^2}}\, dx = 0   if i \neq j.

• So we have the following approximation:

  \int_{-1}^{1} \frac{f(x)}{\sqrt{1-x^2}}\, dx \approx \frac{\pi}{n} \sum_{i=1}^{n} f(x_i),   x_i = \cos\!\left(\frac{(2i-1)\pi}{2n}\right),   i \in \{1, 2, 3, \ldots, n\}.
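This rule is a one-liner in Python (a sketch; the talk's implementation is MATLAB). Note the equal weights π/n:

```python
import math

def chebyshev_gauss(f, n):
    """n-point Chebyshev-Gauss rule for ∫_{-1}^{1} f(x)/sqrt(1-x^2) dx:
    equal weights pi/n at the Chebyshev roots x_i = cos((2i-1)pi/(2n))."""
    nodes = (math.cos((2 * i - 1) * math.pi / (2 * n)) for i in range(1, n + 1))
    return math.pi / n * sum(f(x) for x in nodes)

# f(x) = x^2: the weighted integral is pi/2, and the rule is exact for it.
print(chebyshev_gauss(lambda x: x * x, 3))  # ≈ pi/2
```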
 Legendre-Gaussian Integration Algorithms
                      a,b: Integration Interval,
                        N: Number of Points,
                       f(x):Function Formula.



                      Initialize W(n,i),X(n,i).
                               Ans=0;


           A(x) = \frac{b-a}{2} f\!\left(\frac{b-a}{2} x + \frac{a+b}{2}\right).



                    For i=1 to N do:
               Ans=Ans+W(N,i)*A(X(N,i));



                           Return Ans;



                               End


Figure 1: Legendre-Gaussian Integration Algorithm
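Figure 1 can be sketched in Python (the talk's version is MATLAB; the node/weight tables here cover only n = 2 and n = 3, taken from the 2-point derivation and Example 9):

```python
import math

# W(n,i), X(n,i) tables for n = 2, 3.
RULES = {
    2: ([-1 / math.sqrt(3), 1 / math.sqrt(3)], [1.0, 1.0]),
    3: ([-math.sqrt(3 / 5), 0.0, math.sqrt(3 / 5)], [5 / 9, 8 / 9, 5 / 9]),
}

def gauss_legendre_ab(f, a, b, n):
    """Figure 1 as code: A(x) = (b-a)/2 * f((b-a)/2 * x + (a+b)/2),
    accumulated as W(N,i) * A(X(N,i))."""
    xs, ws = RULES[n]
    half, mid = (b - a) / 2.0, (a + b) / 2.0
    return sum(w * half * f(half * x + mid) for x, w in zip(xs, ws))

print(gauss_legendre_ab(lambda x: 1 / (1 + x * x), -1.0, 1.0, 2))  # ≈ 1.5
```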

                      a,b: Integration Interval,
                        tol=Error Tolerance.
                       f(x):Function Formula.



                      Initialize W(n,i),X(n,i).
                               Ans=0;


                A(x) = \frac{b-a}{2} f\!\left(\frac{b-a}{2} x + \frac{a+b}{2}\right).



                         For i=1 to N do:
          If |Ans-Gaussian(a,b,i,A)|<tol then return Ans;
                               Else
                     Ans=Gaussian(a,b,i,A);




                             Return Ans;



                                End



Figure 2: Adaptive Legendre-Gaussian Integration Algorithm.
    (I didn’t use only even points as stated in the book.)

 Chebyshev-Gaussian Integration Algorithms
                      a,b: Integration Interval,
                        N: Number of Points,
                       f(x):Function Formula.



            A(x) = \sqrt{1-x^2}\, \frac{b-a}{2} f\!\left(\frac{a+b}{2} + \frac{a-b}{2} x\right)


                     For i=1 to N do:
                Ans=Ans+A(x_i); // x_i: Chebyshev roots



                        Return Ans*pi/n;



                                 End




Figure 3: Chebyshev-Gaussian Integration Algorithm



                          a,b: Integration Interval,
                            tol=Error Tolerance.
                           f(x):Function Formula.


                  A(x) = \sqrt{1-x^2}\, \frac{b-a}{2} f\!\left(\frac{a+b}{2} + \frac{a-b}{2} x\right)



                          For i=1 to N do:
           If |Ans-Chebyshev(a,b,i,A)|<tol then return Ans;
                                Else
                      Ans=Chebyshev(a,b,i,A);




                                 Return Ans;



                                        End




Figure 4: Adaptive Chebyshev-Gaussian Integration Algorithm
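Figures 3 and 4 can be sketched together in Python (the talk's implementation is MATLAB; function names here are illustrative). Multiplying by √(1-x²) cancels the Chebyshev weight, so the rule applies to a plain ∫_a^b f:

```python
import math

def chebyshev_rule(f, a, b, n):
    """Figure 3 as code: n-point Chebyshev-Gauss rule applied to a plain
    integral via A(x) = sqrt(1-x^2) * (b-a)/2 * f(mid + half*x)."""
    half, mid = (b - a) / 2.0, (a + b) / 2.0
    total = 0.0
    for i in range(1, n + 1):
        x = math.cos((2 * i - 1) * math.pi / (2 * n))   # Chebyshev root
        total += math.sqrt(1.0 - x * x) * half * f(mid + half * x)
    return math.pi / n * total

def adaptive_chebyshev(f, a, b, tol, max_n=64):
    """Figure 4 as code: raise n until two successive results agree."""
    prev = chebyshev_rule(f, a, b, 1)
    for n in range(2, max_n + 1):
        cur = chebyshev_rule(f, a, b, n)
        if abs(cur - prev) < tol:
            return cur
        prev = cur
    return prev

print(adaptive_chebyshev(math.sin, 0.0, math.pi, 1e-6))  # ≈ 2
```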
  Examples, MATLAB Implementation and Results




 Figure 5: Legendre-Gaussian Integration




Figure 6: Adaptive Legendre-Gaussian Integration




Figure 7: Chebyshev-Gaussian Integration




Figure 8: Adaptive Chebyshev-Gaussian Integration




 Testing Strategies:
• The software has been tested on polynomials of degree
  at most 2N-1.
• It has been tested on some random inputs.
• Its results have been compared with MATLAB's trapz
  function.




Examples:
Example 1: Gaussian-Legendre

  \int_{-1}^{1} \frac{1}{1+x^2}\, dx = \arctan(1) - \arctan(-1) = \frac{\pi}{2} \approx 1.5707.

  Trapezoid: \frac{1-(-1)}{2}\left(\frac{1}{1+(-1)^2} + \frac{1}{1+1^2}\right) = 1.0000.

  Simpson: \frac{1-(-1)}{6}\left(\frac{1}{1+(-1)^2} + \frac{4}{1+0^2} + \frac{1}{1+1^2}\right) \approx 1.6667.

  2-Point Gaussian (software result): 1.5000.
  3-Point Gaussian (software result): 1.5833.

 Example 2: Gaussian-Legendre

  \int_0^3 x e^{-x^2}\, dx = \left[\frac{-e^{-x^2}}{2}\right]_0^3 = \frac{-e^{-9}}{2} + \frac{1}{2} \approx 0.4999.

  Trapezoid: \frac{3-0}{2}\left(0 + 3e^{-9}\right) \approx 0.0005.

  Simpson: \frac{3-0}{6}\left(0 + 4 \cdot 1.5\, e^{-1.5^2} + 3e^{-9}\right) \approx 0.3164.

  2-Point Gaussian (software result): \approx 0.6494.
  3-Point Gaussian (software result): \approx 0.4640.
Example 3: Gaussian-Legendre

  E_n(f) = \frac{(b-a)^{2n+1} (n!)^4}{(2n+1)((2n)!)^3} f^{(2n)}(\xi),   \xi \in [a, b].

  \int_0^{\pi} \sin(x)\, dx:   \left|\frac{(\pi-0)^{2n+1} (n!)^4}{(2n+1)((2n)!)^3} \sin^{(2n)}(\xi)\right| \leq 5 \times 10^{-4} \Rightarrow n \geq 4.

  \int_0^{2} e^{-x}\, dx:   \left|\frac{(2-0)^{2n+1} (n!)^4}{(2n+1)((2n)!)^3} e^{-\xi}\right| \leq 5 \times 10^{-4} \Rightarrow n \geq 3.
Example 4: Gaussian-Legendre

  \int_0^3 \frac{x}{1+x^2}\, dx = \frac{1}{2}\ln(1+x^2)\Big|_0^3 \approx 1.15129.

  n = 2 ⇒ ≈ 1.21622, absolute error ≈ 0.06493
  n = 3 ⇒ ≈ 1.14258, absolute error ≈ 0.00871
  n = 4 ⇒ ≈ 1.14902, absolute error ≈ 0.00227
  n = 5 ⇒ ≈ 1.15156, absolute error ≈ 0.00027
  n = 6 ⇒ ≈ 1.15137, absolute error ≈ 0.00008



Example 5: Gaussian-Legendre

  \int_0^3 x e^{-x^2}\, dx = \frac{e^{-x^2}}{-2}\Big|_0^3 \approx 0.49994.

  n = 2 ⇒ ≈ 0.64937, absolute error ≈ 0.14943
  n = 3 ⇒ ≈ 0.46397, absolute error ≈ 0.03597
  n = 4 ⇒ ≈ 0.50269, absolute error ≈ 0.00275
  n = 5 ⇒ ≈ 0.50007, absolute error ≈ 0.00013
  n = 6 ⇒ ≈ 0.49989, absolute error ≈ 0.00005

 Example 6: Gaussian-Legendre

  \int_0^{\pi/2} \sin^2(x)\, dx:  Trapezoid: 0.78460183690360
  2-Point ≈ 0.78539816339745.
  3-Point ≈ 0.78539816339745.

  \int_0^{\pi} \sin^2(x)\, dx:  Trapezoid: 1.57079632662673
  2-Point ≈ 1.19283364797927.
  3-Point ≈ 1.60606730236915.

  \int_0^{3\pi/2} \sin^2(x)\, dx:  Trapezoid: 2.35580550989210
  2-Point ≈ 2.35619449019234.
  3-Point ≈ 2.35619449019234.

  \int_0^{2\pi} \sin^2(x)\, dx:  Trapezoid: 3.14159265355679
  2-Point ≈ 5.91940603385020.
  3-Point ≈ 1.47666903877755.
  4-Point ≈ 3.53659228676239.
  5-Point ≈ 3.08922572211956.
  6-Point ≈ 3.14606122123817.
  7-Point ≈ 3.14132550162258.
  8-Point ≈ 3.14131064749986.

  [Figure: plot on [-1, 1] with the Gaussian nodes ±0.57 and ±0.77 marked.]




Example 7: Adaptive Gaussian-Legendre

  \int_0^3 \frac{x}{1+x^2}\, dx:

  1) Adaptive Gaussian integration, error ≈ 5 × 10^{-5} ⇒ 1.15114335351486.
  2) Adaptive Gaussian integration, error ≈ 5 × 10^{-4} ⇒ 1.15137188448013.

  \int_0^3 x e^{-x^2}\, dx:

  1) Adaptive Gaussian integration, error ≈ 5 × 10^{-5} ⇒ 0.49980229291620.
  2) Adaptive Gaussian integration, error ≈ 5 × 10^{-4} ⇒ 0.49988858784837.




Example 8: Gaussian-Chebyshev




2-Point Chebyshev Integration ≈ 0.48538619428604.
3-Point Chebyshev Integration ≈ 1.39530571408271.




2 - Point Chebyshev :: 0
3 - Point Chebyshev :: 0.33089431565488.
Example 9:

  1) w_1 + w_2 + w_3 = 2
  2) w_1 x_1 + w_2 x_2 + w_3 x_3 = 0
  3) w_1 x_1^2 + w_2 x_2^2 + w_3 x_3^2 = 2/3
  4) w_1 x_1^3 + w_2 x_2^3 + w_3 x_3^3 = 0
  5) w_1 x_1^4 + w_2 x_2^4 + w_3 x_3^4 = 2/5
  6) w_1 x_1^5 + w_2 x_2^5 + w_3 x_3^5 = 0

  From (2) and (4), eliminating w_3 x_3:
    (w_1 x_1 + w_2 x_2) x_3^2 = w_1 x_1^3 + w_2 x_2^3
    ⇒ w_1 x_1 (x_1^2 - x_3^2) = w_2 x_2 (x_3^2 - x_2^2).

  From (4) and (6), eliminating w_3 x_3^3:
    (w_1 x_1^3 + w_2 x_2^3) x_3^2 = w_1 x_1^5 + w_2 x_2^5
    ⇒ w_1 x_1^3 (x_3^2 - x_1^2) = w_2 x_2^3 (x_2^2 - x_3^2).

  Dividing the two relations gives x_1^2 = x_2^2, so x_1 = -x_2
  (the nodes are distinct); substituting back gives w_1 = w_2.
  Then w_1 = w_2 and x_1 = -x_2 in (2) give w_3 x_3 = 0.

  From (3) and (5): 2 w_1 x_1^2 = 2/3 and 2 w_1 x_1^4 = 2/5,
  so x_1^2 = 3/5, i.e. x_1 = \sqrt{3/5} = -x_2.
  Then 2 w_1 (3/5) = 2/3 ⇒ w_1 = w_2 = 5/9, and (1) ⇒ w_3 = 8/9, so x_3 = 0.

  (w_1, w_2, w_3) = \left(\frac{5}{9}, \frac{5}{9}, \frac{8}{9}\right),   (x_1, x_2, x_3) = \left(\sqrt{\frac{3}{5}}, -\sqrt{\frac{3}{5}}, 0\right).
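A quick Python check (a sketch; the talk's software is MATLAB) that this solution satisfies all six moment equations:

```python
import math

# The 3-point Gauss-Legendre solution derived above.
w = [5 / 9, 5 / 9, 8 / 9]
x = [math.sqrt(3 / 5), -math.sqrt(3 / 5), 0.0]

for k in range(6):
    moment = sum(wi * xi ** k for wi, xi in zip(w, x))
    exact = (1 - (-1) ** (k + 1)) / (k + 1)   # ∫_{-1}^{1} x^k dx
    assert abs(moment - exact) < 1e-12        # equations 1) through 6) hold
print("all six moment equations hold")
```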




 Conclusion
• In this talk I focused on Gaussian integration.
• It was shown that this method has a good error bound
  and is very useful when the exact formula of the
  function is available.
• Using adaptive methods is highly recommended.
• A general technique for this kind of integration was
  also presented.
• The MATLAB code has also been explained.

SMS Spam Filter Design Using R: A Machine Learning Approach
 
Mobile Applications on an Elastic and Scalable 2-Tier Cloud Architecture
Mobile Applications on an Elastic and Scalable 2-Tier Cloud ArchitectureMobile Applications on an Elastic and Scalable 2-Tier Cloud Architecture
Mobile Applications on an Elastic and Scalable 2-Tier Cloud Architecture
 
Exploiting an Elastic 2-Tiered Cloud Architecture for Rich Mobile Applications
Exploiting an Elastic 2-Tiered Cloud Architecture for Rich Mobile ApplicationsExploiting an Elastic 2-Tiered Cloud Architecture for Rich Mobile Applications
Exploiting an Elastic 2-Tiered Cloud Architecture for Rich Mobile Applications
 
Fingerprint High Level Classification
Fingerprint High Level ClassificationFingerprint High Level Classification
Fingerprint High Level Classification
 
Linear Programming and its Usage in Approximation Algorithms for NP Hard Opti...
Linear Programming and its Usage in Approximation Algorithms for NP Hard Opti...Linear Programming and its Usage in Approximation Algorithms for NP Hard Opti...
Linear Programming and its Usage in Approximation Algorithms for NP Hard Opti...
 
Optimizing Multicast Throughput in IP Network
Optimizing Multicast Throughput in IP NetworkOptimizing Multicast Throughput in IP Network
Optimizing Multicast Throughput in IP Network
 
The Case for a Signal Oriented Data Stream Management System
The Case for a Signal Oriented Data Stream Management SystemThe Case for a Signal Oriented Data Stream Management System
The Case for a Signal Oriented Data Stream Management System
 
Mobile Cloud Computing: Big Picture
Mobile Cloud Computing: Big PictureMobile Cloud Computing: Big Picture
Mobile Cloud Computing: Big Picture
 
Network Information Processing
Network Information ProcessingNetwork Information Processing
Network Information Processing
 
Pervasive Image Computation: A Mobile Phone Application for getting Informat...
Pervasive Image Computation: A Mobile  Phone Application for getting Informat...Pervasive Image Computation: A Mobile  Phone Application for getting Informat...
Pervasive Image Computation: A Mobile Phone Application for getting Informat...
 
Interactive Proof Systems and An Introduction to PCP
Interactive Proof Systems and An Introduction to PCPInteractive Proof Systems and An Introduction to PCP
Interactive Proof Systems and An Introduction to PCP
 
Quantum Computation and Algorithms
Quantum Computation and Algorithms Quantum Computation and Algorithms
Quantum Computation and Algorithms
 

Kürzlich hochgeladen

1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
QucHHunhnh
 
Salient Features of India constitution especially power and functions
Salient Features of India constitution especially power and functionsSalient Features of India constitution especially power and functions
Salient Features of India constitution especially power and functions
KarakKing
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
ciinovamais
 
Spellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please PractiseSpellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please Practise
AnaAcapella
 

Kürzlich hochgeladen (20)

General Principles of Intellectual Property: Concepts of Intellectual Proper...
General Principles of Intellectual Property: Concepts of Intellectual  Proper...General Principles of Intellectual Property: Concepts of Intellectual  Proper...
General Principles of Intellectual Property: Concepts of Intellectual Proper...
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
 
This PowerPoint helps students to consider the concept of infinity.
This PowerPoint helps students to consider the concept of infinity.This PowerPoint helps students to consider the concept of infinity.
This PowerPoint helps students to consider the concept of infinity.
 
Understanding Accommodations and Modifications
Understanding  Accommodations and ModificationsUnderstanding  Accommodations and Modifications
Understanding Accommodations and Modifications
 
How to Give a Domain for a Field in Odoo 17
How to Give a Domain for a Field in Odoo 17How to Give a Domain for a Field in Odoo 17
How to Give a Domain for a Field in Odoo 17
 
Unit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptxUnit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptx
 
Dyslexia AI Workshop for Slideshare.pptx
Dyslexia AI Workshop for Slideshare.pptxDyslexia AI Workshop for Slideshare.pptx
Dyslexia AI Workshop for Slideshare.pptx
 
Salient Features of India constitution especially power and functions
Salient Features of India constitution especially power and functionsSalient Features of India constitution especially power and functions
Salient Features of India constitution especially power and functions
 
Graduate Outcomes Presentation Slides - English
Graduate Outcomes Presentation Slides - EnglishGraduate Outcomes Presentation Slides - English
Graduate Outcomes Presentation Slides - English
 
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
 
Basic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptxBasic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptx
 
How to Manage Global Discount in Odoo 17 POS
How to Manage Global Discount in Odoo 17 POSHow to Manage Global Discount in Odoo 17 POS
How to Manage Global Discount in Odoo 17 POS
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
 
Spellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please PractiseSpellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please Practise
 
ComPTIA Overview | Comptia Security+ Book SY0-701
ComPTIA Overview | Comptia Security+ Book SY0-701ComPTIA Overview | Comptia Security+ Book SY0-701
ComPTIA Overview | Comptia Security+ Book SY0-701
 
Spatium Project Simulation student brief
Spatium Project Simulation student briefSpatium Project Simulation student brief
Spatium Project Simulation student brief
 
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
 
Accessible Digital Futures project (20/03/2024)
Accessible Digital Futures project (20/03/2024)Accessible Digital Futures project (20/03/2024)
Accessible Digital Futures project (20/03/2024)
 
Unit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptxUnit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptx
 
Holdier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfHoldier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdf
 

Gaussian Integration

  • 5. Gaussian Integration
    For Newton-Cotes methods we have:
    1. $\int_a^b f(x)\,dx \approx \frac{b-a}{2}\,[f(a) + f(b)]$ (trapezoid rule).
    2. $\int_a^b f(x)\,dx \approx \frac{b-a}{6}\left[f(a) + 4f\!\left(\tfrac{a+b}{2}\right) + f(b)\right]$ (Simpson's rule).
    And in general form:
    $\int_a^b f(x)\,dx \approx \sum_{i=1}^{n} w_i f(x_i), \quad x_i = a + (i-1)h, \quad i \in \{1, 2, 3, \dots, n\},$
    $w_i = \frac{b-a}{n-1} \int_0^{n-1} \prod_{j=1,\, j \neq i}^{n} \frac{t-j}{i-j}\, dt.$
  • 6. But suppose the points are not equally spaced, and for every $w_i$ and $x_i$ we want the integration to be exact for every polynomial of degree at most $2n-1$:
    1. $\sum_{i=1}^{n} w_i = \int_{-1}^{1} dx$
    2. $\sum_{i=1}^{n} w_i x_i = \int_{-1}^{1} x\, dx$
    $\vdots$
    2n. $\sum_{i=1}^{n} w_i x_i^{2n-1} = \int_{-1}^{1} x^{2n-1}\, dx$
  • 7. Let's look at an example with $n = 2$ and unknowns $w_1, w_2, x_1, x_2$:
    1. $w_1 + w_2 = 2$
    2. $x_1 w_1 + x_2 w_2 = 0$
    3. $x_1^2 w_1 + x_2^2 w_2 = \tfrac{2}{3}$
    4. $x_1^3 w_1 + x_2^3 w_2 = 0$
    From (2) and (4), $x_1^2 = x_2^2$; then (3) gives $x_1^2 = \tfrac{1}{3}$, so
    $x_1 = -x_2 = \frac{1}{\sqrt{3}}, \qquad w_1 = w_2 = 1.$
    So the 2-point Gaussian formula is:
    $\int_{-1}^{1} f(x)\, dx \approx f\!\left(-\tfrac{1}{\sqrt{3}}\right) + f\!\left(\tfrac{1}{\sqrt{3}}\right).$
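The 2-point rule above can be checked mechanically: it should reproduce the integral of every monomial up to degree $2n-1 = 3$. A minimal Python sketch (the deck's own code is MATLAB; this stand-in and its names are illustrative only):

```python
import math

def gauss2(f):
    """2-point Gauss-Legendre rule on [-1, 1]: nodes +/- 1/sqrt(3), weights 1."""
    return f(-1 / math.sqrt(3)) + f(1 / math.sqrt(3))

# Exact for every monomial of degree <= 2n - 1 = 3:
for k in range(4):
    exact = (1 - (-1)**(k + 1)) / (k + 1)   # integral of x^k over [-1, 1]
    assert abs(gauss2(lambda x: x**k) - exact) < 1e-12
print("2-point rule exact up to degree 3")
```

For $k = 4$ the rule is no longer exact, which is consistent with the degree-$2n-1$ exactness claim.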
  • 8. Legendre Polynomials
    Fortunately the nodes $x_i$ are the roots of the Legendre polynomial
    $P_n(x) = \frac{1}{2^n\, n!} \frac{d^n}{dx^n}\,(x^2 - 1)^n, \qquad n = 0, 1, 2, \dots$
    We have the following properties for Legendre polynomials:
    1. $P_n(x)$ has $n$ zeros in the interval $(-1, 1)$.
    2. $(n+1) P_{n+1}(x) = (2n+1)\, x P_n(x) - n P_{n-1}(x)$.
    3. $\int_{-1}^{1} P_n(x) P_m(x)\, dx = \frac{2}{2n+1}\, \delta_{mn}$.
    4. $\int_{-1}^{1} x^k P_n(x)\, dx = 0, \qquad k = 0, 1, 2, \dots, n-1$.
    5. $\int_{-1}^{1} x^n P_n(x)\, dx = \frac{2^{n+1} (n!)^2}{(2n+1)!}$.
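Property 2, the three-term recurrence, gives a simple and stable way to evaluate $P_n(x)$ without forming derivatives of $(x^2-1)^n$. A short Python sketch (illustrative, not part of the deck's MATLAB code):

```python
def legendre(n, x):
    """Evaluate P_n(x) with the recurrence
    (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x)."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x      # P_0, P_1
    for k in range(1, n):
        p_prev, p = p, ((2*k + 1) * x * p - k * p_prev) / (k + 1)
    return p

# P_2(x) = (3x^2 - 1)/2, so P_2(0.5) = -0.125
print(legendre(2, 0.5))   # -0.125
```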
  • 9. Legendre polynomials form an orthogonal basis on the interval $(-1, 1)$.
    So to find the weights we must solve the following equations:
    1. $\sum_{i=1}^{n} w_i = \int_{-1}^{1} dx = 2$
    2. $\sum_{i=1}^{n} w_i x_i = \int_{-1}^{1} x\, dx = 0$
    $\vdots$
    n. $\sum_{i=1}^{n} w_i x_i^{n-1} = \int_{-1}^{1} x^{n-1}\, dx = \frac{1}{n}\left(1 - (-1)^n\right)$
  • 10. We have the following Vandermonde system, which has a unique solution:
    $\begin{pmatrix} 1 & x_1 & \cdots & x_1^{n-1} \\ 1 & x_2 & \cdots & x_2^{n-1} \\ \vdots & \vdots & & \vdots \\ 1 & x_n & \cdots & x_n^{n-1} \end{pmatrix}^{\!T} \begin{pmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{pmatrix} = \begin{pmatrix} 2 \\ 0 \\ \vdots \\ \frac{1}{n}\left(1 - (-1)^n\right) \end{pmatrix}$
    Theorem: if the $x_i$ are the roots of the Legendre polynomial $P_n$ and the $w_i$ are obtained from the system above, then the resulting formula for $\int_{-1}^{1} p(x)\, dx$ is exact for $p \in \Pi_{2n-1}$.
  • 11. Proof:
    $p \in \Pi_{2n-1} \;\Rightarrow\; p(x) = q(x) P_n(x) + r(x)$, where
    $q(x) = \sum_{j=0}^{n-1} q_j P_j(x), \qquad r(x) = \sum_{j=0}^{n-1} r_j P_j(x).$
    $\int_{-1}^{1} p(x)\, dx = \int_{-1}^{1} \big(q(x) P_n(x) + r(x)\big)\, dx = \sum_{j=0}^{n-1} q_j \int_{-1}^{1} P_j(x) P_n(x)\, dx + \sum_{j=0}^{n-1} r_j \int_{-1}^{1} P_j(x) P_0(x)\, dx = 2 r_0.$
    On the other hand, since the $x_i$ are the zeros of $P_n$:
    $\sum_{i=1}^{n} w_i p(x_i) = \sum_{i=1}^{n} w_i \big(q(x_i) P_n(x_i) + r(x_i)\big) = \sum_{i=1}^{n} w_i r(x_i) = \sum_{i=1}^{n} w_i \sum_{j=0}^{n-1} r_j P_j(x_i) = \sum_{j=0}^{n-1} r_j \sum_{i=1}^{n} w_i P_j(x_i) = \sum_{j=0}^{n-1} r_j \int_{-1}^{1} P_j(x)\, dx = 2 r_0. \quad \blacksquare$
  • 12. Theorem:
    $w_i = \int_{-1}^{1} [L_i(x)]^2\, dx, \qquad L_i(x) = \prod_{j=1,\, j \neq i}^{n} \frac{x - x_j}{x_i - x_j}.$
    Proof:
    $[L_i(x)]^2 \in \Pi_{2n-2} \;\Rightarrow\; \int_{-1}^{1} [L_i(x)]^2\, dx = \sum_{j=1}^{n} w_j [L_i(x_j)]^2 = w_i. \quad \blacksquare$
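This identity (which also shows the weights are positive) can be checked numerically for the 2-point rule, where both weights should come out as 1. A plain-Python sketch using a fine midpoint rule for the integral; everything here is an illustration, not the deck's MATLAB code:

```python
import math

nodes = [-1 / math.sqrt(3), 1 / math.sqrt(3)]   # 2-point Gauss-Legendre nodes

def lagrange_sq(i, x):
    """[L_i(x)]^2 for the Lagrange basis over the Gauss nodes."""
    L = 1.0
    for j, xj in enumerate(nodes):
        if j != i:
            L *= (x - xj) / (nodes[i] - xj)
    return L * L

def integrate(f, a=-1.0, b=1.0, m=20000):
    """Composite midpoint rule, accurate enough to check the identity."""
    h = (b - a) / m
    return h * sum(f(a + (k + 0.5) * h) for k in range(m))

# w_i = integral of [L_i(x)]^2 over [-1, 1] should give the known weights 1, 1
for i in range(2):
    print(round(integrate(lambda x: lagrange_sq(i, x)), 6))   # 1.0
```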
  • 13. Error Analysis for Gaussian Integration
    The error analysis for Gaussian integration can be derived from Hermite interpolation.
    Theorem: the error made by Gaussian integration in approximating the integral $\int_a^b f(x)\, dx$ is:
    $E_n(f) = \frac{(b-a)^{2n+1} (n!)^4}{(2n+1)\big((2n)!\big)^3}\, f^{(2n)}(\xi), \qquad \xi \in [a, b].$
  • 14. Gaussian Integration for Improper Integrals
    Suppose we want to compute the following integral:
    $\int_{-1}^{1} \frac{f(x)}{\sqrt{1-x^2}}\, dx$
    Newton-Cotes methods are not useful here because they need the values at the endpoints; we would have to settle for the truncation
    $\int_{-1}^{1} \frac{f(x)}{\sqrt{1-x^2}}\, dx \approx \int_{-1+\varepsilon}^{1-\varepsilon} \frac{f(x)}{\sqrt{1-x^2}}\, dx$
  • 15. We can apply the Gaussian formula because it does not need the values at the endpoints. But according to the error term of Gaussian integration, the plain rule is also not suitable in this case; we need a better approach.
    Definition: the polynomial set $\{P_i\}$ is orthogonal on $(a, b)$ with respect to $w(x)$ if:
    $\int_a^b w(x) P_i(x) P_j(x)\, dx = 0 \quad \text{for } i \neq j.$
    Then we have the following approximation:
    $\int_a^b w(x) f(x)\, dx \approx \sum_{i=1}^{n} w_i f(x_i),$
    where the $x_i$ are the roots of $P_n$ and
    $w_i = \int_a^b w(x) [L_i(x)]^2\, dx,$
    which computes the integral exactly when $f \in \Pi_{2n-1}$.
  • 16. Definition: the Chebyshev polynomial $T_n(x)$ is defined as:
    $T_n(x) = \sum_{k=0}^{\lfloor n/2 \rfloor} \binom{n}{2k}\, x^{n-2k} (x^2 - 1)^k$
    $T_{n+1}(x) = 2x\, T_n(x) - T_{n-1}(x), \quad n \geq 1, \qquad T_0(x) = 1, \ T_1(x) = x.$
    If $-1 \leq x \leq 1$ then:
    $T_n(x) = \cos(n \arccos x), \quad \text{with roots } x_i = \cos\!\left(\frac{(2i-1)\pi}{2n}\right).$
    $\int_{-1}^{1} \frac{T_i(x)\, T_j(x)}{\sqrt{1-x^2}}\, dx = 0 \quad \text{if } i \neq j.$
    So we have the following approximation:
    $\int_{-1}^{1} \frac{f(x)}{\sqrt{1-x^2}}\, dx \approx \frac{\pi}{n} \sum_{i=1}^{n} f(x_i), \qquad x_i = \cos\!\left(\frac{(2i-1)\pi}{2n}\right), \ i \in \{1, 2, 3, \dots, n\}.$
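Because every weight equals $\pi/n$, the Gauss-Chebyshev rule is only a few lines of code. A hedged Python illustration (the deck's implementation is in MATLAB, and the function name here is an assumption):

```python
import math

def gauss_chebyshev(f, n):
    """n-point Gauss-Chebyshev rule for the weighted integral of
    f(x) / sqrt(1 - x^2) over [-1, 1]: equal weights pi/n at the
    Chebyshev roots x_i = cos((2i - 1) pi / (2n))."""
    return (math.pi / n) * sum(
        f(math.cos((2*i - 1) * math.pi / (2*n))) for i in range(1, n + 1))

# integral of x^2 / sqrt(1 - x^2) over [-1, 1] equals pi/2
print(round(gauss_chebyshev(lambda x: x*x, 5), 10))   # 1.5707963268
```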
  • 17. Legendre-Gaussian Integration Algorithms
    a, b: integration interval; N: number of points; f(x): function formula.
    Initialize W(n,i), X(n,i).
    Ans = 0;
    A(x) = (b-a)/2 * f((b-a)/2 * x + (a+b)/2);
    For i = 1 to N do:
      Ans = Ans + W(N,i) * A(X(N,i));
    Return Ans;
    End
    Figure 1: Legendre-Gaussian Integration Algorithm
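Figure 1 can be sketched in Python as follows. The W/X tables are filled here only for n = 2 and n = 3, and the names are illustrative stand-ins, not the deck's MATLAB code:

```python
import math

# Precomputed node/weight tables (the X(n,i), W(n,i) of Figure 1),
# filled here only for n = 2 and n = 3.
TABLE = {
    2: ([-1 / math.sqrt(3), 1 / math.sqrt(3)], [1.0, 1.0]),
    3: ([-math.sqrt(3/5), 0.0, math.sqrt(3/5)], [5/9, 8/9, 5/9]),
}

def gauss_legendre(f, a, b, n):
    """Figure 1 as Python: map [a, b] onto [-1, 1] via
    A(x) = (b-a)/2 * f((b-a)/2 * x + (a+b)/2) and sum W(n,i) * A(X(n,i))."""
    xs, ws = TABLE[n]
    return sum(w * (b - a) / 2 * f((b - a) / 2 * x + (a + b) / 2)
               for x, w in zip(xs, ws))

# Slide 26's first example: the 2-point rule for 1/(1+x^2) on [-1, 1] gives 1.5
print(round(gauss_legendre(lambda x: 1 / (1 + x*x), -1.0, 1.0, 2), 4))   # 1.5
```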
  • 18. a, b: integration interval; tol: error tolerance; f(x): function formula.
    Initialize W(n,i), X(n,i).
    Ans = 0;
    A(x) = (b-a)/2 * f((b-a)/2 * x + (a+b)/2);
    For i = 1 to N do:
      If |Ans - Gaussian(a,b,i,A)| < tol then return Ans;
      Else Ans = Gaussian(a,b,i,A);
    Return Ans;
    End
    Figure 2: Adaptive Legendre-Gaussian Integration Algorithm. (I didn't use only even points as stated in the book.)
  • 19. Chebyshev-Gaussian Integration Algorithms
    a, b: integration interval; N: number of points; f(x): function formula.
    A(x) = sqrt(1 - x^2) * (b-a)/2 * f((a+b)/2 + (a-b)/2 * x);
    For i = 1 to N do:
      Ans = Ans + A(x_i);   // x_i: Chebyshev roots
    Return Ans * pi / n;
    End
    Figure 3: Chebyshev-Gaussian Integration Algorithm
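Figure 3 computes a plain (unweighted) integral by inserting the factor $\sqrt{1-x^2}$, so that the $1/\sqrt{1-x^2}$ kernel of the Gauss-Chebyshev rule cancels. A Python sketch with illustrative names, not the deck's MATLAB implementation:

```python
import math

def chebyshev_integrate(f, a, b, n):
    """Figure 3 as Python: estimate the integral of f over [a, b] by
    inserting sqrt(1 - x^2) so the Gauss-Chebyshev weight cancels."""
    total = 0.0
    for i in range(1, n + 1):
        x = math.cos((2*i - 1) * math.pi / (2*n))     # Chebyshev roots
        t = (a + b) / 2 + (a - b) / 2 * x             # map [-1, 1] to [a, b]
        total += math.sqrt(1 - x*x) * (b - a) / 2 * f(t)
    return total * math.pi / n

# integral of sin over [0, pi] is exactly 2; the estimate approaches 2 as n grows
print(chebyshev_integrate(math.sin, 0.0, math.pi, 50))
```

Note that the inserted square-root factor makes the integrand only mildly smooth at $\pm 1$, so convergence is slower than for the Legendre rule; this matches the slide's earlier remark that the weighted rule is the right tool for weighted integrands.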
  • 20. a, b: integration interval; tol: error tolerance; f(x): function formula.
    A(x) = sqrt(1 - x^2) * (b-a)/2 * f((a+b)/2 + (a-b)/2 * x);
    For i = 1 to N do:
      If |Ans - Chebyshev(a,b,i,A)| < tol then return Ans;
      Else Ans = Chebyshev(a,b,i,A);
    Return Ans;
    End
    Figure 4: Adaptive Chebyshev-Gaussian Integration Algorithm
  • 21. Examples, MATLAB Implementation and Results
    Figure 5: Legendre-Gaussian Integration
  • 22. Figure 6: Adaptive Legendre-Gaussian Integration
  • 25. Testing Strategies:
    • The software has been tested on polynomials of degree at most 2N-1.
    • It has been tested on some random inputs.
    • Its results have been compared with MATLAB's trapz function.
  • 26. Examples:
    Example 1: Gaussian-Legendre
    $\int_{-1}^{1} \frac{1}{1+x^2}\, dx \;=\; \arctan(1) - \arctan(-1) \;=\; \frac{\pi}{2} \approx 1.5707.$
    Trapezoid: $\left(\frac{1-(-1)}{2}\right)\left(\frac{1}{1+(-1)^2} + \frac{1}{1+1^2}\right) = 1.0000.$
    Simpson: $\left(\frac{1-(-1)}{6}\right)\left(\frac{1}{1+(-1)^2} + \frac{4}{1+0^2} + \frac{1}{1+1^2}\right) \approx 1.6667.$
    2-point Gaussian (according to the software's result): 1.5000.
    3-point Gaussian (according to the software's result): 1.5833.
  • 27. Example 2: Gaussian-Legendre
    $\int_0^3 x e^{-x^2}\, dx \;=\; \left(\frac{-e^{-x^2}}{2}\right)\!\Bigg|_0^3 \;=\; \frac{1 - e^{-9}}{2} \approx 0.4999.$
    Trapezoid: $\left(\frac{3-0}{2}\right)\left(0 + 3e^{-9}\right) \approx 0.0006.$
    Simpson: $\left(\frac{3-0}{6}\right)\left(0 + 4 \cdot 1.5\, e^{-1.5^2} + 3e^{-9}\right) \approx 0.3164.$
    2-point Gaussian ≈ 0.6494; 3-point Gaussian ≈ 0.4640.
    Example 3: Gaussian-Legendre
    $E_n(f) = \frac{(b-a)^{2n+1}(n!)^4}{(2n+1)\big((2n)!\big)^3}\, f^{(2n)}(\xi), \qquad \xi \in [a, b].$
    $\int_0^{\pi} \sin(x)\, dx: \quad \left|\frac{(\pi-0)^{2n+1}(n!)^4}{(2n+1)((2n)!)^3}\, \sin^{(2n)}(\xi)\right| \leq 5 \times 10^{-4} \;\Rightarrow\; n \geq 4.$
    $\int_0^{2} e^{-x}\, dx: \quad \left|\frac{(2-0)^{2n+1}(n!)^4}{(2n+1)((2n)!)^3}\, e^{-\xi}\right| \leq 5 \times 10^{-4} \;\Rightarrow\; n \geq 3.$
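The bound in Example 3 can be evaluated mechanically to find the smallest sufficient n. A small Python sketch, assuming (as the example does) that every derivative of sin is bounded by 1; the function name is illustrative:

```python
import math

def gauss_error_bound(width, n, deriv_max):
    """|E_n| <= (b-a)^(2n+1) (n!)^4 / ((2n+1) ((2n)!)^3) * max|f^(2n)|."""
    return (width**(2*n + 1) * math.factorial(n)**4
            / ((2*n + 1) * math.factorial(2*n)**3) * deriv_max)

# For f = sin on [0, pi], every derivative is bounded by 1, so find the
# smallest n with bound <= 5e-4; Example 3 states n >= 4.
n = 1
while gauss_error_bound(math.pi, n, 1.0) > 5e-4:
    n += 1
print(n)   # 4
```

Replacing the interval width by 2 and the derivative bound by $\max|e^{-\xi}| = 1$ reproduces the second conclusion, $n \geq 3$.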
  • 28. Example 4: Gaussian-Legendre
    $\int_0^3 \frac{x}{1+x^2}\, dx = \frac{1}{2}\ln(1+x^2)\Big|_0^3 \approx 1.15129.$
    2-point ≈ 1.21622, error ≈ 0.06493.
    3-point ≈ 1.14258, error ≈ 0.00871.
    4-point ≈ 1.14902, error ≈ 0.00227.
    5-point ≈ 1.15156, error ≈ 0.00027.
    6-point ≈ 1.15137, error ≈ 0.00008.
    Example 5: Gaussian-Legendre
    $\int_0^3 x e^{-x^2}\, dx = \frac{-e^{-x^2}}{2}\Big|_0^3 \approx 0.49994.$
    2-point ≈ 0.64937, error ≈ 0.14943.
    3-point ≈ 0.46397, error ≈ 0.03597.
    4-point ≈ 0.50269, error ≈ 0.00275.
    5-point ≈ 0.50007, error ≈ 0.00013.
    6-point ≈ 0.49989, error ≈ 0.00005.
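The Example 4 column can be reproduced with standard Gauss-Legendre nodes and weights. The sketch below computes them from scratch with Newton's method on $P_n$ (a standard approach, not shown in the slides); all names are illustrative:

```python
import math

def leg_and_deriv(n, x):
    """P_n(x) and P_n'(x) via the slide-8 recurrence, plus
    P_n'(x) = n (x P_n(x) - P_{n-1}(x)) / (x^2 - 1)."""
    p_prev, p = 1.0, x
    for k in range(1, n):
        p_prev, p = p, ((2*k + 1) * x * p - k * p_prev) / (k + 1)
    return p, n * (x * p - p_prev) / (x*x - 1)

def leggauss(n):
    """Nodes (roots of P_n, by Newton from cosine initial guesses)
    and weights w_i = 2 / ((1 - x_i^2) P_n'(x_i)^2)."""
    xs, ws = [], []
    for i in range(1, n + 1):
        x = math.cos(math.pi * (i - 0.25) / (n + 0.5))
        for _ in range(100):
            p, dp = leg_and_deriv(n, x)
            x -= p / dp
        xs.append(x)
        ws.append(2.0 / ((1.0 - x*x) * dp * dp))
    return xs, ws

def gauss(f, a, b, n):
    """n-point Gauss-Legendre approximation of the integral over [a, b]."""
    xs, ws = leggauss(n)
    return sum(w * (b - a) / 2 * f((b - a) / 2 * x + (a + b) / 2)
               for x, w in zip(xs, ws))

exact = 0.5 * math.log(10.0)        # integral of x/(1+x^2) over [0, 3]
for n in range(2, 7):
    print(n, round(gauss(lambda t: t / (1 + t*t), 0.0, 3.0, n), 5))
```

The printed values should match the 2- through 6-point column of Example 4.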
  • 29. Example 6:
    $\int_0^{\pi/2} \sin^2(x)\, dx$: Trapezoid: 0.78460183690360; 2-point ≈ 0.78539816339745; 3-point ≈ 0.78539816339745.
    $\int_0^{\pi} \sin^2(x)\, dx$: Trapezoid: 1.57079632662673; 2-point ≈ 1.19283364797927; 3-point ≈ 1.60606730236915.
    $\int_0^{3\pi/2} \sin^2(x)\, dx$: Trapezoid: 2.35580550989210; 2-point ≈ 2.35619449019234; 3-point ≈ 2.35619449019234.
    $\int_0^{2\pi} \sin^2(x)\, dx$: Trapezoid: 3.14159265355679; 2-point ≈ 5.91940603385020; 3-point ≈ 1.47666903877755; 4-point ≈ 3.53659228676239; 5-point ≈ 3.08922572211956; 6-point ≈ 3.14606122123817; 7-point ≈ 3.14132550162258; 8-point ≈ 3.14131064749986.
    (The slide also includes a plot; only the numerical results are reproduced here.)
  • 30. Example 7: Adaptive Gaussian-Legendre
    $\int_0^3 \frac{x}{1+x^2}\, dx$:
    1) Adaptive Gaussian integration, error ≈ 5 × 10⁻⁵ ⇒ 1.15114335351486.
    2) Adaptive Gaussian integration, error ≈ 5 × 10⁻⁴ ⇒ 1.15137188448013.
    $\int_0^3 x e^{-x^2}\, dx$:
    1) Adaptive Gaussian integration, error ≈ 5 × 10⁻⁵ ⇒ 0.49980229291620.
    2) Adaptive Gaussian integration, error ≈ 5 × 10⁻⁴ ⇒ 0.49988858784837.
  • 31. Example 8: Gaussian-Chebyshev
    2-point Chebyshev integration ≈ 0.48538619428604; 3-point Chebyshev integration ≈ 1.39530571408271.
    2-point Chebyshev: 0; 3-point Chebyshev: 0.33089431565488.
  • 32. Example 9: deriving the 3-point rule.
    1) $w_1 + w_2 + w_3 = 2$
    2) $w_1 x_1 + w_2 x_2 + w_3 x_3 = 0$
    3) $w_1 x_1^2 + w_2 x_2^2 + w_3 x_3^2 = \tfrac{2}{3}$
    4) $w_1 x_1^3 + w_2 x_2^3 + w_3 x_3^3 = 0$
    5) $w_1 x_1^4 + w_2 x_2^4 + w_3 x_3^4 = \tfrac{2}{5}$
    6) $w_1 x_1^5 + w_2 x_2^5 + w_3 x_3^5 = 0$
    Combining (2) with (4), and (4) with (6), and eliminating $x_3$ gives $x_1 = -x_2$ and $w_1 = w_2$; substituting into (2) then gives $w_3 x_3 = 0$.
    From (3) and (5): $2 w_1 x_1^2 = \tfrac{2}{3}$ and $2 w_1 x_1^4 = \tfrac{2}{5}$, so $x_1^2 = \tfrac{3}{5}$ and $w_1 = w_2 = \tfrac{5}{9}$; then (1) gives $w_3 = \tfrac{8}{9}$, hence $x_3 = 0$.
    $(w_1, w_2, w_3) = \left(\tfrac{5}{9},\, \tfrac{5}{9},\, \tfrac{8}{9}\right), \qquad (x_1, x_2, x_3) = \left(\sqrt{\tfrac{3}{5}},\, -\sqrt{\tfrac{3}{5}},\, 0\right).$
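The derived 3-point rule can be checked against all six moment equations at once; a short Python verification (illustrative code, not from the deck):

```python
import math

# The 3-point solution derived above
w = [5/9, 5/9, 8/9]
x = [math.sqrt(3/5), -math.sqrt(3/5), 0.0]

# sum_i w_i x_i^k must equal the integral of x^k over [-1, 1] for k = 0..5
for k in range(6):
    lhs = sum(wi * xi**k for wi, xi in zip(w, x))
    rhs = (1 - (-1)**(k + 1)) / (k + 1)
    assert abs(lhs - rhs) < 1e-12
print("all six moment equations hold")
```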
  • 33. Conclusion
    • In this talk I focused on Gaussian integration.
    • It was shown that this method has a good error bound and is very useful when we have the exact formula of the function.
    • Using adaptive methods is highly recommended.
    • A general technique for this kind of integration was also presented.
    • The MATLAB code was also explained.