(Artificial) Neural Network

Putri Wikie Novianti

Reading Group
July 11, 2012
Analogy with human brain
[Diagram: a single neuron. Inputs X1 … XN enter with weights W1 … WN and a bias b; the net input Y_in is passed through an activation function F to produce the output Y_out = Y.]

Activation functions:
1. Binary step function
2. Bipolar step function
3. Sigmoid function
4. Linear function
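The four activation functions listed above can be written out directly. A minimal Python sketch (my own illustration, not from the slides; thresholds default to 0):

```python
import numpy as np

# Net input of a single neuron: y_in = sum_i(w_i * x_i) + b
def net_input(x, w, b):
    return np.dot(w, x) + b

# 1. Binary step: outputs 0 or 1
def binary_step(y_in, theta=0.0):
    return np.where(y_in >= theta, 1, 0)

# 2. Bipolar step: outputs -1 or +1
def bipolar_step(y_in, theta=0.0):
    return np.where(y_in >= theta, 1, -1)

# 3. Sigmoid: smooth output in (0, 1)
def sigmoid(y_in):
    return 1.0 / (1.0 + np.exp(-y_in))

# 4. Linear: identity, output equals net input
def linear(y_in):
    return y_in
```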
(Artificial) Neural Network
Perceptron

[Diagram: a perceptron with inputs X1, X2, weights W1, W2, and bias b; the net input Y_in passes through activation F to give Y_out, feeding outputs Y1, Y2.]

Initialization:
- Weights (wi) and bias
- Learning rate (α)
- Maximum epoch

Stopping criteria:
- No weight changes in an epoch
- Error = 0
- Maximum epoch reached

Update rule for weights and bias:
W_ij = W_ij + α · t_j · X_ki
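The update rule and stopping criteria above can be sketched as a simple perceptron training loop (illustrative Python; the function and variable names are my own, not from the slides):

```python
import numpy as np

def train_perceptron(X, t, alpha=0.1, max_epoch=100):
    """Train a single-output perceptron with a bipolar step activation.

    X: (n_samples, n_features) inputs; t: targets in {-1, +1}.
    Stops when an epoch produces no weight change or max_epoch is reached.
    """
    w = np.zeros(X.shape[1])  # weights, started at zero here for simplicity
    b = 0.0                   # bias
    for epoch in range(max_epoch):
        changed = False
        for xk, tk in zip(X, t):
            y_in = np.dot(w, xk) + b
            y = 1 if y_in >= 0 else -1
            if y != tk:
                # Slide's update rule: w_ij = w_ij + alpha * t_j * x_ki
                w += alpha * tk * xk
                b += alpha * tk
                changed = True
        if not changed:  # stopping criterion: no weight changes
            break
    return w, b
```

For example, trained on the AND function with bipolar inputs, the loop converges in a few epochs to a separating line.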
Backpropagation
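The backpropagation slides are image-only in this transcript. As a rough reference, one backpropagation step for a one-hidden-layer network with sigmoid units and squared error might look like the following (my own sketch under those assumptions, not taken from the slides):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, t, W1, b1, W2, b2, alpha=0.1):
    """One gradient-descent step on squared error for a 1-hidden-layer net."""
    # Forward pass
    z1 = W1 @ x + b1          # hidden net input
    h = sigmoid(z1)           # hidden activations
    z2 = W2 @ h + b2          # output net input
    y = sigmoid(z2)           # output activation

    # Backward pass; sigmoid'(z) = y * (1 - y)
    delta2 = (y - t) * y * (1 - y)            # output-layer error term
    delta1 = (W2.T @ delta2) * h * (1 - h)    # hidden-layer error term

    # Gradient-descent updates
    W2 -= alpha * np.outer(delta2, h)
    b2 -= alpha * delta2
    W1 -= alpha * np.outer(delta1, x)
    b1 -= alpha * delta1
    return W1, b1, W2, b2
```

Repeating this step over the training set drives the squared error downward for a sufficiently small learning rate α.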
Some Issues in Training NN

1.   Starting values

     Starting values for the weights are usually chosen to be random values near zero.
     Hence the model starts out nearly linear and becomes nonlinear as the weights
     increase.

     Using exactly zero weights leads to zero derivatives and perfect symmetry, so the
     algorithm never moves.

2.   Overfitting

     Typically we do not want the global minimizer of R(θ), as this is likely to be an
     overfit solution. Instead some regularization is needed: this is achieved directly
     through a penalty term, or indirectly by early stopping.

     R(θ): the error as a function of the complete set of weights θ
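Both points can be made concrete in a few lines: weights are drawn near (but not exactly at) zero, and the penalty-term route to regularization adds a weight-decay term to R(θ). A small sketch (my own, with an illustrative range and λ):

```python
import numpy as np

rng = np.random.default_rng(42)

# 1. Starting values: small random weights near zero (never exactly zero,
#    which would give zero derivatives and perfect symmetry).
W = rng.uniform(-0.1, 0.1, size=(5, 3))

# 2. Overfitting: minimize R(theta) + lambda * sum(theta^2) (weight decay)
#    rather than R(theta) alone.
def penalized_error(residuals, weights, lam=0.01):
    R = np.sum(residuals ** 2)                      # error term R(theta)
    penalty = lam * sum(np.sum(w ** 2) for w in weights)
    return R + penalty
```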
Some Issues in Training NN

3.   Scaling the input

-    Since the scaling of the inputs determines the effective scaling of the weights in
     the bottom layer, it can have a large effect on the final model.
-    It is better to standardize all inputs (mean = 0 and std. dev. = 1).
-    This ensures all inputs are treated equally in the regularization process and
     allows one to choose a meaningful range for the random starting weights.
-    Typical weights for standardized inputs: random uniform weights over the range
     [-0.7, 0.7]
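The standardize-then-initialize recipe above, as a short sketch (my own function names; shapes assumed, not from the slides):

```python
import numpy as np

def standardize(X):
    """Scale each input column to mean 0 and std. dev. 1."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

def init_weights(n_in, n_out, rng=None):
    """Random uniform weights over [-0.7, 0.7], suited to standardized inputs."""
    rng = rng or np.random.default_rng()
    return rng.uniform(-0.7, 0.7, size=(n_out, n_in))
```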
Some Issues in Training NN

4.      Number of hidden units and layers

      - Better to have too many hidden units than too few.
        Too few hidden units: less flexibility in the model (hard to capture nonlinearity).
        Too many hidden units: the extra weights can be shrunk towards zero when
        appropriate regularization is used.
      - Typical # of hidden units: [5, 100]

5.      Multiple Minima

      - The error function R(θ) is non-convex, possessing many local minima.
        As a result, the final solution depends strongly on the choice of starting weights.
      - Solutions:
        * average the predictions over a collection of networks as the final prediction
        * average the weights
        * bagging: average the predictions of networks trained on randomly perturbed
        versions of the training data
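The prediction-averaging and bagging remedies for multiple minima can be sketched as follows (illustrative; `fit_network` is a hypothetical helper standing in for one full training run from a random starting weight, not something defined on the slides):

```python
import numpy as np

def averaged_prediction(fit_network, X_train, y_train, X_test,
                        n_nets=5, bagging=False, rng=None):
    """Average predictions over a collection of networks.

    fit_network(X, y, seed) -> predict function; hypothetical stand-in for
    training one network from a seed-dependent random start.
    With bagging=True, each network is trained on a bootstrap resample
    (a randomly perturbed version of the training data).
    """
    rng = rng or np.random.default_rng()
    preds = []
    for seed in range(n_nets):
        if bagging:
            idx = rng.integers(0, len(X_train), size=len(X_train))
            Xb, yb = X_train[idx], y_train[idx]
        else:
            Xb, yb = X_train, y_train
        predict = fit_network(Xb, yb, seed)   # new random start per seed
        preds.append(predict(X_test))
    return np.mean(preds, axis=0)             # final prediction = the average
```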
Example: Simulated Data
Example: ZIP Code Data
Bayesian Neural Nets

A classification competition was held in 2003 by the Neural Information
Processing Systems (NIPS) workshop.

The winner: Neal and Zhang (2006) used a series of preprocessing and
feature selection steps, followed by Bayesian neural networks, Dirichlet
diffusion trees, and combinations of these methods.
Bayesian Neural Nets


Bayesian approach review:
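The Bayesian approach, in brief: instead of a single point estimate of the weights θ, average the network's prediction over samples from the posterior on θ (e.g. drawn by MCMC). A minimal sketch (where `predict` and the posterior samples are hypothetical stand-ins, not from the slides):

```python
import numpy as np

def posterior_predictive(predict, theta_samples, X):
    """Average predict(X, theta) over posterior samples of the weights theta.

    predict: hypothetical network forward pass given parameters theta;
    theta_samples: draws from the posterior p(theta | data).
    """
    return np.mean([predict(X, th) for th in theta_samples], axis=0)
```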
Bagging and Boosting Neural Nets
(Artificial) Neural Network
Computational considerations
References

[1] Zhang, X. Support Vector Machine. Lecture slides, Data Mining course, Fall 2010.
KSA: KAUST.

[2] Hastie, T., Tibshirani, R., Friedman, J. The Elements of Statistical Learning,
2nd ed. 2009. New York: Springer.
