Module 12: Machine Learning
Lesson 39: Neural Networks - III
Version 2 CSE IIT, Kharagpur
12.4.4 Multi-Layer Perceptrons

In contrast to single perceptrons, multilayer networks can learn not only multiple decision
boundaries but also boundaries that are nonlinear. The typical architecture of a multi-layer
perceptron (MLP) is shown below.


[Figure: a multi-layer perceptron, with a layer of input nodes at the bottom, internal
(hidden) nodes in the middle, and output nodes at the top.]
To make nonlinear partitions of the input space, we need each unit to compute a nonlinear
function (unlike the perceptron). One solution is to use the sigmoid unit. Another reason
for using sigmoids is that, unlike linear threshold units, they are continuous and hence
differentiable at all points.

[Figure: a sigmoid unit. Inputs x1, ..., xn arrive with weights w1, ..., wn, together with
a fixed input x0 = 1 carrying weight w0; their weighted sum is net = Σ_i w_i x_i, and the
unit outputs O = σ(net) = 1 / (1 + e^-net).]

    O(x1, x2, ..., xn) = σ(W · X)

    where: σ(W · X) = 1 / (1 + e^(-W · X))

The function σ is called the sigmoid or logistic function. It has the following property:

    dσ(y)/dy = σ(y) (1 - σ(y))
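This identity is easy to confirm numerically; the following sketch compares it against a
finite-difference estimate (the sample points are arbitrary):

    import numpy as np

    def sigmoid(y):
        """Logistic function: sigma(y) = 1 / (1 + exp(-y))."""
        return 1.0 / (1.0 + np.exp(-y))

    def sigmoid_deriv(y):
        """The identity above: d sigma/dy = sigma(y) * (1 - sigma(y))."""
        s = sigmoid(y)
        return s * (1.0 - s)

    # Compare against a central finite difference at a few arbitrary points.
    ys = np.array([-2.0, 0.0, 1.5])
    eps = 1e-6
    numeric = (sigmoid(ys + eps) - sigmoid(ys - eps)) / (2 * eps)
    print(np.allclose(numeric, sigmoid_deriv(ys)))  # True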




12.4.4.1 Back-Propagation Algorithm

Multi-layered perceptrons can be trained using the back-propagation algorithm described
next.

Goal: To learn the weights for all links in an interconnected multilayer network.

We begin by defining our measure of error:

    E(W) = ½ Σ_d Σ_k (t_kd - o_kd)^2

where k ranges over the output nodes and d over the training examples.

The idea is again to use gradient descent over the space of weights, seeking a global
minimum (though there is no guarantee of finding one).

Algorithm:

   1. Create a network with n_in input nodes, n_hidden internal nodes, and n_out output
      nodes.
   2. Initialize all weights to small random numbers.
   3. Until error is small do:
               For each example X do
                   • Propagate example X forward through the network
                   • Propagate errors backward through the network


Forward Propagation

Given example X, compute the output of every node until we reach the output nodes:

[Figure: a forward pass. The example is presented at the input layer and the sigmoid
function is computed at every internal and output node, layer by layer, until the output
nodes are reached. A minimal implementation of this pass appears in the sketch after the
momentum discussion below.]

Backward Propagation

A. For each output node k compute the error:

    δ_k = O_k (1 - O_k)(t_k - O_k)

B. For each hidden unit h, compute the error:

    δ_h = O_h (1 - O_h) Σ_k W_kh δ_k

C. Update each network weight:

    W_ji = W_ji + ΔW_ji,   where ΔW_ji = η δ_j X_ji

(X_ji denotes the input from node i to node j, and W_ji the corresponding weight.)

A momentum term, which depends on the weight update from the previous iteration, may also
be added to the update rule. At iteration n we have:

    ΔW_ji(n) = η δ_j X_ji + α ΔW_ji(n - 1)

where α (0 <= α <= 1) is a constant called the momentum. It has two benefits, and it
appears alongside the update rule in the sketch below:

   1. It can carry the search through small local minima.
   2. It increases the speed of descent along flat regions of the error surface.
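A minimal NumPy sketch of the whole procedure, covering the forward pass, the backward
pass (steps A-C), and the momentum update, for one hidden layer. The layer sizes, learning
rate, momentum value, and XOR training data are illustrative choices, not part of the text:

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(y):
        return 1.0 / (1.0 + np.exp(-y))

    # One hidden layer; the extra column in each weight matrix is the
    # weight w0 for the fixed input x0 = 1.
    n_in, n_hidden, n_out = 2, 2, 1
    W_hid = rng.uniform(-0.05, 0.05, (n_hidden, n_in + 1))   # small random weights
    W_out = rng.uniform(-0.05, 0.05, (n_out, n_hidden + 1))
    dW_hid = np.zeros_like(W_hid)   # previous updates, for the momentum term
    dW_out = np.zeros_like(W_out)

    eta, alpha = 0.5, 0.9           # learning rate and momentum

    def train_step(x, t):
        global W_hid, W_out, dW_hid, dW_out
        # Forward propagation: compute every node's output in turn.
        x = np.append(x, 1.0)
        o_hid = sigmoid(W_hid @ x)
        h = np.append(o_hid, 1.0)
        o_out = sigmoid(W_out @ h)
        # Backward propagation of errors.
        delta_out = o_out * (1 - o_out) * (t - o_out)                    # step A
        delta_hid = o_hid * (1 - o_hid) * (W_out[:, :-1].T @ delta_out)  # step B
        # Step C with momentum: dW(n) = eta * delta * input + alpha * dW(n-1).
        dW_out = eta * np.outer(delta_out, h) + alpha * dW_out
        dW_hid = eta * np.outer(delta_hid, x) + alpha * dW_hid
        W_out = W_out + dW_out
        W_hid = W_hid + dW_hid

    # Train on XOR until the error is small (may occasionally need a rerun
    # from different random weights if it gets stuck in a local minimum).
    data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
    for _ in range(5000):
        for x, t in data:
            train_step(np.array(x, float), np.array(t, float))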

Remarks on Back-propagation

   1.   It implements a gradient descent search over the weight space.
   2.   It may become trapped in local minima.
   3.   In practice, it is very effective.
   4.   How to avoid local minima?
            a) Add momentum.
            b) Use stochastic gradient descent.
            c) Use different networks with different initial values for the weights.


Multi-layered perceptrons have high representational power. They can represent the
following:

   1. Boolean functions. Every boolean function can be represented with a network
      having two layers of units.

   2. Continuous functions. All bounded continuous functions can also be
      approximated with a network having two layers of units.

   3. Arbitrary functions. Any arbitrary function can be approximated with a network
      with three layers of units.



Generalization and overfitting:

   An obvious stopping criterion for backpropagation is to keep iterating until the error
   falls below some threshold; however, this can lead to overfitting.


[Figure: error versus number of weight updates. The training-set error decreases steadily,
while the validation-set error falls at first and then begins to rise as the network
overfits.]
   Overfitting can be avoided using the following strategies.

        •   Use a validation set and stop training when the error on this set stops
            improving, as in the skeleton below.

        •   Use 10-fold cross-validation.

        •   Use weight decay, in which the weights are decreased slightly on each iteration.
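A minimal early-stopping skeleton combining the first and third strategies. It reuses
sigmoid, train_step, W_hid, and W_out from the sketch above; training_set and
validation_set are assumed lists of (x, t) arrays, and patience and weight_decay are
arbitrary choices:

    def error(dataset):
        """Sum-of-squares error E(W) over a dataset."""
        total = 0.0
        for x, t in dataset:
            xb = np.append(x, 1.0)
            h = np.append(sigmoid(W_hid @ xb), 1.0)
            o = sigmoid(W_out @ h)
            total += 0.5 * np.sum((t - o) ** 2)
        return total

    weight_decay = 1e-4
    best_error, patience, bad_epochs = float("inf"), 20, 0
    for epoch in range(10000):
        for x, t in training_set:
            train_step(x, t)
        W_hid *= (1 - weight_decay)      # weight decay: shrink weights slowly
        W_out *= (1 - weight_decay)
        val_error = error(validation_set)
        if val_error < best_error:       # remember the best weights seen so far
            best_error, bad_epochs = val_error, 0
            best_weights = (W_hid.copy(), W_out.copy())
        else:
            bad_epochs += 1
            if bad_epochs >= patience:   # validation error stopped improving
                break
    W_hid, W_out = best_weights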




   Applications of Neural Networks

   Neural networks have broad applicability to real-world business problems and have
   already been applied successfully in many industries.

   Since neural networks are best at identifying patterns or trends in data, they are well
   suited to prediction and forecasting tasks, including:

        •   sales forecasting
        •   industrial process control
        •   customer research
        •   data validation
        •   risk management
        •   target marketing




Because of their adaptive and nonlinear nature, they have also been used in a number of
control-system application domains, such as:

   •    process control in chemical plants
   •    unmanned vehicles
   •    robotics
   •    consumer electronics

Neural networks are also used in a number of other applications that are too hard to
model with classical techniques, including computer vision, path planning, and user
modeling.

Questions
1. Assume a learning problem where each example is represented by four attributes. Each
attribute can take two values in the set {a,b}. Run the candidate elimination algorithm on
the following examples and indicate the resulting version space. What is the size of the
space?
                  ((a, b, b, a), +)
                  ((b, b, b, b), +)
                  ((a, b, b, b), +)
                  ((a, b, a, a), -)


2. Decision Trees

The following table contains training examples that help a robot janitor predict whether
or not an office contains a recycling bin.

          STATUS DEPT. OFFICE SIZE RECYCLING BIN?
       1. faculty ee   large       no
       2. staff   ee   small       no
       3. faculty cs   medium      yes
       4. student ee   large       yes
       5. staff   cs   medium      no
       6. faculty cs   large       yes
       7. student ee   small       yes
       8. staff   cs   medium      no

Use the entropy splitting function to construct a minimal decision tree that predicts
whether or not an office contains a recycling bin. Show each step of the computation.

3. Translate the above decision tree into a collection of decision rules.

4. Neural networks: Consider the following function of two variables:

           A   B   Desired Output
           0   0         1
           1   0         0
           0   1         0
           1   1         1

Prove that this function cannot be learned by a single perceptron that uses the step
function as its activation function.

   5. Construct by hand a perceptron that computes the logic function implies (=>).
      Assume that 1 = true and 0 = false for all inputs and outputs.

Solution
1. The specific (S) and general (G) boundaries of the version space are as follows:

S: {(?,b,b,?)}
G: {(?,?,b,?)}

The size of the version space is 2.
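The result can be checked by brute force, enumerating every conjunctive hypothesis over
{a, b, ?} and keeping those consistent with all four examples (a sketch, not the
incremental candidate-elimination procedure itself):

    from itertools import product

    examples = [
        (("a", "b", "b", "a"), True),
        (("b", "b", "b", "b"), True),
        (("a", "b", "b", "b"), True),
        (("a", "b", "a", "a"), False),
    ]

    def matches(hypothesis, instance):
        return all(h == "?" or h == x for h, x in zip(hypothesis, instance))

    version_space = [
        h for h in product("ab?", repeat=4)
        if all(matches(h, x) == label for x, label in examples)
    ]
    print(version_space)       # [('?', 'b', 'b', '?'), ('?', '?', 'b', '?')]
    print(len(version_space))  # 2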

2. The solution steps are as follows:

1. Selecting the attribute for the root node. Each bracketed term is the entropy of the
yes/no class split among the examples with one attribute value, weighted by the fraction
of examples having that value (taking 0*log_2(0) = 0):

       Status:

            Faculty:     (3/8)*[ -(2/3)*log_2(2/3) - (1/3)*log_2(1/3)]
            Staff:     + (3/8)*[ -(0/3)*log_2(0/3) - (3/3)*log_2(3/3)]
            Student:   + (2/8)*[ -(2/2)*log_2(2/2) - (0/2)*log_2(0/2)]
            TOTAL:     = 0.35 + 0 + 0 = 0.35

       Size:

            Large:       (3/8)*[ -(2/3)*log_2(2/3) - (1/3)*log_2(1/3)]
            Medium:    + (3/8)*[ -(1/3)*log_2(1/3) - (2/3)*log_2(2/3)]
            Small:     + (2/8)*[ -(1/2)*log_2(1/2) - (1/2)*log_2(1/2)]
            TOTAL:     = 0.35 + 0.35 + 0.25 = 0.95




       Dept:

            ee:          (4/8)*[ -(2/4)*log_2(2/4) - (2/4)*log_2(2/4)]
            cs:        + (4/8)*[ -(2/4)*log_2(2/4) - (2/4)*log_2(2/4)]
            TOTAL:     = 0.5 + 0.5 = 1
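The weighted entropies above can be reproduced with a short script (a sketch; the
attribute indices and three-decimal rounding are illustrative):

    import math

    # (status, dept, size, bin) for the eight training examples.
    examples = [
        ("faculty", "ee", "large",  "no"),
        ("staff",   "ee", "small",  "no"),
        ("faculty", "cs", "medium", "yes"),
        ("student", "ee", "large",  "yes"),
        ("staff",   "cs", "medium", "no"),
        ("faculty", "cs", "large",  "yes"),
        ("student", "ee", "small",  "yes"),
        ("staff",   "cs", "medium", "no"),
    ]

    def entropy(labels):
        """Entropy of a list of class labels, in bits (0 log 0 taken as 0)."""
        n = len(labels)
        return -sum(
            (labels.count(c) / n) * math.log2(labels.count(c) / n)
            for c in set(labels)
        )

    def split_entropy(attr):
        """Weighted average entropy of the class after splitting on one attribute."""
        total = 0.0
        for v in {e[attr] for e in examples}:
            subset = [e[3] for e in examples if e[attr] == v]
            total += len(subset) / len(examples) * entropy(subset)
        return total

    for name, i in [("status", 0), ("dept", 1), ("size", 2)]:
        print(name, round(split_entropy(i), 3))
    # status 0.344, dept 1.0, size 0.939 -> status has the lowest entropy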

Since status is the attribute with the lowest entropy, it is selected
as the root node:




       Status
        |- faculty: 1-,3+,6+  ->  BIN=?
        |- staff:   2-,5-,8-  ->  BIN=NO
        '- student: 4+,7+     ->  BIN=YES

Only the branch corresponding to Status=Faculty needs further
processing.

Selecting the attribute to split the branch Status=Faculty:

       Dept:

            ee:          (1/3)*[ -(0/1)*log_2(0/1) - (1/1)*log_2(1/1)]
            cs:        + (2/3)*[ -(2/2)*log_2(2/2) - (0/2)*log_2(0/2)]
            TOTAL:     = 0 + 0 = 0




Since the minimum possible entropy is 0 and dept has that minimum
possible entropy, it is selected as the best attribute to split the
branch Status=Faculty. Note that we don't even have to calculate the
entropy of the attribute size since it cannot possibly be better than
0.

       Status
        |- faculty: 1-,3+,6+  ->  Dept
        |                          |- ee: 1-     ->  BIN=NO
        |                          '- cs: 3+,6+  ->  BIN=YES
        |- staff:   2-,5-,8-  ->  BIN=NO
        '- student: 4+,7+     ->  BIN=YES

Each branch ends in a homogeneous set of examples so the construction
of the decision tree ends here.

3. The rules are:

   IF status=faculty AND dept=ee THEN recycling-bin=no

   IF status=faculty AND dept=cs THEN recycling-bin=yes

   IF status=staff THEN recycling-bin=no

   IF status=student THEN recycling-bin=yes
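Written as code, the rules are a direct transcription of the tree (a sketch; the function
name is an illustrative choice):

    def recycling_bin(status, dept):
        """Decision rules read directly off the tree above."""
        if status == "faculty" and dept == "ee":
            return "no"
        if status == "faculty" and dept == "cs":
            return "yes"
        if status == "staff":
            return "no"
        if status == "student":
            return "yes"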




4. This function cannot be computed by a single perceptron because it is not linearly
separable. Plotting the inputs in the A-B plane, the positive points (0,0) and (1,1) and
the negative points (1,0) and (0,1) sit at opposite corners of a square, so no single
line separates them:

           B
           ^
           |
           0      1
           |
           |
        ---1------0------> A
           |

Formally, a threshold unit would need W1*A + W2*B - W0 > 0 at (0,0) and (1,1) but <= 0 at
(1,0) and (0,1). Summing the first pair of constraints gives W1 + W2 - 2*W0 > 0, while
summing the second pair gives W1 + W2 - 2*W0 <= 0, a contradiction.

5. Let the perceptron compute O = g(W1*X1 + W2*X2 - W0), and choose g to be the threshold
function (output 1 on input > 0, output 0 otherwise).

There are two inputs to the implies function, X1 and X2. Therefore, each row of the truth
table of X1 => X2 gives one constraint on the weights:

    X1=0, X2=0 (true):             -W0 > 0
    X1=0, X2=1 (true):         W2 - W0 > 0
    X1=1, X2=0 (false):        W1 - W0 <= 0
    X1=1, X2=1 (true):    W1 + W2 - W0 > 0

Note that the greater/less-than signs are decided by the threshold function g and the
truth value of the corresponding row. For example, when X1=0 and X2=0 the net input must
be greater than 0, since g is a threshold function and 0 => 0 is true. Similarly, when
X1=1 and X2=0 the net input must not exceed 0, since 1 => 0 is false.

Arbitrarily choose W2 = 1 and substitute into the constraints above for all values of X1
and X2. This leaves W0 < 0, W0 >= W1, and W0 - W1 < 1.

Since W0 < 0, and W0 >= W1, and W0 - W1 < 1, choose W1 = -1. Substituting where possible
leaves -1 <= W0 < 0.

Since W0 < 0 and W0 >= -1, choose W0 = -0.5.

Thus W0 = -0.5, W1 = -1, W2 = 1.

Finally, O = g(-X1 + X2 + 0.5) reproduces the truth table of X1 => X2 on all four input
combinations, as the check below confirms.
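A short check of the final weights (a sketch following the derivation above):

    def g(net):
        """Threshold function: output 1 on input > 0, output 0 otherwise."""
        return 1 if net > 0 else 0

    W0, W1, W2 = -0.5, -1.0, 1.0
    for x1 in (0, 1):
        for x2 in (0, 1):
            out = g(W1 * x1 + W2 * x2 - W0)
            expected = int((not x1) or x2)   # truth table of X1 => X2
            print(x1, x2, out, expected)     # out equals expected in every row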



