
Early Stopping in Deep Learning

  1. We stop the training process when we see no improvement in the validation error at the end of an epoch.
  2. Key parameters:
     1. Patience – how many epochs without improvement to wait before finally stopping training.
     2. Delta – the minimum change in the KPI that counts as a real improvement. For example, a 0.000001% decrease in validation error is too small to be treated as an improvement.
     3. Keep best weights – suppose the validation error keeps decreasing from epoch 1 to 10 and then starts increasing. With a patience of 4, we wait until epoch 14 to stop training. In this scenario, the best validation error was at the end of epoch 10, so we keep the weights from epoch 10.
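The logic above can be sketched as a small helper class. This is a minimal illustration, not any particular library's API; the class name, `step` method, and the toy error curve are all invented for this example:

```python
class EarlyStopping:
    """Minimal early-stopping sketch (hypothetical, not a library API).

    patience: epochs of no improvement to tolerate before stopping.
    delta: minimum decrease in validation error that counts as improvement.
    """

    def __init__(self, patience=4, delta=1e-4):
        self.patience = patience
        self.delta = delta
        self.best_error = float("inf")
        self.best_weights = None   # weights from the best epoch so far
        self.wait = 0              # epochs since the last real improvement

    def step(self, val_error, weights):
        """Record this epoch's result; return True when training should stop."""
        if val_error < self.best_error - self.delta:
            # Real improvement: remember the error and weights, reset the counter.
            self.best_error = val_error
            self.best_weights = weights
            self.wait = 0
        else:
            self.wait += 1
        return self.wait >= self.patience


# Toy run matching the scenario in the slide: validation error falls for
# epochs 1-10, then rises; with patience 4, training stops at epoch 14,
# and the kept weights are those from epoch 10.
stopper = EarlyStopping(patience=4, delta=1e-6)
errors = [1.0 - 0.05 * e for e in range(1, 11)] + [0.55, 0.60, 0.65, 0.70]
for epoch, err in enumerate(errors, start=1):
    if stopper.step(err, weights=f"weights@{epoch}"):
        break
```

At the end of the run, `stopper.best_weights` holds the epoch-10 weights, which is what "keep best weights" means in practice.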
