Artificial neural networks (ANNs) are computing systems inspired by biological neural networks. ANNs consist of interconnected nodes that operate in parallel to solve problems. The document discusses ANN components like neurons and weights, compares ANNs to biological neural networks, and outlines ANN architectures, learning methods, applications, and more. It provides an overview of ANNs and their relationship to the human brain.
Basic definitions, terminology, and the working of ANNs are explained. These slides also show how an ANN can be implemented in MATLAB, and contain a detailed explanation of the feedforward backpropagation algorithm.
These slides contain the entire content in brief. My book on ANN, titled "SOFT COMPUTING" (Watson Publication), and my class notes can be referred to together.
4. What is an Artificial Neural Network?
• An Artificial Neural Network (ANN) is an efficient computing system whose central theme is borrowed from biological neural networks.
• ANNs are also called "artificial neural systems," "parallel distributed processing systems," or "connectionist systems."
• An ANN comprises a large collection of units that are interconnected in some pattern that allows communication between them.
• These units, also referred to as nodes or neurons, are simple processors that operate in parallel.
• Every neuron is connected to other neurons through connection links. Each connection link is associated with a weight that carries information about the input signal.
• This weight is the most useful information a neuron has for solving a particular problem, because the weight usually excites or inhibits the signal being communicated.
• Each neuron has an internal state, called an activation signal. Output signals, produced by combining the input signals with an activation rule, may be sent on to other units.
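As a concrete sketch of the unit just described, the following Python function (illustrative, not from any particular library) sums the weighted inputs together with a bias and "fires" when the net input exceeds a threshold:

```python
# Illustrative sketch of a single artificial neuron: weighted inputs are
# summed together with a bias, and the neuron fires (outputs 1) when the
# net input exceeds a threshold.
def neuron_output(inputs, weights, bias, threshold=0.0):
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if net > threshold else 0

# An excitatory weight (0.7) and an inhibitory weight (-0.2):
print(neuron_output([1, 1], [0.7, -0.2], bias=0.0))  # net = 0.5 -> prints 1
```

Here the positive weight excites the signal and the negative weight inhibits it, exactly as the bullet above describes.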
6. • A neuron (also known as a neurone or nerve cell) is a cell that carries electrical impulses. Neurons are the basic units of the nervous system, whose most important part is the brain.
• Every neuron is made of a cell body (also called a soma), dendrites and an axon. Dendrites and axons are nerve fibres. There are about 86 billion neurons in the human brain, roughly half of all brain cells. The neurons are supported by glial cells such as astrocytes.
• Neurons are connected to one another and to other tissues. They do not touch; instead they form tiny gaps called synapses. These gaps can be chemical synapses or electrical synapses, and they pass the signal from one neuron to the next.
Biological Neuron
7. Biological Neuron
• Dendrite — It receives signals from other neurons.
• Soma (cell body) — It sums all the incoming signals to generate the input.
• Axon — When the sum reaches a threshold value, the neuron fires and the signal travels down the axon to the other neurons.
• Synapse — The point of interconnection of one neuron with other neurons. The amount of signal transmitted depends on the strength (synaptic weight) of the connection.
• Connections can be inhibitory (decreasing strength) or excitatory (increasing strength) in nature.
• So a neural network, in general, is a highly interconnected network of billions of neurons with trillions of interconnections between them.
8. How is the Brain Different from a Computer?
• Processing units — Brain: biological neurons (nerve cells). Computer: silicon transistors.
• Scale — Brain: 200 billion neurons, 32 trillion interconnections. Computer: 1 billion bytes of RAM, trillions of bytes on disk.
• Unit size — Brain: neuron size ~10^-6 m. Computer: single transistor size ~10^-9 m.
• Energy consumption — Brain: ~10^-6 joules per operation per second. Computer: ~10^-16 joules per operation per second.
• Adaptability — Brain: learning capability. Computer: programming capability.
9. Comparing ANN with BNN
• As the ANN concept is borrowed from the BNN, there are a lot of similarities, though there are differences too.
• The similarities are shown in the following table:
• Soma (BNN) → Node (ANN)
• Dendrites (BNN) → Input (ANN)
• Synapse (BNN) → Weights or interconnections (ANN)
• Axon (BNN) → Output (ANN)
10. The following table shows some differences between ANN and BNN:
Comparing ANN with BNN
• Processing — BNN: massively parallel, slow but superior to ANN. ANN: massively parallel, fast but inferior to BNN.
• Size — BNN: 10^11 neurons and 10^15 interconnections. ANN: 10^2 to 10^4 nodes (mainly depending on the type of application and the network designer).
• Learning — BNN: can tolerate ambiguity. ANN: very precise, structured and formatted data is required to tolerate ambiguity.
• Fault tolerance — BNN: performance degrades with even partial damage. ANN: capable of robust performance, hence has the potential to be fault tolerant.
• Storage capacity — BNN: stores the information in the synapses. ANN: stores the information in continuous memory locations.
12. • The dendrites in a biological neural network are analogous to the weighted inputs, based on their synaptic interconnections, in an artificial neural network.
• The cell body is analogous to the artificial neuron unit, which also comprises a summation and a threshold unit.
• The axon carries the output and is analogous to the output unit of an artificial neural network. So ANNs are modelled on the working of basic biological neurons.
Analogy of ANN with BNN
14. Model of Artificial Neural Network
• Artificial neural networks can be viewed as weighted directed graphs in
which artificial neurons are nodes and directed edges with weights are
connections between neuron outputs and neuron inputs.
• The artificial neural network receives input from the external world in the form of patterns and images in vector form. These inputs are mathematically designated by the notation x(n) for n inputs.
• Each input is multiplied by its corresponding weight. Weights are the information used by the neural network to solve a problem. Typically a weight represents the strength of the interconnection between neurons inside the network.
• The weighted inputs are all summed up inside the computing unit (the artificial neuron). In case the weighted sum is zero, a bias is added to make the output non-zero, or to scale up the system response. The bias has its own weight, and its input is always equal to '1'.
15. • The weighted sum can correspond to any numerical value.
• In order to limit the response to a desired range, a threshold value is set up. For this, the sum is passed through an activation function.
• The activation function is the transfer function used to get the desired output. There are linear as well as non-linear activation functions.
• Some commonly used activation functions are the binary step function and the sigmoidal functions, namely the logistic sigmoid and the tan hyperbolic (tanh) function, both non-linear.
• Binary — the output has only two values, 0 and 1. For this, a threshold value is set up; if the net weighted input exceeds the threshold, the output is taken as 1, otherwise 0.
• Sigmoidal — this function has an 'S'-shaped curve. The logistic sigmoid is defined as f(x) = 1/(1 + exp(-σx)), where σ is the steepness parameter; the hyperbolic tangent is used in the same way to approximate the output from the net input.
Model of Artificial Neural Network
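The activation functions named above can be sketched directly; the function names here are illustrative, not taken from any library:

```python
import math

# Illustrative sketches of the activation functions named above.
def binary_step(net, threshold=0.0):
    # Output is 0 or 1 depending on the threshold.
    return 1 if net > threshold else 0

def logistic_sigmoid(x, sigma=1.0):
    # f(x) = 1 / (1 + exp(-sigma * x)); 'S'-shaped, output in (0, 1).
    return 1.0 / (1.0 + math.exp(-sigma * x))

def tanh(x):
    # Hyperbolic tangent; 'S'-shaped, output in (-1, 1).
    return math.tanh(x)

print(binary_step(0.4))       # prints 1
print(logistic_sigmoid(0.0))  # prints 0.5
print(tanh(0.0))              # prints 0.0
```

The steepness parameter sigma controls how sharply the sigmoid transitions between its two limits; as sigma grows, the sigmoid approaches the binary step.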
16. Architecture
A typical neural network contains a large number of artificial neurons, called units, arranged in a series of layers.
A typical artificial neural network comprises the following layers:
17. Architecture
• Input layer — It contains those units (artificial neurons) which receive input from the outside world, on which the network will learn, recognise or otherwise process.
• Output layer — It contains units that respond with information about how the network has learned any task.
• Hidden layer — These units sit between the input and output layers. The job of a hidden layer is to transform the input into something that the output units can use.
• Most neural networks are fully connected, which means that each hidden neuron is connected to every neuron in its previous (input) layer and in the next (output) layer.
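A minimal sketch of this fully connected, layered architecture; the layer sizes and weight values below are arbitrary illustrative choices, not from the slides:

```python
import math

# Illustrative forward pass through a fully connected network:
# input layer (2 units) -> hidden layer (2 units) -> output layer (1 unit).
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each row of `weights` holds one unit's connections to every input,
    # so every unit in this layer sees every unit in the previous layer.
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

x = [0.5, -1.0]                                            # input layer
hidden = layer(x, [[0.4, 0.3], [-0.6, 0.9]], [0.1, 0.0])   # hidden layer
output = layer(hidden, [[1.2, -0.8]], [0.05])              # output layer
print(output)  # a single value in (0, 1)
```

Because every unit in one layer connects to every unit in the next, each layer is just a weighted sum plus bias per unit, passed through the activation function.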
18. Learning in Biology(Human)
• Learning = learning by adaptation
• The young animal learns that the green fruits are sour, while the
yellowish/reddish ones are sweet. The learning happens by adapting the
fruit picking behaviour.
• At the neural level, learning happens through changes in synaptic strengths, the elimination of some synapses, and the building of new ones.
• The objective of adapting responses on the basis of information received from the environment is to achieve a better state. E.g. the animal likes to eat many energy-rich, juicy fruits that make its stomach full and make it feel happy.
• In other words, the objective of learning in biological organisms is to optimise the amount of available resources or happiness, or in general to achieve a state closer to optimal.
20. Types of Learning in Neural Network
• Supervised learning — The training data is input to the network and the desired output is known; weights are adjusted until the output yields the desired value.
• Unsupervised learning — The input data is used to train the network, but the desired output is not known. The network classifies the input data and adjusts the weights by feature extraction from the input data.
• Reinforcement learning — The value of the desired output is unknown, but the network is given feedback on whether its output is right or wrong. It is a form of semi-supervised learning.
• Offline learning — The adjustment of the weight vector and threshold is done only after the entire training set has been presented to the network. It is also called batch learning.
• Online learning — The adjustment of the weights and threshold is done after presenting each training sample to the network.
21. Learning Data sets in ANN
• Training set: A set of examples used for learning, that is, to fit the parameters (i.e. the weights) of the network. One epoch comprises one full training cycle on the training set.
• Validation set: A set of examples used to tune the hyperparameters (i.e. the architecture) of the network, for example to choose the number of hidden units in a neural network.
• Test set: A set of examples used only to assess the performance (generalisation) of a fully specified network on inputs it has not been trained on.
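One simple way to produce the three sets is a shuffled split. This is an illustrative sketch; the 70/15/15 proportions and the function name are assumptions, not something fixed by the slides:

```python
import random

# Illustrative shuffle-and-split of a data set into training, validation
# and test sets (70/15/15 here; the proportions are an assumption).
def split_dataset(examples, train_frac=0.7, val_frac=0.15, seed=42):
    examples = list(examples)
    random.Random(seed).shuffle(examples)  # fixed seed for reproducibility
    n_train = int(len(examples) * train_frac)
    n_val = int(len(examples) * val_frac)
    return (examples[:n_train],
            examples[n_train:n_train + n_val],
            examples[n_train + n_val:])

train_set, val_set, test_set = split_dataset(range(100))
print(len(train_set), len(val_set), len(test_set))  # prints 70 15 15
```

Shuffling before splitting matters: it keeps each of the three sets representative of the whole, so the validation and test performance estimates are not biased by the original ordering.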
22. Learning principle for
artificial neural networks
• Learning occurs when the weights inside the network get updated over many iterations.
• For example, suppose we have inputs in the form of patterns for two different classes, 'I' and 'O', as shown, with b the bias and y the desired output.
• We want to classify each input pattern as either pattern 'I' or pattern 'O'.
• The following steps are performed:
• The 9 inputs x1 to x9, along with the bias b (an input fixed at the value 1), are fed to the network for the first pattern.
• Initially, the weights are initialised to zero.
• Then the weights are updated for each neuron using the formula Δwi = xi y for i = 1 to 9 (Hebb's rule).
23. • Finally, the new weights are found using the formula:
• wi(new) = wi(old) + Δwi
• After the first pattern, wi(new) = [1 1 1 -1 1 -1 1 1 1], with bias b = 1.
• The second pattern is input to the network. This time the weights are not initialised to zero; the initial weights used here are the final weights obtained after presenting the first pattern. By doing so, the network retains what it has already learned.
• Steps 1 to 4 are repeated for the second input.
• The new weights are wi(new) = [0 0 0 -2 2 -2 0 0 0], with bias b = 0.
• These weights correspond to the network's learned ability to classify the input patterns successfully.
Learning principle for
artificial neural networks
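The Hebbian updates above can be reproduced in a few lines. The +1/-1 encodings of the 'I' and 'O' patterns below are a plausible reading of the slides' 3x3 figures, which are not shown here, so they are an assumption:

```python
# Hebb's rule on the two patterns described above. The +1/-1 pattern
# encodings are assumed; targets are y = +1 for 'I' and y = -1 for 'O'.
def hebb_train(patterns, targets):
    n = len(patterns[0])
    weights, bias = [0] * n, 0                # weights start at zero
    for x, y in zip(patterns, targets):
        # Hebb's rule: dw_i = x_i * y; the bias input is fixed at 1.
        weights = [w + xi * y for w, xi in zip(weights, x)]
        bias += y
    return weights, bias

pattern_I = [1, 1, 1, -1, 1, -1, 1, 1, 1]   # 'I' on a 3x3 grid (assumed)
pattern_O = [1, 1, 1, 1, -1, 1, 1, 1, 1]    # 'O' on a 3x3 grid (assumed)
weights, bias = hebb_train([pattern_I, pattern_O], [1, -1])
print(weights, bias)  # prints [0, 0, 0, -2, 2, -2, 0, 0, 0] 0
```

Under this encoding the centre weight comes out as +2, since 'I' excites the centre cell while 'O' inhibits it; the surviving non-zero weights are exactly the cells where the two patterns differ, which is what lets the network tell them apart.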
24. Uses of ANN
Using ANNs requires an understanding of their characteristics.
• Choice of model: This depends on the data representation and the
application. Overly complex models slow learning.
• Learning algorithm: Numerous trade-offs exist between learning algorithms. Almost any algorithm will work well with the correct hyperparameters for training on a particular data set. However, selecting and tuning an algorithm that generalises to unseen data requires significant experimentation.
• Robustness: If the model, cost function and learning algorithm are selected appropriately, the resulting ANN can be robust.
25. ANN capabilities fall within the following broad categories
• Function approximation, or regression analysis, including time series
prediction, fitness approximation and modelling.
• Classification, including pattern and sequence recognition, novelty
detection and sequential decision making.
• Data processing, including filtering, clustering, blind source separation and
compression.
• Robotics, including directing manipulators and prostheses.
• Control, including computer numerical control.
Uses of ANN
26. • Classification — A neural network can be trained to classify a given pattern or data set into predefined classes. This uses feedforward networks.
• Prediction — A neural network can be trained to produce outputs that are expected from a given input, e.g. stock market prediction.
• Clustering — A neural network can be used to identify special features of the data and classify them into different categories without any prior knowledge of the data.
• The following networks are used for clustering:
• Competitive networks
• Adaptive Resonance Theory Networks
• Kohonen Self-Organizing Maps.
• Association — A neural network can be trained to remember certain patterns, so that when a noisy pattern is presented, the network associates it with the closest pattern in its memory or discards it. E.g. Hopfield networks, which perform recognition, classification and clustering.
Uses of ANN
27. Applications
• Because of their ability to reproduce and model nonlinear processes,
ANNs have found many applications in a wide range of disciplines.
• Application areas include system identification and control (vehicle control, trajectory prediction, process control, natural resources management), quantum chemistry, game-playing and decision making (backgammon, chess, poker), pattern recognition (radar systems, face identification, signal classification, object recognition and more), sequence recognition (gesture, speech, handwritten text recognition), medical diagnosis, finance (e.g. automated trading systems), data mining, visualization, machine translation, social network filtering and e-mail spam filtering.
• ANNs have been used to diagnose cancers, including lung cancer, prostate
cancer, colorectal cancer and to distinguish highly invasive cancer cell lines
from less invasive lines using only cell shape information.
• ANNs have been used for building black-box models in the geosciences: hydrology, ocean modelling, coastal engineering and geomorphology are just a few examples.
28. Since neural networks are best at identifying patterns or trends in data,
they are well suited for prediction or forecasting needs including:
• sales forecasting
• industrial process control
• customer research
• data validation
• risk management
• target marketing
Applications
29. Summary
• Artificial neural networks are inspired by the learning processes that take
place in biological systems.
• Artificial neurons and neural networks try to imitate the working
mechanisms of their biological counterparts.
• Learning can be perceived as an optimisation process.
• Biological neural learning happens by the modification of the synaptic
strength. Artificial neural networks learn in the same way.
• The synapse strength modification rules for artificial neural networks can
be derived by applying mathematical optimisation methods.
30. Summary
• Learning tasks of artificial neural networks can be reformulated as
function approximation tasks.
• Neural networks can be considered as nonlinear function approximating
tools (i.e., linear combinations of nonlinear basis functions), where the
parameters of the networks should be found by applying optimisation
methods.
• The optimisation is done with respect to the approximation error
measure.
• In general it is enough to have a single hidden layer neural network (MLP,
RBF or other) to learn the approximation of a nonlinear function. In such
cases general optimisation can be applied to find the change rules for the
synaptic weights.
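The single-hidden-layer view sketched above can be written out as a formula. The symbols here are illustrative: φ is the hidden units' non-linear basis (activation) function, w_j and b_j are the weights and bias of hidden unit j, and c_j are the output-layer coefficients to be found by optimisation:

```latex
\hat{f}(\mathbf{x}) \;=\; \sum_{j=1}^{m} c_j \,\varphi\!\left(\mathbf{w}_j^{\top}\mathbf{x} + b_j\right)
```

Minimising an approximation error measure such as \sum_k \bigl(\hat{f}(\mathbf{x}_k) - y_k\bigr)^2 over the parameters c_j, w_j and b_j is exactly the optimisation with respect to the approximation error referred to above.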