2. Full Day of Applied AI
Morning
Session 1 Intro to Artificial Intelligence
09:00-09:45 Introduction to Applied AI
09:45-10:00 Coffee and break
Session 2 Live Coding a machine learning app
10:00-10:10 Getting your machine ready for machine learning
10:10-10:20 Training and evaluating the model
10:20-10:50 Improving the model
10:50-11:00 Coffee and break
Session 3 Machine learning in the wild - deployment
11:00-11:15 Coding exercise continued
11:15-11:45 Serving your own machine learning model | Code
11:45-11:55 How to solve problems | interactive exercise
11:55-12:00 Q and A
Lunch
12:00-13:00 Lunch
Afternoon
Session 4 Hello World Deep Learning (MNIST)
13:00-13:15 Deep Learning intro
13:00-13:15 Image recognition and CNNs | Talk |
13:15-13:45 Building your own convolutional neural network | Code |
13:45-14:00 Coffee and break
Session 5 Natural Language Processing
14:00-14:30 Natural language processing | Talk |
14:30-14:45 Working with language | Code |
14:45-15:00 Coffee and break
Session 6 Conversational interfaces and Time Series
15:00-15:20 Conversational interfaces
15:20-15:45 Time Series prediction
15:45-16:00 Coffee and break
Session 7 Generative models and style transfer
16:00-16:30 Generative models | Talk |
16:30-16:45 Trying out GANs and style transfer | Code |
Anton Osika AI Research Engineer Sana Labs AB
anton.osika@gmail.com
Birger Moëll Machine Learning Engineer
birger.moell@gmail.com
4. A new type of interface for most users
Communicating directly with a computer through text (or speech) is a new type of
interface for most users.
AI voice assistants (Google Home, Alexa) are built to understand
natural-sounding language.
Of course, conversation itself is not new at all (we use it ALL the time). What is new is that we
can communicate with machines using speech and text.
5. Research has shown that people
respond to conversational
technology as they would to
another human
6. Expect users to be informative.
Because users are cooperative, they often
offer more information than is literally
required of them
9. Designing good conversational interfaces is hard.
Almost as hard as talking to people. :-)
Here is Google's conversation design guide, with many tips:
https://designguidelines.withgoogle.com/conversation/conversation-design/learn-about-conversation.html#learn-about-conversation-context
There are also more tips on handling errors
https://designguidelines.withgoogle.com/conversation/conversational-components/errors.html#errors-no-match
10. The overview of conversational interfaces
Intents (what I want) Example, OrderSandwich
Training phrases (I want to order a sandwich with ham)
Entities (what variables the user gives to the agent) (Toppings, Ham,
Cheese)
Responses (how to respond)
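The four concepts above can be sketched in a few lines of code. This is a toy illustration, not the Dialogflow API: the intent name, entity values, and the two-shared-words matching rule are all made up for the example.

```python
# Toy sketch of intents, training phrases, entities, and responses.
# All names here are illustrative, not part of any real framework.
INTENTS = {
    "order_sandwich": {
        "training_phrases": ["i want to order a sandwich",
                             "can i get a sandwich"],
        "entities": {"topping": ["ham", "cheese"]},
        "response": "Sure! One sandwich with {topping} coming up.",
    }
}

def match_intent(utterance):
    """Naively match an utterance against training phrases and pull out entities."""
    words = set(utterance.lower().split())
    for name, intent in INTENTS.items():
        for phrase in intent["training_phrases"]:
            # Crude similarity: require at least two shared words.
            if len(words & set(phrase.split())) >= 2:
                # Collect any known entity values mentioned in the utterance.
                found = {slot: value
                         for slot, values in intent["entities"].items()
                         for value in values if value in words}
                return name, found
    return None, {}
```

For example, `match_intent("I want to order a sandwich with ham")` matches the `order_sandwich` intent and extracts `{"topping": "ham"}`. A real agent replaces the word-overlap rule with a trained language model.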
11. Intents
Intents are the goals a user has in your app.
In our sample app, our intents are Default Welcome Intent, UpdateProfile, and GetHealthTip.
Examples: BuySandwich, BuyShoes, FindRestaurant, BookHairAppointment, LearnAboutEskimos, BookRoom, BookAFlight, BuyFlowers.
12. Training Phrases
Training phrases are how our app figures out intents (what the user wants).
Here, our training phrases are examples of symptoms a user can have in a body part.
13. Entities
Entities are ways to structure knowledge within your app.
For our app, we have an entity of body parts, covering different parts of the body.
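Outside any particular framework, such an entity can be sketched as a map from canonical values to synonyms; the body parts and synonyms below are assumptions for illustration only.

```python
# Sketch of a "body part" entity: canonical value -> accepted synonyms.
# The specific values are illustrative, not taken from the sample app.
BODY_PART = {
    "head": ["head", "skull", "forehead"],
    "stomach": ["stomach", "belly", "tummy"],
}

def extract_body_part(utterance):
    """Return the canonical entity value mentioned in the utterance, if any."""
    words = utterance.lower().split()
    for canonical, synonyms in BODY_PART.items():
        if any(synonym in words for synonym in synonyms):
            return canonical
    return None
```

So `extract_body_part("My belly hurts")` returns `"stomach"`: the agent's logic only ever sees the canonical value, however the user phrased it.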
15. Contexts
With the help of contexts, we can structure our app so that it only responds with specific intents once a certain context is activated.
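The gating effect of a context can be sketched as follows; the `ReportSymptom` intent and the `awaiting_body_part` context name are made up for this example, alongside the real intent names from the sample app.

```python
# Sketch: a context gates which intents are eligible to fire.
# "ReportSymptom" and "awaiting_body_part" are illustrative names.
INTENT_CONTEXTS = {
    "Default Welcome Intent": None,         # available in any context
    "GetHealthTip": None,                   # available in any context
    "ReportSymptom": "awaiting_body_part",  # only once this context is active
}

def eligible_intents(active_context=None):
    """Return the intents the agent may respond with right now."""
    return [name for name, required in INTENT_CONTEXTS.items()
            if required is None or required == active_context]
```

Until the `awaiting_body_part` context is activated, `ReportSymptom` simply cannot match, which keeps the dialog tree narrow.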
16. The overview of conversational interfaces
Intents (what I want) Example, OrderSandwich
Training phrases (I want to order a sandwich with ham)
Entities (what variables the user gives to the agent) (Toppings, Ham,
Cheese)
Responses (how to respond)
17. Let's check out some code!
The conversational agent we have built is made using Google Firebase Cloud
Functions in Node.js for handling the logic once an intent is found.
The agent is made in the GUI and can be zipped and uploaded.
https://github.com/BirgerMoell/fulldaydeeplearning/tree/master/chatbot
22. Integration with Webhooks
Using webhooks, we can make our Dialogflow agent integrate with a custom URL through a POST request.
This makes it possible to build our own agent!
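A minimal sketch of the handler behind such a webhook, written in Python rather than the Node.js Cloud Function used in the repo. The field names (`queryResult`, `intent.displayName`, `parameters`, `fulfillmentText`) follow the Dialogflow ES webhook format; treat them as assumptions to check against the current docs.

```python
import json

def handle_fulfillment(request_body: str) -> str:
    """Turn a Dialogflow webhook POST body into a JSON fulfillment reply.

    Field names follow the Dialogflow ES webhook format (an assumption
    to verify); the health-tip text itself is invented for the example.
    """
    req = json.loads(request_body)
    query = req.get("queryResult", {})
    intent = query.get("intent", {}).get("displayName", "")
    params = query.get("parameters", {})

    if intent == "GetHealthTip":  # intent name from our sample app
        body_part = params.get("body_part", "body")
        text = f"For your {body_part}, try gentle stretching and rest."
    else:
        text = "Sorry, I can't help with that yet."

    return json.dumps({"fulfillmentText": text})
```

In production this function would sit behind an HTTPS endpoint (Cloud Functions, Flask, etc.) that Dialogflow calls whenever the intent is matched.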
24. Using Amazon Lex
AWS has a similar system for building chatbots, called Lex.
The drawback is its lack of support for Swedish.
25. Integration
Once our dialog agent is complete, we can use systems such as Kommunicate to integrate it with websites and smoothly hand off to a human if the bot doesn't understand the user's intent.
27. To build your own chat, try out Dialogflow
https://dialogflow.cloud.google.com/
29. Sequential data and RNNs
Language is an example of sequential data – it moves in one direction.
RNNs have revolutionized sequential data modeling.
Examples where they work great include:
● Language translation
● Language generation
● Video understanding
● Playing StarCraft
Attention has become more popular for many NLP tasks, but RNNs are still
state-of-the-art for pure time series tasks with long input sequences.
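The core idea of recurrence can be shown with a scalar toy unit: the new state mixes the previous state with the current input, so order matters. The weights below are arbitrary constants, not trained values.

```python
import math

def rnn_step(state, x, w_state=0.5, w_input=1.0):
    """One step of a minimal scalar recurrent unit: the new hidden
    state is a squashed mix of the previous state and the input."""
    return math.tanh(w_state * state + w_input * x)

def run_rnn(sequence, state=0.0):
    """Carry the hidden state across the whole sequence and return it."""
    for x in sequence:
        state = rnn_step(state, x)
    return state
```

The final state summarizes the sequence; feeding the same values in reverse order yields a different state, which is exactly what lets recurrent networks model sequential data.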
36. Human brains are Recurrent Networks
Your mind can be seen as an astoundingly large recurrent neural network, where part of the input at each time step is the state from the previous time step. Memory serves the function of changing the state of mind.
You are your state, and your state largely determines the state at the next time step.
37. Trying out LSTMs
LSTMs can be used for many different tasks.
In the colab notebooks we have some examples of using LSTMs.
Using Google, you can find other examples such as:
● Sales forecasting
● Power grid predictions
● Sentiment analysis
● Recommendation systems
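For intuition before opening the notebooks, here is a scalar sketch of the LSTM update equations. Real LSTMs operate on vectors with separate learned weight matrices per gate; the single shared weight here is purely to keep the equations visible.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(h, c, x, w=1.0):
    """One step of a toy scalar LSTM cell (all gates share weight `w`,
    an unrealistic simplification kept only for readability)."""
    f = sigmoid(w * x + w * h)    # forget gate: keep how much old cell state
    i = sigmoid(w * x + w * h)    # input gate: write how much new candidate
    o = sigmoid(w * x + w * h)    # output gate: expose how much cell state
    g = math.tanh(w * x + w * h)  # candidate cell value
    c = f * c + i * g             # update the long-term cell state
    h = o * math.tanh(c)          # short-term hidden state / output
    return h, c
```

The separate cell state `c` is what lets LSTMs carry information over long sequences without the gradient vanishing the way it does in a plain RNN.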
38. Trying out LSTMs
Open up the notebooks in Time Series Prediction with Deep Learning and follow
the explanations.
39. ML / Deep Learning lecture
Birger Moëll Machine Learning Engineer Ayond AB
birger.moell@ayond.se
Morning
Session 1 Intro to deep learning
09:00-09:30 Introduction to Machine learning / Deep Learning | Talk |
09:30-09:45 Getting your machines ready for machine learning | Code |
09:45-10:00 Coffee and break
Session 2 Hello world
10:00-10:15 Hello World in Machine Learning (MNIST) | Talk |
10:15-10:45 Running your own MNIST | Code |
10:45-11:00 Coffee and break
Session 3 Feedforward networks
11:00-11:15 Feedforward Neural Networks | Talk |
11:15-11:45 Building your own feedforward neural network | Code |
11:45-12:00 Q and A | Interactive
Lunch
12:00-13:00 Lunch
Afternoon
Session 4 Image recognition
13:00-13:15 Image recognition and CNNs | Talk |
13:15-13:45 Building your own convolutional neural network | Code |
13:45-14:00 Coffee and break
Session 5 Natural Language Processing
14:00-14:15 Natural language processing | Talk |
14:15-14:45 Working with language | Code |
14:45-15:00 Coffee and break
Session 6-7 Generative models and time series
15:00-15:15 Generative models and LSTMs | Talk |
15:15-15:45 Trying out GANs and time series | Code |
15:45-16:00 Coffee and break
Session 8 Machine learning in the wild / Deployment
16:00-16:15 Machine learning in the wild | Talk |
16:15-16:45 Serving your own machine learning model | Code |
16:45-17:00 Q and A | Interactive
Editor's notes
Have narrow dialog trees where users can quickly solve their problems.