2. Outline
• Introduction to Machine Learning
• Framing: Key ML Terminology
• Descending into ML
• Reducing Loss
• First Steps with TF
3. Introduction to Machine Learning
• Reduce time programming
• feed a machine learning tool some examples, and get a more
reliable program in a small fraction of the time
• Customize and scale products
• To support multiple languages, you can collect data in that
language and feed it into the exact same machine learning
model.
• Complete seemingly "unprogrammable" tasks
• ML lets you solve problems that you, as a programmer, have no
idea how to solve by hand, e.g. recognizing faces
4. Introduction to Machine Learning
• Coding
• We use assertions to prove properties of our program
are correct.
• ML
• The focus shifts from a mathematical science to a
natural science:
• We're making observations about an uncertain world,
running experiments, and using statistics, not logic, to
analyze the results of the experiment.
5. Framing: Key ML Terminology
• Label
• A label is the thing we're predicting—the y variable in
simple linear regression.
• the answer we already have for each training example
• Feature
• A feature is an input variable — the x variable in simple
linear regression.
• the types of input data we already have
6. Framing: Key ML Terminology
• Example
• An example is a particular instance of data, x. (We put x in boldface to
indicate that it is a vector.)
• labeled examples
• {features, label}: (x, y)
• used to train the model
• unlabeled examples
• {features, ?}: (x, ?)
• what we want to predict
8. Descending into ML
• Linear Regression
• find the closest linear relationship (prediction)
between x and y
• the prediction can be defined as y′ = b + w₁x₁, where b is the
bias and w₁ is the weight of feature x₁
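As a sketch, the single-feature prediction b + w₁x₁ can be written in plain Python; the bias and weight values here are made up for illustration:

```python
# A single-feature linear model: prediction y' = b + w1 * x1.
# The bias (b) and weight (w1) values below are made up.
def predict(x1, b=0.5, w1=2.0):
    """Return the model's prediction for one feature value x1."""
    return b + w1 * x1

print(predict(3.0))  # 0.5 + 2.0 * 3.0 = 6.5
```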
9. Descending into ML
• Loss
• a number indicating how bad the model's prediction was on a
single example
10. Descending into ML
• Loss Function
• Squared Loss (L2 loss)
• = the square of the difference between the label and the
prediction
• L2 = (y − y′)²
• Mean Square Error (MSE)
• sum up all the L2 losses, then divide by the number of examples
• MSE = (1/N) Σ (y − y′)²
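Both loss formulas can be sketched in a few lines of plain Python; the labels and predictions are made up:

```python
# Squared (L2) loss for one example, and MSE over a set of examples.
# Labels and predictions below are made up.
def l2_loss(y, y_pred):
    """The square of the difference between label and prediction."""
    return (y - y_pred) ** 2

def mse(labels, predictions):
    """Sum of the L2 losses divided by the number of examples."""
    return sum(l2_loss(y, p) for y, p in zip(labels, predictions)) / len(labels)

print(l2_loss(3.0, 5.0))                      # (3 - 5)^2 = 4.0
print(mse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # (0 + 0 + 4) / 3
```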
13. Reducing Loss
• An iterative trial-and-error approach to training a model
• start with an initial guess for the weights and bias
• iteratively adjusting those guesses
• until learning the weights and bias with the lowest
possible loss
• overall loss stops changing or at least changes
extremely slowly
• called the model has converged
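A minimal sketch of this trial-and-error loop on a made-up one-feature dataset, using a gradient step as the adjustment rule and stopping once the loss barely changes:

```python
# Start from an initial guess for the weight, nudge it to reduce loss,
# and stop once the loss stops changing (the model has "converged").
# The data, step rule, and step size are made up.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]           # true relationship: y = 2x

def loss(w):
    """MSE of the model y' = w * x on the toy data."""
    return sum((y - w * x) ** 2 for x, y in zip(xs, ys)) / len(xs)

w = 0.0                        # initial guess
prev = loss(w)
for step in range(1000):
    grad = sum(-2 * x * (y - w * x) for x, y in zip(xs, ys)) / len(xs)
    w -= 0.05 * grad           # adjust the guess
    cur = loss(w)
    if abs(prev - cur) < 1e-9: # loss stopped changing: converged
        break
    prev = cur

print(round(w, 3))             # converges close to the true weight 2.0
```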
16. Reducing Loss
• Gradient descent
• find a learning rate (a hyperparameter) large enough that gradient
descent converges efficiently, but not so large that it never converges
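A toy illustration of this trade-off on the loss L(w) = (w − 2)², whose gradient is 2(w − 2); all learning-rate values are illustrative:

```python
# Gradient descent on L(w) = (w - 2)^2, minimum at w = 2.
# Each step is w -= lr * gradient, with gradient 2 * (w - 2).
def descend(lr, steps=50, w=0.0):
    for _ in range(steps):
        w -= lr * 2 * (w - 2)
    return w

print(descend(0.01))   # too small: after 50 steps, still far from w = 2
print(descend(0.4))    # well chosen: converges quickly to ~2
print(descend(1.1))    # too large: each step overshoots, and w diverges
```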
18. Reducing Loss
• batch
• the total number of examples you use to calculate the
gradient in a single iteration.
• small batch: less computation per iteration, but noisier
gradients; large batch: more computation, but less noise
• Stochastic gradient descent (SGD): one example (a
batch size of 1) per iteration
• Mini-batch stochastic gradient descent (mini-batch
SGD): batches of between 10 and 1,000 examples
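The computation-versus-noise trade-off can be sketched by estimating a mean (standing in for the gradient) from batches of different sizes; the dataset is synthetic:

```python
import random

# Estimating a mean (a stand-in for the gradient) from batches of
# different sizes: larger batches cost more computation per iteration
# but give less noisy estimates. The dataset is synthetic.
random.seed(0)
data = [random.gauss(5.0, 2.0) for _ in range(10_000)]
true_mean = sum(data) / len(data)

def batch_estimate(batch_size):
    """Mean of one randomly sampled batch."""
    batch = random.sample(data, batch_size)
    return sum(batch) / batch_size

errors = {}
for size in (1, 32, 1000):       # SGD, mini-batch SGD, large batch
    trials = [abs(batch_estimate(size) - true_mean) for _ in range(200)]
    errors[size] = sum(trials) / len(trials)
    print(size, errors[size])    # average error shrinks as batch size grows
```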
21. First Steps with TensorFlow
• Pandas
• used to prepare examples (input data, x) before they are
fed into TensorFlow
• data structures
• DataFrame - like a table of examples; holds one or more Series
• Series - like a single feature column
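A minimal pandas sketch of the two structures; column names and values are made up:

```python
import pandas as pd

# A Series per feature, combined into a DataFrame of examples.
# Column names and values are made up.
rooms = pd.Series([2, 3, 4])
price = pd.Series([100, 150, 200])
df = pd.DataFrame({"rooms": rooms, "price": price})

print(df.shape)            # (3, 2): 3 examples, 2 feature columns
print(type(df["rooms"]))   # selecting one column returns a Series
```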
22. First Steps with TensorFlow
• TensorFlow
• Build the First Model
• Tweak the Model Hyperparameters
23. First Steps with TensorFlow
• Build the First Model
• Define and Configure Feature
• Define the Target (y)
• Configure the LinearRegressor
• Define the Input Function
• Train the Model
• Evaluate the Model
24. First Steps with TensorFlow
• Define and Configure Feature
• Configure data type for TF’s feature column
• Categorical Data
• Numerical Data
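Without depending on TensorFlow's feature-column API, the distinction can be sketched in plain Python: numerical data passes through as-is, while categorical data is typically one-hot encoded against a vocabulary (the vocabulary here is made up):

```python
# Numerical features are fed to a linear model as-is; categorical
# features are one-hot encoded against a fixed vocabulary.
# The vocabulary and values below are made up.
def numeric_feature(value):
    """A numerical value passes through unchanged."""
    return [float(value)]

def categorical_feature(value, vocabulary):
    """One-hot encode a categorical value against a vocabulary."""
    return [1.0 if v == value else 0.0 for v in vocabulary]

print(numeric_feature(2.5))                                   # [2.5]
print(categorical_feature("blue", ["red", "green", "blue"]))  # [0.0, 0.0, 1.0]
```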
26. First Steps with TensorFlow
• Configure the LinearRegressor
• apply gradient clipping via clip_gradients_by_norm
• ensures that the magnitude of the gradients does not
become too large during training, which can cause
gradient descent to fail.
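The idea behind clipping by norm can be sketched in plain Python; this mirrors the behavior, not the actual TensorFlow implementation:

```python
import math

# If the gradient's L2 norm exceeds a threshold, rescale the gradient
# so its norm equals the threshold; otherwise leave it unchanged.
def clip_by_norm(grad, clip_norm):
    norm = math.sqrt(sum(g * g for g in grad))
    if norm <= clip_norm:
        return grad
    scale = clip_norm / norm
    return [g * scale for g in grad]

print(clip_by_norm([3.0, 4.0], clip_norm=5.0))  # norm is exactly 5: unchanged
print(clip_by_norm([6.0, 8.0], clip_norm=5.0))  # norm 10 -> rescaled to [3, 4]
```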
27. First Steps with TensorFlow
• Define the Input Function
• instructs TensorFlow how to preprocess the data, as well as
how to batch, shuffle, and repeat it during model training.
• convert our pandas feature data into a dict of NumPy
arrays.
• use the TensorFlow Dataset API to construct a dataset
object
• break data into batches of batch_size, to be repeated for
the specified number of epochs (num_epochs).
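A TensorFlow-free sketch of what such an input function does: shuffle, batch, and repeat the data for a number of epochs (names and data are illustrative):

```python
import random

# Shuffle the examples each epoch, cut them into batches of batch_size,
# and repeat for num_epochs. Function and variable names are illustrative.
def input_fn(features, batch_size=2, num_epochs=2, shuffle=True):
    indices = list(range(len(features)))
    for _ in range(num_epochs):                  # repeat the dataset
        if shuffle:
            random.shuffle(indices)              # reshuffle each epoch
        for i in range(0, len(indices), batch_size):
            chunk = indices[i:i + batch_size]
            yield [features[j] for j in chunk]   # one batch of examples

random.seed(0)
batches = list(input_fn([10, 20, 30, 40], batch_size=2, num_epochs=2))
print(len(batches))   # 2 batches per epoch x 2 epochs = 4
```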
29. First Steps with TensorFlow
• Train the Model
• call train() on our linear_regressor to train the model.
30. First Steps with TensorFlow
• Evaluate the Model
• compare the Root Mean Squared Error (RMSE) against the
max, min, and mean of the target values
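A sketch of this comparison with made-up numbers: an RMSE that is only a small fraction of the target range suggests a reasonable model:

```python
import math

# Compute RMSE and compare it against the spread of the target values.
# All numbers below are made up.
targets = [100.0, 150.0, 200.0, 250.0]
predictions = [110.0, 140.0, 210.0, 240.0]

mse = sum((t - p) ** 2 for t, p in zip(targets, predictions)) / len(targets)
rmse = math.sqrt(mse)

spread = max(targets) - min(targets)
mean = sum(targets) / len(targets)
print(rmse)            # every prediction is off by 10 -> RMSE 10.0
print(rmse / spread)   # a small fraction of the target range
```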
31. First Steps with TensorFlow
• Tweak the Model Hyperparameters
• learning_rate, steps, batch_size, input_feature
• tips
• Lower the learning rate
• Increase the number of steps or the batch size