Slides from my presentation at the Data Intelligence conference in Washington DC (6/23/2017). See this link for the abstract: http://www.data-intelligence.ai/presentations/36
2. Vishal Patel
Data Science Consultant
Founder of DERIVE, LLC
Fully Automated Advanced Analytics products
Data Science services
MS in Computer Science, and MS in Decision Sciences
Richmond, VA
www.derive.io
6. On September 21, 2009, the grand prize of US$1,000,000 was awarded to the BellKor's Pragmatic Chaos team, which bested Netflix's own algorithm for predicting ratings by 10.06%.
7. “[T]he additional accuracy gains that we measured did not seem to justify the
engineering effort needed to bring them into a production environment.”
8. Analytic projects fail because…
…they aren’t completed within budget or on schedule,
or because they fail to deliver the features and benefits
that are optimistically agreed on at their outset.
9. How to Avoid Failure?
1 Build with Organizational Buy-in
2 Build with the End in Mind
3 Build with a Structured Approach
14. The Blind Men and the Elephant
It was six men of Indostan
To learning much inclined,
Who went to see the Elephant
(Though all of them were blind),
That each by observation
Might satisfy his mind.
And so these men of Indostan
Disputed loud and long,
Each in his own opinion
Exceeding stiff and strong,
Though each was partly in the right
And all were in the wrong!
– John Godfrey Saxe
16. Data Science Process
[Diagram: a numbered cycle of Business Understanding, Data Preparation, Data Munging, Model Training, Model Evaluation, Visualize Results, Model Deployment, and Model Tracking, spanning Business, Statistics, and Computer Science]
17. [The Data Science Process diagram, repeated]
24. Business Understanding
1 DETERMINE   2 UNDERSTAND   3 MAP

OBJECTIVE                      TECHNIQUE            EXAMPLES
Predict values                 Regression           Linear regression, Bayesian regression, decision trees
Predict categories             Classification       Logistic regression, SVM, decision trees
Predict preference             Recommender system   Collaborative / content-based filtering
Discover groups                Clustering           k-means, hierarchical clustering
Discover unusual data points   Anomaly detection    k-NN, one-class SVM
…
26. If all you have is a hammer
then everything looks like a nail.
27. Business Understanding
o Primary Objective: Prevent attrition → increase subscription renewals
o Competing Objective: Core customers are also targeted for up-sell
o Constraints: Avoid targeting customers too close to their contract expiration
o Success Criteria: Current renewal rate = 65%; improve it by 8%
o Existing Solution: Business-rule-based targeting
o Data Science Objective: Build a binary classification model that identifies customers who are not likely to renew their subscriptions, three months in advance of their contract expiration (a minimal sketch follows below).
o Success Scenario: The model correctly identifies 80% of the future attritors; the promotional campaign targets all likely attritors and successfully converts 19% of them into non-attritors.
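A minimal sketch of that data science objective, assuming a prepared, fully numeric customer table with a binary RENEWED label; the file and column names are hypothetical:

```python
# Sketch of the stated objective: a binary classifier that flags customers
# unlikely to renew. Table and column names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

df = pd.read_csv("customers.csv")              # one prepared row per customer
X = df.drop(columns=["CUSTOMER_ID", "RENEWED"])
y = 1 - df["RENEWED"]                          # 1 = likely attritor

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The success scenario asks that 80% of future attritors be identified:
# recall on the attritor class is the matching metric.
print("Attritor recall:", recall_score(y_test, model.predict(X_test)))
```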
28. Business Understanding: Project Plan
o Duration
o Inventory of resources
o Tools and techniques
o Risks and contingencies
o Costs and benefits
o Milestones

The thought that disaster is impossible
often leads to an unthinkable disaster.
– Gerald Weinberg

[Image: Titanic at Southampton docks, prior to departure]
30. Data Preparation
o Data sources, formats
o Database, Streaming API’s, Logs, Excel files, Websites, etc.
o Entity Relationship Diagram (ERD)
o Identify additional data sources
o Demographics data appends
o Geographical data
o Census data, etc.
o Identify relevant data
o Record unavailable data
o How much history is available, and how much of it should be used?
1 IDENTIFY
2 COLLECT
3 ASSESS
4 VECTORIZE
31. Data Preparation
o Access or acquire all relevant data in a central location
o Quality control checks and tests (see the sketch after this list)
o File formats, delimiters
o Number of records, columns
o Primary keys
1 IDENTIFY
2 COLLECT
3 ASSESS
4 VECTORIZE
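A hedged sketch of those collection checks in pandas; the file name, delimiter, and key columns are assumptions:

```python
# Quality-control checks on a collected file, using pandas.
import pandas as pd

df = pd.read_csv("transactions.csv", sep=",")    # check file format/delimiter

print("Records:", len(df))                       # number of records
print("Columns:", list(df.columns))              # number and names of columns

# Primary-key check: CUSTOMER_ID + PURCHASE_DATE should be unique
dupes = df.duplicated(subset=["CUSTOMER_ID", "PURCHASE_DATE"]).sum()
assert dupes == 0, f"{dupes} duplicate key rows found"
```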
32. Data Preparation
First look at the data
o Get familiar with the data
o Study seasonality
o Monthly/weekly/daily patterns
o Unexplained gaps or spikes in the historical data
o Detect mistakes
o Extreme or outlier values
o Unusual values
o Special missing values
o Check assumptions
o Review distributions
1 IDENTIFY
2 COLLECT
3 ASSESS
4 VECTORIZE
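A first-look sketch in pandas covering distributions, missing values, seasonality, and extreme values; the file and column names (e.g., AMOUNT) are assumptions:

```python
# A first look at the data: distributions, missingness, seasonality, outliers.
import pandas as pd

df = pd.read_csv("transactions.csv", parse_dates=["PURCHASE_DATE"])

print(df.describe(include="all"))            # review distributions
print(df.isna().sum())                       # missing / special values

# Seasonality: monthly transaction counts expose gaps or spikes in history
monthly = df.set_index("PURCHASE_DATE").resample("MS").size()
print(monthly)

# Extreme values: flag amounts beyond 3 standard deviations (assumed column)
z = (df["AMOUNT"] - df["AMOUNT"].mean()) / df["AMOUNT"].std()
print(df[z.abs() > 3])
```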
33. Tidy datasets are all alike;
every messy dataset is messy in its own way.
– Hadley Wickham
35. Target Definition
o Churn = 90 days of consecutive inactivity (operationalized in the sketch after this list)
o What’s inactivity?
o Incoming and outgoing calls
o Data usage
o Incoming text
o Promotional texts
o Voicemail usage
o Call forwarding
o Etc.
o What to do with customers who change their device or phone number?
o Churn at the individual (person) level, or at the device (phone) level?
o How many customers return (become active again) after 90 days of inactivity?
o Prediction window
o Predict 90 days of consecutive inactivity?
o Would 10 days of consecutive inactivity suffice?
o How many customers return after x days of inactivity?
o How to deal with fraud?
o Etc.
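One way to operationalize that target definition, sketched in pandas: a customer is flagged as churned if any gap between consecutive activity dates, or since the last activity, reaches 90 days. The snapshot date and column names are assumptions:

```python
# Churn flag = 90 days of consecutive inactivity, computed from an activity log.
import pandas as pd

activity = pd.read_csv("activity_log.csv", parse_dates=["ACTIVITY_DATE"])
snapshot = pd.Timestamp("2017-04-30")            # hypothetical snapshot date

def max_gap(dates):
    dates = dates.sort_values()
    gaps = dates.diff().dt.days.fillna(0)        # gaps between activities
    tail = (snapshot - dates.iloc[-1]).days      # gap since the last activity
    return max(gaps.max(), tail)

gaps = activity.groupby("CUSTOMER_ID")["ACTIVITY_DATE"].apply(max_gap)
churned = (gaps >= 90).astype(int)
print(churned.value_counts())
```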
36. Modeling Sample
o Historical trends and seasonality
o Are there certain timeframes that should be discarded?
o The model should be generalizable for future targeting purposes
o Eligible, relevant population
o Must align with the business goals
o Eligible, relevant markets (e.g., US vs. International)
o Must align with the business goals
o Outdated products
37. Selection Bias
Abraham Wald's Work on Aircraft Survivability
Journal of the American Statistical Association, Vol. 79, No. 386 (June 1984)
38. Information Leakage
[Timeline, Sep–Apr: OBSERVATION WINDOW runs through the end of Jan; PREDICTION WINDOW covers Feb–Apr]
Use all available information ("leading indicators") as of the end of Jan… …to predict whether customers will [do something] during the prediction window.
o The leading indicators must be calculated from the timeframe leading up to the event; they must not overlap with the prediction window.
o Beware of proxy events, e.g., future bookings.
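A minimal guard against this kind of leakage: build features strictly from the observation window and the label strictly from the prediction window. The dates, file, and column names are hypothetical:

```python
# Features come only from the observation window; the label only from the
# prediction window, so they can never overlap.
import pandas as pd

events = pd.read_csv("events.csv", parse_dates=["EVENT_DATE"])

obs_end = pd.Timestamp("2017-01-31")     # "as of the end of Jan"
pred_end = pd.Timestamp("2017-04-30")    # Feb-Apr prediction window

obs = events[events["EVENT_DATE"] <= obs_end]              # leading indicators
pred = events[(events["EVENT_DATE"] > obs_end) &
              (events["EVENT_DATE"] <= pred_end)]          # outcome events only

features = obs.groupby("CUSTOMER_ID").size().rename("n_events_obs").to_frame()
features["target"] = features.index.to_series().isin(pred["CUSTOMER_ID"]).astype(int)
```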
39. Data Aggregation
o Attribute creation
o Derived attributes: Household income / Number of adults = Income per adult
o Brainstorm with team members
$$X = \begin{bmatrix}
x_{11} & x_{12} & x_{13} & \cdots & x_{1j} \\
x_{21} & x_{22} & x_{23} & \cdots & x_{2j} \\
x_{31} & x_{32} & x_{33} & \cdots & x_{3j} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_{n1} & x_{n2} & x_{n3} & \cdots & x_{nj}
\end{bmatrix}$$
40. Data Aggregation

CUSTOMER_ID   PURCHASE_DATE
1001          02-12-2015:05:20:39
1001          05-13-2015:12:18:09
1001          12-20-2016:00:15:59
1002          01-19-2014:04:28:54
1003          01-12-2015:09:20:36
1003          05-31-2015:10:10:02
…             …

1. Number of transactions (Frequency)
2. Days since the last transaction (Recency)
3. Days since the earliest transaction (Tenure)
4. Avg. days between transactions
5. # of transactions during weekends
6. % of transactions during weekends
7. # of transactions by day-part (breakfast, lunch, etc.)
8. % of transactions by day-part
9. Days since last transaction / Avg. days between transactions
10. …

CUSTOMER_ID   x_1   x_2   …   x_j
1001          …     …     …   …
1002          …     …     …   …
1003          …     …     …   …
…             …     …     …   …
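A sketch of this aggregation in pandas, collapsing the transaction log into one row per customer with a few of the RFM-style attributes above; the as-of date is an assumption, and avg_gap is a rough proxy:

```python
# One row per customer: RFM-style attributes derived from the transaction log.
import pandas as pd

tx = pd.read_csv("transactions.csv", parse_dates=["PURCHASE_DATE"])
asof = pd.Timestamp("2017-01-31")                    # hypothetical as-of date

g = tx.groupby("CUSTOMER_ID")["PURCHASE_DATE"]
X = pd.DataFrame({
    "frequency": g.size(),                           # 1. number of transactions
    "recency":   (asof - g.max()).dt.days,           # 2. days since last
    "tenure":    (asof - g.min()).dt.days,           # 3. days since earliest
})
X["avg_gap"] = X["tenure"] / X["frequency"].clip(lower=1)   # 4. rough avg gap
X["weekend_n"] = (tx.assign(wk=tx["PURCHASE_DATE"].dt.dayofweek >= 5)
                    .groupby("CUSTOMER_ID")["wk"].sum())    # 5. weekend count
X["weekend_pct"] = X["weekend_n"] / X["frequency"]          # 6. weekend share
X["recency_ratio"] = X["recency"] / X["avg_gap"]            # 9. recency / avg gap
```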
44. Give me six hours to chop down a tree
and I will spend the first four sharpening the axe.
– Abraham Lincoln
45. Data Munging
o Descriptive statistics
o Review with business owners
o Correlation analysis
o Review with business owners
o Watch out for data leakage
o Impute missing values
o Trim extreme values
o Process categorical attributes
o Transformations (square, log, etc.)
o Binning / variable smoothing
o Multicollinearity
o Reduce redundancy
o Create additional features
o Interactions
o Normalization (scaling)
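Several of these munging steps, sketched as a scikit-learn preprocessing pipeline; the column lists are illustrative assumptions:

```python
# Imputation, scaling, and categorical encoding in one preprocessing pipeline.
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder

numeric = ["recency", "frequency", "tenure"]     # hypothetical columns
categorical = ["plan_type", "region"]

preprocess = ColumnTransformer([
    ("num", Pipeline([
        ("impute", SimpleImputer(strategy="median")),   # impute missing values
        ("scale", StandardScaler()),                    # normalization (scaling)
    ]), numeric),
    ("cat", Pipeline([
        ("impute", SimpleImputer(strategy="most_frequent")),
        ("onehot", OneHotEncoder(handle_unknown="ignore")),  # categorical attrs
    ]), categorical),
])
# X_clean = preprocess.fit_transform(X)    # X from the aggregation step
```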
46. Data Munging

Univariate, non-graphical:
o Categorical: Tabulated frequencies
o Quantitative: Central tendency (mean, median, mode); spread (standard deviation, inter-quartile range); skewness and kurtosis

Univariate, graphical:
o Histograms
o Box plots, stem-and-leaf plots
o Quantile-normal plots

Multivariate, non-graphical:
o Cross-tabulation
o Univariate statistics by category
o Correlation matrices

Multivariate, graphical:
o Univariate graphs by category (e.g., side-by-side box plots)
o Scatterplots
o Correlation matrix plots
47. Data Munging
Feature Reduction: The process of selecting a subset of features for use in model construction
Useful for both supervised and unsupervised learning problems
Art is the elimination of the unnecessary.
– Pablo Picasso
48. Data Munging
Feature Reduction: Why
o True dimensionality ≪ observed dimensionality: an abundance of redundant and irrelevant features
o Curse of dimensionality: with a fixed number of training samples, predictive power falls as dimensionality increases (the Hughes phenomenon); with $d$ binary variables, the number of possible combinations is $O(2^d)$
o Value of analytics [Diagram: Descriptive, Diagnostic, Predictive, Prescriptive; Hindsight → Insight → Foresight]
o Law of Parsimony (Occam's Razor): other things being equal, simpler explanations are generally better than complex ones
o Overfitting
o Execution time (algorithm and data)
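A simple feature-reduction sketch in that spirit: drop near-constant columns, then prune one of each highly correlated pair. The thresholds are illustrative, and the frame is assumed numeric:

```python
# Remove zero-variance features, then reduce redundancy among correlated ones.
import numpy as np
import pandas as pd

def reduce_features(X: pd.DataFrame, var_tol=1e-8, corr_tol=0.95):
    X = X.loc[:, X.var() > var_tol]              # drop near-constant columns
    corr = X.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape), k=1).astype(bool))
    drop = [c for c in upper.columns if (upper[c] > corr_tol).any()]
    return X.drop(columns=drop)                  # keep one of each corr pair
```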
51. Model Training
o Try more than one machine learning technique
o Fine-tune parameters
o Assess model performance
o Avoid over-fitting
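A sketch of this loop, reusing X_train and y_train from the earlier churn sketch: compare candidate techniques by cross-validated AUC, then fine-tune the leader. The models and grids are illustrative:

```python
# Try several techniques, then fine-tune parameters with cross-validation
# (which also guards against over-fitting to any single split).
from sklearn.model_selection import cross_val_score, GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

candidates = {
    "logit": LogisticRegression(max_iter=1000),
    "rf": RandomForestClassifier(n_estimators=300, random_state=42),
    "gbm": GradientBoostingClassifier(random_state=42),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X_train, y_train, cv=5, scoring="roc_auc")
    print(name, scores.mean())

grid = GridSearchCV(candidates["gbm"],
                    {"learning_rate": [0.05, 0.1], "max_depth": [2, 3]},
                    cv=5, scoring="roc_auc").fit(X_train, y_train)
print(grid.best_params_, grid.best_score_)
```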
52. Assess Model Performance
Area Under the ROC Curve (AUC), Confusion Matrix, Precision, Recall, Log-loss
Model Lift, Model Gains, Kolmogorov-Smirnov (KS), etc.
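Computing several of these metrics for the fitted classifier from the sketches above (model, X_test, y_test carry over); the 0.5 cutoff is an assumption:

```python
# Assessing model performance: AUC, confusion matrix, precision, recall,
# log-loss, and the KS statistic on the score distributions.
from sklearn.metrics import (roc_auc_score, confusion_matrix,
                             precision_score, recall_score, log_loss)
from scipy.stats import ks_2samp

proba = model.predict_proba(X_test)[:, 1]
pred = (proba >= 0.5).astype(int)

print("AUC:      ", roc_auc_score(y_test, proba))
print("Confusion:\n", confusion_matrix(y_test, pred))
print("Precision:", precision_score(y_test, pred))
print("Recall:   ", recall_score(y_test, pred))
print("Log-loss: ", log_loss(y_test, proba))

# KS: max separation between the score distributions of the two classes
print("KS:       ", ks_2samp(proba[y_test == 1], proba[y_test == 0]).statistic)
```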
53. When a measure becomes a target,
it ceases to be a good measure.
– Goodhart's Law
54. Model Evaluation
Model Selection and Performance Assessment
o Hold-out (test) set
o Tri-fold partitioning, k-fold cross-validation
o Out-of-time test set
o Value of analytics
o Law of Parsimony (Occam's Razor)
o Model execution time
o Deployment complexity
o Compare against existing business rules/model
o Model peer review (quality control)
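Tri-fold partitioning plus an out-of-time test set, as a sketch; the date boundary, column name, and 60/20/20 split are assumptions:

```python
# Hold out the most recent cohort entirely (out-of-time test set), then
# tri-fold the in-time data into train / validation / test.
from sklearn.model_selection import train_test_split

in_time = df[df["SNAPSHOT"] < "2017-01-01"]       # assumed date column
out_of_time = df[df["SNAPSHOT"] >= "2017-01-01"]  # never seen during modeling

train, rest = train_test_split(in_time, test_size=0.4, random_state=42)
valid, test = train_test_split(rest, test_size=0.5, random_state=42)
```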
56. Model Evaluation
Visualization of Modeling Results
o AUC
o Cumulative Gains chart
o Model usage recommendations
o Decile reports: how many deciles should be targeted?
o Predictor Importance: each predictor's relationship with the target
o Interpret results as they relate to the business application
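A decile report / cumulative gains table sketched in pandas, reusing proba and y_test from the evaluation sketch:

```python
# Rank customers by model score, cut into ten equal groups, and tabulate
# actual attritors per decile; cumulative gains inform targeting depth.
import pandas as pd

report = pd.DataFrame({"score": proba, "actual": y_test.values})
report["decile"] = pd.qcut(report["score"].rank(method="first", ascending=False),
                           10, labels=range(1, 11))    # decile 1 = top scores

gains = (report.groupby("decile", observed=True)["actual"]
               .agg(customers="size", attritors="sum")
               .sort_index())
gains["cum_gain_pct"] = gains["attritors"].cumsum() / gains["attritors"].sum()
print(gains)   # how many deciles should the campaign target?
```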
58. Model Deployment
o Model production cycle (daily, weekly, monthly, etc.)
o Scoring code, or publish the model as a web service
o Model documentation (technical specifications): data preparation, transformations, imputations, parameter settings, etc.
o Reproducibility: Docker containers, etc.
o Model persistence vs. model transience
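A minimal deployment sketch: persist the fitted model with joblib, then score a fresh batch on the production cycle. Paths and file names are illustrative:

```python
# Persist the fitted model, then run a scoring job on the production cycle.
import joblib
import pandas as pd

joblib.dump(model, "churn_model_v1.joblib")        # versioned artifact

# --- scoring job (daily / weekly / monthly) ---
model = joblib.load("churn_model_v1.joblib")
batch = pd.read_csv("to_score.csv")
batch["score"] = model.predict_proba(batch.drop(columns=["CUSTOMER_ID"]))[:, 1]
batch[["CUSTOMER_ID", "score"]].to_csv("scores.csv", index=False)
```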
59. Model Persistence vs. Model Transience

Model Persistence
[Timeline, Aug–Apr: Model Build → Model Decay Tracking → Model Rebuild → Model Decay Tracking, with the same scoring algorithm reused between rebuilds]
• Traditional approach
• Provides stability
• Less resource intensive

Model Transience
[Timeline, Aug–Dec: Model Build repeated in every period]
• Modern approach
• Able to capture recent trends
• Resource intensive
60. Model Tracking
o Model decay tracking (monitoring) plan
o Model maintenance plan
o Adding new data sources
o Version control
o Campaign set-up and execution
o Experimental design
o Reason coding
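A sketch of a decay-tracking check: recompute AUC on each scored cohort once outcomes mature, and alert when it drifts. The baseline and tolerance are hypothetical:

```python
# Model decay tracking: compare each cycle's realized AUC against the
# build-time benchmark and flag when a rebuild is warranted.
import pandas as pd
from sklearn.metrics import roc_auc_score

BASELINE_AUC, TOLERANCE = 0.78, 0.05    # hypothetical build-time benchmark

def track(scored: pd.DataFrame) -> float:
    """`scored` holds one cycle's customers with realized outcome and score."""
    auc = roc_auc_score(scored["outcome"], scored["score"])
    if auc < BASELINE_AUC - TOLERANCE:
        print(f"Decay alert: AUC {auc:.3f}; schedule a model rebuild")
    return auc
```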
63. Data Science Process: Recap
[Diagram: Business Understanding, Data Preparation, Data Munging, Model Building, Model Evaluation, Visualize Results, Model Deployment, and Model Tracking, spanning Business, Statistics, and Computer Science]
64. AutoClassifier
[Diagram: Data Munging, Model Building, Model Evaluation, Visualize Results, Model Deployment]
These steps usually take 2 to 3 weeks. But with AutoClassifier, it takes a few mouse clicks and less than 24 hours.
www.derive.io
65. Process as Proxy
Good process serves you so you can serve customers.
But if you’re not watchful, the process can become the proxy for the result you want.
You stop looking at outcomes and just make sure you’re doing the process right.
Gulp.
It’s always worth asking, do we own the process or does the process own us?
– Jeff Bezos