Drew Dillon gave a presentation on running a data-driven product organization. He discussed collecting data on key metrics such as user growth and retention. Insights from data can show which areas need improvement and whether experiments worked. Dashboards and reports help visualize trends and funnels. Multivariate testing lets teams compare feature variants to see which ones improve key metrics. Machine learning can power features such as recommendations and natural language processing. While data provides insights, intuition also matters, and not everything can be measured quantitatively.
12. Overview
● The Character of Data
● Collecting Data
● What to Measure?
● Insights
● Dashboards and Reporting
● Multivariate Testing
● Machine Learning
21. What Can Data Tell Us?
● What areas of the product are getting the most use and how
● How we're doing (key performance indicators, "KPI")
● The outcome of an experiment
● Whether a feature "sucks"
● And… what it can’t
○ What to do next
○ Causal relationships between past events
○ When to make larger bets
○ Whether you’ve hit relative maxima
27. Growth
● Users/customers - how many are coming in? Where do they come from?
● Retention - are people coming back 48 hours, one week, one month after joining? Different apps may care about different retention intervals.
● Invites (k-factor) - how many invites are you getting out of each user? If you have multiple viral elements, which is most effective?
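The slide names the metrics but not their formulas. As a minimal sketch (the function names and the sample numbers are hypothetical, not from the talk), the k-factor is commonly computed as invites per user times the invite conversion rate, and cohort retention as the fraction of a signup cohort still active after a given interval:

```python
def k_factor(invites_sent, invite_conversion_rate, users):
    """Viral coefficient: new users generated per existing user.
    k = (invites per user) * (conversion rate of those invites).
    k >= 1 means each user brings in at least one more user."""
    invites_per_user = invites_sent / users
    return invites_per_user * invite_conversion_rate

def retention(cohort, active_at_interval):
    """Fraction of a signup cohort still active at a later interval."""
    return len(cohort & active_at_interval) / len(cohort)

# Hypothetical numbers for illustration:
k = k_factor(invites_sent=5000, invite_conversion_rate=0.2, users=1000)
# 5 invites/user * 0.2 conversion = 1.0

cohort = {1, 2, 3, 4, 5}          # users who joined this week
week1_active = {1, 3, 5, 9}       # users active one week later
r = retention(cohort, week1_active)  # 3 of 5 returned = 0.6
```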
30. Engaging Apps
● Engagement
○ Sign-in, page views, time on site
● Interactions
○ App-specific
○ Prioritize driving further engagement / growth
What are the two most important interactions on Instagram?
34. What Not to Measure
● Clicks - clicks aren't always positive; clicking around may signal user distress or confusion
● Page views - a painfully obvious perverse incentive
● Time on site - not terrible data, but over-emphasized and not objectively positive
38. Ad Hoc Questions
Can empower other teams:
● What behaviors correlate with paid upgrades?
● Which customers haven't logged in in a while?
And tell you what's worth working on:
● How many people view this page?
● Should we build an Android app? (How might you determine this?)
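Questions like "which customers haven't logged in in a while?" often reduce to a one-off query over existing records. A minimal sketch (the customer names, dates, and 30-day cutoff are hypothetical, chosen only for illustration):

```python
from datetime import date, timedelta

# Hypothetical customer records: name -> last login date.
last_login = {
    "ada": date(2024, 1, 3),
    "bob": date(2024, 3, 1),
    "eve": date(2023, 11, 20),
}

def inactive_customers(last_login, today, days=30):
    """Customers whose last login was more than `days` ago."""
    cutoff = today - timedelta(days=days)
    return sorted(name for name, d in last_login.items() if d < cutoff)

print(inactive_customers(last_login, today=date(2024, 3, 10)))
# → ['ada', 'eve']
```

In practice this would be a SQL query against a users table; the Python version just shows the shape of the question.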
50. Testing Has Costs
1. Time to delivery
2. Code complexity
3. QA has to test multiple versions
4. Other PMs and Designers have to know about tests
5. Users don’t love the idea
Early heuristic: only test features that impact 30% of users.
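The slide lists costs rather than mechanics, but one source of the "code complexity" cost is variant assignment. A common sketch (not from the talk) is deterministic hash bucketing, which keeps a user's assignment stable across sessions without storing extra state:

```python
import hashlib

def in_test(user_id, test_name, fraction=0.3):
    """Deterministically assign a user to a test bucket.

    Hashing (test_name, user_id) keeps the assignment stable across
    sessions and independent between different tests."""
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000
    return bucket < fraction * 10_000

# Roughly `fraction` of users land in the test bucket:
share = sum(in_test(u, "new_onboarding") for u in range(100_000)) / 100_000
```

The test name is salted into the hash so that being in one test doesn't correlate with being in another.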
54. Million Dollar Bet
Here are the ground rules:
1. The competition will last approximately two years
2. Once a flag is on the object, the other competitor can no longer put their flag there
3. We both start in San Francisco with $10,000. We can borrow more, but then we don't win as much from the bet.
What would be your strategy?
62. Flavors of Test Data
Aggregates - number of times a certain action was taken:
● Number of invites sent
● Total replies
● Days engaged
Binaries - whether a certain action was taken:
● Invite sent
● Replied
● 24-hour retention
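Both flavors can be derived from the same event log. A minimal sketch (the event data and function names are hypothetical): an aggregate counts occurrences of an action, while a binary only asks whether a given user took the action at all.

```python
# Hypothetical event log: (user_id, action) pairs.
events = [
    (1, "invite"), (1, "invite"), (1, "reply"),
    (2, "invite"),
    (3, "reply"), (3, "reply"),
]

def aggregate(events, action):
    """Aggregate: total number of times the action was taken."""
    return sum(1 for _, a in events if a == action)

def binary(events, user_id, action):
    """Binary: whether the user took the action at least once."""
    return any(u == user_id and a == action for u, a in events)

aggregate(events, "invite")   # 3 invites sent in total
binary(events, 2, "reply")    # False: user 2 never replied
```

Aggregates capture intensity of use; binaries are more robust to a few heavy users skewing a test result.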
70. Experiment Outcomes at Google
● Failure: ~33.3%
● Flat: ~33.3%
● Success: ~33.3%
The very best product managers at Google are "wrong" 68% of the time.
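One illustrative consequence of a roughly one-in-three success rate (this calculation is mine, not from the slide): the odds of at least one win climb quickly as you run more independent experiments, which is an argument for keeping experiments cheap and frequent.

```python
def p_at_least_one_success(n_experiments, p_success=1/3):
    """Probability that at least one of n independent experiments succeeds,
    assuming each succeeds with probability p_success."""
    return 1 - (1 - p_success) ** n_experiments

p_at_least_one_success(1)   # ≈ 0.33
p_at_least_one_success(6)   # ≈ 0.91
```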
78. Review
● The Character of Data
● Collecting Data
● What to Measure?
● Insights
● Dashboards and Reporting
● Multivariate Testing
● Machine Learning
83. www.productschool.com
Part-time Product Management, Coding, Data, Digital
Marketing and Blockchain courses in San Francisco, Silicon
Valley, New York, Santa Monica, Los Angeles, Austin, Boston,
Boulder, Chicago, Denver, Orange County, Seattle, Bellevue,
Toronto, London and Online