17. No KPI is good but thinking makes it so
Retention
Engagement
Monetisation
Virality
CAC
LTCV
18. Virality and value
Diffusion dynamics of games on online social networks, Wei, Yang, Adamic, de Araújo and Rehki
http://www-personal.umich.edu/~ladamic/papers/FBgames/FBgameDiffusion.pdf
Chart: ‘Play after day 1?’ (no/yes), split by acquisition source (recruited by friend vs. other).
10% of inviters responsible for 50% of successful invitations
20. Monetisation and value
Chart: how many people (almost all → a few → even fewer → almost none) pay how much (nothing → a bit → a lot → a whole lot).
Experience differences or individual differences; interpretable or stochastic.
21. Time and value
Data: Sonamine, aggregated over >12 casual games
http://www.gamasutra.com/blogs/NickLim/20120626/173051/Freemium_games_are_not_normal.php
34. Tools trends
Activities: logging, reporting, exploratory, model-testing
Tools: app tracking, BI frontend, data science
Trends: predictives, graph db services, cloud
35. Predictives: segmentation
There are many techniques for identifying groups; new ones are emerging, e.g. graph-based.
Expected future behaviour is a strong focus, esp. behaviours which link to KPIs.
e.g. Player Lifecycle Management™: ConvertSoon™, ChurnSoon™, PurchaseMoreSoon™, InfluenceSoon™ (source: Sonamine)
What you do with the groups is a different question...
36. Segmentation can suggest action
Chart: player segments plotted by virality potential vs. revenue potential, each annotated with %volume, %paying, 7-day retention and CAC.
Segments: Early Enthusiasts, Confident Completers, Social Involvers, Sporadic SemiEngaged, Losing Momentum, Need Guidance, Borderline Incompetent.
Source: Games Analytics
37. It can also be opaque
Dmitry Nozhnin, Head of analytics and monetisation, Innova:
“We have tested over 60 individual and game-
specific metrics. None of them are critical enough
to cause churn. None of them! We haven't found
a silver bullet -- that magic barrier preventing
players from enjoying the game.”
http://gamasutra.com/view/feature/170472/predicting_churn_datamining_your_.php
43. We do hundreds and hundreds of experiments every quarter. Most fail.
Ken Rudin, General Manager, Analytics, Zynga
http://tdwi.org/videos/2010/08/actionable-analytics-at-zynga-leveraging-big-data-to-make-online-games-more-fun-and-social.aspx
Sometimes people do 1-2 A/B tests per week
and then complain that it doesn’t work for
them – they probably need to 5-10X their A/B
test output in order to get a win or two per
week. Andrew Chen, 2 July 2012
http://andrewchen.co/2012/07/02/what-does-a-growth-team-work-on-day-to-day
44. Quant variation rules
‘you need to invite 3 friends to get
to the next level on Bubble Witch
Saga….’
Alex Dale, CMO King.com
Facebook Mobile Hack
London March 2012
45. Quant variation rules
not 2, not 4
‘you need to invite 3 friends to get
to the next level on Bubble Witch
Saga….’
Alex Dale, CMO King.com
Facebook Mobile Hack
London March 2012
46. Answers don’t always make sense
June 23 2012: Valve hires Greek economist in residence to look at shared currency issues.
“We don’t understand what’s going on. All we know is we’re going to keep running these experiments to try and understand better what it is that our customers are telling us. And there are clearly things that we don’t understand because a simple analysis of these statistics implies very contradictory yet reproducible results. So clearly there are things that we don’t understand, and we’re trying to develop theories for them. It’s just an exciting time but also a very troubling time.”
Gabe Newell, Founder, Valve
Oct 23 2011
http://www.geekwire.com/2011/experiments-video-game-economics-valves-gabe-newell/
Hiya, welcome. My name’s Heather. I work on analytics, as an advisor and consultant. What I’m going to be talking about today is THIS.
That is to say...
What’s the relationship between business-intelligence-type measures, and ways of understanding user experience, in games analytics? I’m talking about this because I’m trying to work out the answer, and there’s nothing like having to explain something to make you try to get it straight in your head. So if you have ideas I’d like to hear them. There will be some time for questions – drop me a line, or shout out on Twitter. I can’t reply in real time as I’ll be up here. I’ll be putting the slides up on SlideShare, and I’d also encourage you to pick up one of my new biz cards. There’s a choice of colours, but I’m not running an experiment. Honest. Just so I know a bit about you – how many people here have MBAs? I just need to know in case I diss anyone. And how many people are reasonably experienced with game analytics? You can help me answer questions.
To set the scene a bit – here’s the business environment we’re in. This is from one of the tool vendors in the games analytics space, who has a good view of what’s going on. What this says is that the genres that are on the up and up have shifted in the past six months. TV show trackers are up, casino is way, way up. The momentum’s been stolen from RPG, trivia and casual. So what? So the market is fickle – and you need to make the most of the window you’ve got. That means being ready, willing and able to change your design based on what happens when people use it.
I’m going to cut to the chase – here’s where we’re at. Launch and learn. There are two different melody lines going on. On the one hand, this is a key belief of a cult whose HQ is somewhere between San Francisco and San Jose. On the other hand, back on Planet Earth, there is also this ‘oh shit’ vibe: we’re changing it because it didn’t work. So there’s a bit of dissonance. But it’s also a huge, huge opportunity to get out in front of it and surf it. Change is part of the design.
So here’s the start of a syllogism. Design is change, and analytics powers change. Therefore... analytics is part of the design. It’s not the only part – by a huge stretch – but it’s a smart part.
Here’s a little bit of level-setting about analytics. Hang onto your hats, I’ll be quick.
You see this? This is such a biggie. Repeat after me.
If you don’t believe me, believe him. It’s a myth that doing analytics will turn you into Zynga – for better or worse. And putting stats in isn’t the same as doing analytics. But he’s not wrong.
So – change is part of your design. And that’s OK. In fact, it’s good. And analytics powers change. Where this leads is that analytics is part of the design. Where people get their knickers in a twist about this is that they think it’s weird and evil for analytics to be THE design process. And that’s a good point. But for sure it’s a design process.
Analytics isn’t complicated. It’s really simple. Here’s what you do. You can use big JCBs or whatnot to do it, but this is the basic process. This is simple but useless.
Here’s where it gets interesting.
There is some change that’s going to be of the ‘uh oh’ variety. This doesn’t work. That doesn’t work. Something’s on fire. This is really useful.
There’s also planned analytics-powered design change. Here, you start out with some questions, AND an idea of what impact the answers will have on how you will change your design. There are different flavours. This can be done in conjunction with multivariate testing in a very straightforward way – where you simply pick the winner. Red or blue, blue’s better, ok we’ll have the blue one, next please. But you can also use investigation – observational and experimental - to ask questions that have a more subtle relationship to design evolution. And you can use testing and observational analysis to titrate quantifiable elements like difficulty, or price, or level of competition – changing it progressively, as the result of testing.
Ok, enough with the context, and on to the question at hand. Let’s talk about BI. Business intelligence. Who here is a beancounter? If you’re a beancounter, there’s only one thing that’s interesting. Beans.
Also known as Lifetime Customer Value. There’s a guy called Andrew Chen who’s written on this extensively. Check him out. This is where the chickens come home to roost. What this means is value OF the customer. Not value TO the customer. There is some relationship - “it’s complicated!”.
So LTCV is the good stuff. How do you get it? Here’s the secret of the secret sauce: there isn’t a recipe that suits everyone. Here are some KPIs (key performance indicators) that are – quite rightly! – held up as holy grails to chase. But the takeaway point is that the relationship between these intervening variables and LTCV is not the same for every product. Cost and monetisation are pretty direct – but they are also driven by other factors that have more complicated inter-relationships.
Let’s drill down. Virality – user-to-user contagiousness – is clearly good for lowering acquisition costs. But it can also be good for engagement, and whatever it is that flows from that. For some games, players invited via friends are much more likely to stick. So the inviter’s value to the game can be greater than the sum of their own purchases. And here’s another pattern to look for – some people are super-inviters. They have network value.
The point here is that ‘normal’ patterns of play are different for different contexts. How they relate to paying is something that you have to work to find out. Whatever you do, don’t use DAU/MAU – I’ve got a long rant on that you can find on Google. Games Analytics (a predictive analytics service provider) and others have found that regardless of what the ‘normal’ pattern is, momentum – the rate of change of play frequency – can be a good predictor of churn. And churn means no more retention, no more monetisation, and bye-bye acquisition cost. I would also be interested in looking at the 2nd derivative. It’s important to get straight about what you think the causal relationship is: a slowdown may be indicative of churn, but somehow making a person play more frequently might not stop them from churning.
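To make ‘momentum’ concrete, here’s a minimal sketch of the idea – a hand-rolled rate-of-change flag. The function names, the threshold and the data are all invented for illustration; a real churn model would be fitted from data, not hard-coded.

```python
def momentum(daily_sessions):
    """Mean day-over-day change in session count (a crude first derivative)."""
    diffs = [b - a for a, b in zip(daily_sessions, daily_sessions[1:])]
    return sum(diffs) / len(diffs)

def churn_risk(daily_sessions, threshold=-0.5):
    """Crude churn flag: sustained negative momentum below a tuned threshold.
    The threshold here is arbitrary, purely for illustration."""
    return momentum(daily_sessions) < threshold

# A player sliding from 5 sessions/day to 1 shows strong negative momentum.
print(churn_risk([5, 4, 3, 2, 1]))  # momentum = -1.0 -> True
print(churn_risk([3, 3, 4, 3, 4]))  # momentum = +0.25 -> False
```

The same shape of computation on `momentum`’s own output would give the 2nd derivative mentioned above.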
Here’s a typical pattern of how many people pay how much in a typical freemium game. What isn’t clear is why people end up in one category or another. Differences in game experience, or player type (however you define this) are worth looking at, though you might not get an answer.
Here’s an interesting breakdown from Sonamine – it shows that the time it takes people to monetise for the first time has a long tail. Even 20 days in they might still convert. You don’t necessarily know what someone’s lifetime value is for quite some time.
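The long tail of time-to-first-purchase is easy to measure from an event log. A sketch, assuming a made-up event format of (player, days-since-install, event-type) tuples:

```python
from collections import Counter

def days_to_first_purchase(events):
    """Histogram of how many days each converting player took to first pay."""
    first = {}
    for player, day, kind in sorted(events, key=lambda e: e[1]):
        if kind == "purchase" and player not in first:
            first[player] = day
    return Counter(first.values())

events = [
    ("a", 0, "install"), ("a", 1, "purchase"),
    ("b", 0, "install"), ("b", 20, "purchase"),  # the long tail: a day-20 converter
    ("c", 0, "install"),                         # hasn't converted (yet)
]
print(days_to_first_purchase(events))  # one day-1 converter, one day-20 converter
```

Player "c" simply doesn’t appear in the histogram – which is the point: for quite a while you can’t tell a never-payer from a late converter.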
OK, so much for BI. The point I have been trying to make is that the relationships between these various high-level BI measures aren’t set in stone – they are there to be discovered. This has all been pretty straightforward, I hope, but also pretty abstract, so I thought I’d give you all a break and show you a picture of something. Switching gears from BI to user experience – more particularly the visible bits of experience that we see: user behaviour. And while you can get a lot of info from videoing sessions, running focus groups, etc., all of which I’ve done, I’m going to be focussing on what you get from interaction records. Which are composed of clicks and other stuff like that.
My point is this – that you have to boil your clicks to distill them into something useful. There is very seldom the nuclear button click – the one that changes everything. You have to look carefully to discover the effective context in which an action occurs.
The main point here is that part of the context is what happens before an action or event, and part happens afterwards. This ‘stuff’ isn’t necessarily best looked at as a specific action, but as an example of a more abstract type – as is the event itself. It needs to be boiled. There are as many ways to do this as there are ideas about game play and psychology – which is for sure the subject of another talk. But that shouldn’t hold you back. It’s better to do something than nothing. One thing that’s quite simple to look at, that doesn’t require much boiling, is attribution analysis and the nature of someone’s engagement. This can help align the messaging and delivery.
There’s another dimension of context - which is what is happening concurrently. Again it’s open season what you grab onto here. And again you may well want to use derived measures that make some kind of abstraction of the situation. Being simple and obvious is a good idea.
Let’s say you murdered Professor Plum in the library with a samurai sword at midnight. Didn’t you? This can also be seen more abstractly: hostile action.
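‘Hostile action’ is exactly the kind of abstract type you boil raw events into. A toy sketch – every event name and category in the mapping is made up for illustration:

```python
# Hypothetical mapping from raw click events to abstract behavioural types.
ABSTRACT_TYPE = {
    "fired_arrow": "hostile_action",
    "cast_fireball": "hostile_action",
    "sent_gift": "social_action",
    "posted_to_wall": "social_action",
    "bought_gems": "purchase",
}

def boil(raw_events):
    """Replace each raw event with its abstract type, dropping unmapped noise."""
    return [ABSTRACT_TYPE[e] for e in raw_events if e in ABSTRACT_TYPE]

print(boil(["fired_arrow", "sent_gift", "opened_menu", "cast_fireball"]))
# ['hostile_action', 'social_action', 'hostile_action']
```

The interesting design work is all in the mapping – that’s where your ideas about game play and psychology live.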
What you grab in the way of concurrent context can include factors from the past. To leap ahead to the tools topic, this is where some logging and reporting services can run into difficulty. They’re a great way to get started. But if you find yourself shipping over the kitchen sink to describe every event, it might be time to consider bringing it in-house.
So, to get back to my original question – can one lens see BI and UX? Yes. I think the answer is that it’s analytics which reveals those relationships – it’s what helps you pull focus, relating one level of description to another.
Now we’ve built up a picture of what we’re doing – let’s look at how it relates to tools and processes.
Here’s the simplest picture I’ve come up with about tools. There are really three bits of the market that typically don’t connect up very well. You’ve got specialist cloud-based services for app tracking, which might have a platform focus, and might have some game-specific vocab built in. You’ve got a variety of BI front ends. And you’ve got some hard-core stats tools – from traditional stats packages like SPSS and SAS, to specialised machine-learning algorithm implementations. These assume you’ve got your data all lined up – somewhere – standing in an orderly array ready to be consumed. Which it might not be, so you will need to do some work to spoon-feed it properly.
Here’s a pop quiz. How many think 12 is bigger than 16?
This is the right answer.
This is why you need stats. In the first case, a group with a mean of 12 is a different group from the group with the mean of 16. In the 2nd case, the right answer is ‘dunno’. This is the perspective that’s lacking from some BI approaches, and some app trackers. You get an answer but you don’t know how hard to lean on it.
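As a sketch of what the stats adds, here’s a hand-rolled Welch’s t statistic on made-up numbers: the same pair of means, 12 and 16, gives a confident answer or a ‘dunno’ depending entirely on the spread within each group.

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples (no scipy needed)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (mb - ma) / math.sqrt(va / len(a) + vb / len(b))

tight_12 = [11, 12, 12, 13, 12]   # mean 12, low variance
tight_16 = [15, 16, 16, 17, 16]   # mean 16, low variance
noisy_12 = [2, 25, 5, 20, 8]      # mean 12, huge variance
noisy_16 = [1, 30, 9, 28, 12]     # mean 16, huge variance

print(welch_t(tight_12, tight_16))  # large |t|: the groups really do differ
print(welch_t(noisy_12, noisy_16))  # small |t|: 'dunno'
```

In practice you’d use a library (e.g. scipy) and get a p-value as well; the point is just that the mean alone can’t tell you how hard to lean on the difference.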
On the app tracking side, you’ve got xxx, and MixPanel, enabling more exploratory analysis; Kontagent is moving into data mining and professional services. For BI front ends, Tableau is the current darling, on the strength of its approach to visual exploratory analysis. On the hard-core side, there’s a specialist strand of infrastructure to do with graph db. And you’ve got traditional stats tools moving more into machine-learning-style predictive analytics, e.g. IBM Clementine, and more cloud-based services – from SAS, but not yet from IBM. There is also an uptick in activity for service-based approaches specific to predictive analytics in games: Playnomics, Games Analytics, Sonamine. Typically what you get here is access to expertise in model building as well as a bit of secret sauce.
When it comes to predictive analytics, there’s a strong focus on identifying cues to segments which predict future behaviour. A lot of this is a tune transcribed from another key: telco CRM.
Sometimes when you do a segmentation it gives you some pretty strong clues about what action to take. And here, with Games Analytics, they have a focus on a specific type of intervention, in-game messaging. The groupings here are fairly interpretable.
But sometimes when you use heavy artillery, you get useful results that are meaningless. As in: the system can predict really well that someone’s about to churn. But you don’t know why.
Then it’s difficult to understand the right change.
Do tools matter? I mean, you could do it all with a Lego Turing machine, couldn’t you?
Well, no. Try doing multiplication in Roman numerals. Or something really modular in Excel. Don’t blame your tools. Use them, and get more if you need to. For predictives you want to leverage stuff that’s out there.
So now I want to tie all this in with processes. Specifically, the role of A/B testing in design change.
Which is better? The top one’s Google, the bottom one’s Bing. Google tested 42 different shades of blue to get the one that performed the best. The designer ran screaming out of the room and went to work for Twitter. If you want someone to blame for the mania for A/B testing you could try them. Or Amazon. Or Facebook – last month they were testing an option where you could pay to promote your stories to your friends, like Badoo. Daring. Decisive. Conviction. SXSW.

Doug Bowman on design at Google: “When a company is filled with engineers, it turns to engineering to solve problems. Reduce each decision to a simple logic problem. Remove all subjectivity and just look at the data. Data in your favor? Ok, launch it. Data shows negative effects? Back to the drawing board. And that data eventually becomes a crutch for every decision, paralyzing the company and preventing it from making any daring design decisions.”

Every 100 ms increase in load time of Amazon.com decreased sales by 1% (Kohavi and Longbotham 2007).
Invented the amazon.com recommendation engine.

Google VP Marissa Mayer asked a group of Google searchers how many search results they wanted to see. Users asked for more – more than the ten results Google normally shows. More is more, they said. So Marissa ran an experiment where Google increased the number of search results to thirty. Traffic and revenue from Google searchers in the experimental group dropped by 20%. Ouch. Why? Why, when users had asked for this, did they seem to hate it? After a bit of looking, Marissa explained that they found an uncontrolled variable: the page with 10 results took 0.4 seconds to generate; the page with 30 results took 0.9 seconds. Half a second of delay caused a 20% drop in traffic.
A/B testing is a demanding discipline. And it sometimes does make people run screaming from the room.
What you see a lot are people playing with quant variations. Prices. Offers. More, or less, rather than different. It’s easier to get your head round.
So you optimise, but that optimisation doesn’t lead you to any further insights. Why not 2? Why not 4? Totally a black box. Useful, but not magical. Can the answer suggest more questions? No.
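For the quant-variation case, the evaluation itself is routine – e.g. a two-proportion z-test on progression rates. The counts and the ‘invite threshold’ framing below are entirely invented; the point is that the number tells you which variant wins, not why.

```python
import math

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z statistic: did variant B convert differently from A?"""
    pa, pb = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)                # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # standard error
    return (pb - pa) / se

# Hypothetical: 'invite 3 friends' (A) vs 'invite 4 friends' (B),
# counting players who progressed to the next level, per 10,000 players.
z = z_test(1200, 10000, 1050, 10000)
print(z)  # a strongly negative z says B progresses worse; it never says why
```

That last comment is the whole problem with pure quant variation: the test ends the conversation instead of starting one.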
And again what you get here is that the answers aren’t always what you’d expect. So they’ve hired an economist. Possibly a better choice than a banker – we’ll have to see.
I’ve been noticing that the casino sites often feature breasts. The interesting question here is not ‘is breast A better than breast B’, optimising along some continuum via some hill-climbing algorithm. You need to link the independent variable to a hypothesis – about, say, whether being exploited into a state of mild excitement makes players more likely to take risks. That leads to more fruitful ideas about possible design interventions, in addition to plastic surgery. So the test isn’t just about an A cup vs. a B cup.
The test I showed in the last slide isn’t a real one. It’s two different views of the same thing: a time series. These are suggested parameters for a body-physics add-on in Second Life. And the controls that control it are in the hands of the user, for something they have created. There’s a fundamental difference in attitude between designs which are extractive, and designs which are inclusive and engage people in adding to the value of the experience. I owe this distinction to Niall Ferguson in the recent Reith lecture. Although there are probably casinos on Second Life.
What I’m thinking about, and arguing for, is to look beyond data-driven design, and think about it as discovery-driven – or as design-driven analytics. Who likes the first, and who likes the 2nd best?
You can’t drive just by looking at the rear view mirror.
You need to look ahead, and plan your investigations and analysis to feed into your design change. Not just wind tunnel, but a guidance system.