This document discusses using artificial intelligence to test levels in the Candy Crush Saga game. It describes King.com, the developer of Candy Crush, and its use of QA teams. It then covers using AI techniques like Monte Carlo tree search and neuroevolution to have automated bots play levels and provide feedback. This could help level designers, reduce the human testing workload, and assist data scientists. Challenges include maintaining the bots and integrating them into King's development process.
Can simple mobile games be a challenge for AI? A single-player game with simple mechanics? Chess and Go have only one "level" and no randomness; Candy Crush has thousands of changing levels and plenty of randomness. This is a BIG difference.
Talk about the great success of AI in games: Deep Blue beating Kasparov in 1997 and AlphaGo beating Lee Sedol in 2016.
Spend some time here explaining how it improves the work for level designers, etc.
The traditional AI in games, as in chess, builds a function that can rank the moves (a heuristic function) based on the relevant features of the game. We have done something like that for many of our games. Very fast and pretty good ... to begin with.
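A minimal sketch of what such a hand-written heuristic could look like; the feature names and weights below are invented for illustration, not King's actual implementation:

    # Score each legal move by a weighted sum of hand-picked game features.
    # Feature names and weights are illustrative only.
    def heuristic_score(move_features):
        weights = {
            "candies_in_combination": 1.0,  # bigger matches score higher
            "creates_striped_candy": 3.0,   # special candies are valuable
            "jellies_cleared": 5.0,         # clearing jelly is usually the goal
            "blockers_removed": 4.0,        # blockers limit our options
        }
        return sum(weights[f] * v for f, v in move_features.items())

    # Rank the moves greedily; no look-ahead yet.
    def best_move(legal_moves):
        return max(legal_moves, key=lambda m: heuristic_score(m["features"]))

Here each move is assumed to carry a "features" dict mapping feature names to values.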
High maintenance, and no transfer of the method between games: a new heuristic for each game. How would it choose between the possible actions? Should it try to think ahead and predict the future?
We would like to have some heuristic, but the levels are always changing. Get feedback from the audience about automatic heuristic construction.
So, we tried simulation
Here are three of the possible moves. How can we figure out which move is the best?
Let's simulate each action. One simulation per move is very unreliable. How should the playout be? Random, for now. => Repeat! Not too fast here; make sure everybody follows and understands how we play until the end while still thinking: no action has been taken yet.
After repeated simulations we start to get an estimate of the expected value of each of these three actions.
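A minimal sketch of this estimate, assuming a hypothetical state object with legal_moves(), apply(), is_over(), and won() methods:

    import random

    # Play random moves until the level ends; 1 for a win, 0 for a loss.
    def random_playout(state):
        while not state.is_over():
            state = state.apply(random.choice(state.legal_moves()))
        return 1 if state.won() else 0

    # Repeat the playouts and average: the win rate per move approximates
    # the expected value of taking that move.
    def estimate_action_values(state, n_simulations=100):
        return {
            move: sum(random_playout(state.apply(move))
                      for _ in range(n_simulations)) / n_simulations
            for move in state.legal_moves()
        }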
The current position is in the middle: the root of the tree. The bot is still thinking and has used 100 simulations. The smartness of MCTS comes from how we manage our resources: greed vs. exploration.
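The standard way to balance greed and exploration in MCTS is UCB1-style selection (UCT); a minimal sketch, assuming tree nodes with visits, total_reward, and children attributes:

    import math

    # UCB1: mean reward so far (greed) plus an exploration bonus that
    # shrinks as a child gets visited more often.
    def ucb1(child, parent_visits, c=1.4):
        if child.visits == 0:
            return float("inf")  # always try unvisited children first
        exploit = child.total_reward / child.visits
        explore = c * math.sqrt(math.log(parent_visits) / child.visits)
        return exploit + explore

    def select_child(node):
        return max(node.children, key=lambda ch: ucb1(ch, node.visits))

A small c concentrates simulations on the best-looking branch (the "deep" tree); a large c spreads them out (the "flat" tree).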
The circle shows the possible actions in the current position
Further down the tree we see possible future states and future actions. Both trees are from the same level but with different values for the parameter controlling greed: one is "flat", the other more "deep".
Colored tree, more simulations, red = loss, green = win
The solid line shows the mean/average. The gray line is where we want to be
The game engine code is not optimized for this, but it is easily scalable. How can we improve? Talk about the different methods available; get feedback from the audience and steer it towards the random playout. Strength and speed: one can often be exchanged for the other.
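One simple way to show the strength/speed trade-off is a time-budgeted thinking loop; a sketch reusing random_playout from the playout sketch above (the budget parameter is illustrative):

    import time

    # More budget means more simulations and a stronger move;
    # less budget means a faster but weaker bot.
    def think(state, budget_seconds=0.5):
        moves = state.legal_moves()
        wins = {m: 0 for m in moves}
        plays = {m: 0 for m in moves}
        deadline = time.monotonic() + budget_seconds
        i = 0
        while time.monotonic() < deadline:
            move = moves[i % len(moves)]  # round-robin over the moves
            wins[move] += random_playout(state.apply(move))
            plays[move] += 1
            i += 1
        return max(moves, key=lambda m: wins[m] / max(plays[m], 1))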
We tried NEAT (NeuroEvolution of Augmenting Topologies) and will continue experimenting with automatic heuristic construction.
We want a bot that can weigh the importance of the game features, e.g. the number of candies in a combination, striped candies, the number of jellies, and the number of blockers, as shown here. But we are lazy and don't want to do anything ourselves.
Let's create a lot of bots. We are still lazy, so we just create them randomly and let them play.
Example for one child
Mom uses interactions between candies/special candies
Dad focuses on blockers
The child does both
Emphasize that this is how you generate the new generation; a code sketch follows after these breeding notes.
Slight changes between newborns (mutation)
A couple won’t always produce the same child
Different couples
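A minimal sketch of the generation step, representing each bot as a dict of feature weights; real NEAT also evolves the network topology, which is omitted here for brevity:

    import random

    # Crossover: the child inherits each weight from one parent at random,
    # so a mom good at special candies and a dad good at blockers can
    # produce a child that does both.
    def crossover(mom, dad):
        return {f: random.choice((mom[f], dad[f])) for f in mom}

    # Mutation: slight changes between newborns, nudging a few weights.
    def mutate(child, rate=0.1, scale=0.2):
        return {f: w + random.gauss(0, scale) if random.random() < rate else w
                for f, w in child.items()}

    # Fitter bots (those that do better on the test levels) breed more often,
    # and the same couple can produce different children each time.
    def next_generation(population, fitness, size):
        parents = sorted(population, key=fitness, reverse=True)[:max(2, size // 2)]
        return [mutate(crossover(*random.sample(parents, 2))) for _ in range(size)]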
Complex pattern = more interaction between features
Straightforward = all features are independent
A more complex pattern allows deeper thinking.
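A toy contrast between the two, purely illustrative: in the straightforward net every feature contributes independently, while a hidden node lets features interact:

    import math

    # Straightforward: a plain weighted sum, all features independent.
    def straightforward(features, weights):
        return sum(w * x for w, x in zip(weights, features))

    # Complex pattern: a hidden node combines several features nonlinearly,
    # so the contribution of one feature can depend on another.
    def complex_pattern(features, w_in, w_out):
        hidden = math.tanh(sum(w * x for w, x in zip(w_in, features)))
        return w_out * hidden

NEAT grows such hidden nodes incrementally, which is what allows the deeper patterns.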
The dotted line is the mean and the solid line is the mean with the ANN. Remember, we want to be close to gray. Talk about how the heuristic can save us time: without it we can still get good enough, but only with more simulations.
We tried NEAT and will continue experimenting with automatic heuristic construction.