Video games are the bane of my existence. My win-loss ratio is in the negatives, my reaction time is measured in galactic years, respawn time is 80% of my game play, and FPS stands for Future Personal Suffering. In summary, I’m pretty bad at playing games. So I decided to make a bot to play games for me.

I decided to start my foray into video game bots by applying an artificial intelligence algorithm to a falling-asteroid game. In this game, the player has to “catch” falling asteroids to prevent them from hitting the Earth. Looks like this:

Black squares represent falling asteroids while red squares represent the asteroid catcher.

You can play along by clicking in the darker gray or lighter gray areas to move the “catcher” left or right respectively. The percentage of asteroids caught is displayed in the top left corner.

The Algorithm

Note: this post assumes a basic understanding of reinforcement learning and neural networks. If you are not familiar with these concepts, check out my posts on those two.

A simple three-layer neural network is all we need for this game. To convert the current configuration of the game into an input for the neural network, the board is broken up into a grid of squares, where each square gets a number value: 0.0 represents an empty square, 0.5 represents a square with an asteroid in it, and 1.0 represents a square with part of the catcher in it.

For example, with a smaller game board:

Game Converted Into Input Matrix
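To make the encoding concrete, here is a minimal sketch in Python with NumPy. The 9×7 board dimensions, the function name, and the way asteroid and catcher positions are passed in are assumptions for illustration; the actual game code may organize this differently, but the 0.0 / 0.5 / 1.0 values are the ones described above.

```python
import numpy as np

# Board encoding: 0.0 = empty, 0.5 = asteroid, 1.0 = part of the catcher.
# The 9x7 dimensions are illustrative; the real game just needs 63 squares total.
ROWS, COLS = 9, 7

def encode_board(asteroids, catcher_cells):
    """Flatten the board into a 63-element input vector for the network.

    asteroids     -- iterable of (row, col) positions of falling asteroids
    catcher_cells -- iterable of (row, col) squares occupied by the catcher
    """
    board = np.zeros((ROWS, COLS))
    for r, c in asteroids:
        board[r, c] = 0.5
    for r, c in catcher_cells:
        board[r, c] = 1.0
    return board.flatten()          # shape (63,), one value per square

# Example: one asteroid near the top, the catcher sitting on the bottom row.
x = encode_board(asteroids=[(1, 3)], catcher_cells=[(8, 2), (8, 3), (8, 4)])
```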

For the actual game, there are sixty-three input nodes, each given the state of one square of the game as input. The middle layer has two hundred nodes, a size picked to scale with the sixty-three input nodes.

The last layer is the output, the neural network’s decision. It has three nodes, each corresponding to a different action: one for moving the catcher to the left, one for not moving, and one for moving it to the right. Whichever of these three nodes has the highest output value is the action the neural network takes.

Neural Network
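As a rough sketch of what that architecture could look like in code, here is a plain NumPy version. The sigmoid activations, the weight initialization, and all the variable names are my own assumptions; only the layer sizes (63, 200, 3) and the pick-the-highest-output rule come from the description above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Layer sizes from the post: 63 inputs (one per square), 200 hidden nodes, 3 outputs.
N_IN, N_HIDDEN, N_OUT = 63, 200, 3

# Small random starting weights; the real project may initialize differently.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(N_IN, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(scale=0.1, size=(N_HIDDEN, N_OUT))
b2 = np.zeros(N_OUT)

def forward(x):
    """Return (hidden activations, output activations) for one encoded board."""
    h = sigmoid(x @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    return h, out

ACTIONS = ["left", "stay", "right"]

def choose_action(x):
    """Pick whichever of the three output nodes has the highest value."""
    _, out = forward(x)
    return ACTIONS[int(np.argmax(out))]
```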

Cool! How do we train it?

Training

Since this neural network follows a reinforcement learning algorithm, we reward the neural network for every move leading up to catching an asteroid and penalize it for every move leading up to missing an asteroid.

For example, if the network’s output for a move was [0.5, 0.3, 0.8] and that move led to catching the asteroid, we tell the AI that it should be more confident in moving right by telling it the correct output should have been [0.5, 0.3, 0.99]. If the move led to missing the asteroid, we tell the neural network the correct output should have been [0.5, 0.3, 0.01] so that it becomes less confident in making the same decision.
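Continuing the NumPy sketch from above, one way to implement this update is to copy the network’s own output, overwrite the chosen action’s value with 0.99 or 0.01, and backpropagate the squared error. The learning rate, the squared-error loss, and the bookkeeping are assumptions on my part; the post only specifies the 0.99 / 0.01 targets.

```python
LEARNING_RATE = 0.1   # assumed value; not stated in the post

def train_on_move(x, caught):
    """Nudge the chosen action's output toward 0.99 after a catch
    or 0.01 after a miss, then backpropagate the squared error."""
    global W1, b1, W2, b2
    h, out = forward(x)
    target = out.copy()             # leave the other two outputs unchanged
    chosen = np.argmax(out)
    target[chosen] = 0.99 if caught else 0.01

    # Backpropagation for a sigmoid/sigmoid network with squared error.
    delta_out = (out - target) * out * (1.0 - out)       # shape (3,)
    delta_hidden = (delta_out @ W2.T) * h * (1.0 - h)    # shape (200,)

    W2 -= LEARNING_RATE * np.outer(h, delta_out)
    b2 -= LEARNING_RATE * delta_out
    W1 -= LEARNING_RATE * np.outer(x, delta_hidden)
    b1 -= LEARNING_RATE * delta_hidden
```

Since the reward applies to every move leading up to a catch or a miss, the board states since the last outcome would be buffered and replayed once the asteroid is resolved, roughly like this:

```python
pending_states = []   # board states since the last caught or missed asteroid

def on_asteroid_resolved(caught):
    for state in pending_states:
        train_on_move(state, caught)
    pending_states.clear()
```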

Mistakes

After a couple of minutes of staring blankly at the code (my adrenaline rush had worn off), I found that I had forgotten to represent the catcher, so the AI only took the positions of the asteroids as input. It didn’t know where the catcher was! And it still caught 99% of the asteroids! Darn.

The AI probably just output “left,” “right,” or “stay still” based on the positions of the asteroids alone. If there was an asteroid on the left side, it would output “left” repeatedly, zooming across to catch it. This tactic worked on a smaller board, which was small enough that simply going left or right would most likely catch the asteroid, but the moment I expanded the board, its catch rate fell to 76%. After fixing this mistake, the catch rate rose rapidly, showing the importance of knowing the position of the catcher.

Results

After several adrenaline-saturated hours, this is the result:

Cool! The AI learns after only 20 minutes.

The Bigger the Better

We make the game even bigger so that it becomes even harder to catch all the asteroids. The percentage of asteroids caught drops to 93%. Still pretty good.

GitHub

If you want to replicate the results, I put everything on GitHub. You are more than welcome to play around with the neural network yourself!