Well, this is another later night than I was hoping to put in, thanks to my making some stupid (classic) mistakes, but let's not talk about that right now. Instead we'll (quickly) talk about how I implemented some AI for today's update. As I did before, here is a link to a tweet that has a big animated GIF of what we're about to talk about:
— Terence Martin (@OdatNurd) December 17, 2016
This wasn’t what I was planning on doing today, but while I was on my morning walk my brain kept coming back to this topic so I figured I might as well jump there first. It has to be done eventually, plus I wanted to assure myself that what I was planning on doing would actually work. It also seemed like it might be a good idea to have solidified how this is going to work before I start on the state machine.
Basically, what the AI does is simulate pushing each of the balls into the maze and then score each result based on how many bonus bricks the ball hit along the way and how far down the maze it got (with a bonus for reaching the bottom). The ball that achieves the highest score is the one selected to push. If more than one ball comes out with the same score, one of them is chosen at random. The scores here are subject to change when that aspect of the game is actually implemented, but they're good enough for now.
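The scoring described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual game code; the `SimResult` shape and the point values are assumptions standing in for whatever the real implementation uses.

```typescript
// Hypothetical result of simulating one ball's push through the maze.
interface SimResult {
    bonusBricksHit: number;  // bonus bricks collected along the way
    rowReached: number;      // how far down the maze the ball got
    reachedBottom: boolean;  // did it make it all the way out the bottom?
}

// Placeholder point values; the post notes these are subject to change.
const BONUS_BRICK_SCORE = 10;
const ROW_SCORE = 1;
const BOTTOM_BONUS = 50;

function scoreSimulation (result: SimResult) : number {
    let score = result.bonusBricksHit * BONUS_BRICK_SCORE +
                result.rowReached * ROW_SCORE;
    if (result.reachedBottom)
        score += BOTTOM_BONUS;
    return score;
}

// Given the score for each candidate ball, pick the index of the best
// one; ties are broken by choosing randomly among the tied balls.
function pickBestBall (scores: number[]) : number {
    const best = Math.max(...scores);
    const tied: number[] = [];
    for (let i = 0; i < scores.length; i++)
        if (scores[i] === best)
            tied.push(i);
    return tied[Math.floor(Math.random() * tied.length)];
}
```

The random tie break keeps the AI from always favoring, say, the leftmost ball when several moves are equally good.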
This is a little naive in that (as seen in the Twitter image above) when the ball enters a Teleport entity during the simulation, the exit it takes might not be the same one chosen when the move is actually played out. A more ideal (read: smarter) AI would run through all possible permutations of the move and assign a score based on the probability of a good outcome in this regard, but this is good enough for this prototype.
Another sticking point (such as it is) is that the entire simulation runs within a single "tick" as far as the game is concerned, so all automatic arrows are guaranteed to be pointing in the direction they were when the move starts. The simulation does not take time into account to see when an arrow would flip on its own, which seems a little unfair. Again, a more ideal AI might calculate all permutations of this kind of arrow being in both orientations and score appropriately, but I didn't do that either.
The simulation itself is pretty simple, all things told. The MazeCell base class has been given some new methods, and some existing ones have gained additional parameters.
First, the callbacks that indicate that the entity is touching the ball or has successfully moved the ball aside (so basically bricks, teleports, and arrows) have a new parameter that indicates whether this is happening in a simulation. When in a simulation, internal state is modified but no visual changes are made; for example, an arrow will flip direction but not visually start rotating.
Secondly, there are now methods on the entity to tell it to save all pertinent internal state and to restore it from the saved copy. For arrows this is the direction they're pointing; for bonus bricks, it's whether or not they have been picked up.
When it's time to run the AI, we first tell all entities that might change their state to save it, find all balls that are eligible to be pushed (in the top row and not blocked), and then run the same code that we originally used (before animated ball dropping went in) to see where each ball ends up. Every time we run a simulation on a ball, we restore all entities back to the state they started in.
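Put together, the driver loop is roughly the following. Every name here is an assumption standing in for the real game objects, and for brevity this sketch keeps the first ball found on a tie rather than choosing randomly.

```typescript
// Minimal stand-ins for entities that can save/restore their state and
// for a ball that knows what column it occupies.
interface Entity { saveState (): void; restoreState (): void; }
interface Ball { column: number; }

// Simulate pushing every eligible ball and return the one that scored
// highest; `simulate` runs one push to completion and scores the result.
function findBestMove (
    entities: Entity[],
    eligibleBalls: Ball[],
    simulate: (ball: Ball) => number) : Ball | null {
    // Save the state of everything that a simulation might disturb.
    for (const entity of entities)
        entity.saveState();

    let bestBall: Ball | null = null;
    let bestScore = -1;
    for (const ball of eligibleBalls) {
        const score = simulate(ball);
        if (score > bestScore) {
            bestScore = score;
            bestBall = ball;
        }

        // Put everything back the way it was before the next simulation.
        for (const entity of entities)
            entity.restoreState();
    }
    return bestBall;
}
```

Restoring after every simulation (rather than once at the end) matters because each push can flip arrows and consume bonus bricks, which would otherwise skew the scores of the balls simulated later.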
I've added a simple test of this that triggers on a press of the question mark key. It runs the AI to find a ball, then jumps the player to that position and pushes the ball.
I'm pretty pleased with the results, all things told. Now I think I can return to my plan of implementing the state machine tomorrow, which should make it possible to have the computer take its own turn as appropriate. This, of course, assumes my brain doesn't obsess over something else on my walk tomorrow.