A Worthy Opponent

12th August 2015 Justin Libby


"The computer cheats." That's what they say in the forums.

"Well, does it?" I asked myself.  It was my second month at Demiurge Studios, and I was curious how the Marvel Puzzle Quest game engine worked. It was lunchtime, and I was hounding my fellow engineers with questions: “Does it look at the upcoming tiles?  Does it try to set up combos?  Does it target player tiles?  Oh, oh, oh! Does it intentionally drop new tile colors to mess up your turn?”

I was a big fan of D3's Puzzle Quest: Challenge of the Warlords. I remember feeling waves of frustration playing boss battles. At one point, I nearly snapped my DS in half. But I noticed that opponents in MPQ felt different.

Looking for answers, I dug through our code to the place where the computer chooses a pair of tiles to swap.  The flow for the computer's turn is:

  1. Make a list of all valid swaps available on the visible board
  2. Assign a score to each swap
  3. Sort the list by the score
  4. Pick the best swap, or randomly from the top swaps if there's a tie

Here's the code that drives the score for the swap sort comparator:

    float GetScore() const
    {
        // 10 points per matched tile, plus 1 if the swap charges an ability
        return (m_cTilesMatched * 10.f) + (bChargesAbility ? 1.f : 0.f);
    }

The boolean value bChargesAbility is true if the match will charge up an ability on the computer's team. The integer m_cTilesMatched is the number of tiles matched after searching in a straight line, up to 4 tiles. The section that tallies up the number of tiles was written to check one tile behind the swap and two tiles ahead, so it doesn't look far enough for match‑5s in a line.  Since it only checks along one axis, it can't see L and T shaped match‑5s.

This scoring system means that the criteria for selecting a swap are:

  1. Number of matched tiles in a straight line, up to 4
  2. Does the match charge an ability on the computer's team?

That's it.  That's all the logic.  The first rule takes precedence over the second: the computer will always take a match‑4 over a match‑3. If there are two match‑4s available, the computer will prefer the one that will charge its abilities.  If there are multiple best swaps, it will randomly choose one.  Sometimes the match-4 also results in a match‑5, but the computer doesn't “see” the match-5.

The computer does not look ahead.  It does not try to set up combos. The computer has two very simple rules that could be applied equally well by a human looking at the same board.

The computer does not try to screw up your turn by dropping bad tiles either. Each new tile is selected at random, individually, without any feedback from the swap‑selection code.  I did learn that Team-Up tiles drop at a slightly lower rate than color tiles.  Each color is weighted with a value of 1.0, while Team‑Up tiles are weighted with 0.75.  That means there's about an 11% chance that a new tile will be a Team‑Up, and about a 15% chance for each of the colors.

I had answered my questions: the game was not cheating. So what made players think otherwise?

The computer is only guilty of being complete.  It never misses a match‑4. After years of playing match-3 style games, my eyes still miss match‑4s and match‑5s once in a while.  The computer, however, never makes a mistake.  This creates the illusion of an opponent that is more clever than it really is.

On top of that, as players, we're guilty of a cognitive bias: we only notice when the random number generator doesn't go our way. When I get a long cascade of matches it feels great, but it also feels expected because I'm in control. When the computer gets the same cascade, I'm left feeling powerless and enraged. That sense of frustration is what I remember later on.

I talked to Kevin, the development director on MPQ who is also an engineer, about the enemy‑turn logic. He said that he had written it for an early build of the game, then never got a chance to revisit it. Time passed, and the game got into the hands of players. Players thought the game's swap choices were already too good. That stifled any attempt to fix the underpowered computer opponent.


Faster Better Stronger

Convinced I could make things better, I started writing code.  I knew it would be a lot of extra work to teach the computer to set up combos for future turns.  The computer wouldn't know where a player was going to make a swap.  So I limited myself to information available looking at the current board. I added more criteria to the swap decision:

  1. Number of matched tiles, up to 7, including L and T shapes and double match-3s
  2. Does the match include a Critical tile?
  3. Does the match destroy a player-owned tile?
  4. Does the match not destroy a computer-owned tile?
  5. Does the match charge an ability on the computer's team?
  6. Does the match not use the team's weak colors?

I chose the rules and ordering based on the logic I used in my head when I was playing the game.  Using different score weights, designers could adjust the rule ordering, or turn rules on or off for different battles.  The computer could be told to make random mistakes on purpose, or even to always choose the worst move instead of the best.

Using the new default rules, fully enabled, in the order listed above, I tried playing a test build of the game.  It felt much harder.  The computer wasn't destroying its own Bombs and Strike tiles right away.  I yelped as the computer targeted my Protect tiles and Countdown tiles.

It was clear that changing the enemy-turn heuristic would have a strong effect on gameplay and balance.  After a discussion with the team, we decided to put my new code on ice.  As much as I wanted to make the computer opponent better, changing the way the computer plays would have unforeseen consequences, no matter how much testing we did.

This kind of decision is common in established software: changes have a cost; don't make big modifications unless you understand the cost and the effect.  A change has echoes and ripples that will come back as bugs and follow-up feature requests.  The more the code is connected to other parts of the code base, the further those ripples will propagate.

What's more, the game is meant to be fun. If the game is too difficult, players will stop playing. Or worse, they'll smash their phones to the ground in frustration. Making a good game means riding the line between winning with ease and losing repeatedly.  Good games feel like running down a knife's edge, with skill and luck buffeting the player from both sides.

What do you think? Would a more clever opponent make MPQ better? What about a random and unpredictable foe? Or are things fine the way they are now?