Over the past couple months I have had very few opportunities to play Pokémon. The time that I haven’t been using for Pokémon has gone to my studies, both for school and for college applications. Math has always been one of my favorite subjects, and Pokémon articles that focus on applied mathematics have been some of my favorites in the past.
For this reason, my article today will be about making optimal plays and how applying economic and mathematical concepts plays into a variety of aspects of your game. We’ve seen quite a few articles on SixPrizes about the impact of making the correct choices, but today I’ll focus on ways for you to more easily quantify this data and apply it to a broader spectrum of situations, rather than memorizing the ones that have been covered by other writers. Deck building, choosing which cards to play, sequencing your plays correctly, deck choice, and more can all be broken down into more comprehensible analyses than you might think.
Game theory, a mathematical economics concept that models the outcomes of two or more decision-makers’ actions, will be the primary focus of my article. Statistics also plays a big role on the math side of Pokémon, but there have been plenty of articles in the past that calculate the probabilities of drawing cards, starting with certain Basics, etc. For this reason I will bring a somewhat unique application of statistics that has to do with playtesting. The rest of the statistics will be done in the game theory section to give the article more continuity, rather than having to cross-reference between sections as I go deeper into one concept or another.
Most of the best players use these analyses without thinking or knowing the underlying concepts, so I hope that this article will be able to appeal to a wide range of audiences. These concepts can be applied to any level of gameplay: from the beginner who is looking to drastically improve their play, to the top-ranked player who can make small improvements by knowing these skills.
One important thing to note before diving into this article is that there are tons of assumptions that I am making in each example in order to easily model the data and explain the concepts that I’m trying to cover. These situations may seem unrealistic, but the underlying concepts are what you can apply to your game. I will try to give as many ways to eliminate these assumptions as possible.
Game theory, also known as interactive decision theory, allows us to look at the possible outcomes of a situation and pick the optimal line of play. This is one of the more advanced ways to look at in-game decisions, as well as deck choices. Quite a lot of simplification has to be done in order to make the underlying concepts comprehensible, so let’s assume that the probabilities behind the outcomes have been approximated and generalized across a range of variables. One thing to note is that we are assuming both players are rational decision-makers, which is to say that they will both make the optimal play given the information that they have. Oftentimes an “irrational” decision from your opponent will increase your chances of winning regardless, but this is an important assumption that has to be taken into account.
First let’s look at the two different ways to display a game. One way is a tree diagram, which is usually used to display an extensive form game. Extensive form games are games in which moves happen in a sequence over time. This is the form that we will use to analyze optimal plays within a game. The other option is a matrix, which is primarily used to represent a normal form (also called strategic form) game; extensive form games can be put into a matrix too, but they are usually easier to visualize on a tree diagram. The matrix shows the different results based on the choices that both players make, and we can use it to analyze deck choices, since deck choices have no inherent sequence.
Example 1: In-Game Analysis
Our first example of game theory will be a situation that I’m sure many of you have been in before. Both players have 2 Prizes left, and it is Player 1’s turn. Player 1 has used their Supporter for the turn and must decide whether to Knock Out the opponent’s non-EX Active Pokémon or pass. On top of this, there is one more pair of scenarios: Player 1 either has a Lysandre in hand, which guarantees a win, or does not. Player 2 then has the option to play either N or Professor Juniper. Professor Juniper gives Player 2 a chance of winning the game, but if Player 2 doesn’t win after the Professor Juniper, Player 1 will have the win.
The probabilities of each player winning are made up for the purpose of the example. Player 2 doesn’t know if Player 1 has a Lysandre or not, so bluffing will come into play here and we will look at two different tree diagrams to determine the best choice for each player. For later on in the problem, let’s assume that the probability of Player 1 having a Lysandre is 35%. The numbers represent the odds of each player winning (Player 1 is red, Player 2 is blue).
Tree Diagram 1: Player 1 does have Lysandre in hand.
Tree Diagram 2: Player 1 does NOT have Lysandre in hand.
A quick explanation of my reasoning for these numbers: Professor Juniper gives the same chances either way if Player 2 has the win in hand, but if Player 2 does not, they will have to draw into the win or lose. The probability of the N outcome changes, because if Player 2 does not have the win in hand they will have 2 additional draws.
Player 2 has a dominant strategy in both of these cases, which means that they will make the same play regardless of what Player 1 does. Playing N will always give them a probability of winning that is greater than or equal to their probability of winning with Professor Juniper. Player 1 knows this and will thus take the action that gives the greatest probability of winning given that N will be played in response. Player 1 will pass in both scenarios, because the probability of winning is greater when passing if the opponent is going to N.
After determining the equilibrium, which is the outcome that is going to occur, we can figure out each player’s probability of winning the game. The probability of Player 1 winning is (0.35)(35%) + (0.65)(40%) = 38.25%, and Player 2’s probability can be calculated by the same formula, or simply as 1 − P(Player 1 wins) = 61.75%.
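This expected-value calculation is easy to script. Here is a minimal sketch in Python, using the example’s made-up figures (the 35% Lysandre probability and the per-branch win percentages):

```python
# Expected win probability for Player 1 at the (pass, N) equilibrium.
# All figures below are the made-up numbers from the example above.
p_lysandre = 0.35          # chance Player 1 holds Lysandre
p1_win_with_lysandre = 35  # Player 1's win % (Tree Diagram 1, pass then N)
p1_win_without = 40        # Player 1's win % (Tree Diagram 2, pass then N)

# Weight each branch by how likely it is, then sum.
p1_win = p_lysandre * p1_win_with_lysandre + (1 - p_lysandre) * p1_win_without
p2_win = 100 - p1_win
print(round(p1_win, 2), round(p2_win, 2))
```

Swapping in your own branch probabilities lets you run the same check for any two-outcome decision point.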
It is difficult to apply this while playing a game due to the importance of having percentages as accurate as possible. We can do rough estimates, though, which often times can lead you to finding the equilibrium play. Remember that the equilibrium won’t always be the optimal play, due to the fact that your opponent most likely won’t be acting with the same information as you. Despite this, you can still try to find a dominant strategy, like Player 2 had in the example above. Basing your plays on these models will result in a higher overall likelihood of success if done correctly, so I would highly recommend thinking about extensive form games in situations like these.
With all of the assumptions we made in the example, we can’t accurately build a tree diagram mid-game and enumerate every possible outcome. What we can do with this knowledge is use the idea of an extensive form game to think through sequences of plays. Comparing the possible outcomes of your opponent’s likely lines of play in response to each decision you could make on your turn is an important part of determining the optimal play.
Example 2: Metagame Analysis
For the sake of simplification, let’s assume that there are only three deck choices in a certain metagame: Donphan, Yveltal, and Virizion/Genesect. We will also assume that every copy of each deck has the same cards, in order to eliminate variation within matchups. Each player is also at the same skill level as you, so that you neither gain an advantage nor suffer a disadvantage based on who you play. Everyone playing Virizion/Genesect will be the same, everyone playing Donphan will be the same, and everyone playing Yveltal will be the same. The matchup matrix below shows each row deck’s percent chance of winning against each column deck (the column deck’s chance is 100 minus that number).

Figure 1: Matchup Matrix

|                   | Donphan | Yveltal | Virizion/Genesect |
|-------------------|---------|---------|-------------------|
| Donphan           | 50      | 30      | 55                |
| Yveltal           | 70      | 50      | 55                |
| Virizion/Genesect | 45      | 45      | 50                |
In order to find the optimal deck we have to calculate each deck’s probability of winning any individual game. This is found by multiplying its probability of winning a certain matchup by the percentage of the field that the other deck occupies. Or, as an equation:
P(win) = Σ P(deck n) × P(win vs. deck n)
Using this equation we can compare the probabilities of each deck winning any individual game by plugging in the variables that we have from the matrix and computing. Here are the values:
Donphan P(win)= (50)*(1/3) + (30)*(1/3) + (55)*(1/3) = 45%
Yveltal P(win)= (70)*(1/3) + (50)*(1/3) + (55)*(1/3) = 58.33%
Virizion/Genesect P(win)= (45)*(1/3) + (45)*(1/3) + (50)*(1/3) = 46.67%
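These sums are straightforward to script. Here is a minimal sketch in Python, using the matchup matrix and the uniform (1/3 each) field from above:

```python
# Matchup matrix: each row deck's win % against each column deck.
decks = ["Donphan", "Yveltal", "Virizion/Genesect"]
matchups = [
    [50, 30, 55],  # Donphan
    [70, 50, 55],  # Yveltal
    [45, 45, 50],  # Virizion/Genesect
]
field = [1/3, 1/3, 1/3]  # assumed share of the field playing each deck

# Each deck's chance of winning any individual game:
# sum over opponents of (share of field) * (win % in that matchup).
overall = {}
for deck, row in zip(decks, matchups):
    overall[deck] = sum(share * win for share, win in zip(field, row))
    print(f"{deck}: {overall[deck]:.2f}%")
```

Running this reproduces the 45%, 58.33%, and 46.67% figures above, and changing `field` lets you test any other metagame split.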
In this scenario we can predict that Yveltal, having the highest probability of winning, will be the optimal play. Remember what I said before about rational decision-makers? In our scenario of perfect information, we will also have to assume that everyone knows these calculations, leading to decks shifting back and forth until each deck has a perfect 50% win percentage. To find the distribution of decks in this situation we have to set up a 3-variable system of equations, where x is Donphan’s share of the field, y is Yveltal’s, and z is Virizion/Genesect’s.
50x + 30y + 55z = 50
70x + 50y + 55z = 50
45x + 45y + 50z = 50
By solving this system we can get the values of x=1/6, y=1/6, and z= 2/3. If you would like to do this calculation yourself there is a website where you can plug in these values and it will solve the system of equations for you. One of the most important things to take away from this is the fact that the equilibrium of decks won’t have the “best deck” being played the most. I see many players talking about which deck is the best and trying to focus on beating it. In theory, this is the wrong approach, and actually the metagame will shift away from the best deck on its own.
Although this idea of perfect information and rational decision-making is quite obviously not how real-life events play out, we can still use these values to see where the distribution of decks should gravitate. For example, you could run the calculations on the distribution of decks from the weekends leading up to your tournament to see where the distribution should be heading, along with the trends in each deck’s share of the field. A multitude of factors play into the distribution of decks on any given weekend, so keep everything else in mind while looking at these computations. Numbers are far from an absolute truth.
This is a very basic metagame, but we can apply these concepts to more complex situations by adding more decks and calculating the matchup values more precisely. Figuring out the proportions of the decks is much more difficult than finding the matchups, but they can usually be roughly approximated from previous tournament results or online hubbub.
The use of game theory for metagame analysis can be applied more directly to real life situations than can the in-game analysis in the last section. Running the numbers for a few possible distributions of decks can be a useful way to determine which decks thrive in various metagames. This is a fantastic way to improve your metagaming skill, which is the ability to choose a correct deck for the metagame. Closed environments such as City Championship marathons are a great way to apply these figures for a few reasons.
- Most of the same people will be at each event with very little turnover.
- You can gauge fairly easily which players are going to continue playing the same deck.
- The results of one day will directly affect the day after.
By making a more reasonable estimate of the distribution of decks you can estimate which deck will be the optimal play with much more certainty.
An interesting way to look at the distributions is to estimate a good scenario and a bad scenario for your matchups and look at the range of win percentages. This is mostly useful when you have very little idea of which decks are going to be at a tournament. The decks whose win percentage varies the least between distributions, and which have a high average win percentage, will be your best choices going into an unknown event. Let’s break this down into a good distribution, an even distribution, and a bad distribution for a Yveltal deck:
Yveltal P(win)= (70)*(1/2) + (50)*(1/3) + (55)*(1/6) = 60.833%
Yveltal P(win)= (70)*(1/3) + (50)*(1/3) + (55)*(1/3) = 58.33%
Yveltal P(win)= (70)*(1/6) + (50)*(1/3) + (55)*(1/2) = 55.83%
The figures of 1/2, 1/3, 1/6 are just to show metagames that have more or less of your good and bad matchups. Having only 1/6 of the field be your bad matchup is a good metagame, while having it occupy 1/2 of the field would be a bad metagame. Here we can see that Yveltal will still have good win percentages even when the distribution isn’t in its favor. This concept is difficult to display with only three decks in the format, so I would recommend you try it out yourself with the current format. If you don’t have any matchup data, approximations will work just fine.
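Here is a short sketch of this range check in Python; the good/even/bad field splits are the illustrative 1/2, 1/3, 1/6 distributions from above:

```python
# Yveltal's win % against Donphan, the mirror, and Virizion/Genesect.
yveltal_row = [70, 50, 55]

# Field shares (Donphan, Yveltal, Virizion/Genesect) under each scenario.
scenarios = {
    "good": [1/2, 1/3, 1/6],  # best matchup makes up half the field
    "even": [1/3, 1/3, 1/3],
    "bad":  [1/6, 1/3, 1/2],  # least favorable matchup makes up half the field
}

results = {}
for name, field in scenarios.items():
    results[name] = sum(share * win for share, win in zip(field, yveltal_row))
    print(f"{name}: {results[name]:.2f}%")
```

Comparing the spread of these results across plausible distributions gives a quick sense of how metagame-dependent a deck is.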
Most players will playtest countless games and make small tweaks to their decks based on how they’re doing, but I’ve come across very few who document the results of every game in order to determine matchups. Determining matchups is important when choosing a deck to play, and when trying to make changes to a deck to give it a better balance against the format.
In order to roughly determine matchups, you should play at least 10 games of each matchup. This may seem like a lot, but you can draw your results from tournaments, playing online, playing with your friends, others’ experiences, etc. Working with a few other people, it shouldn’t be too difficult to get 10 games of each matchup in.
The number of games a certain deck wins divided by the number of games played can be used in place of my estimated matchup figures, and will usually give a more accurate result.
In order to make our model more realistic, we have to eliminate as many of the assumptions that we made as possible. First and foremost we will have to add in every possible deck in the format and the proportion of the metagame that each will occupy. These will have to be rough estimates, and any unknown decks will have to be ignored.
To eliminate the assumption that every deck has the same cards, we can look at the matchup of your deck against versions of another deck that have teched for your deck, teched for a different matchup, or gone purely for consistency. A simple way to look at this is the mirror match, let’s say Donphan. Straight Donphan vs. Straight Donphan is a 50/50, Straight Donphan vs. Donphan teched for the mirror is a 40/60, and Straight Donphan vs. Donphan teched for another matchup is a 55/45. By using the equation from earlier, P(win) = Σ P(deck n) × P(win vs. deck n), we can figure out an overall matchup based on the proportions of Donphan players running each variant.
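As a quick sketch of that weighting in Python (the variant proportions here are hypothetical, purely for illustration, while the matchup values are the ones from the Donphan example):

```python
# Straight Donphan's win % against each Donphan variant (from the example).
win_vs_variant = {
    "straight": 50,       # mirror of identical lists
    "mirror-teched": 40,  # opponent teched for the mirror
    "other-teched": 55,   # opponent teched for a different matchup
}

# Hypothetical split of Donphan players across the variants (assumed
# figures, not from the article): 50% straight, 30% mirror-teched,
# 20% teched for another matchup.
variant_share = {"straight": 0.5, "mirror-teched": 0.3, "other-teched": 0.2}

# Overall mirror matchup = share-weighted average of the variant matchups.
overall = sum(variant_share[v] * win_vs_variant[v] for v in win_vs_variant)
print(f"Overall Donphan mirror matchup: {overall:.1f}%")
```

The same weighting works for any matchup where you can estimate how popular each variant of the opposing deck will be.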
If you don’t know the distribution of decks at a tournament and have no way to tell what it will be (for example, at the first event with a new set legal, or when traveling to a new area to play), you can look at the different distributions you think are possible and pick the deck that does best across most of them. Usually, sticking with the deck you are most comfortable with is the best option going into an unknown metagame, but if you are indifferent between several options, these equations can be an interesting way to pick a deck.
MAKING TOP CUT
Statistics is a fantastic way to analyze the data that we have accumulated thus far. For this section, I will talk about one of my favorite ways of playtesting and how to apply statistics to eliminate the busywork. Playing 5 timed games in a row of the same matchup can be a great way to look at how decks would do in a tournament against a not-so-great matchup. It also helps you test techs for certain matchups and see how a tournament setting would play out. Playing the same matchup over and over again might seem like a bad way to simulate an event, but I’ve found that testing your best and worst possibilities can be a great way to adjust your deck.
The example I’m going to use will be based on the Yveltal vs. Virizion/Genesect matchup from the last section. Yveltal had a 55/45 matchup before, but to add in ties I’ve taken 5% from each side, making the new probabilities P(win) = 0.50, P(loss) = 0.40, and P(tie) = 0.10. To “make top cut” in this simulation, we will assume that 10 match points is the cutoff.
Before getting to the calculations, we first have to come up with the different records that will result in at least 10 match points, with a win worth 3 points and a tie worth 1. These would be, written as (wins, losses, ties): (5, 0, 0), (4, 0, 1), (4, 1, 0), (3, 0, 2), and (3, 1, 1).
We now need to apply these different possibilities to our equation. For a scenario where we have 3 possible outcomes, win/lose/tie, we will be using a multinomial distribution equation, which is:

P(nW, nL, nT) = n! / (nW! × nL! × nT!) × pW^nW × pL^nL × pT^nT
n is our total number of games, which is 5. The rest of the numbers are denoted by the letter next to them: nW is the number of wins, pW is the probability of winning, and so on. By plugging each record worth at least 10 match points into the equation we get the probabilities of each of these outcomes. Then, by summing these together, we get our approximation of “making top cut.”
P(5, 0, 0) = 0.0313
P(4, 0, 1) = 0.0313
P(4, 1, 0) = 0.125
P(3, 0, 2) = 0.0125
P(3, 1, 1) = 0.1
By adding these together we get a final probability of 0.30, meaning that Yveltal will “make top cut” roughly 30% of the time against Virizion/Genesect in this scenario. This data can be useful if you expect a large amount of Virizion/Genesect and want to change your Yveltal deck to counter it. Seeing how changing matchups affects your “top cut chance” vs. different decks can be an interesting piece of data to look at; matchup percentages aren’t always the only thing you should focus on.
This is one of the concepts that is harder to apply directly to your game. If you have figured out the probabilities of each outcome, the multinomial distribution equation is a way to visualize how changes in your matchups due to teching or adding consistency play out over multiple rounds. You can simplify this setup by letting n be the number of games you expect to play against the other deck in an event. By calculating the probabilities of the possible win, loss, and tie outcomes you can set a goal or standard of wanting a certain record or better against another deck.
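The whole calculation can be scripted in a few lines. Here is a minimal sketch in Python using the 0.50/0.40/0.10 win/loss/tie probabilities from this example:

```python
from math import factorial

def p_record(n_w, n_l, n_t, p_w=0.50, p_l=0.40, p_t=0.10):
    """Multinomial probability of a specific win/loss/tie record."""
    n = n_w + n_l + n_t
    # Number of orderings of this record, times the probability of one ordering.
    coeff = factorial(n) // (factorial(n_w) * factorial(n_l) * factorial(n_t))
    return coeff * p_w**n_w * p_l**n_l * p_t**n_t

# Records worth at least 10 match points over 5 games (3 per win, 1 per tie).
records = [(5, 0, 0), (4, 0, 1), (4, 1, 0), (3, 0, 2), (3, 1, 1)]
p_top_cut = sum(p_record(w, l, t) for w, l, t in records)
print(f"P(top cut) = {p_top_cut:.4f}")
```

Changing `p_w`, `p_l`, `p_t`, or the records list lets you see how a tech that shifts a matchup changes your chances of hitting a target record.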
An easy-to-use spreadsheet that will automatically perform the above calculations for you is available here. Click “File” then “Make a copy” or “Download” to edit it and play around with different cell values for matchups and expected metagame composition.
This type of article was entirely new to me, but I had a fantastic time writing it and I hope that you all find the concepts I covered as interesting as I do. I wouldn’t consider myself an expert in any of these models; a lot of what I know about game theory I learned on my own time because it’s so interesting to me. Reading that some financial traders study Magic: The Gathering to explore advanced game theory sparked the idea for this article, so maybe as I learn more I can find similar applications for Pokémon!
If I used something incorrectly, please let me know so I can improve and possibly bring you all a more in-depth article later on. Any and all feedback is welcome as always; I especially like to hear from my readers when I try something new, both to gauge interest and to pick future topics with more confidence.
Good luck to everyone at their upcoming City Championships, especially with the marathons starting very soon. I’ll hopefully be able to attend the Chicago Marathon and maybe a few Ohio/Indiana Cities in early January. Winter Regionals are also coming up fairly soon, so hopefully I’ll be able to see you all in Virginia and Florida!
Until next time,
…and that will conclude this Unlocked Underground article.
After 45 days, we unlock each Underground (UG/★) article for public viewing. New articles are reserved for Underground members.
Underground Members: Thank you for making this article possible!
Other Readers: Check out the FAQ if you are interested in joining Underground and gaining full access to our latest content.