The Observer Effect

A Follow-Up to ‘In Defense of Net Decking’
“Through the magic of the World Wide Web …”

The quality of Pokécontent has improved substantially since I started playing Pokémon less than two years ago. Something of an arms race is underway as ProPokemon, 60Cards, PokéBeach, and SixPrizes compete to produce better content and justify their subscription-based business models. Similarly, the video content produced by their partners, Squeaky Marking and Some1sPC, demonstrates the pressure to evolve content platforms as the market matures. I suspect that few players will pay to join multiple subscription-based communities, so the battle has some characteristics of a zero-sum game: the platform that offers players the most value should capture the vast majority of the available revenue.

One of the central features of these subscription sites is the decklists. You can tell how important the lists are because the list is the most frequent point at which a preview article is interrupted by the paywall. This is recognition that many readers are simply looking for strong lists to play, rather than thrilling play-by-play of prior tournaments or even nuanced discussion of technique. Frankly, the challenge here may simply be that discussing technique is incredibly difficult work. For excellent list builders, generating lists is pretty easy. For far too many players, describing decisions about one card over another, or even the appropriate play at a given moment, relies more on gut feel (frequently informed by years of experience) than on data-driven analysis.

How easy is list generation? So easy for many that they can approach it almost as gamesmanship. An article such as “Ray G. Biv” – Nine Different Builds of Mega Rayquaza-EX (categorized under Fun on SixPrizes) is simply a slew of lists in which every type of Energy is used to build a Mega Rayquaza deck.

This led me to wonder: Are the lists people are viewing good? Or, more to the point: Do the people who provide these lists view them as good? To find out, I did some analysis of the data.

Research: Who wrote prior to Nationals?

Let’s gather some info …

As in my last article, I used data accumulated from Facebook to look at decks played by the Top 64 players at the US National Championship. I then scoured the web for articles written by players who placed in the Top 64. I found four players in the Top 64 who had written articles immediately prior to the tournament in which they offered up decks they thought of as “good plays” for the National Championship.

The authors and articles (in alphabetical order) were:

This isn’t a lot of data, but I noticed that Chase Moloney had written an article prior to Canadian Nationals and I, along with the entire universe, knew he ran Metal, so I opted to include that in my sample.

Admittedly, I may be missing people in the Top 64 who published articles I was simply unaware of. Also, many articles were published by people who finished outside of the Top 64. It is somewhat revisionist history to judge the articles of the Top 64 players while excluding the others. Having said that, this lens focuses on whether successful players play the decks they recommend at tournaments, rather than on lists recommended by other players. And although I recognize that these tournaments are very challenging and all of the articles were written by competent and more-than-competent players, few net deckers feel regret over not playing a deck when that player or deck failed to perform well.

Results: What decks did those writers play?

  • Brit Pybas: Brit ran Toad/Garb at the tournament but did not highlight it as a potential play in his article.
  • Chase Moloney: Chase ran Metal at the tournament but did not highlight it as a potential play in his article.
  • Dylan Bryan: Dylan ran Klinklang at the tournament but did not mention it in his article. In his post-Nats interviews, videos, and articles, he has been fairly forthcoming that he picked that deck the morning of the tournament: he arrived in Indy hyping Metal/Rayquaza (which he indeed hyped in his article) while secretly considering Primal Kyogre, which is absent from his article. He has also talked about how he concluded that Metal/Ray was a better play than Primal Kyogre (so he genuinely believed his own hype) before somewhat spontaneously settling on Klinklang.
  • Dustin Zimmerman: Dustin ran Metal, and in his article he included a Metal list and talked about how much he had been testing it! While I had access to the list he provided on the site, I do not have access to the list he played, so I cannot compare whether the two are similar.
  • Kevin Baxter: Kevin ran Primal Groudon and highlighted Primal Groudon as one of his three big plays. I do not have subscriber access to 60Cards, so I have neither his tournament list nor the list he offered on 60Cards.
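For the curious, the 60% figure discussed below is just a tally over these five cases. A quick sketch (the booleans simply restate the bullet list above; the structure is illustrative, not part of the original analysis):

```python
# Did the deck each writer played at US Nationals appear as a highlighted
# play in their pre-tournament article? (per the bullet list above)
results = {
    "Brit Pybas": False,       # played Toad/Garb; not highlighted
    "Chase Moloney": False,    # played Metal; not highlighted
    "Dylan Bryan": False,      # played Klinklang; not mentioned
    "Dustin Zimmerman": True,  # played Metal; included a Metal list
    "Kevin Baxter": True,      # played Primal Groudon; one of his three big plays
}

# Count the decks that went unidentified before the tournament.
unidentified = sum(1 for highlighted in results.values() if not highlighted)
rate = unidentified / len(results)
print(f"{unidentified}/{len(results)} decks went unidentified ({rate:.0%})")
# → 3/5 decks went unidentified (60%)
```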


Players are constantly shaping — and reshaping — the meta.

So in 60% of cases, the decks that propelled authors to Top 64 finishes went unidentified in their pre-Nationals articles. One could leap to the conclusion that they are “holding out on us” or trying to game the meta by misleading players. Further, one could cynically conclude that many of the decks suggested ahead of tournaments (as distinct from decks published afterward) are weak or flawed in some way. I am incredibly optimistic about human nature and am not inclined to believe there is any devious behavior here. I think this speaks to the complexity of the metagame. What it really drives home for me is that players at the top of the game can simply pick up a deck, rapidly tweak and optimize the list, and play it incredibly well.

For these players, the key decision was the seminal moment of asking “What deck should I play?” rather than optimizing individual card counts. Micro-optimization is easy for great players (hence list generation is easy for them). Thus success at the tournament, particularly in a rock-paper-scissors meta, rests on choosing a platform that offers the optimal matchups against the anticipated field.

For some players, that may mean a last-minute change of deck based on information they become aware of in the moments immediately before a tournament begins. Fortunately for them, their skill in deck building and in-game play is such that once the decision is made, they can have remarkably successful tournaments with a deck chosen on market intelligence to ensure its matchups are strong across the board.

One more point deserves notice: this tournament may not be a representative sample. As almost every player knows, with so little understood about the meta in the wake of the banning of Lysandre’s Trump Card, none of the Masters I spoke with felt they had picked and locked a deck by Thursday. The number of players who picked a deck overnight before Friday seemed extremely high. While I may simply have been receiving misinformation, I am more inclined to think this was in fact the case.

I know that even in the Junior meta, while my youngest son had locked in on a deck in mid-June (Toad/Bats), my oldest son did not dial in his deck until just a day or two before the tournament — and even then he was open to a convincing argument by a trusted Master that he should switch. To me this indicates that as players grow more sophisticated in their thinking about the meta and feel more confident in playing a diversity of decks, the focus on being nimble in response to the meta becomes more important.

When I look back at Massachusetts Regionals, where the meta was comparatively well defined, the Masters I was in contact with were much more aggressive about picking their decks in advance, and we too had locked in our decks well before the tournament.

Micro-optimizations of decks (the individual card counts) were unfortunately not possible to analyze in this study; they remain a fruitful area for future research.


Image: Bulbapedia
The inelastic net-decker is lost in the flow of the format.

Net-deckers beware: the meta is an ever-changing landscape, and the advice top players provide is merely a snapshot of their opinions, and of best practices, at a fixed point in time. Much like the observer effect in quantum physics, the act of measuring and describing the meta changes it. Had Chase Moloney written a long article on Metal and its virtues before Canadian Nationals, attendees might well have tweaked their decks to counter Metal, causing Chase to run a different deck in response. One cannot expect top players to blindly play the concepts they held before a tournament if they see the meta shifting. That would be counter to the very approach that made them top players in the first place.

Many of these decks are intended to be strong plays, or at the very least to provide a strong framework for playtesting matchups. Having said that, there is no substitute for developing solid in-game skills and an understanding of the meta. These lists can also inform your thinking about the meta: if you are looking for strong counters to a meta concept, scouring decklists can shape that thinking in a constructive way.

On the other hand, if you think that by copying a deck from a website you will be playing the same deck as a well-known player, you are probably mistaken. The best players playtest constantly and are always evolving their thinking about which decks to play and the exact composition of those decks.
