TCO win rate, NKFL, and Effective Testing.

I like statistics. I often look over my decks on TCO to see the number of games played and the win rates. One of my all-time favorite competitive decks has a win rate of 55% on TCO. Am I overvaluing the deck? Is TCO win rate a good indicator of success?

I set out to gather some data, not knowing where it would take me.


NKFL is a very complex format where very odd decks get a chance to shine. I am not entirely sure why I decided NKFL was the data I wanted, but that's the data I wanted. And once the data came in, I found it quite interesting. My intention was to compare the win rates of decks in NKFL with their win rates on TCO. I didn't limit the data, but obviously if a deck was only played 1-3 times in NKFL, the data for it isn't exactly high quality.

Data in web page

Data in Google Sheets

I took some information from the data set that I thought made sense. "Difference" is NKFL win rate minus TCO win rate. There is one outlier, which I suspect is the result of a single game with the deck, so I put the numbers without the outlier in brackets.

Max difference: 0.65 (0.31)

Min difference: -0.63 (-0.30)

Average difference: 0.00 (0.00)

Median difference: 0.01 (0.01)

Standard deviation: 0.18 (0.14)
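For readers who want to reproduce this kind of summary, here is a minimal sketch of how the numbers above can be computed. The deck names and win rates below are made-up placeholders for illustration, not the actual NKFL/TCO data set:

```python
import statistics

# Hypothetical (deck, NKFL win rate, TCO win rate) rows -- placeholder
# values for illustration, not the real data set.
decks = [
    ("Deck A", 0.75, 0.55),
    ("Deck B", 0.40, 0.48),
    ("Deck C", 0.50, 0.52),
    ("Deck D", 0.10, 0.73),  # suspected outlier: very few NKFL games
]

# Difference is NKFL win rate minus TCO win rate.
diffs = [nkfl - tco for _, nkfl, tco in decks]

print(f"Max difference: {max(diffs):.2f}")
print(f"Min difference: {min(diffs):.2f}")
print(f"Average difference: {statistics.mean(diffs):.2f}")
print(f"Median difference: {statistics.median(diffs):.2f}")
print(f"Standard deviation: {statistics.pstdev(diffs):.2f}")
```

Re-running the last five lines on a list with the suspected outlier removed gives the bracketed numbers.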

I am no mathematician or data analyst, but from my layperson's knowledge of statistics, I find this data interesting. An average difference of 0 means that, on average, TCO win rate is an accurate indicator of success in NKFL. However, TCO being correct on average doesn't mean much for a specific deck: just as many decks outperform their TCO win rate as underperform it.

Regardless, this data surprised me. I would have expected the overall NKFL win rate to be higher than the TCO win rate. When I play in a competitive league I tend to play with more focus; I bring my A game to NKFL.

Do you have any conclusions from this data?


The aforementioned favorite deck is Falconsight the Listlessly Eagle-eyed. My win rate with it on TCO is 55%, but it is hard to draw any conclusions from its 75% win rate in NKFL, since I have only played it 4 times there (it gets banned a lot). However, the deck also got a 3-1 result in KFPL, and its loss was to Nova playing Anteater, which is a terrible matchup for it.

The success I have with the deck in competitive play seems to suggest that I am right in considering it a competitive deck. Why then do I look at TCO win rates to determine if new decks are worth considering for competitive play?

I think the answer to that question is that I don't really have a better tool. Some teams run their decks through gauntlets of known decks to see how they perform. I don't have a regular playtesting group with which to do this exercise, so is TCO win rate the best I can do?

Effective Testing

One obvious way to test a deck's prowess is to play it in a tournament. But that is only an effective method after I have sifted through my collection for decks that I consider worthy of competitive play.

In episode 44 of Bouncing Deathquark, they covered their best idea for testing decks. It is somewhat Archon Standard focused, but it is a good basis for the discussion. Their premise is that when you are preparing for a tournament, you want to pick a deck as quickly as possible so you don't waste time getting reps on a deck you're not taking.

They don’t explicitly state this, but their approach requires a metagame assessment. Figure out what kind of meta you’re expecting to see, then decide what kind of deck you’d like to play against it. Next, find a deck that you think can do the thing you want it to do, and test specifically whether the deck does it. They caution against relying on TCO win rates: winning because you got lucky is not informative, and neither is losing because your opponent got lucky. For quality data, you want to see whether the deck can consistently pull off its game plan.

Back when the episode was recorded, we only had AoA and CotA, so meta assessment was easier and deck archetypes were limited. These days I find it nearly impossible to assess the meta: there are board decks, rush decks, combo decks, flood decks, and all sorts of niche archetypes like curia decks. Still, their method is sound.

I also think that picking a deck based on what you want it to do is a luxury not everybody has. Sometimes you just want to pick the best deck from your collection. Playing a deck a few times to get a feel for what it can do, and then testing how consistently it can do it, would be an acceptable alternative.

And if you’re not able to put your deck through a gauntlet of specific decks, do a test run of X games on TCO and take notes after each game:

Did the deck pull off the game plan?

Was I lucky?

Was my opponent lucky?

Was my opponent’s deck something I expect to face in a tournament?

Is the matchup a guaranteed win, favorable, unfavorable, or hopeless?

When you answer those questions, the quality of your data will improve, and you will be able to make better choices, in less time, about which decks to take to competition.
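One way to make those notes actionable is to tally them after the test run. The sketch below is my own invention, not something from the podcast: the field names and sample entries are hypothetical, and the filter simply discards games where either player got lucky or the opponent's deck was unrealistic for the expected meta.

```python
from collections import Counter

# Hypothetical per-game notes following the checklist above;
# field names and sample entries are invented for illustration.
games = [
    {"plan_worked": True,  "i_was_lucky": False, "opp_was_lucky": False,
     "realistic_opponent": True,  "matchup": "favorable"},
    {"plan_worked": False, "i_was_lucky": False, "opp_was_lucky": True,
     "realistic_opponent": True,  "matchup": "unfavorable"},
    {"plan_worked": True,  "i_was_lucky": True,  "opp_was_lucky": False,
     "realistic_opponent": False, "matchup": "guaranteed win"},
]

# Only games against realistic opponents where neither side got lucky
# say much about the deck's consistency.
informative = [g for g in games
               if g["realistic_opponent"]
               and not g["i_was_lucky"] and not g["opp_was_lucky"]]

consistency = sum(g["plan_worked"] for g in informative) / len(informative)
print(f"Informative games: {len(informative)}")
print(f"Game plan consistency: {consistency:.0%}")
print(Counter(g["matchup"] for g in games))
```

The point of the filter is the same one the podcast makes: lucky wins and lucky losses are noise, so they should not count toward the consistency number.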

Venture deeper into your collection

I’m personally going to take a closer look at some of my decks with win rates around 50% (or lower) and see if they have a place in any of my lineups. I hope you do the same. And good luck in upcoming tournaments!


Aurore is a competitive KeyForge player and the founder of Timeshapers. She's a content writer by trade and an aspiring game designer. Follow @Timeshapers1