The 11th Company 40K Podcast
Welcome to the 11th Company BLOG. The 11th Company is a Warhammer 40K podcast dedicated to players, strategies, and tactics.
You can download our episodes at the website, from iTunes, several podcast sites, or connect directly to the RSS feed. We try to release a new episode every Monday night. Check it out!
Forums: http://the11thcompany.freeforums.org
PODCAST WEBSITE: http://www.tangtwo.com/11thcompany
PODCAST RSS FEED: http://www.tangtwo.com/11thcompany/podcasts/the11thcompany.xml
Podcast Archive: http://the11thcompany.libsyn.com/webpage
Monday, November 22, 2010
Tiered Competitive Event Part 3b
Still trying to figure out the best way to group like-skilled players prior to an actual tournament! Last time, I ran some algorithms which would attempt to place 64 players into appropriate brackets (say Bronze, Silver, Gold, and Platinum) based on the skill level of each player. The idea, of course, is to put like-skilled players together into brackets of 16, where they can compete to win against players close to their own skill.
Read the previous post to see what the 4 algorithms were. Simply put, I wasn't really satisfied with any of them!
So, I decided to try a new algorithm this week to see what my results would look like as well as a serious modification to the old algorithms to try and get better numbers.
In a nutshell, here's how it works! Take 64 people and rate them from 1 to 64, where 1 is the worst player in the group and 64 the best. The idea is, if our preliminary matching works, when it's time for the actual tournament, players 1 - 16 will be in the bronze bracket, 17 - 32 will be in the silver, 33 - 48 in the gold, and 49-64 in the platinum. If it fails, players of non-like skill will be scattered all over the brackets.
So, the first new thing I wanted to try this week was adjusting battle points based on the proximity of the players to each other in skill level. The idea here is that players who are of NEAR LIKE skill shouldn't end up with much of a battle point spread, whereas players who are nowhere near each other in terms of skill should end up with a blowout. For example, if player #46 plays a game against player #50, the game should be pretty close, with battle points reflecting that. Whereas, if player #60 and player #8 play each other, it should be a complete blowout where player #60 gets full points and player #8 gets 0 points.
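For the curious, here's a minimal sketch (in Python) of one way to model that spread, assuming the 20-point max per game used below. The exact scaling from skill gap to score split is just for illustration:

import random

MAX_BP = 20             # max battle points per game
FULL_BLOWOUT_GAP = 32   # assumed: a skill gap this big or bigger is a 20-0 game

def battle_points(skill_a, skill_b):
    """Split battle points between two players based on their skill gap.
    Near-like skill splits the points almost evenly; a huge gap produces
    a 20-0 blowout. A little randomness keeps results from being
    perfectly deterministic."""
    gap = min(abs(skill_a - skill_b), FULL_BLOWOUT_GAP)
    # The winner's share scales from ~10 (even match) up to 20 (blowout).
    winner_bp = round(MAX_BP / 2 + (MAX_BP / 2) * gap / FULL_BLOWOUT_GAP)
    winner_bp = min(MAX_BP, max(0, winner_bp + random.randint(-1, 1)))
    if skill_a > skill_b:
        return winner_bp, MAX_BP - winner_bp
    return MAX_BP - winner_bp, winner_bp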
Using this theory, I changed the way the original 4 algorithms calculated battle points, which resulted in the following (the number in parentheses is the original % correct):
--- Percentages reflect the % of players correctly placed in that bracket
Algorithm #1:
Bronze: 80% (78.75%)
Silver: 50% (41.25%)
Gold: 58.75% (42.5%)
Platinum: 86.25% (75%)
Algorithm #2:
Bronze: 66.25% (60%)
Silver: 43.75% (35%)
Gold: 51.25% (33.75%)
Platinum: 71.25% (62.5%)
Algorithm #3:
Bronze: 78.75% (75%)
Silver: 57.5% (38.75%)
Gold: 61.25% (43.75%)
Platinum: 82.5% (76.25%)
Algorithm #4:
Bronze: 75% (65%)
Silver: 53.75% (41.25%)
Gold: 45% (42.5%)
Platinum: 66.25% (67.5%)
Here's some food for thought on this idea. First, all algorithms got better across the board, some with substantial gains in placement. This is completely expected because we are attempting to model "reality" now with battle points and not just a random occurrence. However, this "reality" has a few problems.....
..... First, we are assuming that somehow we have devised a set of missions that will lend itself to the idea of "spreading" battle points based on player skill.....
..... Second, we are assuming that players will play to their ability each game....
So, aside from those caveats, this appears to work much better than last time, but it's still not really satisfactory! Satisfactory would be 75% on average in every category!
So, I decided that I would incorporate the concept of "Player Choice" into a new algorithm we'll call Algorithm #5. Player Choice is the idea that players will have some "say" in which bracket they want to attempt to play in.
Imagine that you walk in the door of the tournament. I tell you that for Day 1, we are doing pre-lim matches that will decide placement into the actual tournament which will take place on Day 2. There will be 4 brackets, designed to group players of like skill together to ensure everyone has fun and a good chance at winning in their bracket. I tell you that before we start the pre-lim matches, I would like for you to tell me which bracket you think you fit into:
Bronze Bracket - A bracket reserved for new players or players with little to no game skill or experience. You expect to lose every game you play today!
Silver Bracket - A bracket reserved for players who know the game but don't consider themselves strong gamers. You expect to do your best today but definitely not win all your games.
Gold Bracket - A bracket reserved for players who play well but don't consider themselves "great". You might be the guy that knows the game well but doesn't consistently do well at tournaments. You expect to win most of your games today!
Platinum Bracket - A bracket reserved for players who consider themselves good players or who are looking for the challenge of playing against top gamers. You normally place well in tournaments and could go 4-0!
So, you place yourself! Thing is, I'm not so dumb as to believe that people will spontaneously or magically pick the right spot for themselves. First reason is that they don't know the other players very well. Second, people don't always do such a good job at being honest with themselves. :)
That being known, algorithm #5 takes over. First, I assign each player to a "chosen" bracket. The way this is done is by looking at the player #, where Player #1 is the worst and Player #64 the best, and having each player pick their own category. I do this by stating that each player has a 50% chance to choose correctly, a 35% chance to pick a category that is either 1 higher OR lower than their actual placement, and a 15% chance to choose a category that is either 2 higher or 2 lower than where they actually are.
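As a sketch, that self-selection step might look like this in Python (the clamping at the Bronze and Platinum edges is my own assumption; the simulation has to do something with off-by-two picks for players near the ends):

import random

def true_bracket(player_rank):
    """Map a skill rank (1-64) to a bracket index 0-3 (Bronze-Platinum)."""
    return (player_rank - 1) // 16

def chosen_bracket(player_rank):
    """Simulate a player picking their own bracket: 50% correct,
    35% off by one, 15% off by two, direction chosen at random."""
    offset = random.choices([0, 1, 2], weights=[50, 35, 15])[0]
    if offset:
        offset *= random.choice([-1, 1])
    # Assumed: picks that would fall off either end get clamped into range.
    return min(3, max(0, true_bracket(player_rank) + offset))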
From there, we match players in each chosen bracket for 4 games. We simply match the players in each bracket at random, recording battle points using the same system as above, where players of like skill are close in battle points and players with huge gaps create blowouts.
At the end, we try to determine a calculated bracket for each player. To do this, we examine their battle points, which should be representative of how well they did in their chosen bracket. If they have a lot of points, we can assume they might be playing against players they don't need to be playing with. Likewise, this is also true if they don't have very many. So, if a player has fewer than 10 battle points, we adjust their bracket down by 2; if they have more than 70 (where the max battle points to be earned each round is 20), we adjust it up by 2. If they have 11-25 or 55-70, we adjust one bracket down or up. Otherwise, they stay in the bracket they chose.
Lastly, we sort by Calculated Bracket and then by total Battle Points. This gives us an order between 1 and 64 for all players. We then break them into the 4 brackets with the first 16 being in the bronze bracket, the next 16 in silver, etc.
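Putting the adjustment and the final sort together, a sketch might look like this (a total of exactly 10 points isn't covered by the bands above, so I fold it into the "down one bracket" band):

def calculated_bracket(chosen, total_bp):
    """Adjust a player's chosen bracket (0-3) based on total battle
    points over 4 games (max 20 per game, so 80 total)."""
    if total_bp < 10:
        adj = -2           # badly outclassed: drop two brackets
    elif total_bp <= 25:
        adj = -1           # the 11-25 band (exactly 10 folded in here)
    elif total_bp > 70:
        adj = +2           # blew everyone out: jump two brackets
    elif total_bp >= 55:
        adj = +1
    else:
        adj = 0            # 26-54: the chosen bracket looks about right
    return min(3, max(0, chosen + adj))

def final_brackets(players):
    """players: list of (name, calculated_bracket, total_bp) tuples.
    Sort by calculated bracket, then by battle points, and deal the
    ordered list into four brackets of 16."""
    ordered = sorted(players, key=lambda p: (p[1], p[2]))
    return [ordered[i:i + 16] for i in range(0, 64, 16)]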
Algorithm #5:
Bronze: 78.75%
Silver: 60%
Gold: 68.75%
Platinum: 85%
So, I looked at this, and it is better than the other algorithms, but only ever so slightly. Mostly, it's better in the Silver and Gold brackets, which is where the weakness is. It is worth mentioning that some data sets placed 100% and 93% correctly in some categories, which hasn't happened yet with the other algorithms.
The next thing I wanted to test was whether I was being too pessimistic about people's ability to place themselves. So, this time I said players have a 70% chance to pick correctly, a 20% chance to misplace by 1 bracket, and a 10% chance to misplace by 2 brackets.
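In the sketch above, that's just a change to the weights:

# Optimistic self-placement: 70/20/10 instead of 50/35/15.
offset = random.choices([0, 1, 2], weights=[70, 20, 10])[0]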
Algorithm #5 Modified:
Bronze: 85%
Silver: 73.75%
Gold: 76.25%
Platinum: 86.25%
Wow! First time we got numbers that hit the satisfactory mark! Sweet! Problem is....
.... We are still relying on people to properly place themselves. The more accurately you place yourself, the better the matching system works. Problem is, it's complete speculation as to how well people can actually place themselves! What do you think people would do?.....
.... Still got the issues with battle points......
Monday, November 15, 2010
Tiered Competitive Event: Part 3a
So, I'm really interested now in how one could go about setting up a tournament format in order to appeal to the amateur competitor. Remember that the amateur competitor is basically defined as someone who wants to play in a competitive event but wants to play with people of like or equal skill.
This really should be something we strive for: pairing players of like skill against each other. This creates an environment that is both competitive and fun for all involved.
So, I've been putting some serious thought into how this might get accomplished. The biggest challenge, of course, is how does one determine the skill level of a player so that they can be matched together?
Well, the first thought that comes to mind is a preliminary event that must occur in order to rate and/or rank players. From there, we could make determinations about like skills.
So, I wrote a computer program which would randomly explore 3 types of preliminary matches for me and then produce a final ranking for placing players into an actual tournament. It runs the experiment 5 times and averages each algorithm's ability to properly place individuals in the actual tournament.
In less complex terms.... take 64 players and rank them from worst to best in terms of skill. Now, pretend that we want to run a tournament where we put "like skilled" players together. Let's say we are allowed to run 4 brackets. That puts 16 players in each bracket. If our algorithm for determining their placement works, players 1 - 16 (the worst to the 16th worst player in our group of 64) should all be in a bracket together. Likewise, players 49 - 64 should all be in a bracket of 16 together as the best players. Hope that makes sense!
So, the computer program I wrote is going to randomize and test 3 algorithms which seek to create 4 brackets of 16 as described above. It then calculates the % of players in each bracket that were properly placed. In other words, if the worst player in the lot somehow ends up in the "top bracket", this creates an error both in the top and bottom brackets. What we then measure is the % of players in each bracket that are properly placed. The algorithm which achieves the highest average % in each bracket is the superior yardstick for player placement.
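As a sketch, the scoring at the end of a simulated prelim might look like this in Python (players are represented by their true skill rank, 1-64, and final_order is the worst-to-best ordering an algorithm produces):

def placement_accuracy(final_order):
    """final_order: list of 64 true skill ranks, ordered worst-to-best
    by the algorithm's results. Returns the % of players correctly
    placed in each bracket of 16."""
    percents = []
    for b in range(4):
        placed = final_order[b * 16:(b + 1) * 16]
        should_be = set(range(b * 16 + 1, (b + 1) * 16 + 1))
        correct = sum(1 for rank in placed if rank in should_be)
        percents.append(100 * correct / 16)
    return percents  # [low, middle/low, middle/high, high]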
Each Algorithm assumes you will play 4 games as a prelim.
Algorithm #1 is a random algorithm based on W/L pairings. Round 1 is an entirely random matching of players. From there, Rounds 2, 3, and 4 randomly match players together based on their current W/L record. So, for example, in Round 2, everyone who won in Round 1 is randomly matched, as is everyone who lost in Round 1. In Round 3, everyone who is currently 2-0 gets randomly paired, everyone who is 1-1 gets randomly paired, and everyone who is 0-2 gets randomly paired. Also, a random amount of battle points is assigned each round: 0-9 for the loser and 11-20 for the winner.
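A sketch of that round-by-round grouping, assuming players' records are tracked in a simple dict (battle points left out for brevity):

import random
from collections import defaultdict

def pair_round(records):
    """records: dict mapping player -> number of wins so far.
    Group players by record, then randomly pair within each group.
    With 64 players, every record group stays even-sized."""
    groups = defaultdict(list)
    for player, wins in records.items():
        groups[wins].append(player)
    pairings = []
    for group in groups.values():
        random.shuffle(group)
        pairings.extend(zip(group[::2], group[1::2]))
    return pairings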
Algorithm #2 pairs individuals based on W/L records and battle points accrued, using a seeding algorithm. Round 1 is entirely random. Also, each round, battle points are assigned: 0-9 for the loser and 11-20 for the winner. This is important because those battle points are then used to seed the following rounds. From there, like W/L records are used, but instead of being randomly paired, the player with the most battle points in the group is paired with the player with the fewest battle points in the group.
Algorithm #3 is the same thing as Algorithm #2, except instead of seeding top-versus-bottom, the top two players in each group are paired, then the 3rd and 4th players in each group are paired, etc.
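Sketched out, the only difference between the two is how a battle-point-sorted group gets paired:

def pair_seeded(group_sorted):
    """Algorithm #2: best vs worst within a like-record group
    (group_sorted is ordered by battle points, best first)."""
    n = len(group_sorted)
    return [(group_sorted[i], group_sorted[n - 1 - i]) for i in range(n // 2)]

def pair_adjacent(group_sorted):
    """Algorithm #3: 1st vs 2nd, 3rd vs 4th, and so on."""
    return list(zip(group_sorted[::2], group_sorted[1::2]))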
No science experiment is complete without a control. The Control Algorithm in this case is randomly pairing each player, every round, and at the end, assigning them to brackets based on W/L record and Battle Points just like the other algorithms do.
The results of all of this are actually quite disheartening! Take a look.
Algorithm #1 Average Correct Placement %:
- Low Bracket: 78.75%
- Middle / Low Bracket: 41.25%
- Middle / High Bracket: 42.5%
- High Bracket: 75%
Algorithm #2 Average Correct Placement %:
- Low Bracket: 60%
- Middle / Low Bracket: 35%
- Middle / High Bracket: 33.75%
- High Bracket: 62.5%
Algorithm #3 Average Correct Placement %:
- Low Bracket: 75%
- Middle / Low Bracket: 38.75%
- Middle / High Bracket: 43.75%
- High Bracket: 76.25%
Control Algorithm Average Correct Placement %:
- Low Bracket: 65%
- Middle / Low Bracket: 41.25%
- Middle / High Bracket: 42.5%
- High Bracket: 67.5%
So, Algorithm #1 and #3 are almost identical in results. Algorithm #2 and the Control show no significant differences. It would appear that all algorithms are much better at placing players into the Lowest and Highest brackets, but the middle brackets are a total crap shoot.
In all, I'm not really satisfied with ANY of the results. I would like to be able to correctly place around 75% (or 12 out of 16) in EACH bracket. Otherwise, it's not a very reliable method for placing people in the actual tournament. 12 out of 16 means that 4 players in every bracket have been displaced.
Any ideas out there?
Monday, November 8, 2010
#1 Tip for Improving Your Game?
Simple. Know the rules. The first tip we ever issued on the Podcast, episode #1, was RTFM (read the rulebook, to put it in less abusive language :) )
-----------------------------------------------------
So before getting into WHY this is going to improve your game, let's cover a few obligatory topics.....
First and foremost, you are most likely not a genius or a memory master, and just like me, you probably don't have the time to drill 40K rules on a daily basis to ensure that you could compete on a 40K game show the way this guy does sports trivia: (http://www.youtube.com/watch?v=uuZ_0jx0oGw).
So, accept your human limitations, and go ahead and get all the "boo hoo, I don't have time to memorize rules" out of your system. Nobody does! You are not a computer, and no one expects you to be. You never will be. This is something that chess players figured out recently. (Computers versus Chess Players). Second, that guy at your local store who seems to "know it all".... he doesn't. So, just because he says it doesn't mean he is right. Look it up yourself. Also know that the more you look things up, the more apt you are to remember them. Practice makes perfect.
Third, know that there is a seemingly limitless supply of ambiguous and contentious rules in 40K. Next time you are really bored, do some research on the topic. The more familiar you are with these ambiguous rules and how they get resolved at large, the better you know the rules. Large FAQs like the INAT are a good place to start. Don't take anyone's word for it on a rules ruling, but you most certainly should discuss rulings with your play group. Also know how contentious and ambiguous rules will be resolved at any event you might want to attend. (At least be prepared for some core rules to get changed on you in the middle of the game by a judge. That's also just a fact of 40K life.)
Lastly, before I really start discussing how knowing the rules is going to help your game, beware the "this is the way we've always done it" mentality. Lots of people play this game in a certain fashion because they didn't update themselves on the new editions. It's okay to house rule things, but know what is and is not a house rule in your local club. This could be a big problem if no one even knows that they are using a house rule.
-------------------------------------------------
So, how is knowing the rules well going to improve your game? First, it'll make the game go by faster since you aren't stopping to look things up every few minutes. This will let you get more games in during an allotted time, which equals more practice!
Aside from that, let's examine what is at the core of any strategy game. The foundation of any game is decisions. You make decisions to reach an outcome. The assumption that strategy games build in is that there is such a thing as a "correct" decision, or more often than not in strategy games, a "most correct" decision given a certain circumstance. Theoretically, if you always make the "most correct" decision in every circumstance, you win! Some things disrupt this in 40K, though. The first, most obvious disruption is dice rolling. Since dice are random, it is quite possible to always make the "most correct" decision and still lose a game because your dice go sour. Another big kicker in 40K is differences in codices. Sometimes, the "most correct" decision in any given circumstance might still net you a loss because you have a bad match-up. Scenarios are another great example. It's exactly this randomness, though, that makes 40K more entertaining in a lot of ways than chess, because the decision trees are much, much longer. You also can't simply study and memorize them.
So, what's a decision tree? A decision tree is a long list of this... "if I do this, and then this, then my opponent will do this, which means I will respond with this, to which he will respond with that.... " and so on. It's a tree because you take a fork every time a decision is made, and the combination of all the decisions made will give you the outcome of the game! If you were a computer, like a chess computer, you would calculate billions of outcomes in a matter of seconds and choose the path that leads to victory! You aren't though! :)
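For the programmers out there, a game tree search is just a recursive "try every move, assume the opponent replies with his best move" loop. Here's a toy sketch; the state methods (moves, apply, score, game_over) are hypothetical stand-ins, and 40K's branching factor makes this hopeless in practice, which is exactly the point:

def best_decision(state, depth, my_turn=True):
    """Toy game-tree search: walk every fork of "if I do this, he does
    that" down to a fixed depth and pick the path with the best score."""
    if depth == 0 or state.game_over():
        return state.score(), None
    best_value = float("-inf") if my_turn else float("inf")
    best_move = None
    for move in state.moves():
        value, _ = best_decision(state.apply(move), depth - 1, not my_turn)
        if (my_turn and value > best_value) or (not my_turn and value < best_value):
            best_value, best_move = value, move
    return best_value, best_move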
So, what does this have to do with knowing rules? Well, you should always be striving to make the "most correct" decision in every CIRCUMSTANCE. A circumstance is defined by the rules of the game! Think about it. The position of your models, what they can do, what will happen if you shoot, move, or assault, and even what your opponent will do to react to you is all defined by the rules of the game. If you don't know the rules, you can't fully analyze your situation, otherwise known as your circumstance, and thus, you won't be able to find the "most correct" decision because you haven't fully defined the circumstance.
Simply put, the more rules you know, the better able you are to see and comprehend what the best decision to make is because you more fully comprehend the situation you are in.
Here's an analogy. Let's suppose you are blind. Now, you can rely on your other four senses to help you negotiate the world. You can smell, touch, taste, and hear. You can use those four senses to help you make decisions about many things in your world. However, these decisions won't help you know to "duck!!!" when someone tosses a dodge ball at you. The thing is, you know what ducking is. You know that it is an appropriate reaction when something is heading for you, but what you DON'T KNOW, is that the circumstance you are currently in might warrant ducking as the "most correct" decision to be made because you can't see the oncoming ball.
Here's a 40K example. If you don't KNOW that Orks can declare a WAAAGH! once per game that gives them fleet, you might think you are SAFE from assault when you're 13 inches away from a big mob. You won't even find out you were wrong until next turn, when your opponent is on your poor marines like white on rice in a snowstorm because of his WAAAGH!. Had you known about the WAAAGH! move, you probably could have made the "more correct" decision, which would have been to GET AWAY! (or DUCK, as in the previous analogy).
Often, I get asked, how do you think TACTICALLY? The answer is actually a lot simpler (as it always is in life) than you might first think. Thinking tactically is simply making the best decision possible in a given situation. The only way you can do that is to fully comprehend your situation or circumstance. In the game of 40K, the only way to truly comprehend your situation is to know the rules to the best of your ability.
Quiz yourself. Challenge yourself to answer questions, and then, check the book to see if you are right. These are all great exercises to help out.
Friday, November 5, 2010
Building an Argument for Balance Part 3: Exploring the Arbitrary System
So, last time I posted about this topic, I had created a system for arbitrarily assigning point values to units in a Codex in an effort to empirically examine units to discover if balance exists.
To recap, why are we doing this? The answer is simple. We can argue till we are blue in the face about our opinions as to why things are or are not balanced, but none of those arguments will have empirical, reproducible results behind them (i.e. actual evidence). So, instead of approaching this conversation from a "but I knowz DA IGz da bestest cuz my BFF tablez me every thyme we playez" standpoint, it makes a lot more sense to spend energy developing a system that we can approach from a measurable standpoint. Then, we can expend our energy arguing the system into an acceptable state rather than spinning in circles. If we build our system correctly, eventually, point gaps will exist which show clear distinctions (provided there actually are any, which we don't know for a fact), and these will hopefully overcome the invariable "nit picking" that will be associated with the system.
So, last time I came up with a system, built with very little thought, which allows us to make actual measurements and can be reproduced. What we need to do next is start arguing about the validity of it (the subjective piece in all this). And boy howdy, are there a lot of problems with that system!
Before I get started, I went ahead and used that points system to break down several troop choices across a few codices just to see what the system produced. Here are the results (note that I am not including all the calculations because that would be far too long, just the results):
Space Marines Troops Analysis:
Tactical Squad
Movement: 1 Point
Shooting: 12 Points
Assaults: 0 Points
Armor: 4 Points
Stats: 3 Points
Special Rules: 8 Points
Total: 28 Points
Scout Squad
Movement: 1 Point
Shooting: 15 Points
Assaults: 0 Points
Armor: 2 Points
Stats: 1 Point
Special Rules: 11 Points
Total: 30 Points
Space Marine Troops Average Score: 29
Space Wolves Troop Analysis:
Grey Hunters:
Movement: 1 Point
Shooting: 37 Points
Assaults: 2 Points
Armor: 4 Points
Stats: 3 Points
Special Rules: 8 Points
Total: 55 Points
Blood Claws
Movement: 1 Point
Shooting: 32 Points
Assaults: 3 Points
Armor: 3 Points
Stats: 1 Point
Special Rules: 10 Points
Total: 50 Points
Space Wolves Troops Average Score: 52.5
Necron Troop Analysis:
Warriors:
Movement: 1 Point
Shooting: 9 Points
Assaults: 0 Points
Armor: 4 Points
Stats: 3 Points
Special Rules: 2 Points
Total: 19 Points
Necrons Troop Average Score: 19 Points
Chaos Space Marines Troop Analysis:
Chaos Space Marine:
Movement: 1 Point
Shooting: 37 Points
Assaults: 5 Points
Armor: 4 Points
Stats: 3 Points
Special Rules: 0 Points
Total: 50 Points
Khorne Berserker:
Movement: 1 Point
Shooting: 0 Points
Assaults: 13 Points
Armor: 3 Points
Stats: 5 Points
Special Rules: 4 Points
Total: 26 Points
Noise Marines:
Movement: 1 Point
Shooting: 1 Point
Assaults: 0 Points
Armor: 3 Points
Stats: 5 Points
Special Rules: 1 Point
Total: 11 Points
Plague Marines:
Movement: 1 Point
Shooting: 34 Points
Assaults: 0 Points
Armor: 3 Points
Stats: 5 Points
Special Rules: 4 Points
Total: 47 Points
Thousand Sons:
Movement: 0 Points
Shooting: 0 Points
Assaults: 0 Points
Armor: 6 Points
Stats: 2 Points
Special Rules: 3 Points
Total: 11 Points
Summoned Daemons:
Movement: 1 Point
Shooting: 0 Points
Assaults: 1 Point
Armor: 1 Point
Stats: 2 Points
Special Rules: 5 Points
Total: 10 Points
CSM Troops Average: 26
Average of all Codices: 31 Points
Standard Deviation of All units: 16 Points
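For anyone who wants to reproduce the summary numbers, here's a quick sketch using the unit totals above (population standard deviation, which matches the rounded figures quoted):

from statistics import mean, pstdev

# Unit totals from the analysis above.
totals = {
    "Tactical Squad": 28, "Scout Squad": 30,
    "Grey Hunters": 55, "Blood Claws": 50,
    "Necron Warriors": 19,
    "Chaos Space Marines": 50, "Khorne Berserkers": 26,
    "Noise Marines": 11, "Plague Marines": 47,
    "Thousand Sons": 11, "Summoned Daemons": 10,
}

avg = mean(totals.values())    # ~30.6, the 31 points quoted above
sd = pstdev(totals.values())   # ~16.4, the 16 points quoted above
outliers = [n for n, t in totals.items() if abs(t - avg) > sd]
print(f"average={avg:.1f}, stdev={sd:.1f}, outliers={outliers}")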
This post is long enough as is, but here are some things to chew on while you look at this analysis. Remember, the goal here is to point out areas of glaring error so they can be corrected.
1) Necrons, Space Marines, and Chaos Space Marines are all within 1 standard deviation of each other. Space Wolves are not. Will this change if we include troop choices from other dexes? I think so! Necrons will likely fall outside of 1 standard deviation (maybe even 2), and Space Wolves will likely be 2-3 standard deviations out as well. Standard deviation is a good way to detect outliers. In our case, units that fall within a single standard deviation are "close to each other", while those outside of that are likely less balanced.
2) It would appear that, overall, there is a problem in the shooting category, as some units get very high scores there and others get very little. The cause of this is their ability to obtain cheap melta guns or combi-meltas in amounts of 2-3; they pick up points for killing or damaging vehicles since we have no range restrictions in our calculations. Likewise, it would also appear that we may not be giving enough credit for killing infantry, as evidenced by Thousand Sons getting no points in shooting despite having AP 3 bolters.
3) We may not be assigning enough points for the assault phase. There are many problems here, not the least of which is that there's no calculation for sweeping advance potential, causing morale checks, fearless wounds, etc. We all know that the assault phase is generally more "killy" than the shooting phase, but this is not necessarily represented well. Only Khorne Berserkers score any significant amount, and the points earned do not balance with shooting potential. Should they balance? I'm not sure on that at the moment, considering that most people "prefer" the shooting phase.
4) Noise Marines and 1K Sons are certainly outliers. Our system does not appear to be appealing to any of their strengths. The only other alternative is that they do not have very many significant strengths. Which is it? What is the system missing?
5) Wolves got VERY high scores, but this is because they dominated the shooting category due to multiple meltas + combi-meltas being able to kill vehicles.
There's more, but I will analyze further later!