Comments on Advanced Football Analytics (formerly Advanced NFL Stats): Playoff Probabilities: Week 9

---
**Sam's Hideout** (2011-11-08 21:39):

As I see it, there are three things I want to do with this program:

1. Estimate the seeding slate for each conference (i.e. division winners and wild cards; this one will sample instead of iterating over all configurations).

2. Finish the computation of division winners (i.e. generalize the code to iterate over all divisions, instead of concentrating only on the AFC South).

3. Compute the seeding slate for an entire conference exactly (not sampled). This should be practical once there are fewer than about 40 games left in the season per conference, so around week 12 or 13; I've got a few weeks to work on this.

Chris, assuming I get #1 or #3 done, would you be interested in collaborating? (Actually, just having someone to discuss issues with and to check my understanding would be fantastic!)

---
**Sam's Hideout** (2011-11-07 18:51):

Ian: It's encoded in my comment about joint probabilities.

E[g(X)] = sum[all x in X] g(x) f(x)

where
X = the random variable for NFL seasons
g(x) = the goal we're interested in, e.g. "Team A wins the division"
f(x) = the probability mass function: the probability that a particular season x occurs

Now, I'm attacking this directly by taking a series of N samples of X (x_1, x_2, ...
x_N) and computing f(x_i) g(x_i), i.e.

E[g(X)] ~ (1/f_s) sum[x in X_s] g(x) f(x)

where X_s is the set of N samples of X, and f_s = sum[x in X_s] f(x).

This is basically Monte Carlo integration.

This was a good exercise, since now I know exactly what I'm computing!

And this leads me to understand that what Chris is doing is Monte Carlo simulation with importance sampling, so he should need fewer samples than I do to converge on the correct answers.

---
**Ian Simcox** (2011-11-07 04:36):

Sam - sorry, brain's being slow again. How do I use a random 0 or 1, each with probability 0.5, to simulate a result where a team wins 40 percent of the time?

---
**Sam's Hideout** (2011-11-06 23:05):

Importance sampling done correctly actually speeds up convergence. I'm not quite sure I'm doing it correctly, though :-) (it's been over 20 years since college...)

---
**Chris** (nfl-forecast.com, 2011-11-06 10:21):

Sam -- OK, now I get it. That's interesting. I can see where it would be much faster to generate x simulations. So for me, the interesting case is when you are running some kind of MC scheme to estimate the probabilities when the sample space is too large to sample exhaustively.
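Reading Sam's description as self-normalized importance sampling with a uniform (coin-flip-per-game) proposal, his estimator can be sketched in a few lines. The matchups and win probabilities below are invented for illustration; this is one possible reading of his scheme, not his actual code:

```python
import random

# Hypothetical remaining games as (team_a, team_b, P(team_a wins)).
GAMES = [("A", "B", 0.60), ("C", "D", 0.70), ("A", "C", 0.55)]

def sample_outcome(rng):
    """Pick every game's winner by a fair coin flip (uniform over all
    configurations) and return the winners together with the true joint
    probability f(x) of that configuration."""
    winners, f = [], 1.0
    for a, b, p in GAMES:
        if rng.random() < 0.5:
            winners.append(a)
            f *= p
        else:
            winners.append(b)
            f *= 1.0 - p
    return winners, f

def estimate(g, n_samples=200_000, seed=1):
    """Sam's estimator: E[g(X)] ~ sum g(x)f(x) / sum f(x) over the samples."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n_samples):
        x, f = sample_outcome(rng)
        num += g(x) * f
        den += f
    return num / den

# Probability that team "A" wins both of its games; exact answer is 0.60 * 0.55 = 0.33.
p_a = estimate(lambda w: w[0] == "A" and w[2] == "A")
```

Normalizing by the summed weights (f_s in Sam's notation) is what makes the estimate correct even though the proposal distribution is uniform rather than f itself.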
So the difference between the two schemes is that your method samples the entire probability space uniformly, while my method samples the most likely areas of the probability space most intensely. I wonder if your method would require extra simulations in order to get "enough" samples in the most relevant regions? It still might be faster, even if it does.

---
**Sam's Hideout** (2011-11-06 05:06):

Chris: I decide who wins each game, then multiply all the winners' win probabilities together to find the joint probability of that particular configuration of winners, then accumulate the joint probabilities for each winner. At the end, the total probability space is divided into the accumulated probabilities for each team. (I think... right now this is moot, since I iterate over all configurations, so the total probability = 1 and the method is obviously correct.) I think this is a form of importance sampling.

Gah, the strength-of-victory tiebreaker looks to be annoying to implement. Right now my only handling of three-way ties is to detect that there is one.

---
**Chris** (nfl-forecast.com, 2011-11-06 01:15):

Things get a whole lot more complicated when you start looking at the full playoff seedings 1-6 in each conference. Strength-of-victory tiebreakers routinely become important.

Not to mention three-way ties, which sometimes require multiple iterations through the tiebreakers. The logic is very tricky.

I don't do any tiebreakers past strength of schedule. I just flip a coin at that point.
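A cascade like the one Chris describes (apply tiebreakers in a fixed order, and coin-flip past the last one handled) might be structured like this. The category list and the records below are simplified, hypothetical stand-ins, not the full NFL rule book:

```python
import random

# Simplified two-club tiebreaker order, in the spirit of Chris's description.
ORDER = ("head_to_head", "division", "common_games",
         "conference", "strength_of_victory", "strength_of_schedule")

def break_tie(a, b, pct, rng):
    """Return the team that wins the tiebreak.
    `pct` maps (team, category) -> winning percentage for that category."""
    for category in ORDER:
        wa, wb = pct.get((a, category)), pct.get((b, category))
        if wa is None or wb is None:
            continue                       # stat not tracked: skip this step
        if wa != wb:
            return a if wa > wb else b     # first category that separates them
    return a if rng.random() < 0.5 else b  # still tied past the list: coin flip

# Hypothetical records: head-to-head is split, division record decides it.
pct = {("NO", "head_to_head"): 0.50, ("TB", "head_to_head"): 0.50,
       ("NO", "division"): 0.75, ("TB", "division"): 0.50}
winner = break_tie("NO", "TB", pct, random.Random(0))  # decided at "division"
```

Three-way ties are the hard part Chris alludes to: the real procedure restarts the whole cascade among the remaining clubs whenever one club is eliminated, which this two-club sketch does not attempt.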
If the playoff race is interesting to me, I will go through all of the tiebreakers manually and figure out who has the advantage in any tiebreaker that goes beyond strength of schedule, but those are truly rare instances.

I consider the probability of individual game outcomes as predicted by Brian's team efficiency ratings. In that case, each team has a continuous decimal probability of winning each game that can range from 0 to 1. Random bits won't resolve that; you need random numbers.

---
**Sam's Hideout** (2011-11-06 00:43):

Ian: in this Monte Carlo simulation, you only need a binary decision for each game, so (pseudo)random bits (i.i.d. uniform with probability 0.5) are all that's necessary. Modern stream ciphers are a pretty good way to generate random bits fast.
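Sam's stream-cipher suggestion (a key-dependent pseudo-random bit stream, XOR-ed with the plaintext when used as a cipher) can be illustrated with a toy keystream. The xorshift64* generator below is NOT a real cipher and has no cryptographic strength; it only shows the shape of the idea. Real designs come from efforts like the eSTREAM project Sam links to later:

```python
MASK64 = 0xFFFFFFFFFFFFFFFF

def keystream_bits(key, n):
    """Toy keystream: xorshift64* seeded with the key, emitted LSB-first.
    Illustrative only -- a real stream cipher is designed far more carefully."""
    state = (key & MASK64) or 1
    bits = []
    while len(bits) < n:
        state ^= (state << 13) & MASK64
        state ^= state >> 7
        state ^= (state << 17) & MASK64
        word = (state * 0x2545F4914F6CDD1D) & MASK64
        for i in range(64):
            bits.append((word >> i) & 1)
    return bits[:n]

def xor_encrypt(plaintext: bytes, key: int) -> bytes:
    """Stream-cipher shape: ciphertext = plaintext XOR keystream.
    Applying it twice with the same key recovers the plaintext."""
    ks = keystream_bits(key, 8 * len(plaintext))
    out = bytearray()
    for j, byte in enumerate(plaintext):
        k = sum(ks[8 * j + i] << i for i in range(8))
        out.append(byte ^ k)
    return bytes(out)

msg = b"monte carlo"
ct = xor_encrypt(msg, key=0xDEADBEEF)
rt = xor_encrypt(ct, key=0xDEADBEEF)   # XOR with the same keystream again
```

For the simulation use case, only `keystream_bits` matters: a fast, deterministic, seedable source of bits, one per game decision.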
---
**Sam's Hideout** (2011-11-06 00:38):

[I originally had a longer response here, but...]

I can see you've spent significant time on your UI -- mine is currently a text editor, where I adjust code and data tables and recompile.

I currently handle the first two within-division tiebreakers (head-to-head and best in-division record). I detect when those aren't enough, and so far the remaining ties amount to under 1% of probability. Adding even this much tiebreaking handling did significantly increase run time; I didn't keep records, but I'd place it around 50%. Since the other tiebreakers occur so rarely, I can afford for them to be relatively expensive, though the next two in order seem straightforward and fast. The fifth (strength of victory) I had no idea about (googling just came up with a definition...), and the sixth (strength of schedule) I think I know, but a better explanation would be appreciated. What are you doing about the remaining tiebreakers -- do you just freeze current statistics, or do some sort of calculation or simulation for the remaining games?

---
**Sam's Hideout** (2011-11-05 20:44):

It's totally believable that you've spent more time on the GUI than on the simulation -- right now, my user interface is making changes to code or data tables and recompiling...

What are you doing for the tiebreakers? Right now I don't handle all of them, but handling tiebreakers quickly is one of the things I've had to spend a lot of time on. At the moment I'm just handling head-to-head and win-loss-tie within the division. I keep track of ties not handled by the above separately, so I'm not incorrectly adding probabilities.
Adding support for the tiebreakers I do handle increased run time by about 50%, I think.

Adding win-loss-tie within the conference is pretty straightforward, and I'm pretty sure I've got a fast common-games computation figured out. I have no idea what "strength of victory" is. I think "strength of schedule" is the ranking from last season used for strength-of-schedule scheduling in the current season, which would be simple to implement if my supposition is correct.

How are you handling the other tiebreakers, which involve points/touchdowns scored? Do you compute expected points and touchdowns for the rest of the season?

Ian: most (but not all) stream ciphers essentially generate a stream of (key-dependent) pseudo-random bits, which are XOR-ed against the plaintext to produce the ciphertext. For a stream cipher to be good, the bits it generates need to be indistinguishable from a stream of truly random bits. In the past, most stream ciphers were designed to be fast and cheap in hardware, but in the last decade a number of stream ciphers were designed to run fast in both hardware and software. You can read about (and get reference implementations of) some good modern stream ciphers at http://www.ecrypt.eu.org/stream/ -- this was an EU project to identify and develop good stream ciphers.

---
**Chris** (nfl-forecast.com, 2011-11-05 18:26):

I just did a crude performance test: I compared the time required to simulate the remaining 9 weeks of the season with the time required to simulate 1 week.
The result is that the time to simulate 1 week is 81% of the time required to simulate 9 weeks.

So that strongly suggests, at least for my implementation, that the time required to run the tiebreakers is far greater than the time required for the game simulations.

While increases in performance can undoubtedly be achieved, a sole focus on speeding up the game simulations is unlikely to achieve the expected gain.

---
**Chris** (nfl-forecast.com, 2011-11-05 13:56):

For the record, the effort I've put into game simulation is negligible compared to the effort for the GUI and the tiebreaker procedures.

---
**Ian Simcox** (2011-11-05 11:58):

Sam - how does the random bit generation work? That's not a tactic I'm familiar with, but it sounds like a useful thing to know when doing complex Monte Carlo simulations.

---
**Sam's Hideout** (2011-11-04 16:16):

BTW, modern fast stream ciphers (which make excellent random bit generators) run at speeds around 1 bit per CPU cycle in software, so a million random bits can be generated in under a thousandth of a second on modern gigahertz CPUs.

---
**Sam's Hideout** (2011-11-04 16:05):

Ian, don't forget you don't need to compute entire seasons, just the remainder of the season.
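Ian's earlier question (how do fair 0/1 bits simulate a team that wins 40% of the time?) has a standard answer, though it is not necessarily what Sam has in mind: compare the random bit stream against the binary expansion of p, stopping at the first position where they differ. This consumes about two fair bits per game on average:

```python
import random

def bernoulli_from_bits(p, next_bit):
    """Return True with probability p using only fair random bits.
    Walk the binary expansion of p; the first random bit that differs from
    the corresponding digit of p decides the outcome."""
    while True:
        p *= 2
        digit = 1 if p >= 1.0 else 0   # next binary digit of p
        if digit:
            p -= 1.0
        b = next_bit()
        if b != digit:
            return b < digit           # bit 0 under digit 1 means "below p"

rng = random.Random(42)
flips = lambda: rng.getrandbits(1)
wins = sum(bernoulli_from_bits(0.40, flips) for _ in range(100_000))
# wins / 100_000 should be close to 0.40
```

The expected number of bits consumed is 2 regardless of p, which is why a fast bit source (like the stream ciphers Sam mentions) is all a game simulation needs.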
Also, you only need random bits, which are a lot more compact than full random numbers.

---
**Ian Simcox** (2011-11-04 14:55):

Sorry, forgot to say: as 256 games x 5,000 seasons is ~1.3 million numbers, you just generate one number between 1 and 700,000 and then use that as a seed to pick whether you take pre-generated numbers 11 to 1,280,010, or whatever.

---
**Ian Simcox** (2011-11-04 14:52):

On developing nfl-forecast as a quicker web app: as there are 256 games in an NFL season, you could have a pre-generated list of, say, 2 million random numbers to use instead of generating a new match result each time (random numbers are fairly processor-intensive to generate).

All you'd need to do then is generate one number, and then use that to decide where to start reading in the random number table.

I haven't tested it, but it seems that would make it much quicker to simulate seasons.

---
**Ian B** (2011-11-04 10:30):

Chris, I was wrong -- it does look like 82%. I think I just miscounted a bar. Based on misreading that game, I think I overestimated the differences between the two systems. That being said, ideally the percentages would match exactly.

Btw, big fan of nfl-forecast. I always have it open when I read articles predicting future scenarios. Thanks for being one of those productive people.
Would be nice to see a web-based version, maybe attached to Brian's stellar site.

---
**Sam's Hideout** (2011-11-04 03:31):

5000 simulations in 10 to 15 seconds sounds rather slow to me. I haven't benchmarked it, but my program has got to be running at least tens of millions of seasons per second, and probably closer to 50 million, on a rather slow CPU (for a single division).

---
**Chris** (nfl-forecast.com, 2011-11-04 01:24):

5000 simulations can be run in 10 to 15 seconds and typically gets you within half a percent of the results achieved by 50,000 simulations (which is what my published forecasts are based on). If you are analyzing a large number of scenarios, you will appreciate getting the results back in 15 seconds rather than in more than 2 minutes.

Ian B -- where do you see 77% for NO over TB in the NFL Forecast software? I read it at 82%, which is very close to the 83% given by Brian. I calibrated the home-field advantage a few years ago based on Brian's published predictions. On average, this week my HFA is about 1 to 2% below Brian's. I don't think this is a significant difference, but I could modify my calibration to match his more closely if this trend is consistent over a number of weeks.

---
**Sam's Hideout** (2011-11-03 23:16):

The state space is so large, I do wonder how accurate 5000 samples can be.
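Exhaustive enumeration of the kind Sam runs as a cross-check looks roughly like this: iterate over all 2^n remaining outcomes and accumulate each configuration's exact joint probability. The standings, matchups, and probabilities below are invented for a two-team mini-race, and ties are ignored for brevity:

```python
from itertools import product

# Invented mini-division: current win totals and three remaining games,
# each as (team_a, team_b, P(team_a wins)).
STANDINGS = {"HOU": 6, "TEN": 5}
GAMES = [("HOU", "TEN", 0.65), ("HOU", "JAX", 0.80), ("TEN", "IND", 0.70)]

def division_win_prob(team):
    """Enumerate all 2^n remaining outcomes, summing the exact joint
    probability of every configuration where `team` finishes strictly ahead
    (the real thing needs tiebreakers for the tied configurations)."""
    total = 0.0
    for outcome in product((True, False), repeat=len(GAMES)):
        prob, wins = 1.0, dict(STANDINGS)
        for (a, b, p_a), a_won in zip(GAMES, outcome):
            prob *= p_a if a_won else 1.0 - p_a
            winner = a if a_won else b
            if winner in wins:
                wins[winner] += 1
        if all(wins[team] > w for t, w in wins.items() if t != team):
            total += prob
    return total

p_hou = division_win_prob("HOU")   # exact: no sampling error at all
```

With n games left this costs 2^n configurations, which is why Sam says it only becomes practical once a conference is down to roughly 40 games; sampling is the fallback before then.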
That said, the last time I compared, the results from my division-champion program, which enumerates the entire state space*, came out only a few percentage points different.

*(Possible after week 6, since each division had around 32 games remaining.)

---
**Josh Katz** (2011-11-03 17:46):

Ian: You're right -- the software uses a different algorithm than Brian does to convert team efficiency ratings into game probabilities, so the probabilities differ slightly.

In general, the algorithm gives somewhat less of a home-field advantage than Brian's model -- usually on the order of 1-3 percentage points' worth of probability. This may affect things at the margins, but it leaves the overall picture intact. (Since teams play both home and away games, the deviations, which are small to begin with, tend to cancel each other out rather than compound.)

---
**Ian B** (2011-11-03 16:59):

Hi Josh, I just wanted to point out another example of what I mentioned earlier. In your article you write that Philadelphia has a 72% chance of winning this weekend; in the nfl-forecast software, it looks like Philly has a 68-69% chance. Other games have much larger differences.
My point is that if individual games are off by 3-5%, the summaries you end up with could be way off.

---
**Josh Katz** (2011-11-03 15:29):

The numbers used for the main tables come from a modified version of the software that does 50,000 simulation runs, giving standard errors ranging from about 0.10 percentage points for values close to zero or one to 0.22 percentage points for values close to 50%.

As for the software, 5,000 runs gives percentages with standard errors ranging from 0.3 to 0.7 percentage points (assuming I have the math right).

That might seem like a lot, but you have to consider what we're using it for.

Does it matter whether we estimate Baltimore's chances of making the playoffs as 75% or 76%? To me, not so much. I'm much more interested in the broad strokes of the playoff picture and in conditional probabilities: how does a particular scenario affect a team's chances? What are the high-leverage games that can really swing a team's probability one way or the other?

To answer these questions, five thousand simulations seems to strike a good balance between precision and speed.

And ultimately, as was pointed out, these are estimates done using projections that will never be 100% accurate -- which is one of the reasons the numbers are presented rounded off, to avoid the appearance of accuracy/precision that just isn't there.

---
**X** (2011-11-03 14:19):

Since the numbers are only given to percent-level accuracy, I suppose you'd want the Monte Carlo to be accurate to the rounding error on that number.
So for an error of half a percent, root-N gives 40 thousand.

On the other hand, we haven't estimated the systematic errors in the model; if those are bigger than the statistical error, you'd just be wasting your time adding more simulations -- that is, you'd be calculating the wrong number very precisely. Given what we know about luck and injuries in the NFL, I'm skeptical that the numbers can get better than 5% or so, and under that assumption 5000 simulations would be adequate.
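For reference, the standard errors Josh quotes and the sample size X arrives at all follow from the binomial formula se = sqrt(p(1-p)/N). X's "half a percent" matches the 40,000 figure if it is read as a two-sigma bound (2 x 0.25 percentage points); the variable names below are just for illustration:

```python
from math import sqrt

def std_err_pp(p, n):
    """Standard error of a simulated probability, in percentage points."""
    return 100.0 * sqrt(p * (1.0 - p) / n)

se_50k  = std_err_pp(0.50, 50_000)   # ~0.22 pp: Josh's 50,000-run tables at p near 50%
se_5k   = std_err_pp(0.50, 5_000)    # ~0.71 pp: the interactive software's worst case
se_edge = std_err_pp(0.05, 50_000)   # ~0.10 pp: values near 0 or 1 are much tighter

def runs_for(p, target_pp):
    """Invert the formula: simulations needed for a target standard error."""
    return p * (1.0 - p) / (target_pp / 100.0) ** 2

n_quarter_pp = runs_for(0.50, 0.25)  # 40,000 runs for a 0.25 pp standard error
```

This also makes X's systematic-error point concrete: past a few thousand runs, the statistical error is already well below the model's plausible bias, so extra runs buy very little.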