Comments on Advanced Football Analytics (formerly Advanced NFL Stats): Game Model Coefficients

Anonymous (2013-09-02):
Any thoughts on why these particular variables were used in the final logistic regression model? Did you create a large number of predictors and then pick the best ones based on some measure like a t-test or another variable-selection method? How does logistic regression compare to other classifiers like CART, random forests, etc.? How do you handle variables that are strongly correlated? Thanks!

Unknown (2012-12-24):
Seconded. AOPASS and BDPASS should be perfectly correlated. Including both terms should only be necessary if one of them were a dummy variable, right?

Jimmy (2012-08-23):
Hello,

I am slightly confused as to how you avoided collinearity in the model. I may have your methodology slightly wrong, but you described seeing yourself as either team, so wouldn't AOPASS be perfectly correlated with BDPASS, for example? Anyway, I know this post is very old, so I hope you see this. I just wanted to clear this up. Jimmy

JustSomeGuy (2011-11-25):
Quick question, Brian.

So you built your logit model using regular-season games from 2002-2006. Say it's the beginning of the 2007 season and you want to start predicting outcomes. How many games into the '07 season do you need before you have enough data to put into your regression equation? Assuming the team has changed between the end of the '06 season and the beginning of the '07 season, you don't have reliable numbers for the first game of '07, correct? And then, is just the first game of '07 good enough to predict game 2 of '07, or do you need to wait until X games have been played in the season?

Anonymous (2008-12-30):
Hi, your system seems great, but have you published updated coefficients and a sample calculation? That would be really helpful, since I'm trying to build the same system. Thanks!

Brian Burke (2008-10-16):
CE (and others who have asked): I promise to publish updated coefficients and a sample calculation. I've been out of the country for 3 of the past 5 weeks (Karachi is so nice this time of year!), so when I get home I'll have some time to answer the mail.

johnbart (2008-10-16):
Brian,

I'm working to replicate your work here so that

a) I can get updated coefficients with defensive interception rates removed, and
b) I understand the process end-to-end and can apply it to other sports.
So far I'm not having any luck getting values close to yours or prediction percentages close to your level. Can you possibly point out where I'm going wrong?

I originally calculated the necessary data points for a given game using only that game's data (i.e., aopass used only the passing data from that specific game). This caused problems when I tried to run a regression, because some variables were identical (i.e., aopass = bdpass).

I then calculated the necessary data points for a given game using the average of the team's previous 4 games (i.e., aopass used team A's passing data from its past 4 games). This allowed the regression to run, but the output coefficients were fairly different from yours. Most disconcerting was that aointrate and aofumrate came out positive.

Is there something wrong with my current approach? Am I butchering the process?

Brian Burke (2008-08-30):
Hi, Brian. Thanks. You'd be correct if this were a linear regression, but it's a logit regression. The 0.74 represents the natural log of the change in the odds ratio of the home team winning. It works out to a 57.5% likelihood that the home team wins. Non-linear logit models are ideal for dichotomous outcomes, such as win/lose.

Brian (2008-08-30):
Help me out here; it has been a long time since I took econometrics.

The coefficient for AHome is 0.74.
Keeping all other variables constant, doesn't this mean that making team A the home team increases their win probability by 74 percentage points compared to playing away? That doesn't make sense.

I'm sure I'm missing something, so please help me understand.

Great site, by the way.

Brian

Pat (2007-10-08):
The thing to watch for, year to year, is years where the error is significantly larger. You can figure out the "expected error distribution" by assuming the error truly is binomial (which is what you're presuming in the regression anyway, since it's a chi-squared fit) and doing a Monte Carlo.

If the error is always within the expected error distribution, then you've got a model that almost perfectly represents the game. It almost certainly won't be, since, well, it's a model, and the game is more complex.

Brian Burke (2007-10-06):
Patrick: Thanks. All great stuff I was not familiar with. I used some different software that can randomly select cases as training cases and validation cases. I posted the results in graph format in the original post.

My interpretation is that you'd want to see two things. One, the training and validation plots are tightly intertwined. Two, they both follow the diagonal closely, so that actual doesn't diverge too far from expected.

I don't have an exact error number yet. That will take a bit of work in Excel.
But my reaction to the graph is that there is not much divergence between expected and actual.

Pat (2007-10-06):
"But how does one know what an expected accuracy would be?"

Run through the season and predict each pair of games. The regression will give you some number that is related, somehow, to the probability that team A will beat team B (and, obviously, the probability that B will beat team A); you know that conversion from the calibration. Average the larger of the two numbers over all games (the larger represents the predicted winner).

Then subtract that number from your observed accuracy. That's the error.

Now, interpreting that number is a bit of work: see a post in my Eagles blog at http://boards.philadelphiaeagles.com/index.php?automodule=blog&blogid=802& (although I think you have to register, so sorry about that).

But the basic idea is simple: suppose you have 4 games, you expected to get 70% of them right, and you got 3/4 of them right. The error in that case would be 5%, but the problem is that the uncertainty on that error is huge, due to the low statistics. If the same games had been played 25 times more often, 3/4 is perfectly consistent with 90/100, so in truth your "error" is really 5 +/- 40% or so.

There's also one other thing that is important for comparing ranking systems and is often overlooked: the convergence speed.
In your case, the "team ranking" is based on real statistics, so the question there is: how fast does the combination of those statistics stabilize?

That number essentially tells you what the uncertainty in your prediction for each game is. That is, if you say "team A is going to beat team B 70% of the time," how precise is that 70%? You have an estimate of how precise it is from the errors in the regression, but those are just the uncertainties in your model; you also have to accept that there are uncertainties in the data, too.

Brian Burke (2007-10-04):
Patrick: Are you referring to "calibration"? I think we just have some different terminology. Here's how last year's calibration numbers looked:

http://www.bbnflstats.com/2007/03/assessing-models-accuracy.html

Brian Burke (2007-10-04):
Pat: Thanks.

"Retrodictive": I couldn't remember that word.

Help me out. You're saying my observed accuracy is 69.5%, but how does one know what an expected accuracy would be? Last year, this model (or one very close to it) was correct 65% of the time and was well calibrated, i.e., 80% winners won 80% of the time, etc. But 2006 was a very odd year, in which home teams won only 53% of games when they normally win 58%. I would expect it to fall somewhere between 65% and 70% correct for future games.
Anyway, how would I calculate error?

Pat (2007-10-04):
The word is "retrodictive," not "retrospective," incidentally.

And the more interesting number in that case isn't the prediction accuracy (that's determined by the set of games used for the test) but the expected accuracy versus the observed accuracy, i.e., the error.

The observed accuracy is mostly irrelevant without knowing the expected accuracy: if one model expected 60% accuracy and observed 70%, while another model expected 65% accuracy and observed 65%, the second is likely the better model, as the first, in all likelihood, just got lucky.

Brian Burke (2007-10-02):
Derek: Yes.

Derek (2007-10-02):
By which I mean, did you test the 2002-2006 games using the logistic regression model produced by training on the same set of games?

Derek (2007-10-02):
By 69.5% accuracy retrospectively, do you mean that you're testing on games that the model was trained on?
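The collinearity issue raised by Jimmy and johnbart in the thread can be sketched numerically. This is a toy illustration with made-up yardage figures, not the actual data pipeline: in a single game, team A's offensive passing stat is by construction identical to what team B's defense allowed, whereas trailing averages over each team's prior games are computed against different opponents and so break the identity.

```python
# Single-game inputs: aopass and bdpass describe the same event, so they
# would be identical columns and the design matrix is rank-deficient.
# All numbers below are hypothetical.
game_yds_per_att = 7.1            # A's passing vs. B in one game
aopass_single = game_yds_per_att
bdpass_single = game_yds_per_att
assert aopass_single == bdpass_single   # perfectly collinear

# Averaging each team's *previous* games breaks the identity, because the
# two averages are taken over different sets of opponents:
a_offense_last4 = [6.2, 7.5, 5.9, 6.8]  # A's offense, prior 4 games
b_defense_last4 = [6.9, 6.1, 7.3, 6.4]  # allowed by B's defense, prior 4 games
aopass = sum(a_offense_last4) / len(a_offense_last4)
bdpass = sum(b_defense_last4) / len(b_defense_last4)
assert aopass != bdpass
```

This is also why johnbart's single-game attempt produced data errors while his 4-game-average version ran.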
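Brian Burke's reply about the AHome coefficient can be made concrete with the log-odds arithmetic. A minimal sketch: the 50/50 baseline below is an assumption for illustration only, and the resulting probability differs from the 57.5% quoted in the thread, which comes from evaluating the full model with all of its other covariates.

```python
import math

def logistic(z):
    """Convert log-odds to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

AHOME = 0.74  # logit coefficient for home field, from the thread

# A logit coefficient shifts the log of the odds, not the probability
# itself: home field multiplies the odds of winning by exp(0.74).
odds_multiplier = math.exp(AHOME)   # about 2.10

# From an assumed 50/50 matchup (log-odds 0), home field alone gives:
p_home = logistic(0.0 + AHOME)      # about 0.68, not 0.50 + 0.74
```

This is the point of the exchange: a 0.74 coefficient roughly doubles the odds, which is a far smaller (and bounded) effect than adding 74 percentage points of win probability.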
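Pat's recipe for expected accuracy and error can be written out directly. The probabilities and outcomes below are hypothetical, chosen only to mirror his 4-game example:

```python
# Expected accuracy: the average, over all games, of the larger of the two
# win probabilities. Error: observed accuracy minus expected accuracy.
predicted = [0.70, 0.55, 0.80, 0.65]   # P(team A wins), hypothetical
a_won     = [True, False, True, True]  # hypothetical outcomes

expected_acc = sum(max(p, 1.0 - p) for p in predicted) / len(predicted)
hits = sum((p >= 0.5) == won for p, won in zip(predicted, a_won))
observed_acc = hits / len(predicted)
error = observed_acc - expected_acc    # positive: did better than expected
```

With these numbers the model expects 67.5% and observes 75%, a 7.5% error; as Pat notes, over only 4 games that error carries enormous statistical uncertainty.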
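Pat's Monte Carlo for the expected error distribution can be sketched as follows: treat each game as a Bernoulli draw at the model's predicted probability, replay the season many times, and look at the spread of simulated accuracies. The probabilities and season length below are hypothetical.

```python
import random

random.seed(0)  # deterministic for illustration

# Hypothetical predicted win probabilities for a 240-game "season".
probs = [0.70, 0.55, 0.80, 0.65, 0.60, 0.75] * 40

def simulated_accuracy(probs):
    # The model picks the favorite; the pick is right when the favorite
    # wins, which under the model itself happens with prob. max(p, 1 - p).
    hits = sum(random.random() < max(p, 1.0 - p) for p in probs)
    return hits / len(probs)

# Replay the season 2000 times under the binomial assumption.
accuracies = sorted(simulated_accuracy(probs) for _ in range(2000))
lo, hi = accuracies[50], accuracies[-51]   # rough central 95% band

# An observed season accuracy falling well outside [lo, hi] indicates
# error beyond what binomial noise alone would produce.
```

The simulated accuracies center near the expected accuracy (about 67.5% here), and the band's width shrinks roughly as one over the square root of the number of games, which is exactly Pat's point about the 4-game example.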