Courtesy of Chris Cox at NFL-forecast.com, I bring you the latest playoff probabilities for each team.

These are generated with the help of the NFL-Forecast software app, which uses the win probabilities generated by the team efficiency model to simulate the NFL season 5,000 times. And if you don't buy the game probabilities from Advanced NFL Stats, you can tweak them as much as you like to generate your own playoff projections. I encourage everyone to download the app and test out your own scenarios.
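For readers curious what "simulate the NFL season 5,000 times" means mechanically, here is a minimal sketch in Python. The matchups and win probabilities below are made up for illustration (the real app uses the efficiency model's game probabilities), and ties are broken by coin flip rather than the actual NFL tiebreaker rules:

```python
import random
from collections import Counter

# Hypothetical remaining-schedule slice: (home, away, P(home wins)).
# The real app uses the efficiency model's game probabilities.
remaining_games = [
    ("PIT", "BAL", 0.55),
    ("CIN", "CLE", 0.70),
    ("BAL", "CIN", 0.60),
    ("PIT", "CIN", 0.65),
]

def simulate_division(games, n_sims=5000, seed=1):
    """Estimate division-title probabilities by simulating the
    remaining games n_sims times and counting who finishes first."""
    rng = random.Random(seed)
    titles = Counter()
    for _ in range(n_sims):
        wins = Counter()
        for home, away, p_home in games:
            wins[home if rng.random() < p_home else away] += 1
        best = max(wins.values())
        leaders = [team for team, w in wins.items() if w == best]
        # Coin-flip tie-breaking; the real app applies NFL tiebreakers.
        titles[rng.choice(leaders)] += 1
    return {team: count / n_sims for team, count in titles.items()}

print(simulate_division(remaining_games))
```

As the comment thread below attests, the simulation loop itself is the easy part; most of the real work goes into the tiebreaker logic.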

The AFC North: A Closer Look

Though it did little to shake up the team rankings, Pittsburgh's win over New England further solidified the Steelers' position in the AFC and increased their overall playoff probability by 24 points, to 90%. As of now, the model sees the North as more or less a two-team race between the Steelers and the Ravens, with one of those two winning the division in 92% of simulations.

This may seem counterintuitive—the Bengals are 5-2 (and actually win the tiebreaker over the Ravens based on common-game win percentage), yet they are forecast to win the division only 8% of the time. The reason for this apparent disparity lies in the tepid competition Cincinnati has faced thus far—those five wins come courtesy of a schedule with the second-worst GWP in the league, with victories against such football powerhouses as Seattle, Cleveland, and Indianapolis. In the end, the Bengals are forecast to finish the season with around eight or nine wins—almost certainly not enough to win the North.

Speaking of winning the North, much of that will be decided on Sunday night when Baltimore faces off against the Steelers in Pittsburgh. I'm guessing this game will get a lot of hype as a pivotal mid-season matchup between division rivals, much like last week's Cowboys-Eagles showdown. Much of this will be deserved: If Pittsburgh wins, they are projected to win the North 74% of the time (vs. Baltimore's 18%), and if Baltimore wins, they are projected to win the North 64% of the time (vs. Pittsburgh's 27%). The playoff implications of this game are somewhat lessened, however, by the fact that both teams are well-positioned for a postseason berth regardless of the outcome, with the loser still making the playoffs in at least 70% of simulations.

High Leverage Games of the Week

Two games make the cut yet again this week as being of particularly high importance to the unfolding playoff picture.

New York Jets at Buffalo | Sunday, November 6 | 1:00pm ET

Playoff Prob. | BUF Win | NYJ Win
NYJ | 12 | 40
BUF | 80 | 51

The truly important division battle this week will take place in Buffalo, where the Jets will look for their first win on the road against a team that—like every other team in the AFC East not named the Miami Dolphins—is undefeated at home. With New England's stumble against the Steelers opening things up in the East, the outcome of this game will have a large impact on the race for the division title.

If the Jets win, their probability of winning the East increases to 17% and their overall playoff probability rises to 40%. Yet this outcome might benefit New England most of all, allowing the Patriots to open up some space between themselves and the Bills in anticipation of their own game against the Jets in Week 10.

Buffalo, on the other hand, has an opportunity to knock New York down a peg and reduce the Jets' chances to win the East to a meager 2%, effectively turning the East into a two-team race between themselves and the Patriots (a race they go on to win in slightly more than half of simulations).

Chicago at Philadelphia | Monday, November 7 | 8:30pm ET

Playoff Prob. | PHI Win | CHI Win
CHI | 26 | 60
PHI | 53 | 24

The model has this game tilted rather sharply in Philadelphia's favor, projecting an Eagles victory 72% of the time. And if you're entering a multi-team playoff race already several games in the hole, one thing you cannot afford to do is lose games that you're projected to win. Throw in the fact that Chicago is among the Eagles' primary competitors for a wild card spot and you have a game that Philadelphia does not want to lose.

The outcome of the game has an even larger impact on the Bears, who have less than a 1% chance to pass both the Lions and the Packers to win the NFC North and will almost certainly have to rely on a wild card berth in order to make the postseason. However, apart from a Week 16 matchup at Green Bay, the rest of Chicago's schedule is full of very winnable games, so an unexpected win here would almost double their total playoff probability, raising it to 60%.

The probabilities below are given in percent and may not add up to 100 due to rounding. Enjoy.

AFC EAST
Team | 1st | 2nd | 3rd | 4th
NE | 51 | 39 | 10 | 0
BUF | 42 | 40 | 18 | 0
NYJ | 8 | 21 | 70 | 1
MIA | 0 | 0 | 2 | 98

AFC NORTH
Team | 1st | 2nd | 3rd | 4th
PIT | 57 | 33 | 9 | 1
BAL | 35 | 45 | 18 | 2
CIN | 8 | 20 | 62 | 10
CLE | 0 | 2 | 11 | 88

AFC SOUTH
Team | 1st | 2nd | 3rd | 4th
HOU | 92 | 7 | 0 | 0
TEN | 7 | 75 | 17 | 0
JAC | 1 | 17 | 74 | 9
IND | 0 | 0 | 9 | 91

AFC WEST
Team | 1st | 2nd | 3rd | 4th
SD | 47 | 30 | 20 | 3
OAK | 29 | 29 | 34 | 7
KC | 23 | 36 | 32 | 10
DEN | 1 | 5 | 14 | 80

NFC EAST
Team | 1st | 2nd | 3rd | 4th
NYG | 42 | 28 | 20 | 10
PHI | 25 | 33 | 28 | 14
DAL | 28 | 29 | 28 | 15
WAS | 5 | 11 | 23 | 61

NFC NORTH
Team | 1st | 2nd | 3rd | 4th
GB | 86 | 13 | 1 | 0
DET | 13 | 69 | 18 | 0
CHI | 1 | 18 | 76 | 5
MIN | 0 | 0 | 5 | 95

NFC SOUTH
Team | 1st | 2nd | 3rd | 4th
NO | 86 | 12 | 2 | 0
ATL | 10 | 49 | 29 | 11
TB | 3 | 27 | 40 | 29
CAR | 1 | 12 | 28 | 60

NFC WEST
Team | 1st | 2nd | 3rd | 4th
SF | 97 | 3 | 0 | 0
STL | 2 | 43 | 31 | 25
SEA | 1 | 35 | 38 | 27
ARI | 1 | 19 | 31 | 49

AFC Percent Probability Playoff Seeding
Team | 1st | 2nd | 3rd | 4th | 5th | 6th | Total
HOU | 23 | 27 | 30 | 12 | 1 | 2 | 95
PIT | 29 | 17 | 10 | 2 | 24 | 10 | 90
NE | 14 | 19 | 15 | 3 | 14 | 16 | 81
BAL | 16 | 12 | 6 | 2 | 23 | 16 | 75
BUF | 13 | 14 | 12 | 3 | 14 | 15 | 71
SD | 1 | 3 | 8 | 34 | 1 | 2 | 50
OAK | 0 | 2 | 6 | 21 | 1 | 3 | 33
CIN | 3 | 2 | 2 | 1 | 9 | 12 | 29
KC | 1 | 1 | 3 | 18 | 1 | 3 | 27
TEN | 1 | 2 | 3 | 2 | 6 | 11 | 24
NYJ | 1 | 2 | 3 | 1 | 4 | 10 | 21
DEN | 0 | 0 | 0 | 1 | 0 | 0 | 1
JAC | 0 | 0 | 0 | 0 | 0 | 0 | 1
CLE | 0 | 0 | 0 | 0 | 0 | 1 | 1
MIA | 0 | 0 | 0 | 0 | 0 | 0 | 0
IND | 0 | 0 | 0 | 0 | 0 | 0 | 0

NFC Percent Probability Playoff Seeding
Team | 1st | 2nd | 3rd | 4th | 5th | 6th | Total
GB | 75 | 9 | 2 | 0 | 12 | 1 | 100
SF | 11 | 37 | 26 | 23 | 0 | 0 | 97
NO | 3 | 23 | 30 | 29 | 1 | 3 | 90
DET | 7 | 4 | 1 | 0 | 54 | 16 | 83
NYG | 3 | 11 | 14 | 14 | 5 | 12 | 59
DAL | 1 | 7 | 11 | 9 | 3 | 14 | 45
PHI | 0 | 4 | 9 | 12 | 6 | 14 | 45
CHI | 0 | 0 | 0 | 0 | 13 | 22 | 36
ATL | 0 | 2 | 4 | 5 | 3 | 9 | 23
WAS | 0 | 1 | 2 | 2 | 1 | 5 | 11
TB | 0 | 0 | 1 | 2 | 0 | 2 | 6
STL | 0 | 0 | 0 | 2 | 0 | 0 | 2
SEA | 0 | 0 | 0 | 1 | 0 | 1 | 2
CAR | 0 | 0 | 0 | 1 | 0 | 0 | 1
ARI | 0 | 0 | 0 | 1 | 0 | 0 | 1
MIN | 0 | 0 | 0 | 0 | 0 | 0 | 0

Why only 5,000 simulations? Wouldn't the numbers get more stable at a much higher number? How about a million?

I'm a little skeptical of the results from nfl-forecast because on the Advanced Analysis tab of the app, the game probabilities don't exactly match the ones Brian publishes for the NY Times. For example, Brian predicts NO to beat TB with 83% probability. NFL-forecast predicts 77% (judging visually on the slider). Not a huge difference, but over more than a hundred games and 5,000 simulations, you might get some weird results.

Since the numbers are only given to percent-level accuracy, I suppose you'd want the Monte Carlo to be accurate to rounding error on that number. So for an error of half a percent, root-N gives 40 thousand.

On the other hand, we haven't estimated the systematic errors in the model; if those are bigger than the statistical error, you'd just be wasting your time adding more simulations. That is, you're calculating the wrong number very precisely. Given what we know about luck and injury in the NFL, I'm skeptical that the numbers can get better than 5%ish, so 5000 simulations would be adequate under that assumption.

The numbers used for the main tables are done using a modified version of the software that does 50,000 simulation runs, giving standard errors ranging from about .10 percentage points for values close to zero or one to about .22 percentage points for values close to 50%.

As for the software, 5,000 runs gives percentages with standard errors ranging from .3 percentage points to .7 (assuming I have the math right).
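Those figures follow from the binomial standard error sqrt(p(1-p)/N). A quick sanity check (my own arithmetic, not output from the app):

```python
from math import sqrt

def mc_standard_error(p, n_sims):
    """Standard error, in percentage points, of an estimated
    probability p after n_sims simulated seasons: sqrt(p(1-p)/N)."""
    return 100.0 * sqrt(p * (1.0 - p) / n_sims)

# Worst case is p = 0.5; estimates near 0 or 1 are tighter.
print(mc_standard_error(0.5, 5000))     # ~0.71 percentage points
print(mc_standard_error(0.05, 5000))    # ~0.31 percentage points
print(mc_standard_error(0.5, 50000))    # ~0.22 percentage points
print(mc_standard_error(0.05, 50000))   # ~0.10 percentage points
```

At 5,000 runs the worst case is about 0.7 points and at 50,000 about 0.22, matching the ranges quoted above.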

That might seem like a lot, but you have to consider what we're using it for.

Does it matter whether we estimate Baltimore's chances of making the playoffs as 75% or 76%? To me, not so much. I'm much more interested in the broad strokes of the playoff picture and conditional probabilities -- How does a particular scenario affect a team's chances? What are the high leverage games that can really swing a team's probability one way or the other?

To answer these questions, five thousand simulations seems to strike a good balance between precision and speed.

And ultimately, as was pointed out, these are estimates done using projections that will never be 100% accurate. Which is one of the reasons why the numbers are presented rounded off, so as to avoid the appearance of accuracy/precision that just isn't there.

Hi Josh, I just wanted to point out another example of what I mentioned earlier. In your article you write that Philadelphia has a 72% chance of winning this weekend. In the nfl-forecast software, it looks like Philly has a 68-69% chance. Other games have much larger differences. My point is that if individual games are off by 3-5%, the summaries that you end up with could be way off.

Ian: You're right--the software uses a different algorithm than Brian does to convert team efficiency ratings to game probabilities, so the probabilities differ slightly.

In general, the algorithm gives somewhat less of a home-field advantage than Brian's model--usually on the order of 1-3 percentage points worth of probability. This may affect things on the margins, but it leaves the overall picture intact. (Since teams play both home and away games, the deviations--which are small to begin with--tend to cancel each other out as opposed to compounding.)

The state space is so large, I do wonder how accurate 5000 samples can be. That said, the last time I compared, the results from my division champion program, which enumerates the entire state space*, came out only a few percentage points different.

*(Possible after week 6 since each division had around 32 games remaining.)

5000 simulations can be run in 10 to 15 seconds and typically gets you within half a percent of the results achieved by 50,000 simulations (which is what my published forecasts are based on). If you are analyzing a large number of scenarios, you will appreciate getting the results back in 15 seconds compared to more than 2 minutes.

Ian B -- where do you see 77% for NO over TB in the NFL Forecast software? I read it at 82% which is very close to 83% given by Brian. I calibrated the home field advantage a few years ago based on Brian's published predictions. On average, this week my HFA is about 1 to 2% below Brian's. I don't think this is a significant difference, but I could modify my calibration to more closely match his if this trend is consistent over a number of weeks.

5000 simulations in 10 to 15 seconds sounds rather slow to me. I haven't benchmarked it, but my program's got to be running at least tens of millions of seasons/sec, and probably closer to 50 million on a rather slow CPU (for a single division).

Chris, I was wrong. It does look like 82%. I think that I just miscounted a bar. Based on misreading that game I think I overestimated the differences between the two systems. That being said, ideally the percentages would match exactly.

Btw, big fan of nfl-forecast. I always have it open when I read articles predicting future scenarios. Thanks for being one of those productive people. Would be nice to see a web-based version, maybe attached to Brian's stellar site.

On developing nfl-forecast as a quicker web app: as there are 256 games in an NFL season, you could have a pre-generated list of, say, 2 million random numbers to use instead of generating a new random number for each match result (random numbers are fairly processor intensive to generate).

All you'd need to do then is generate one number, and then use that to decide where to start reading in the random number table.

I haven't tested, but it seems that would make it much quicker to simulate seasons.

Sorry, forgot to say - as 256 games x 5,000 seasons is ~1.3 million numbers, you just generate one number between 1 and 700,000 and then use that as an offset to pick which pre-gen'ed numbers you take -- say, 11 to 1,280,010, or wherever the offset lands.
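A scaled-down sketch of that table idea (pool size and games-per-season are shrunk here for illustration; the names and constants are mine, not from the app):

```python
import random

POOL_SIZE = 1_000        # scaled down; the suggestion above is ~2 million
GAMES_PER_SEASON = 16    # scaled down from 256

# Generate the pool of uniform random numbers once, up front.
pool = [random.random() for _ in range(POOL_SIZE)]

def season_draws(rng=random):
    """One season's worth of uniforms, read from the pool starting
    at a single random offset (wrapping around the end)."""
    start = rng.randrange(POOL_SIZE)
    return [pool[(start + i) % POOL_SIZE] for i in range(GAMES_PER_SEASON)]

draws = season_draws()
```

One caveat: overlapping reads reuse the same numbers across seasons, which introduces correlations between simulated seasons; for rough playoff forecasts that is probably tolerable, but it is worth keeping in mind.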

Ian, don't forget you don't need to compute entire seasons, just remainder of seasons. Also, you only need random bits, which is a lot more compact than numbers.

BTW, modern fast stream ciphers (which make excellent random bit generators) run at speeds around 1 bit/cpu cycle in software, so a million random bits can be generated in under a thousandth of a second on modern gigahertz CPUs.
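Python's standard library doesn't ship a stream cipher, but the keystream idea can be mimicked with a hash run in counter mode; this stand-in (my own sketch) illustrates the interface, not the speed:

```python
import hashlib

def keystream(key: bytes, n_bytes: int) -> bytes:
    """Pseudo-random bytes from SHA-256 run in counter mode -- a
    stand-in for a real stream cipher's keystream generator."""
    out = bytearray()
    counter = 0
    while len(out) < n_bytes:
        # Each block is the hash of the key plus a running counter.
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n_bytes])

bits = keystream(b"season-sim seed", 125_000)  # one million bits
```

A real stream cipher (the EU project mentioned above is eSTREAM, whose portfolio includes ciphers like Salsa20) is far faster per byte than hashing, but the shape of the API is the same: a key in, a reproducible stream of random-looking bits out.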

Sam - how does the random bit generation work? That's not a tactic I'm familiar with but sounds a useful thing to know if doing complex monte carlo simulations.

For the record, the effort I've put into game simulation is negligible to the effort for the GUI and the tie-breaker procedures.

I just did a crude performance test. I compared the time required for simulations of the remaining 9 weeks of the season to the time required to simulate 1 week. The results are that the time to simulate 1 week is 81% of the time required to simulate 9 weeks.

So that strongly suggests, at least for my implementation, that the time required to run the tie breakers is far greater than the time required for game simulations.

While performance gains are undoubtedly achievable, focusing solely on speeding up the game simulations is unlikely to deliver the expected gain.

It's totally believable that you've spent more time on the GUI than on the simulation--right now, my user interface is making changes to code or data tables and recompiling...

What are you doing for the tie breakers? I don't handle all of them yet, but handling tie breakers quickly is one of the things I've had to spend a lot of time on. At the moment I'm just handling head-to-head and win-loss-tie within division. I keep track of ties not handled by the above separately so I'm not incorrectly adding probabilities. Adding support for the tiebreakers I do handle increased run time by about 50%, I think.

Adding win-loss-tie within conference is pretty straightforward. I'm pretty sure I've got a fast common-games computation figured out. I have no idea what "strength of victory" is. I think "strength of schedule" is the ranking from last season used for "strength of schedule" scheduling in current season, which would be simple to implement, if my supposition is correct.

How are you handling the other tie breakers which involve points/touchdowns scored? Do you compute expected points and touchdowns for the rest of the season?

Ian: most (but not all) stream ciphers essentially generate a stream of key-dependent pseudo-random bits, which are xor-ed against the plain text to produce the cipher text. For a stream cipher to be good, the bits it generates need to be indistinguishable from a stream of random bits. In the past, most stream ciphers were designed to be fast and cheap in hardware, but in the last decade, a number of stream ciphers were designed to run fast both in hardware and in software. You can read about (and get reference implementations of) some good modern stream ciphers here -- this was a project of the EU to identify/develop some good stream ciphers.

[I originally had a longer response here, but...]

I can see you've spent a significant amount of time on your UI--mine currently is a text editor where I adjust code and data tables and recompile.

I currently handle the first two within-division tiebreakers (head-to-head and best in-division record). I detect when those aren't enough, and so far the remaining ties amount to under 1% of probability. Adding even this amount of tie-breaking did significantly increase run time; I didn't keep records, but I'd place it at around 50%. Since the other tiebreakers occur so rarely, I can afford for them to be relatively expensive, and the next two in order seem straightforward and fast. The fifth (strength of victory) I had no idea about (* googling just came up with a definition...), and the sixth (strength of schedule) I think I understand, but a better explanation would be appreciated. What're you doing about the remaining tiebreakers--do you just freeze current statistics, or do some sort of calculation or simulation for remaining games?

Ian: in this monte carlo simulation, you only need a binary decision for each game, so (pseudo)random bits (iid uniform with probability 0.5) are all that are necessary. Modern stream ciphers are a pretty good way to generate random bits fast.

Things get a whole lot more complicated when you start looking at full playoff seedings 1-6 in each division. Strength of victory tiebreakers routinely become important.

Not to mention three way ties, which sometimes require multiple iterations through the tiebreakers. The logic is very tricky.

I don't do any tiebreakers past strength of schedule. I just flip a coin at that point. If the playoff race is interesting to me, I will go through all of the tiebreakers manually and figure out who has the advantage for any tiebreakers that go beyond strength of schedule, but those are truly rare instances.
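A two-team version of that cascade might look like the sketch below. It is simplified (it skips common games, strength of victory, and the rest, jumping straight to the coin flip), and the team records are hypothetical:

```python
import random

def break_two_way_tie(a, b, rng=random):
    """Resolve a two-team division tie with a simplified cascade:
    head-to-head, then division record, then conference record,
    then a coin flip standing in for all remaining tiebreakers."""
    steps = [
        lambda t: t["head_to_head_wins"],
        lambda t: t["division_pct"],
        lambda t: t["conference_pct"],
    ]
    for step in steps:
        if step(a) != step(b):
            return a if step(a) > step(b) else b
    return rng.choice([a, b])  # beyond this point, just flip a coin

pit = {"name": "PIT", "head_to_head_wins": 1,
       "division_pct": 0.75, "conference_pct": 0.70}
bal = {"name": "BAL", "head_to_head_wins": 1,
       "division_pct": 0.75, "conference_pct": 0.60}
print(break_two_way_tie(pit, bal)["name"])  # PIT, on conference record
```

The real procedure has more steps, plus entirely separate rules for three-plus-way ties that can require restarting the cascade after each elimination, which is where the tricky logic mentioned above comes from.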

I consider the probability of individual game outcomes as predicted by Brian's team efficiency ratings. In that case, each team has a continuous decimal probability of winning each game that can range from 0 to 1. Random bits won't resolve that; you need random numbers.

Chris: I decide who wins each game, then multiply all the game winner's win probability together to find the joint probability of that particular configuration of winners, then accumulate the joint probabilities for each winner. At the end, the total probability space is divided into the accumulated probabilities for each team (I think... right now this is moot since I iterate over all probabilities so the total probability = 1 and this method is obviously correct). I think this is a form of importance sampling.

Gah, the strength of victory tiebreaker looks to be annoying to implement. Right now my only handling of three way ties is to detect that there is one.

Sam -- OK, now I get it. That's interesting. I can see where it would be much faster to generate x simulations. So for me, the interesting case is when you are running some kind of MC scheme to estimate the probabilities when the sample space is too large to sample exhaustively. The difference between the two schemes is that your method samples the entire probability space uniformly, while my method samples the most likely areas of the probability space most intensely. I wonder if your method would require extra simulations in order to get "enough" samples in the most relevant regions? It still might be faster, even if it does.

Importance sampling done correctly actually speeds up convergence. I'm not quite sure I'm doing it correctly though :-) (it's been over 20 years since college...)

Sam - sorry, brain's being slow again. How do I use a random 0 or 1, each with probability 0.5, to simulate a result where a team wins 40 percent of the time?

Ian: It's encoded in my comment about joint probabilities.

E[g(X)] = sum[all x elem of X] g(x)f(x)

where

X = random variable for NFL seasons

g(x) = goals we're interested in, e.g. Team A wins division

f(x) = probability mass function, probability that a particular season x occurs

Now, I'm attacking this directly by taking a series of N samples of X (x_1, x_2 ... x_N) and computing f(x_i)g(x_i), i.e.

E[g(X)] ~ (1/f_s) sum[x elem X_s] g(x)f(x)

where X_s is the set of N samples of X

and f_s = sum[x elem X_s] f(x)

This is basically Monte Carlo integration.
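To make the two estimators concrete, here is a toy three-game example of my own (the probabilities are made up; g is "team A wins at least two games"), comparing exhaustive enumeration, sampling seasons directly from f, and uniform coin-flip sampling weighted by f as in the formula above:

```python
import random
from itertools import product

# Hypothetical win probabilities for team A in three games.
p = [0.6, 0.7, 0.4]

def g(outcome):
    """Goal function: team A wins at least two of the three games."""
    return sum(outcome) >= 2

def f(outcome):
    """Probability mass of one configuration of game outcomes."""
    prob = 1.0
    for win, p_i in zip(outcome, p):
        prob *= p_i if win else 1.0 - p_i
    return prob

# Exact answer: enumerate the whole (tiny) state space.
exact = sum(f(x) for x in product([0, 1], repeat=3) if g(x))

rng = random.Random(0)
n = 50_000

# Direct scheme: sample each season from f itself, then average g.
direct = sum(g([rng.random() < p_i for p_i in p]) for _ in range(n)) / n

# Weighted scheme: sample configurations uniformly (pure coin
# flips), weight each by its joint probability f, then normalize.
num = den = 0.0
for _ in range(n):
    x = [rng.getrandbits(1) for _ in range(3)]
    num += g(x) * f(x)
    den += f(x)
weighted = num / den

print(exact, direct, weighted)
```

Both estimators converge to the exact answer; the difference is where the sampling effort goes. Uniform sampling spends equal effort on every configuration regardless of how likely it is, while sampling from f concentrates effort where the probability mass actually sits.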

This was a good exercise, since now I know exactly what I'm computing!

And now this leads me to understand that what Chris is doing is Monte Carlo simulation with importance sampling. So he should require fewer samples than I do to converge on the correct answers.

As I see it, there are three things I want to do with this program:

1. Estimate the seeding slate for each conference (i.e. division winners & wild cards; this one will sample instead of iterating over all configurations)

2. Finish up computation of division winners (i.e. generalize code to iterate over all divisions, instead of just concentrating on the AFC South).

3. Compute seeding slate for entire conference (not sampled). This should be practical once there are less than about 40 games in the season (per conference), so around week 12 or 13, so I've got a few weeks to work on this.

Chris, assuming I get #1 or #3 done, would you be interested in collaborating? (Actually, just having someone to discuss issues over and check my understanding would be fantastic!)