An Academic Take on What’s Wrong With the World Ranking

Rory McIlroy has once again taken over the No. 1 spot in the world ranking from Luke Donald. Photo copyright Icon SMI.

Rory McIlroy and Luke Donald have been passing the No. 1 world ranking back and forth this spring like it’s a hot potato. McIlroy reclaimed the top spot with his tie for second at the Wells Fargo Championship last weekend just a week after Donald ousted him in New Orleans. The No. 1 ranking has now changed hands five times between the two of them in just the last 19 weeks.

One of those times, McIlroy took over on a week he didn’t play. That’s the kind of thing that causes critics to lampoon the system as mysterious and overly complicated. But there’s really nothing mysterious about that aspect of the ranking. It’s simply a built-in aspect of any system that is based on a fixed number of previous weeks, as the world ranking is (in this case, 104 weeks, but the same thing would happen in a 52-week system).

In such a system, players lose points for the week that falls out of the ranking window (and in the world ranking, points earned within the ranking period are also reduced on a sliding scale). If two players are close in the ranking and neither earns points in a given week, they will swap positions whenever the player in front loses more points than the player behind. Nothing weird about it.
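A toy sketch makes the mechanism concrete. The numbers and players below are hypothetical, and the real world ranking also applies a sliding-scale decay and divides by events played; this keeps only the rolling window, which is all that's needed to show a lead changing on an idle week.

```python
def window_total(points_by_week, current_week, window=104):
    """Sum the points a player earned inside the rolling window.

    Simplified: the official ranking also decays points on a sliding
    scale and averages over events played; this keeps only the window.
    """
    return sum(pts for week, pts in points_by_week
               if current_week - window < week <= current_week)

# Hypothetical players: X's big result came in week 1,
# Y's points are more recent.
x = [(1, 60), (50, 45)]   # (week earned, points)
y = [(40, 50), (80, 50)]

# Week 104: X leads 105 to 100.
# Week 105: neither plays, but X's week-1 points fall out of the
# window, so X drops to 45 and Y takes over -- without hitting a shot.
for week in (104, 105):
    print(week, window_total(x, week), window_total(y, week))
```

Any fixed-length window behaves this way, whether it spans 52 weeks or 104.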

The only way to avoid this would be if you wiped the slate clean at the beginning of the year and compiled the ranking cumulatively, without reducing any points as the year went on (in other words, like the money list or FedExCup standings). But that’s not effective as a ranking system until late in the year. The world ranking is trying to determine who are the best players, not who has been playing the best for x number of weeks when x is a relatively small number.

What makes the most sense is a system that covers a long enough time period to give better players an edge over temporarily hot players, but gives more weight to recent events in order to give current form greater consideration. That’s just what the world ranking does.

So, in that sense, the world ranking has it just right. (Some have suggested cutting the ranking period from two years to one year, but two tends to provide a more reliable gauge of player quality. There are circumstances in which this is not ideal, such as a player dropping completely off a cliff after playing well for a year. But there are no systems that are ideal under all circumstances, and overall a weighted two-year period is the best.)

Of course, there’s more that goes into the world ranking recipe than the time period. Trickier problems are how much weight to give each tournament, particularly when different Tours around the world are involved, and how much weight to give each player’s performance within a tournament (such as giving x points for a victory, y points for second place, and so on).

Recent research by a pair of college professors has tackled those issues. It’s been called an alternate ranking system, but that’s not exactly the way coauthors Mark Broadie of Columbia (one of the men behind the PGA Tour’s excellent strokes gained/putting stat) and Richard J. Rendleman of Dartmouth view it. In a presentation at the World Scientific Congress of Golf Conference, they stated, “We’re trying to add to the dialogue some evidence-based analysis. The powers that be may have their reasons for adding bias [to the world ranking], but it would be nice to know if it’s biased, how much is the bias. They may decide they want that.”

In other words, the goal is more to check the existing system for bias than it is to create a new system.

It might sound like the professors are trying to make the world ranking devised by the International Federation of World PGA Tours (based on an original system devised by IMG founder Mark McCormack) look silly by saying it isn’t mathematically based and that the governing bodies might want to have bias. But the statement is to be taken at face value. There may indeed be reasons for choosing a less mathematically rigorous system.

In fact, most people in the golf world—players, commentators, and officials alike—probably would opt for a system where points are given for places in the standings (with no points beyond a certain place for most tournaments) rather than figured by strokes as in the Broadie-Rendleman system. The world ranking gives 100 points for a victory in a major championship and 60 for second place (then 40 for third, 34 for fourth, etc.). That’s the kind of premium for victory that squares with the way most observers rate players. There’s also a victory premium, but less so, in regular tournaments. In the professors’ system, a stroke that makes the difference between a tie for 30th and a tie for 40th means the same as a stroke that makes the difference between winning and finishing second.
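The victory premium in those numbers is easy to quantify. Using the major-championship points cited above (100, 60, 40, 34), the gap between first and second is twice the gap between second and third, and more than six times the gap between third and fourth:

```python
# OWGR major-championship points for the top four places,
# as cited in the article.
major_points = {1: 100, 2: 60, 3: 40, 4: 34}

# Marginal value of finishing one place higher.
gaps = {place: major_points[place] - major_points[place + 1]
        for place in (1, 2, 3)}
print(gaps)  # the 1st-vs-2nd gap dwarfs the others
```

In a purely stroke-based system, by contrast, every stroke is worth the same no matter where in the standings it falls.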

But the strength of Broadie-Rendleman is that it doesn’t rely on any outside assessment of the strength of field of various tournaments. Through a sophisticated mathematical model, it analyzes scores in various tournaments and spits out a single “skill” number or rating for each player. Golfers are all connected to each other—indirectly, if not directly—by scores in common tournaments. Even if Player A and Player B have not played in the same events, they might have each played with Player C. Put all of those myriad combinations together, and you are able to compare golfers from different Tours around the world.
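A minimal version of this idea can be sketched as a least-squares fit. The model and data below are illustrative assumptions, not the professors' actual (far more sophisticated) method: each score is treated as player skill plus event difficulty, and because players A and B each share an event with player C, the fit can compare A and B even though they never meet.

```python
import numpy as np

# Toy data: (player, event, score). A and B never share an event,
# but both share one with C, so all three are connected.
rounds = [
    ("A", "E1", 70), ("C", "E1", 72),
    ("B", "E2", 69), ("C", "E2", 73),
]

players = sorted({p for p, _, _ in rounds})
events = sorted({e for _, e, _ in rounds})

# Model: score = player_skill + event_difficulty.
# Build the design matrix for an ordinary least-squares fit.
n = len(rounds)
X = np.zeros((n, len(players) + len(events)))
y = np.zeros(n)
for i, (p, e, s) in enumerate(rounds):
    X[i, players.index(p)] = 1.0
    X[i, len(players) + events.index(e)] = 1.0
    y[i] = s

# The model is only identified up to a constant, so use lstsq
# (minimum-norm solution) rather than a matrix inverse.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
skills = dict(zip(players, coef[:len(players)]))

# Lower is better, like a scoring average: B beats C by 4 strokes
# per round, A beats C by 2, so B rates 2 strokes better than A --
# even though A and B never played together.
print({p: round(s, 2) for p, s in skills.items()})
```

Only the differences between skill numbers are meaningful, which is exactly why no outside judgment of field strength is needed: the shared events anchor every comparison.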

That’s the Holy Grail for a ranking system that encompasses events on all the Tours. The world ranking system starts with a set of assumptions about the strength of various Tours and assigns points to each tournament based on the number and ranking of players in the world top 200 and the home Tour’s top 30 in the field. There are bound to be imperfections in this system, further magnified by awarding a minimum number of points to each Tour’s flagship events.

None of that is needed in the professors’ system—all you need is the players’ scores in each event, and the math does the job of fairly comparing them. In a way, it’s simpler—although the math involved is above most people’s heads. And the results give the best true indication of the relative strength of players on various Tours. Broadie and Rendleman analyzed the 2009–10 data that produced the final 2010 world ranking and found a significant bias against PGA Tour players.

The largest source of the bias is the “home Tour” points added to each event’s field strength based on the entry of top-30 players from that Tour. That wasn’t part of the original world ranking system, but was a later addition at the behest of the lesser Tours. Eliminate those, and stop awarding a minimum number of points to select events, and most of the bias would be gone. (While they are at it, they could address the problem of giving too many points to events with very small fields of top players, especially unofficial events like Tiger Woods’ Chevron World Challenge.)

This is where “the powers that be may have reasons for adding bias” comes in. They may indeed want international players to have a better chance to qualify for World Golf Championship events. Japanese players may have a case that their performance in big events—and thus their much-worse rankings in the professors’ system—is hurt at least somewhat by the fact that all of them are “road games.” And is it really so bad to virtually ensure that the top Japanese player will be in the top 50 and thus qualify for the majors?

In any case, Broadie and Rendleman have done a service by showing just how great the bias is. At the very least, it gives the PGA Tour a case to take to the other Tours to argue for adjusting the world ranking to reduce the bias, even if they don’t want to eliminate it.

The professors’ system rewards consistency more and winners less than the way players are typically judged, and it might not value major championships quite as highly. It’s more like a season-long (or two-season-long) adjusted scoring average where the adjustment is highly sophisticated. When it comes to judging top players, most people want to see the “W’s,” especially in majors. Broadie-Rendleman also does not weigh recent events more heavily, so current form is not a factor the way it is in the world ranking.

Could the professors’ system be tweaked to include a victory bonus, more weight to major championships, or less weight to events that happened longer ago? Probably. That could be the best approach, as there’s a certain beauty—and definite utility—to a mathematical model that can evaluate scores from all around the world and essentially normalize them.

Will that be enough to overturn an entrenched system? Probably not. But the weight of the Broadie-Rendleman argument, and the authority of its creators, could be enough to lead to some improvements in the world ranking. PGA Tour players looking to crack the top 50 in the world certainly hope so.
