
Part two of this week's rankings Rally continues, and concludes, with Kamakshi Tandon's dissection of the current system and the weird results it can produce. (You can read part one here.) I won't attempt a response; there's nothing I could possibly add. Hope you enjoy this, and learn as much as I did from it. I'll be back tomorrow.

*

Steve,

That’s a very incisive remark by Ilie Nastase, especially since he was one of the first to have the No. 1 ranking hung over his head. Your point about the psychological impact of rankings is a very interesting one. It definitely creates a very explicit and fluid kind of hierarchy we wouldn’t otherwise have, with costs and benefits. We’ve seen the impact of it in very stark terms recently with Jankovic, Safina, Wozniacki and even Ivanovic to an extent. If they hadn’t reached No. 1, their achievements would have been perceived both by themselves and by others as praiseworthy improvements rather than a lack of something more, and they might have been able to keep pushing ahead instead of trying to back something up.

Would they have been No. 1 under the old system? I can’t say for sure, but I’m told just barely for Jankovic and Safina, and no for Wozniacki. There was a great collection published at the end of last season of various ways of ranking the players, and here are the year-end rankings under the current Best Of system, the 1996 average, and the 1997 cumulative:

2010 (current Best Of)

  1. Wozniacki, Caroline
  2. Zvonareva, Vera
  3. Clijsters, Kim
  4. Williams, Serena
  5. Williams, Venus

2010 under the 1996 system (Average)

  1. Clijsters, Kim: 218
  2. Zvonareva, Vera: 173
  3. Wozniacki, Caroline: 172
  4. Williams, Venus: 161
  5. Williams, Serena: 160

2010 under the 1997 system (Cumulative)

  1. Wozniacki, Caroline: 3791
  2. Zvonareva, Vera: 3282
  3. Clijsters, Kim: 3058
  4. Stosur, Samantha: 2667
  5. Schiavone, Francesca: 2373

I guess we can safely say that Zvonareva should have been No. 2 no matter what. As I said, these ranking variations create fairly small differences, but sometimes those differences can be significant. We want to know that they’re the result of legitimate criteria. Hence my constant drumbeat: design the system, not the results.
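
To make the difference concrete, here’s a minimal sketch of the three methods, using invented point totals rather than real WTA numbers and a simplified ‘best 16 events’ cutoff in place of the actual Best Of rules:

  # A toy comparison of three ways to aggregate the same season of results.
  # Point totals are invented, and "Best Of" here just means counting a
  # player's best 16 events, a simplification of the real rules.

  def best_of(points, n=16):
      return sum(sorted(points, reverse=True)[:n])

  def average(points):
      return sum(points) / len(points)

  def cumulative(points):
      return sum(points)

  # Two hypothetical seasons: one player grinds through 24 events,
  # the other plays only four big ones and does well at them.
  seasons = {
      "Grinder": [900, 470, 470, 470] + [280] * 20,   # 24 events
      "Part-timer": [2000, 1400, 900, 900],           # 4 events
  }

  for label, method in [("Best Of 16", best_of), ("Average", average), ("Cumulative", cumulative)]:
      order = sorted(seasons, key=lambda p: method(seasons[p]), reverse=True)
      print(label, order)

The same two hypothetical seasons come out in a different order under the average than under the other two methods, which is roughly the pattern in the tables above.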

For me, any decent tennis ranking system has to have three things:

1. Circularity: It should run over the course of a year instead of starting again at the beginning of the season, which skews the early results too much. Seems obvious, but remember that the ATP in 2000 actually wanted the Race to take the place of the regular rankings, another case of someone taking leave of their senses. (Measuring over two years, the way Nadal wants, or even just six months is also possible, but I think a year makes perfect sense for tennis given that it coincides with the season.)

2. A level playing field: Finding some way to compare players on an equivalent basis, so that Serena’s four tournaments aren’t judged beside Wozniacki's 24 without accounting for the difference. This can be an average or Best Of whatever, or something else. I’m split between the two, but they both basically do the job without being too complicated.

3. Quality control: In practice, this means bonus points. Winning a tournament beating Serena, Venus and Clijsters along the way is not the same as winning a tournament beating Pironkova, Radwanska and Kvitova along the way, and the rankings should reflect this.

The current rankings have the first two, but not the third. The ATP got rid of bonus points as part of its sense leave-taking in 2000, and the WTA a few years ago. I’m not sure why, except that they obviously made calculating and anticipating the rankings a lot harder. This is also one of the main things I’d point to when talking about the problems with the system. Wozniacki has beaten very few Top 5 players, and the system doesn’t care at all. Based on what I remember of the men, bonus points for beating a quality opponent could be as much as 20 percent of a player’s total, or even 30 percent in one or two cases.

(I say all this, but should mention that bonus points would be less important on the women's tour these days because anyone can beat anyone on a given day. Structurally, though, they’re important to have.)
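
To illustrate the idea, here’s a rough sketch of round points plus a bonus for quality wins; the bonus scale is invented for the example and isn’t the old ATP or WTA table:

  # A rough sketch of round points plus bonus points for quality wins.
  # The bonus scale below is invented; the old ATP/WTA tables were different.

  def bonus_for(opponent_rank):
      # Bigger bonus for beating a higher-ranked opponent.
      if opponent_rank <= 5:
          return 100
      if opponent_rank <= 10:
          return 60
      if opponent_rank <= 20:
          return 30
      return 0

  def tournament_points(round_points, ranks_of_opponents_beaten):
      return round_points + sum(bonus_for(r) for r in ranks_of_opponents_beaten)

  # Two hypothetical titles with the same round points but very different draws.
  tough_draw = tournament_points(900, [3, 4, 7, 25, 40])    # three Top 10 wins
  soft_draw = tournament_points(900, [28, 35, 44, 60, 72])  # no Top 20 wins
  print(tough_draw, soft_draw)  # 1160 vs 900

Same round points, very different draws, and only the bonus points tell the two titles apart.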

The comments seem to split between ‘Slams are what matter’ and ‘why shouldn’t other tournaments count for something’? This is the classic quality versus quantity debate. But if you won a Slam without beating a Top 20 player, it wouldn’t say all that much. Having round points and bonus points is a better way of striking a balance than just increasing the weight given to Slams, which is already considerable.

So while the lack of bonus points is a problem, it’s probably not the real explanation for the incongruity at the top of the WTA rankings these past couple of years. What is? It’s that the top players don’t play a schedule that matches the ranking system anymore.

Apart from the removal of bonus points, the other big change in the rankings has been the creation of mandatory tournaments: about 12 of the 16–18 tournaments required by the ATP and WTA are now must-count events. So in effect, everyone’s being judged on the same schedule. But increasingly, there’s a split between the kinds of schedules players actually play. Some play everything they should (Wozniacki, Jankovic, Safina), while a select few play only what they want (Serena, Clijsters, Henin).

Clearly, the former group has been winning the rankings war, because the rankings are set up to reward a full schedule. But if the top players won’t follow the system, is it time for the system to follow them? It’s a tough choice. On the one hand, the WTA has to have a system that gives players a reason to play regularly, or there would be no tour. On the other, if they impose a system that doesn’t reflect reality, they’re flushing their biggest source of legitimacy, the rankings, down the toilet.

The other question is how this happened. Just hazarding a guess here, but I think the WTA started this process by abusing the power of the rankings: introducing changes meant to influence player behavior even when they made the rankings less effective at measuring performance, like removing the average and bonus points and bringing in mandatory events. That hurt the credibility of the rankings and increased the burden they imposed on players, especially as the demands of the tour were working in the opposite direction, toward a reduction in play.

Now we’re at the point where the big names have all but rejected the rankings in favor of setting their own schedule. The WTA has dropped the number of tournaments from 18 to 16 and changed the commitment requirements for the Top 10 since 2009, but this has been overshadowed by the injury epidemic.

I don’t mean to exclude the men, who have done similar things but have been much more effective in getting the players to play all the events they’re meant to. Because of this, the system has worked pretty well for measuring the top players, though I’m reliably informed it creates a peculiar volatility around the mid-Top 100, since players who don’t have to play the Masters can pile up points at other events and leap up, only to fall again once they have to start counting their Masters results.

The reason I said the men’s system is now actually worse than the women’s is the set of changes made a couple of years ago, when the points for reaching the later rounds increased substantially, as did the gap between the various levels of events (Challengers took a hit in points). It seems like these changes would make it harder to work your way up to the ATP level, and that streaky players who manage a few good results a year get more reward than consistent players who regularly win a couple of rounds. Perhaps this reflects current values, and it certainly means that the rankings will better match our perceptions (one title is a lot easier to remember than four quarterfinals), but I’m not sure it’s a better system. Last year, 27 of the Top 30 men won a tournament (historically unusual). Do Top 30 players tend to win tournaments, or are they Top 30 because they won a tournament?

Again, it may not matter that much in practice right now because there is a bigger gap between the players, but over time this could create gaps rather than reflect them. Maybe that wouldn’t seem bad to most people; protecting familiar names and having a clear hierarchy is probably good business. On the other hand, it’s not good competition.

So there aren’t necessarily clear answers, but I’d like to at least know what thoughts are going into the choices that are made. It’s very hard to have a detailed conversation about rankings with officials. When the ATP changed its points table two years ago, I tried to talk about it with Justin Gimelstob (OK, so he wouldn’t have been my first choice, but he is an ATP Board member). I asked generally about the change, and he replied that they had had a board meeting in November where they were presented with some algorithms and picked out one.

“So why’d you pick that one?” I asked.

“They showed us several algorithms and we felt this was the best one.”

“But why?”

“We chose from the ones that were presented to us.”

“But why? Why’d you pick that one?”

“We decided it was the best one.”

“But why?”

After about a half-dozen ‘but whys’, I gave up. I suspect it was to match the prize money changes, which increased the money for the later rounds. But that would have been an easy answer to give instead of all that circular talk, which only made me conclude that he was completely lost. The irony is that the players balked at the prize money changes and they were partly reversed, but the points changes stayed as they were.

So Wozniacki could be back at No. 1 again tomorrow. Don’t look at me. It’s the algorithm, you know.

Kamakshi