Saturday, March 16

What a Great Men's Championship at 2013 Worlds

... said no one ever.

Figure skating fans were already worked up over the scoring of four-time World Pair Champions Aliona Savchenko and Robin Szolkowy earlier in the day, when major errors on both individual jumping passes still earned them 2nd place in the free skate and the silver medal overall.

And just when we thought the 'controversy of the competition' was out of the way, Patrick Chan fell all over the ice in his free skate yet still won the overall World title by two points over Denis Ten, who had the skates of his life in both programs. Chan's free skate was deemed only 5.51 points worse than Ten's, and he held a substantial lead following his amazing, world-record-setting short program.

Current skaters, former skaters, and fans alike heated up Twitter last night. American skater Christina Gao created the hashtag #BSWorlds13, a play on the actual tag being used for Worlds, with the F replaced by a B and the S taking on a slightly different meaning... you get it.

So why are so many people who dedicate time to the sport so fired up? In the end, Chan did land two quadruple toe loops in the free skate and had wonderfully choreographed programs full of intricate in-betweens that he made appear seamless. Shouldn't that be rewarded? Yes, what he did *successfully* absolutely should be rewarded.


Figure skating, whether we want to admit it or not, is a psychological game of scoring for both fans and judges.

How many times have we been totally bored by skaters one year, and then all of a sudden they come up with an interesting or unique piece of music and we decide to love them and their 'program', even if the quality of the skating has not improved at all?

How many times has someone skated to Turandot or any other warhorse and people exclaimed what a beautiful program it was, when it really was a horrible interpretation of a great piece of music?

How many times have former Champion skaters delivered poor performances or watered down programs in both the technical and program component senses but still received marks that put them right near the top?

How many times has a quadruple toe loop or triple Axel sent electricity through the building, which, in turn, seems to send the judges way up with their program component scores versus skaters who have lighter jumping content but may be much, much stronger skaters?

Let's stop there. Andrei Rogozine skated in the second group of four last night in the men's championship. He was skating on home ice, and produced a brilliant effort as far as the jumps were concerned: a quad toe loop and two triple Axels-- one as a three-jump combination. He received 68.22 as his final (factored) components score-- an average of 6.82 across the five sections that are marked.

Earlier in the night, both Peter Liebers and Jorik Hendrickx skated strong, quality programs with much more choreography and transitions than Rogozine, but they were scored at 66.72 and 60.06 points, respectively, on their program component sections. Liebers skated two slots before Rogozine, and Hendrickx was in the first starting group.

There are several phenomena happening here: in addition to the other points I listed above, we all know that there seems to be a strong correlation between start order and the raising of program components throughout a competition. Since the top skaters according to the 'ISU World Standings' compete late in the short program, the general assumption should be that those skaters do indeed deserve the highest components, right?

Well, the judges sure seem to think so. Or is it really that they truly *think* that?

I think there is a mix of situations happening. While the IJS was ideally created to score the skater against a point system (as opposed to skaters against each other), the influence of the big technical tricks, the crowd response, and the reputation of the skater all seem to factor into the final marking of the program components.

For example, Liebers was great technically and had a very, very strong program choreographically, but was reserved in his interpretation of the music. This meant that the crowd reaction was pretty 'normal'. Rogozine, skating at home, lit up the crowd but had even less going on in terms of interpretation of the music.

The judges have about a minute or so after a skater ends their program to check any questionable elements or change any scores AND enter their program component scores. I was also judging this in real time, calling my own levels for each element, assigning grades of execution, and issuing program component scores-- all in the time the judges have to do just two of the three.

When a judge has to be so fixated on the 13 elements a man has to perform within four and a half minutes, the components can become something of an afterthought. And this is why I believe we see so many instances of 'components by starting order' or by technical effort. The crowd was fired up for Rogozine in that one minute the judges have to enter their marks. Psychology game-- for some of them.


There have been many different iterations of the IJS in its nearly ten-year existence. At one point, the panel was split in two: one side assigned the GOE of the elements (while levels were still called by a three-person technical panel), and the other assigned scores for the five program components. That practice lasted one competition, and, if I remember correctly, the number of judges was deemed too costly for the ISU to send to each event.

The ISU holds workshops, and the technical committee publishes criteria for the scoring of components-- for example, what 5.00-level skating skills would look like versus 8.00 skating skills. Yet we still see the panel of nine judges fluctuate greatly in their component marks while scoring the exact same performance. Too much to do at once, and/or marking by start order or reputation.

Here's a task. Watch a program and ignore counting the rotations of jumps, the actual jumps themselves, the spin rotations, etc. Don't have a battle in your head whether jump X should have been a -1 or -2. Just watch the skating quality, what the skater is doing between the elements and how they are linked together, how much the skating matches the music, and things like that. You might see a totally different program-- for better or for worse.

Whether 'experts' of the sport or not, I think the same thing would happen for the judges, and that is why I recommend, once again, that the panels be split up: five judges for components, and five judges for the GOE marking (which doesn't seem to differ as much as the components do). That is one judge more than there is now-- and actually three fewer when you consider that 13 total are chosen and then rotated between segments of the competition. The ISU should produce some type of test for judges who would like to score components, where they have to name, for example, all of the criteria of skating skills from a score of 1.00 to 10.00. Performance/execution and interpretation are always going to be subjective, but not to the point that a skater completely ignoring the music receives a 7.00 or higher for the way they 'interpreted' it.

Here's something else you can try. Watch a competition. When it comes to the highest-ranked skaters, use the following formula for your own program component scoring:

Final Group: Start your Skating Skill mark anywhere from 7.50 to 9.00. Your decision.

From there, -0.75 for transitions, -0.25 for performance execution, +0.25 for choreography, and +0.25 for interpretation from that original score. Odds are you will be right in line with the judges!
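To show just how mechanical that 'predictability formula' is, here is a quick sketch in code. The function name is mine and this is purely an illustration of the pattern described above, not anyone's actual method:

```python
def predicted_components(skating_skills: float) -> dict:
    """Predict a final-group skater's five component marks from the
    Skating Skills mark alone, per the 'predictability formula' above."""
    s = skating_skills  # pick anywhere from 7.50 to 9.00 for the final group
    return {
        "SS": s,          # Skating Skills: your chosen starting point
        "TR": s - 0.75,   # Transitions: almost always the lowest mark
        "PE": s - 0.25,   # Performance/Execution
        "CH": s + 0.25,   # Choreography
        "IN": s + 0.25,   # Interpretation
    }

# e.g. a final-group skater you'd start at 8.50:
print(predicted_components(8.50))
```

Five numbers that should be independent judgments, generated from a single guess.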

The 'predictability formula' of the program components needs to go. We need to see transition marks that are higher than the other four-- which is rare. We need to see interpretation marks much lower than choreography marks if that is actually the case, and so on. Give the judges ONLY these five areas to focus on, and we might see greater fluctuation that is more indicative of what the skater is putting out there.


A comment I see frequently on Facebook and Twitter between fans discussing figure skating is that they cannot understand how a skater still had very high component scores with a fall. As it stands, there are no specific criteria for reducing components after major errors such as falls.

In the early days of IJS (back when it was called CoP), any fall cost the skater 0.50 points off whatever their performance and execution mark would have been. The problem is that only the most elite skaters were kept in mind. What if you are judging a novice or even junior-level competition where the skater would only be worthy of a 3.50 or so to begin with, and then they fall four times? Is it really a 1.50 performance now? Obviously, this isn't going to work.

A comment made to me when I suggested a few ideas to fix the system was that, "I don't think being reduced to Algebra 2 is the answer." But really-- isn't skating already a numbers game? So, my idea for performance and execution would be the following:

For any fall (major or minor), 5% is reduced from the P/E score that was to be given.
If a judge enters a 9.00 for Patrick Chan but he fell once, he would now receive an 8.50 at best. (Round to the nearest 0.25 increment).
If Chan were to fall twice, 10% would be reduced from the initial P/E score. He's now at an 8.00 maximum.

Many fans seem to want performance/execution scores to drop into the 5.00 range for someone like Chan if he's falling all over the place-- but if he falls twice and does eleven other elements beautifully in the free skate, does he really deserve something like a 5.00? I'd be fine with an 8.00 after two falls if the judge deemed his level to start at a 9.00. This section would need some tweaking, but the general concept is presented.
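The proposal is simple enough to sketch in a few lines (my own illustration; the 5% step per fall and the 0.25 rounding are the only rules):

```python
def pe_after_falls(initial_pe: float, falls: int) -> float:
    """Reduce a performance/execution mark by 5% per fall, then round
    to the nearest 0.25 increment, per the proposal above."""
    reduced = initial_pe * (1 - 0.05 * falls)
    return round(reduced * 4) / 4  # snap to the 0.25 grid

print(pe_after_falls(9.00, 1))  # 9.00 -> 8.55 -> 8.50
print(pe_after_falls(9.00, 2))  # 9.00 -> 8.10 -> 8.00
```

Run on the novice example from earlier, a 3.50-level skater with four falls would land at 2.75 rather than the absurd 1.50 the old flat 0.50 deduction produced.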

Falls can also affect, for example, the interpretation-- but that is up to the PCS judge to decide rather than having a 'required' reduction.


Another phenomenon I see often in the scoring of programs is that top-name skaters seem to be given much more generous grades of execution for their elements versus a skater that has not 'paid their dues'.

I got into a discussion with a Twitter follower the other day over the scoring of Yuna Kim's triple Lutz-triple toe short program combination versus Kaetlyn Osmond's triple toe-triple toe combination. Kim entered the jump with no transitions, had nice distance on the Lutz, and had good rotation on the toe loop. All in all, a *good* jumping pass that carries 1.9 more points of base value than Osmond's.

Osmond, on the other hand, did footwork down half of the rink, including turning in the opposite direction directly preceding the jump, carried great speed, and had big distance on both of her jumps. She also did edge-work and a kick to show her balance right after landing the combination. However, her final grade of execution was less than that of Kim's.


Reputation judging. It's basically telling skaters like Osmond that no matter how much more difficult she actually makes the element for herself, good luck earning those top points. Because trust me-- the fact that she's doing any kind of footwork into her *combination*, let alone the fact that it goes in both directions, is insanely difficult. I'd even argue her combination is worth a +3 while Kim's would be at a +1 for me. In the end, Kim would score a 10.8 for the element and Osmond a 10.3. I could totally live with that.

That Twitter follower, by the way, replied to me with something about how Osmond scored really well considering it was her 'first time' [at the World Championships]. Last time I checked, you mark what you see on the ice, not how long the skater has been around or what titles they have earned previously.

Reputation judging in the GOE needs to go, too. Those points really add up more than you think.


Those of you who follow me know that I have been scoring many of the programs myself throughout the week of the World Championships. In some cases, there has been as little as 0.05 or 0.10 points of separation between my scores and the judges' on the *final* segment score. Great, right?

It would be great-- if the scores I produced by following the rules actually made sense to me. Even with Patrick Chan's disastrous skate last night, I had him at 164.28 points for the segment; Denis Ten earned 166.61 points from me. 2.33 points-- that's it. You've got to be kidding me.

The funny thing is that while I was scoring the first half of Ten's program (in which he was completely on fire), I was thinking to myself, "Wow, this score is going to blow away Chan's on my score card." But then when I saw what my final marks produced, it was definitely a WTF moment.

So we already know the judging can be suspect when it comes to program components and the grades of execution, but now we have a flawed system itself, too? Double whammy.

A few years ago, the grades of execution for poorly-completed elements (negative GOEs) were reduced so that trying a difficult element wouldn't be as much of a risk. This likely happened after Mao Asada was getting hammered for her under-rotated triple Axels, as well as male skaters taking themselves out of contention when their quads failed. We hardly saw quadruple jump attempts for a few years.

Then the ISU decided that a fully-rotated quadruple toe loop, for example, with a fall, should earn 7.3 total points for the element. If you also consider the mandatory -1.00 point deduction for the fall, the skater essentially has earned 6.3 points. A base value triple Lutz, on the other hand, garners 6.0 points. Yes, a quadruple toe loop is much, much more difficult than a triple Lutz. But in what other sport does a complete *failure* of an element still earn the athlete most of the points they were attempting?

I'm not saying that a 0 is necessary for the element. What I would suggest is that there is another column added to the GOE scale where elements that are fully rotated, but fallen on, only receive X percent of the initial base value. Last night I suggested 25%, which makes the value of the jump 2.58 points.

I would do away with the 1.00 deduction for falls; instead, the penalty would be applied to the performance/execution score as described above.

Harsh to get a 2.58 for the quad failure, isn't it? The skater is still getting nearly double what they would receive if they'd doubled the jump, and I think that is absolutely fair. There still has to be some risk involved.
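In code, with the 10.30 base value those figures imply for the quad toe, the proposed column looks like this (decimal arithmetic so that 2.575 rounds up to 2.58-- a rounding convention I'm assuming, not one the ISU has specified):

```python
from decimal import Decimal, ROUND_HALF_UP

def fallen_element_value(base_value: str, pct: str = "0.25") -> Decimal:
    """Proposed GOE column: a fully rotated element with a fall keeps
    only a percentage of its base value (25% suggested above)."""
    value = Decimal(base_value) * Decimal(pct)
    return value.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

quad_toe_fall = fallen_element_value("10.30")  # 2.58
double_toe_base = Decimal("1.30")  # roughly what doubling out would earn
print(quad_toe_fall, double_toe_base)
```

Compare 2.58 with the 6.30 a fallen quad toe nets today (7.30 for the element minus the 1.00 deduction), and the risk calculus changes entirely.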


Re-scoring Patrick Chan's program with all of the aforementioned changes, you get this:

4T+3T 16.97
4T 13.16
3Lz 1.50 (FALL)
StSq4 5.60
CCsp3 3.73
3A< 1.65 (FALL)
3Lo 6.61
3F+1Lo+3S 9.70
FSSp3 3.24
2Lz+2T 3.87
ChSq1 3.40
2A 4.06
CCoSp4 4.29

77.78 total for TES now. He had 82.13. By the way-- reputation judging on that final 2A and a lot of other elements here. +3, really? Even +2-- really?!

SS 9.11
TR 8.96
PE 8.61 (-10% because of falls) now becomes 7.75.
CH 9.00
IN 8.96

87.56 for PCS now, while he had 89.28.

Total score isn't that much different, as there is no more 1.00 deduction for falls in my proposal. 165.34 versus the 169.41 he actually got.
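For anyone who wants to check the arithmetic, here is a quick script that reproduces the re-scored totals (the 2.0 multiplier is the standard men's free skate PCS factor):

```python
# Element scores from the re-scored protocol above, in skating order.
elements = [16.97, 13.16, 1.50, 5.60, 3.73, 1.65, 6.61,
            9.70, 3.24, 3.87, 3.40, 4.06, 4.29]
components = [9.11, 8.96, 7.75, 9.00, 8.96]  # SS, TR, PE (post-fall), CH, IN

tes = round(sum(elements), 2)
pcs = round(sum(components) * 2.0, 2)  # men's free skate PCS factor
total = round(tes + pcs, 2)  # no separate 1.00 fall deductions in this proposal
print(tes, pcs, total)  # 77.78 87.56 165.34
```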

Of course, the reputation judging for some of his elements seemed suspect, so in reality he probably should have been even lower.

However, since Denis Ten did not fall, all of his scores would stay the same and he'd still wind up with 174.92 points in the program, and this would be enough to win the title. Not by much, but enough.

People want to complain that my proposal is too much of a penalty for skaters trying the hard elements. I argue that it's just enough of a penalty that results actually make sense.


I have never seen anywhere near this number of skaters voicing their opinions about a result as I did last night. The skaters need to work together to come up with modifications to the system that they think would best reflect the performances being put out there, and then present them to the ISU. Everyone seems to have an opinion about why things aren't fair, but unless you get to the root of *why* it isn't fair, all we have are endless amounts of angry Twitter posts.