Coaches Corner: A Pseudo-scientific Approach to Tryouts, by Brummie

When asked to coach the Great Britain team and run trials, I was keen to ensure that they were as impartial as possible. Not easy. As it turned out, tryouts were the first step on our way to building - in under two years - the most successful Open team ever to come from the UK.

Setting the Scene

To set the scene, it’s October 2010. WUCC in Prague is a few months behind us, and most of those trying out had just finished playing at the European Championships. The best-performing Open team from the UK, Clapham, had finished 10th – far below their aim of semi-finals – and had crashed out of European competition to the Swedish club Skogsyddans for the second consecutive year, despite dominating their domestic competition. The gap between the UK and the rest of the world seemed wider than ever. Serious work was needed if the GB team was to be successful, and merely aping one of the UK clubs wasn’t going to cut it.

Having recently been appointed Head Coach of the GB Open squad by the new manager and long-time Clapham captain, Marc ‘Britney’ Guilbert, I was keen to see the best that the UK had to offer, and I didn’t want to rely on word of mouth (unlike the 2007-8 squad selectors).

I still needed to recruit a group of experienced GB players to help collect data about those trying out, but didn’t want any bias. We put out a general invite to the whole of the UK, and the interest resulted in a tryout with 50+ players in attendance from over 10 different clubs, a wide age range spanning nearly 20 years, and experience levels from GB veterans to people in their second year of playing.

Tryouts needed to be multi-purpose:

  • To find a group of players with the potential to be world champions in just two years’ time;
  • To find out where our strengths (and weaknesses) lay as a team, in order to guide individual player development; and
  • To identify players who not only had raw talent but also the aptitude to be coached – moulded – into a team, since we’d need both for any chance of success in 2012.

It fell to me to come up with a way of assessing the current state of play.

Physical Assessment

I’d spent the previous three years training alongside a team of long jumpers under the tutelage of Colin Harris. Colin works with high-performance athletes on basic movement training to improve athleticism, and is a former military man who nearly made the Olympics. He had proven his expertise to me personally (having turned me into a half-decent athlete), so I wanted his input on the entire team.

Having no normative data on what should be expected of high-level ultimate players, Colin turned to a more seasoned sport – tennis – for numbers. He took guidance from the Australian Institute of Sport (AIS), which has standards for a number of physical tests, as follows:

  • Countermovement jump test (single & double leg): essentially, jumping as high as possible with a dip & arm swing
  • Plyometrics: single-leg hops & double-leg multi-jumps for distance
  • Overhead medicine ball throw
  • 50m sprint, with timing gates capturing the 5m / 10m / 35m splits
  • 4 x 50m shuttle time
  • Arrowhead agility test
  • Bleep/Beep test

Where scores were unavailable from the AIS, other equivalent normative data was used. All players were shown what they were meant to do, allowed several practice runs if they wanted, and then given two scored attempts at each test, with the better of the two used in the analysis. Afterwards, we collated the data and compared it to our targets.
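For readers who like to see the mechanics spelled out, here is a minimal sketch (in Python) of the kind of comparison we ran on a spreadsheet: each player’s best attempt graded against gold/silver target standards. The test names and target values below are invented placeholders, not the actual AIS-derived figures we used.

```python
# Hypothetical sketch: grade each player's best attempt against target standards.
# Target values are placeholders, NOT the real AIS numbers.

# For sprints, lower is better; for jumps, higher is better.
TARGETS = {
    "5m_sprint_s":  {"gold": 1.00, "silver": 1.05, "lower_is_better": True},
    "10m_sprint_s": {"gold": 1.75, "silver": 1.85, "lower_is_better": True},
    "cmj_cm":       {"gold": 60.0, "silver": 55.0, "lower_is_better": False},
}

def grade(test, score):
    """Return 'gold', 'silver' or 'below' for a single test score."""
    t = TARGETS[test]
    better = (lambda a, b: a <= b) if t["lower_is_better"] else (lambda a, b: a >= b)
    if better(score, t["gold"]):
        return "gold"
    if better(score, t["silver"]):
        return "silver"
    return "below"

# Each player's best of two scored attempts (made-up numbers)
results = {"Player A": {"5m_sprint_s": 1.02, "10m_sprint_s": 1.90, "cmj_cm": 58.0}}

for player, scores in results.items():
    summary = {test: grade(test, best) for test, best in scores.items()}
    print(player, summary)
```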

It didn’t make for great reading. It turns out that ultimate players aren’t really at the expected standard, even for elite amateur sports.

Key findings (Oct 2010):

  • Poor jumping technique in many players, to be addressed via basic plyometric drills & coaching
  • Right-leg activities (including the left-turn agility drill, which pushes off the right leg) tended to score better than left-leg activities, except multi-jumps, which may be down to increased hip strength in the left leg; all signs of imbalance
  • Poor flexibility in many athletes resulting in poor running & jumping mechanics
  • Lack of core activation in many players
  • 90% of those who hit the 5m sprint target failed to hit the 10m target – a sign of power with poor technique – while 20% of those who failed to hit the 5m target still hit the 10m target – a sign of low power output. Both issues need to be addressed, with training targeted at each group
  • 50% of those who made the 10m targets failed to hit the 35m target; they would require coaching in the change of mechanics from sprint phase to stride phase via body weight transfer training
  • With fewer than 6% achieving all running targets, Colin was concerned that our players weren’t used to maximal-effort training. This proved to be a problem for some players: the nature of ultimate is that we tend to take short rests compared to sprinters, and the “must do more volume” attitude is rife amongst ultimate players, yet maximal efforts actually require significant rest periods. Adjustment via education!
  • Poor sprint mechanics & running form, to be addressed via coaching & running drills
  • Poor speed endurance in the 4 x 50m; Colin would prefer a target of 28s yet only a few hit 30s. Players require training in change of direction when at max speed, and far more interval training.
  • Those who performed well on the Bleep test were those who performed poorly on the power activities, indicating a lack of fast-twitch muscle

Again, I felt a lot more confident being able to provide feedback to players – including making selections – based on facts and figures rather than purely qualitative judgements. Colin was also able to see some imbalances – mostly between left and right results – and address them through the conditioning sessions he wrote for us. Given players with such a wide range of ability, Colin had a tough job putting together a general-purpose programme that addressed the key areas identified, but fortunately he was able to provide a variety of workouts, so each individual could do more of the sessions that targeted their weaknesses and fewer of those that targeted their strengths. This put us on the road to strong overall improvement.

GB Open Test Data, 2010-11

You can see the results here, along with the results taken later that season (note: we were forced to run outside in June, and a headwind meant the longer distances were run slower than expected). While the figures wouldn’t stand up to academic rigour (we had to change location and equipment, the players’ understanding of how to perform the tests improved, and not all the same participants took part in both sets of tests), the June results were nevertheless encouraging; we’d got the message across that we were serious about being world-class athletes and were setting the bar high. We didn’t repeat the process in 2012 – it was too time-consuming, there were too few new players, and quite frankly we’d already instilled the ethos we were after. But it definitely worked for us to have a clear understanding of what we were working with, and it meant that Colin could write appropriate conditioning sessions for a wide range of issues.

Colin’s programme was also having clear results – just compare the percentage of players achieving gold or silver standards in October vs June. In October, only seven players made the gold standard over 5m, and the same number failed that test. By June, all players bar two made the gold standard. Bear in mind that these players were already top-calibre ultimate players – they already did “fitness” work for ultimate – but they had never been assessed and had programmes tailored to their needs before.

Field-Based Skills Assessment

So, how to make sure the players who try out are tested fairly?

Firstly, new game-based drills were designed, rather than using existing ones, because drills already used as standard by some – but not all – of the players would give those players an unfair advantage. New drills provide a level playing field. In addition, new drills help to assess who can understand quickly, who can pay attention, and who can adapt to a new situation; i.e. “coachability.”

Why drills, and not just scrimmages? I wanted to force players to show me their skills, not simply show off their strengths and hide their weaknesses. By forcing them into game-like scenarios, albeit in a confined drill setting, we could see how comfortable they would be in a variety of situations. We also ran scrimmages later in the day to give players free rein.

Here is an example of a drill designed to test each player’s ability to assess the field; offensive player B has their back to the field where two defenders cover the receiver C. The set up is shown on the left in Fig. 1. B cuts directly towards A, who throws a gut pass. Meanwhile, the two defenders have decided between them which one will play defence on the long cut that C is about to make; the other defender doesn’t take part in the drill once the disc is checked in. B must then turn, assess where the defender is, and throw long to C by choosing a throw that avoids the defence; in the example here, Fig. 2 shows the defender covering from the left side, so the throw should go to the right shoulder of the receiver.

[Figure: GB tryout drill – set-up (Fig. 1) and example throw (Fig. 2)]

To reduce bias from the coaches assessing the players, I assigned each coach to grade one component of each drill: one graded the throwers, one graded the cutters, one graded the defenders. By having the same coach grade every player on a single attribute, I was relatively confident that all players would be graded in the same way. It also prevented any one coach from providing data on an area that wasn’t assigned to them, limiting potential bias when grading club team-mates.

Here’s an example of a grading sheet handed to the captains:

[Image: break-mark assessment sheet]

We also set the scene; every player was to imagine they were at the European Championships, representing their country in a big game. Easier said than done, yet each player knew that their chance to play for GB was on the line; tryouts are the closest thing to a big tournament in that respect. We’d need to use other tricks to keep intensity high during the middle of a long season, but at tryouts, people brought energy and passion. We wanted them to bring maximum intensity on the field.

Then we set them at each other. It made for an entertaining and enlightening day. By running several rotations of each drill, then taking mean scores for each category, we were able to build up a picture of the kind of player each one was. Some of the results were surprising – that “handler” turned over more than you’d initially suspected, or that “defender” scored below average on defence, for instance. Any obvious trends led directly to feedback to the players on the areas we’d like them to work on in the off-season.

In the end, I simply took the mean of all scores and ranked the players top to bottom. The selectors had also been making individual notes on all players; these were collated and discussed. We were also able to group similar metrics from multiple drills into broader themes like “defensive skill” or “cutting ability”, and some scores began to stand out. Comments from multiple selectors, now backed up by data, could be used to draw conclusions about the strengths of individuals. I felt a lot more confident cutting someone because I could show that they had consistently scored low in comparison to others. I felt that the entire process was vindicated when the top-ranked player was Christian “Wigsy” Nistri, one of the most dominant players in the country at the time; the second-ranked was a relative unknown who went on to captain the GB World Games team in Cali three years later: Rich Gale.
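To make that collation step concrete, here is a rough sketch of how grading-sheet scores could be averaged, ranked and rolled up into themes. We did this in a spreadsheet; the Python below is only an illustration, and the drill metrics, theme groupings and scores are made up.

```python
# Hypothetical sketch of the collation step: average each player's drill grades,
# rank the squad, and roll related metrics up into broader themes.
from collections import defaultdict
from statistics import mean

# (player, metric, score) tuples as they might come off the grading sheets
grades = [
    ("Player A", "break_throw", 4), ("Player A", "deep_defence", 3),
    ("Player B", "break_throw", 2), ("Player B", "deep_defence", 5),
]

# Invented theme groupings for illustration
THEMES = {"throwing": ["break_throw"], "defensive_skill": ["deep_defence"]}

per_player = defaultdict(lambda: defaultdict(list))
for player, metric, score in grades:
    per_player[player][metric].append(score)

# Overall ranking: mean of all of a player's scores, highest first
overall = {p: mean(s for scores in m.values() for s in scores)
           for p, m in per_player.items()}
for player, score in sorted(overall.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{player}: {score:.2f}")

# Theme scores: mean of the metrics grouped under each theme
for player, metrics in per_player.items():
    themes = {t: mean(s for m in ms if m in metrics for s in metrics[m])
              for t, ms in THEMES.items()}
    print(player, themes)
```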

In short, most of the top performers were people we’d expected, but this process also drew our attention to some people who hadn’t previously been on our radar. It’s much harder to spot the players who quietly get on with doing the little things right than it is to spot the players who make big plays, so this process was already getting results.

The committee making decisions comprised Marc Guilbert (the GB manager), myself as coach, and a small group of captains, all of whom had represented GB Open in 2008. Rich Gale proved to be an excellent student with a likeable personality and a never-ending interest in self-improvement, and he became a fantastic all-rounder for the O line. Gale was just one example of a little-known player with bags of potential and an incredible work ethic; we also took Alex Brooks, one of the U19 squad, purely because Marc and I felt it was important that GB gave some experience to younger players, even if they wouldn’t necessarily play much in this rotation. Brooks turned out to be one of our most improved players – in many ways he was a role model of the ideal teammate – and he easily cemented his place on the team, something that none of the captains expected.

Some subjective opinions were used; one player was cut because it was felt that they were not competitive enough, despite scoring in the top half. We had no metrics for traits like “competitiveness”, so we still had to rely on subjective judgements from our assessors. Another well-known player, Justin Foord, performed surprisingly poorly, but we gave him the benefit of the doubt; he turned out to be one of the best players on our 2011 roster and a leading scorer for us in 2012. Fortunately, all the selectors knew Justin’s abilities, and we wrote off his performance as a bad day at the office – or possibly a weakness of our selection methods.

Of course, this is a subjective view; if we’re willing to look the other way when a strong player plays badly, then how can we be sure we weren’t seeing others having a good day too? This is largely irrelevant in Justin’s case because he was still comfortably inside the cut-off point, and it gave me a list of things that he would need to work on as a player. It’s also no less subjective than just watching people play and making a judgement, so taking data and using it to drive a decision in combination with the subjective opinions of a selection group is still better than not taking the data. The grading sheet still needs to be interpreted, of course.

Analysis Of Our Process

I also wanted to use the data to guide my initial coaching plan. How to deal with all of the different data points? Quite crudely, I took mean scores for all “throwing”-based sections, and likewise for “defence”, “downfield offence”, etc., and normalised the results to compare categories on a 1-5 scale.
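As an illustration of that normalisation step (the exact method isn’t important), here is a short sketch using simple min-max scaling onto a 1-5 range; the category means shown are made-up numbers, not our actual results.

```python
# Hypothetical sketch: rescale squad-wide category means onto a common 1-5 scale
# so that "throwing", "defence", etc. can be compared directly. Min-max scaling
# is shown as one simple option; the raw means are invented.
def to_1_5(values):
    lo, hi = min(values), max(values)
    if hi == lo:
        return [3.0] * len(values)  # no spread: park everything mid-scale
    return [1 + 4 * (v - lo) / (hi - lo) for v in values]

category_means = {"throwing": 2.1, "defence": 3.4, "downfield offence": 2.9}
scaled = dict(zip(category_means, to_1_5(list(category_means.values()))))
print(scaled)  # e.g. {'throwing': 1.0, 'defence': 5.0, 'downfield offence': 3.46}
```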

It was clear to us immediately that throwing was below the standard required to compete at the top, and Colin’s assessment showed that we were below par athletically. Given how long it takes to improve throwing & fitness, these became our top priorities. Other aspects – team structures, cutting patterns, handler resets – were slowly addressed over the course of the 18-month programme; we left zones until the end of year one, for instance. Not because we wanted to, but because there was so much to cover as a team. All of our players were lacking in some critical aspects, and we needed to cover even the fundamentals repeatedly.

It paid dividends though, as the team dominated most opposition at the European Championships at the end of year one (and we’d have taken gold if not for those pesky Swedes; perhaps our decision to delay talking about zones left us unprepared for them?). Certainly our man-to-man defence was capable of crushing most opposition, and few teams could match us athletically at WUGC – and none could at EUC. As one of our captains, Dave Pichler, pointed out to the squad: “being athletic doesn’t win you games, it gives you a platform to win games. It’s the entry ticket to the competition”. The focus on throwing paid dividends too. Feel free to watch back the NGN or Ulti.tv footage from Japan and you’ll hear a number of times when the commentators remark that our throws look ugly but go exactly where they need to be for the receiver to reel them in.

Did we get the right people?
Well, one thing is certain: the drop-out rate was very high. Whether we failed to screen for commitment, whether people just didn’t enjoy our training sessions, or whether they didn’t believe the team could achieve anything, a lot of people dropped out. From an initial squad of 40+, we made two cuts, brought in two new players, and took 26 to Japan. The others all dropped out; that’s about a third who quit. Some left due to injury, some due to becoming first-time parents or other “real world” problems. Two said that they didn’t enjoy the team and didn’t want to be part of it, and a few said they didn’t think the team could achieve anything and quit. It’s a high attrition rate, but given the demands we were making, it was unsurprising.

What did we miss?
Well, as the WUGC final indicates: mental strength. Why does an entire team suddenly start dropping simple catches or turfing open-side throws? We’d played through worse wind against Japan and Australia and scored plenty. Revolver definitely punished our mistakes, but they were our mistakes. Credit to them for putting our resets under pressure, but we can’t credit them with making us turn over on uncontested passes. We were a young team that fed on confidence, and in most games we went behind and came back through the strength of our D line.

Clearly, we didn’t come up with a perfect way of assessing everything that makes a great ultimate player on our first attempt. Some of the players that we knew from experience were big play-makers fared poorly on our tests. The question is simply: why? Were they just having an off-day? Was our bias being exposed (i.e. were they not actually as good as we had previously thought)? Were some people consistently getting tougher match-ups than others? More likely, there are a number of aspects that our rigid, formal drills could not assess; we were very aware that ultimate is a game that requires emotion to play well. Drills can replicate some aspects of a game, but not all.

By forcing the scenario, we were unwittingly putting a negative bias on players who make good decisions about which scenarios they manufacture and which they don’t; for instance, some people never break marks in games because they always dump, while others always try to break the mark and sometimes turn over. The very fact that one of our top players, Justin, scored so badly indicates that the methods are flawed… even if it’s just because Justin is easily bored by drills. The cost-benefit analysis is impossible to measure at the level of an individual – even if the team objective is to break marks – because having a player who realises their own weaknesses is an asset in itself.

Would I do it again?
Given the time, definitely. The physical assessment is a no-brainer, but only if you are going to use it to show players their weaknesses, track improvement, or feed the scores into selection. If not, then it’s a waste of time and money; timing gates aren’t that cheap to hire! The skills assessment was good, but we were judging entirely off a single day’s play. For a future GB cycle, I’d recommend multiple tryout dates with a progression of scored assessments that can go much deeper than we were able to. Even our most experienced captains made some judgements that differed markedly from what the scores indicated – bias is everywhere – so if you truly want a fair tryout, then scoring is a must. Otherwise, you’re going to miss something.

As it was, we were able to use the “objective”, data-driven results to kick start our development programme. Because we’d taken the time and effort to assess each of our squad’s abilities, provide them with facts and figures to support their individual training goals, and send them away with a clear message of what we expected to see from them, everyone got a fair start. I also insisted that every player post on our fitness blog every week, and provide me with a printed workout sheet every practice; this forced them to record every workout, which soon showed up those who did or didn’t train often enough, or at sufficient intensity! I doubt that we’d have seen the improvements that we did over the course of one year without this approach, and I have no doubt at all that it was key to having such a solid start to our programme.
