SimpliFaster virtually convened a roundtable of esteemed and experienced jumps coaches to cover a bevy of topics related to jumps. We have been presenting these questions, and their answers, in a series of articles. This is the fourth in the series, and covers the topic of testing protocols for jumpers.
SimpliFaster: How do you assess the key physical qualities you are aiming to develop from your program? What specific testing protocol do you use and when do you implement it? In your experience, what testing and test result combinations seem to provide the most accurate depiction of event-specific readiness? Are there specific testing numbers that you use as a guide? If you don’t use a specific testing protocol, can you discuss how you evaluate your athletes and program throughout preparation and competition time?
Travis Geopfert: In the horizontal jumps, specifically, we periodically test our standing long jump and 10-meter fly. Although our jumpers don’t fully realize it, I am consistently monitoring their 10-meter fly almost weekly in conjunction with different drills that we do. Additionally, we often test our triple jumpers in a standing triple jump and 5 bound test to measure power output.
I like the testing protocol of a three-step vertical in the high jump and, admittedly, I need to test that more consistently. However, we do have 15 years of SLJ, STJ, and 5 bound testing to compare to and assess an athlete’s readiness. Obviously, when an athlete has a best or near best in one of these three, you know their power output is there. If an athlete is under one second in the 10-meter fly (i.e., averaging better than 10 meters per second), they are ready to do big things.
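The "under one second" benchmark is just distance over time. As a minimal sketch (not from the article, purely illustrative), converting a 10-meter fly time into an average velocity looks like this:

```python
# Illustrative sketch: a 10-meter fly covered in under one second
# means an average velocity above 10 m/s, the readiness benchmark
# Geopfert describes.

def fly_velocity(distance_m: float, time_s: float) -> float:
    """Average velocity over a flying segment, in meters per second."""
    return distance_m / time_s

# An athlete running the 10-meter fly in 0.98 s averages about 10.2 m/s,
# clearing the 10 m/s benchmark.
print(round(fly_velocity(10.0, 0.98), 2))
```

The same division works for any timed fly segment, such as the 30-meter flys mentioned by other coaches below.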
I know it’s kind of odd, but I also believe I can see in my head when an athlete is ready, based on their ground contact time even in the simplest movements. I’ve watched guys do basic warm-up sprint drills in their flats numerous times and said to myself, “They’re ready.”
Dan Pfaff: We feel like everything we have on the menu addresses physical qualities and needs monitoring of some sort. If we have selected the right KPI factors and ranked them properly, the data pools should show a positive trend over time, both for the entity in question and the overall competition effort. The density of consistency should also show a positive trend in these subsets and performances.
Determining the KPIs comes from experience, in my opinion; short of that experience, using a system from a trusted mentor or set of mentors will give you a sound platform to study from. Obviously, acceleration abilities, top-end speed parameters, and jump-specific metrics are the main drivers for this process. We use a generational grid for training qualities, and first generational work gets the strictest analysis and data collection time.
One overlooked and under-analyzed physical quality is athlete health over time. I see the same injuries and illness factors occurring with the same athletes and at the same time of the season far too often. It is not bad luck. It is a failure to monitor and seek solutions.
I used to have dedicated testing blocks and time frames when I was a younger coach. Frustration and poor statistical patterns led me away from this approach. We now do most of our testing at comps in the form of film analysis and actual results. We test training menu items during the cycle within the prescribed programming format and perform various medical tests daily; sometimes before training, sometimes during training, and quite often post training. We never do one-off, ad hoc testing. If we cannot test it often and consistently, then it is not tested.
The No. 1 test for me is how athletes execute during competitions. Seeing a positive trend on defined metrics is critical for readiness analysis. The same goes for consistency of meet results, both within the comp and over the season. We find approach velocities and accuracy of approach readings to be solid predictors. The ability to consistently program shapes during the entire approach is another KPI factor but, for some reason, it’s not a keynote for many athletes or coaches.
We also have formulas that will evolve over time for each athlete with the various short run jump parameters in training. Distance jumped on the SRJ is weighted against accuracy and technical landmark execution grades. So, a huge jump with a foul and poor posture during the penultimate step would have a lower grade value. A gassed-up 12-step jump with poor shapes but a huge distance would likewise be graded down. Accountability to the agreed-upon dynamics is critical and not often managed well.
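Pfaff does not publish his formulas, but the weighting idea he describes can be sketched. In this hypothetical illustration (the multiplicative form, the 0–1 grade scale, and the example numbers are all assumptions, not Pfaff's actual method), raw distance is discounted by accuracy and technical-execution grades:

```python
# Hypothetical sketch of weighting a short-run jump (SRJ) result:
# raw distance discounted by accuracy and technical-execution grades.
# The 0-1 grade scale and multiplicative form are assumptions for
# illustration only, not Pfaff's actual formula.

def srj_grade(distance_m: float, accuracy: float, technique: float) -> float:
    """Composite grade: distance scaled by accuracy and technique (each 0-1)."""
    return distance_m * accuracy * technique

# A huge jump with a foul and poor penultimate-step posture grades
# below a shorter but accurate, well-executed jump.
big_but_sloppy = srj_grade(7.40, accuracy=0.6, technique=0.7)
shorter_clean = srj_grade(6.90, accuracy=1.0, technique=0.95)
print(big_but_sloppy < shorter_clean)
```

The point of any such scheme is the ordering, not the absolute numbers: the grade penalizes distance achieved outside the agreed-upon dynamics.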
We have grids we use as the season plays out that show how early season meets feed mid-season results, and how that leads to culminating results at the end. We do not chase absolute result progressions in our competitions. We can’t control poor facilities, adverse weather, travel disasters, life stressors timing, etc. Therefore, we demand that athletes keep records of headwind PRs, cold weather PRs, crosswind PRs, extreme heat PRs, fast runway PRs, slow runway PRs, 1-meter board PRs, 3-meter board PRs, time of season PRs, sick as a dog PRs, jetlagged PRs, family chaos PRs, etc.
Nic Petersen: We use a few different tests in our program. But, in all honesty, we don’t test very often and not at all during competitive cycles.
Our main tests are the following:
Standing Long Jump
- Men: 3.20m and beyond
- Women: 2.70m and beyond
Standing Triple Jump
- Men: 10m is the goal; 11m elite
- Women: 8m is the goal; 9m elite
Standing 5 Bounds
- We use this as a guide. What an athlete jumps in this is about what they are capable of triple jumping.
Fly 30m
- Men: sub 2.85 seconds; goal 2.80 or below
- Women: sub 3.20 seconds; goal 3.10 or below
Fly 100m
- Men: sub 9.90 seconds
- Women: sub 11 seconds
Overhead Backwards/Underhand Forwards Shot Throw
We do some of the Quad testing and we score the four events. We try to Quad test three times, especially in the fall: once after the first six weeks, once after 12, and then right before we leave for Christmas break. However, one thing about testing is that we only rest for testing once, and that’s after the first six weeks. Other than that, we may test and not be fresh. Therefore, some people may not believe this is true testing.
We also measure some basic short jumps. I test the 10-step long jump, and we also test the four-step HOP HOP STEP JUMP (gator drill). We use these as mock competitions, so these get heated and people will get after it. We compete during short run sessions on occasions where we may not measure things, but just mark jumps and see how far we can go. I try and use competition a lot.
I think testing is a good gauge for fitness and speed, but not everyone is a good tester. The thing about testing is, if you don’t do the tests a lot, you need to teach the tests too. I would say some of my “tests” are more about taking specific training tasks and completing them than “pure” testing.
Jeremy Fischer: I use testing protocol and analysis all the time. Of course, there is the standard Max Jones test (30-meter sprint, standing long jump, standing three jumps, overhead shot), with the addition of an underhand shot throw and a 150-meter run. We do analysis with the 30-meter fly, laser analysis of runway speed, five-meter segment runway analysis, weight room strength analysis, power analysis using Keiser equipment, force plate testing of takeoff, force plate analysis of phase force, blood analysis, saliva cortisol level testing, sleep analysis, and FMS.
The data allows for me to keep tabs on training and the progression of training, and also maintain a check and balance on training. I know when to push harder or back off training. As far as preparedness of athletes for meets, that is the million-dollar question.
I start to see some regularities from athlete to athlete, but for the most part it’s what they are doing in practice that shows me preparation readiness. Are they executing their technical positions and how far or how high are they jumping? If an athlete jumps far in practice, they jump far in the meet. If they run fast or bound far in practice, then they jump well in the meet. And, finally, they must be as healthy as possible when they’re on the start line or runway.
David Kerin: Meet performance is the ultimate test. We need to eliminate the learning curve for a test before its results can be valued, and a competitive environment adds value to testing results. The legendary LSU Fall Jumps Testing is a good example. Accurate data collection and accurate record keeping, in the present, for the year, over an athlete’s career, and over a coaching career, are all important.
Yes, over the years there are benchmark testing numbers that have been shown to equate to event performance levels, but like the “special exercise” question, there is no magic bullet. As stated above, meet performances are the ultimate test. If I had to choose a favorite test, I like OHBs for their traditional value. But I see further value in that I can instruct to the medball or shot as being reflective of an athlete’s COM and the rise of the implement simulating the rise of the COM. More specifically, I like OHBs for high jumpers because of the reflection of in-flight positions during mid to late throw.
The opposite of this is also found in high jump. Every year or so (going back to the ’80s for me and Michael Cooper of the LA Lakers), there is an article about how the NBA dunk champion would be a world-class high jumper. These erroneous statements have roots in their author’s misconception of the mission as discussed earlier. For a specific example, and to bring it back to testing and physical assessment, consider Dwight Stones. He was a holder of the world record for MHJ at heights that would still be competitive today. Yet he has admitted that his measured SVJ was only around 30 inches.
Nick Newman: The key physical qualities I look for include the ability to accelerate smoothly and explosively, maximum speed capabilities, reactive strength and maximum power outputs, simple and complex coordination, and overall freedom of movement. It is essential to monitor these qualities as often as possible throughout the year. Both subjective and objective assessments occur daily in some regard.
As far as specific testing protocols, I have previously fallen victim to the temptation of systematically testing everything I could think of. Collecting data is fun, as are the testing sessions themselves. However, over time I realized many of the tests were redundant and correlations with performance were inconsistent. I also found that too-frequent or overly rigorous testing protocols can take the edge off competition intensity and focus.
As a result, I shifted toward a testing protocol that could occur during regular training sessions. As the training emphasis progresses throughout the year, so does the testing. The most important test, of course, is full-approach jumping during competition.
The tests I use, along with the corresponding elite standards, are as follows:
OUTSTANDING MARKS (JUMPERS)

| Test | Men | Women |
| --- | --- | --- |
| 30m Sprint (3-point start) | 3.70 – 3.80 | 4.05 – 4.20 |
| 10m Fly Sprint | 0.85 – 0.90 | 1.00 – 1.05 |
| 150m Sprint | 15.60 – 16.00 | 16.70 – 17.10 |
| Standing 4B&J | 17.00 – 18.00m | 14.00 – 15.00m |
| Max 4B&J | 21.00 – 22.50m | 18.00 – 19.00m |
| Standing 4H&J | 17.00 – 18.00m | 14.00 – 15.00m |
| Power Clean | 1.7 × BW | 1.5 × BW |
| Deep Squat | 2.2 × BW | 2.0 × BW |
| 10-step LJ/TJ | 7.50m / 16.00m | 6.50m / 13.50m |
As previously mentioned, I have used many tests over the years. I have found that the ones in the chart are the most relevant and correlated best with event performance.
Speed testing with the 30-meter and 10-meter fly blends into full-approach 11m–6m and 6m–1m segment assessments closer to competition. Bounding tests gradually increase entry running steps, as this coincides with my horizontal plyometric training progressions. Short-approach jump testing gradually increases in stride number and can reach up to four to five strides shy of the athlete’s full approach.
During competition periods, we maintain max strength whenever possible with very short weight-testing sessions, and we assess speed, bounding, and power output numbers when possible.
Randy Huntington: I use only a few testing protocols these days, although I measure almost everything. I still use a 30-meter fly for speed and a 5R 5L from six steps distance for jumping. I also continually monitor the speed of the last two five-meter segments in approach year-round.
I test Omegawave every morning with each athlete. Using this, along with observation and listening, I then change the workouts accordingly.
Brian Brillon: When I coach jumps, I look for the expression of speed and power in the athlete. We stress these components daily in training. I use a revolving four-week microcycle structure, with the fourth week as a testing week. We drop the volumes that week and have the athletes compete against their teammates and their personal bests in a battery of tests.
I believe competition in practice is a must before you go “under the lights.” Not only does this provide opportunities to showcase expressive elements of the event, but it also gives rise to meet scenarios. That fourth week sees testing in the standing long jump, standing triple jump, double-double, overhead back shot toss, between the legs forward shot toss, 30-meter three-point stance, and 30-meter fly with a 20-meter acceleration. When we get into the specific prep and comp phase, we also do an intersquad short-approach jump competition.
I think all the tests give the athlete the confidence to see the progression provided by the training. The specific test that gives the most accurate depiction of the event is the short-approach jump. The test jumps are from 12 to 13 strides out because I find the jumper can add a foot and a half to two feet from there to reach what their full-approach jump would be. It gives the athlete a ballpark figure that gets them excited for things to come.
For example, I had a freshman who wasn’t grasping the concept of a competitive practice. I challenged him by saying that his full-approach jump would be about two feet beyond what he jumped in testing. Previously, the athlete was jumping 21 feet from 12 strides in practice, and his full-approach jumps in competition were a foot and a half beyond his 12-stride marks. A week before Big Tens, the athlete jumped 23 feet 2 inches from 12 strides. He became the Big Ten champion a week later, with a jump of 25 feet 2 inches.
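Brillon's rule of thumb is simple addition, and the anecdote above fits it. As a back-of-envelope sketch (the helper function and the feet-based units are illustrative assumptions, not anything Brillon uses):

```python
# Back-of-envelope sketch of the rule of thumb described above:
# a full-approach jump typically lands roughly 1.5 to 2 feet beyond
# the athlete's 12-stride practice mark. Illustrative only.

def project_full_approach(twelve_stride_ft: float) -> tuple[float, float]:
    """Projected full-approach range (low, high) from a 12-stride mark, in feet."""
    return (twelve_stride_ft + 1.5, twelve_stride_ft + 2.0)

# The freshman's 23' 2" (about 23.17 ft) from 12 strides projects to
# roughly 24.7-25.2 ft; he won at 25' 2" (about 25.17 ft).
low, high = project_full_approach(23 + 2 / 12)
print(round(low, 2), round(high, 2))
```

The projection is a motivational ballpark, not a prediction model; its value in the story is that the athlete could see a championship-level mark within reach.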
Tomorrow we’ll feature the next installment of this Jumps Roundtable series: “Approach Accuracy.”