Nothing feels quite as bad as finding out that all the hard work you did collecting speed data was in vain. The feeling of wasting months, or even a career, on the wrong methodology is awful to experience. This article addresses the common mistakes even the best coaches don’t realize they make, reviews the research, and offers fixes for those errors.
If you care about getting athletes faster, you need to know how fast they are and how to improve their speed most effectively. The science of speed is not a clear roadmap, but the clues left behind are more than useful. It doesn’t matter if you have a $5 app or a $20k timing system; you must use the right procedures and the right speed models to ensure the accuracy of the data collected. The right protocols not only ensure the numbers you collect are not fiction, but the right testing can also guide coaches toward better decisions.
Application Errors of Current Science
The “art and science” of sports training, specifically with speed development, can be a little fuzzy. Coaches want to be on the cutting edge, but jumping too quickly to a perceived breakthrough is not wise unless we really know the stuff works. I have second-guessed everything for years, as the law of diminishing returns seems to rear its ugly head everywhere eventually, including sport science.
With all the advancements in sports performance science and technology, why do we still struggle to do the basics and apply the latest advancements? The reason is simple: science without logic and reasoning is not effective. Science opens our eyes, but experience using it in sports performance opens our minds. Just a little science is dangerous, and too much of it can be disastrous. Don’t blame science for poor transfer; applying a handful of studies will never have the same impact as a full body of research outcomes.
Lack of good science is just as problematic as using the wrong, or cherry-picked, research. Bias, out-of-context interpretation, and even poor research can all interfere with the use of the latest evidence. Education and experimentation take time, as an eight-week investigation into a similar variable will never replicate the needs of tackling a challenge in the real world. Don’t blame the scientist or scientific process; just accept that life is always in the way. (For years, I recommended the book Jurassic Park, as it’s a great parable of technology failing, and of how science isn’t always perfect.)
In no way am I being critical of, or worshipping, the research and methodologies shared below, as we are all human. In the same article, I can praise a hero (be it a coach or a researcher) for being a genius in one area while asking fair questions about the limits of another idea. Don’t get too emotional about what you do, as the goal of training is to help athletes reach their dreams, not feed your own ego. I am open about my mistakes; I am confident that you may be following the same path, and my goal is to have you rethink what you do and find a better way. Many great minds and better coaches have shared their solutions, and I have benefited; the solutions below should help you do at least one thing better than before.
Mistake #1: Your Starting Protocol Is Wild
How valid is your starting procedure? A good timing protocol identifies when an athlete initiates a sprint, not merely when they trip a motion sensor. Most coaches do not have a setup that confidently identifies the true effort of the athlete; therefore, the market of systems for both coaches and researchers attempts to do damage control and estimate the start of the sprint or run. The result is data ranging from worthless to useful, and nobody can account for the difference unless the protocol is explicit.
I am not a stickler for technique, but knowing how the technology works, I can say that a small mistake in testing drastically limits the value of the information. I hate seeing short sprint times posted on social media as evidence of a training program making athletes fast, as most of the data extrapolates to podium-like speed at the Olympics when it’s a group of team sport athletes. If you can nail down the starting protocol, the feedback benefits are tremendous, as the athletes will know when they perform well and when they do not.
Conventional timing equipment uses a light beam, touch pad, or pressure sensor. Each sensor has strengths and weaknesses when testing athletes, and for years some very high-profile organizations have inflated their numbers in testing without realizing that they are doing their athletes a disservice.
For example, rocking motions or countermovement starts are not detectable when a laser timing device is located near the starting line. A beam near the start is blind to the activities that occur before the body passes, so multiple sensors are necessary to ensure the athlete does not cheat or accidentally game the system. Touch pads during three-point starts are not perfect, but attempting to “move late” with the hands and “push quick” is futile, as the sensor will still work as designed and eventually a fast and hard push will trip the hardware. The issue with most of the sensors is that a coach can call out an athlete better than the hardware can, creating tension between the athlete and the person testing.
The solution is to add a camera with a brief delay connected to a large flat screen, as well as to make sure the protocol forces a pause in the motion. Some programs use sound to trigger a start, but reaction time only adds potential confusion because anticipating a sound can lead to false starts—a problem track and field has not solved for thousands of years.
Starting when the athlete is ready removes the moving parts, as well as a layer of complication. Obviously, standing starts don’t use a touchpad, so a beam or wearable sensor is needed. A set of tennis balls or other markers adds more granularity to replicating the stance of an athlete, and recording simple body positions allows for even more precision in testing. My only wish is that research recorded every rep on video so we could truly see how motion triggers the hardware.
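To make the forced-pause idea concrete, here is a minimal sketch, assuming you can sample the athlete’s forward displacement from a wearable or video feed. Every name, threshold, and sample rate here is an illustrative assumption, not a feature of any timing product.

```python
# Hypothetical check: a start trigger only counts if the athlete held a
# genuine pause beforehand. Assumes forward displacement (meters) sampled
# at a fixed rate; thresholds are illustrative, not vendor specifications.

def is_valid_start(positions, sample_hz, pause_s=0.5, tolerance_m=0.02):
    """Return True if the last `pause_s` seconds of pre-trigger samples
    show less than `tolerance_m` of total movement (a real pause)."""
    window = int(pause_s * sample_hz)
    if len(positions) < window:
        return False  # not enough pre-trigger data to judge the pause
    recent = positions[-window:]
    return (max(recent) - min(recent)) < tolerance_m

# A rocking start drifts backward and then launches, so it fails the check:
print(is_valid_start([0.00, 0.00, -0.03, -0.05, -0.02, 0.04, 0.15], 10))  # False
print(is_valid_start([0.00] * 6, 10))  # True: stationary before the trigger
```

Pairing a rule like this with the delayed-video flat screen gives the coach and the athlete the same evidence, so the “you moved early” conversation stops being an argument.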
Mistake #2: The Rate of Acceleration Compromises Absolute Peak Readings
Maximal acceleration doesn’t necessarily equate to maximal speed later, so take note. For years, I have struggled to make the testing of maximal speed repeatable or valid. For example, I would prescribe a run-up distance of 30 meters for a 30-meter fly sprint. On paper that sounds great, as 10m segments from 30-60m may catch a great period of speed, but on average some athletes struggled to hit a true peak sprint velocity. Theoretically, in 100m races the faster athletes hit peak velocity near 50-60m and show a slow decay in speed over the last 20-40m.
So, what is the problem? The core issue is that many coaches test an athlete over 40m, capturing an average speed from 30-40 meters, a distance at which some team sport athletes are likely to peak out. It’s convenient to get all the splits from 0-40m at 10-meter increments, but for those with poor technical abilities, the measure captures only late acceleration. I still use data from 40m sprints with splits, but in the back of my mind I know the athlete is not demonstrating their true ability. The older and more talented the athlete, the more likely they are to hit a better peak velocity with a gradual acceleration that is smooth and controlled, rather than just gunning it. The super talented seem to accelerate well in any conditions, but the goal is to create an environment that reveals talent and ability, not one that just collects convenient information.
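To see why the splits can mislead, here is a short sketch assuming cumulative gate times every 10 meters; the numbers are made up for illustration and are not from any study cited here.

```python
# Convert cumulative 10 m gate splits into segment velocities so you can
# see where the athlete actually peaks, instead of assuming the 30-40 m
# split represents maximal velocity. Times below are invented examples.

def segment_velocities(split_times_s, segment_m=10.0):
    """split_times_s: cumulative times at each gate (e.g., 10/20/30/40 m)."""
    velocities = []
    prev = 0.0
    for t in split_times_s:
        velocities.append(segment_m / (t - prev))  # average m/s in segment
        prev = t
    return velocities

splits = [1.95, 3.15, 4.25, 5.31]  # cumulative seconds at 10/20/30/40 m
for i, v in enumerate(segment_velocities(splits), start=1):
    print(f"{(i - 1) * 10}-{i * 10} m: {v:.2f} m/s")
```

If the final segment is still the fastest, as in this example, the athlete almost certainly had not peaked by 40m, and the test understates their true maximal velocity.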
Learning from the horizontal jumps, smooth acceleration in the approach may be the golden ticket, as rhythm is vital to the success of a good performance. Rhythm is a little opaque when mentioned in passing, and the best way to evaluate the measure is to see it both with video and other technology. John Smith’s analogy of not disturbing the passengers when a plane takes off is elegant and true in regard to acceleration. Trust me, the research is scant, but poor technical form during high speed periods is usually the root of both poor development and poor testing validity.
The gradual acceleration study was awesome, as it tackled the elephant in the room with maximal velocity readings. An athlete may run faster when given the space to fully express their absolute abilities without having to manage a maximal effort in acceleration. In that study, the researchers assessed speed and found that athletes who were given the freedom to accelerate at a slower rate produced better peak readings. Therefore, a coach should consider how they have their athletes accelerate versus just hoping that profiling a short sprint is enough.
My only disagreement with the study is that, on average, I see a much bigger improvement in maximal speed with gradual “approaches”: nearly 0.03 seconds in a 10-meter fly. I fully believe in the data of the Young study; it’s just that different situations will likely result in different outcomes, as the authors explained in their conclusion. Longer repetitions of 20-30m don’t seem to hold the higher speed, likely because fatigue alarms the nervous system or something similar. I have seen athletes make massive improvements in both maximal acceleration and peak sprint velocity with coaching, and this did not come from force-velocity profiling or other approaches. It’s not that I don’t believe in dissecting the data and using smarter loading concepts for athletes, but when an old-school coach gets involved and teaches better sprint technique without overloading the body directly, we need to rethink how athletes develop holistically.
The fix is simple: Allow for a longer acceleration zone and a more gradual transition through the middle phase to top speed. Maximal velocity blossoms under the right composure, and multitasking the demands of acceleration sometimes backfires. The late acceleration phase determines much of the success of the maximal velocity readings, and the ironic need for patience during fast motions is a difficult concept for team sport athletes, or even new sprinters, to grasp. Be warned: Longer sprints are far more taxing than shorter sprints; they require more rest and are sensitive to fatigue. Even the best protocol for testing peak velocity is always going to be at the whim of an athlete being slightly unresponsive.
Mistake #3: You Forgot to Include Decay of Speed
How an athlete slows down during a sprint and during a workout is important. Deceleration is normal and truly understanding the rate of fatigue is enlightening. If a repetition is cut off with distance or volume without seeing fatigue, you might be missing out.
In my experience, testing a longer sprint in baseball was more valuable than testing in American football. Why? Sixty yards is just enough length to see who breaks down and why. Multiple repetitions are even more revealing, as the combination of higher volume and longer linear distances exaggerates errors at the end of the session. Those who are poorly prepared become exposed, as they can’t sustain momentum from the acceleration and are more prone to pulling.
A sign of a great hamstring or posterior chain program is not acceleration; it’s upright sprinting. Yes, nearly every leg muscle contributes to acceleration in some way, but what is special about upright running is that it is the posture athletes hold when fatigue is present, even in sessions that begin fresh. You can do a bunch of short 20-meter sprints, but doing multiple longer reps is far harder.
The tradeoff with fatigue is that slower speeds program slower velocities, but more repetition potentially improves the conditioning needed to overcome barriers to speed enhancement. I wrote the velocity bands article to serve many purposes, but for training, it’s about knowing the difference between acceptable fatigue and the risk of draining an athlete or running slow and tired. The velocity bands article is also about teaching readers to peel back the layers of speed changes throughout the season, regardless of the training system a coach uses.
A complete understanding of speed, and sometimes more importantly the drop in speed, is very enlightening. It’s popular to stop speed training when an athlete is unable to hit a velocity or after a set number of sprints (volume), but more specificity, such as how they slow down, may be an opportunity. More research and more internal records by leading coaches are necessary.
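As a starting point for that kind of specificity, here is a hedged sketch of the two stopping rules contrasted above, a volume cap versus a velocity-decay cutoff; the 95% threshold and rep cap are my illustrative assumptions, not published values.

```python
# Two common ways to end a speed session: a fixed rep count (volume) or a
# cutoff when the latest rep decays past a fraction of the session's best.

def should_stop(rep_speeds, max_reps=10, decay_cutoff=0.95):
    """rep_speeds: peak m/s for each completed rep, in order."""
    if len(rep_speeds) >= max_reps:
        return True, "volume cap reached"
    if rep_speeds and rep_speeds[-1] < decay_cutoff * max(rep_speeds):
        return True, "speed decayed past cutoff"
    return False, "keep sprinting"

session = [9.40, 9.45, 9.38, 8.90]  # m/s per rep (illustrative)
print(should_stop(session))  # (True, 'speed decayed past cutoff')
```

Recording why each session ended, volume or decay, is exactly the kind of internal record keeping that makes these decisions reviewable later.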
A simple decision-making tree can be made from repeated sprint tests or just training conventionally and recording the times with timing gate splits or laser readings. The best lesson for nearly all athletes is to worry about getting faster first, then worry about maintaining it second. Just working on speed will improve repeated speed ability.
The more difficult question is when to address decay of speed, which is usually the same challenge as improving peak velocity. The five key numbers to think about are peak sprint velocity, distance to peak velocity, time to peak velocity, drop in speed relative to peak velocity, and the decay between the first and last sprints. If a field test for conditioning such as the 30-15 or Yo-Yo IR1 or IR2 is satisfactory, work on higher velocities and the speed reserve will indeed show up on the fitness tests. Only when you hit a speed barrier will specific conditioning matter, and two- to three-minute continuous runs can turn athletes of average fitness into fitness test record breakers.
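For coaches logging laser or radar traces, here is a minimal sketch of how those five numbers could be computed; the (time, velocity) structure and every name are assumptions for illustration, not any device’s output format.

```python
# Summarize one sprint from (time_s, velocity_m_s) samples: peak velocity,
# time to peak, distance to peak (trapezoidal integration), and the drop
# in speed relative to peak. A second helper covers first-vs-last decay.

def sprint_summary(trace):
    peak_v = max(v for _, v in trace)
    t_peak = next(t for t, v in trace if v == peak_v)
    dist = 0.0
    for (t0, v0), (t1, v1) in zip(trace, trace[1:]):
        if t1 > t_peak:
            break
        dist += (v0 + v1) / 2 * (t1 - t0)  # meters covered in this slice
    end_v = trace[-1][1]
    return {
        "peak_velocity_m_s": peak_v,
        "time_to_peak_s": t_peak,
        "distance_to_peak_m": round(dist, 1),
        "drop_from_peak_pct": round(100 * (1 - end_v / peak_v), 1),
    }

def first_last_decay(first_trace, last_trace):
    """The fifth number: peak-velocity decay across a session (percent)."""
    v1 = sprint_summary(first_trace)["peak_velocity_m_s"]
    v2 = sprint_summary(last_trace)["peak_velocity_m_s"]
    return 100 * (1 - v2 / v1)

trace = [(0.0, 0.0), (1.0, 5.2), (2.0, 7.8), (3.0, 9.1), (4.0, 9.4), (5.0, 9.0)]
print(sprint_summary(trace))
```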
Mistake #4: Low-Resolution Modeling Oversimplifies the Process
All models are wrong, but the better ones are more likely to deliver the goods. Modeling is the process of creating a repeatable framework of inputs and adjustments that shape a solid training plan. Simple models, such as a speed reserve plan for the 400m, work well. Ideal models have more inputs to add more resolution to the plan, and that means more information needs to be collected from monitoring and testing. I wish the process were easy, but all great endeavors require a lot of sweat, time, and energy.
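As one example of such a simple model, the anaerobic speed reserve is commonly defined as maximal sprint speed minus maximal aerobic speed; the sketch below uses made-up athlete values.

```python
# Anaerobic speed reserve (ASR): the gap between maximal sprint speed and
# maximal aerobic speed, both in m/s. Athlete numbers are illustrative.

def anaerobic_speed_reserve(max_sprint_speed, max_aerobic_speed):
    return max_sprint_speed - max_aerobic_speed

mss = 9.5  # m/s, e.g., derived from a timed fly segment
mas = 4.8  # m/s, e.g., from a field test such as the 30-15
print(f"ASR: {anaerobic_speed_reserve(mss, mas):.1f} m/s")  # ASR: 4.7 m/s
```

The model is simple, which is exactly why it works: both inputs are measurable, and the reserve tells you how much headroom an athlete has above the pace a race or fitness test demands.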
The secret to building a better model is knowing which variables make a difference and which are so trivial that pursuing them is chasing smoke. Unfortunately, this is difficult, as the research always seems to be conflicting, or confirmation studies appear contradictory years later. The big difference between paying attention to details and chasing marginal gains is that the former adds up to the spectacular and the latter is just pretending. Doing the little things right works, but you have to know what shows up on the clock and what doesn’t disappear when the rubber hits the road.
Here are some modeling challenges and considerations that should be reviewed before building a coaching system. While far from a perfect mathematical model, my encounters with the legendary Wilbur Ross opened my eyes to the importance of finding a system that you can create and rely on. Coach Ross shared his modeling ideas in the late 1990s, and here is what I learned:
- The zone drill was a way to overclock a hurdler and isn’t used as much today because of the practice of lowering and bringing in hurdles. Still, the reason athletes were successful years ago was due to his overspeed model.
- How much, how hard, and in what sequence is tricky. It requires years to develop a system, so don’t experiment unnecessarily—focus on doing what is known to work better.
- Anatomical (biomechanical) factors make general physics models look foolish when you are close to a genetic ceiling. Focus on how the body mechanically produces speed efficiently and find a way for the body to address simple time/distance challenges.
- Technical elements are tricky, as motor skill acquisition is more astrology than astronomy today. It’s not that the science isn’t clear, it’s just that the practice has failed to catch up.
Wilbur Ross’s book, The Hurdler’s Bible, doesn’t do justice to what the man knew, and it was interesting to see how his information rings true today even with the improvements in all the sciences of sport. Much of the information from Coach Ross reminded me of Quincy Jones, the world-renowned producer and musical genius. While Jones has plenty of quotes from years of working with the best talents, he said plainly that seeing something before it happens is a gift, and talent allows you to work backwards to see how to get there. He said it best in an interview years ago, and I recommend watching the short clip. With that brief excerpt, Jones summarized the goal of coaching, and perhaps good modeling, with just a few words off the top of his head.
Mistake #5: Bad Interpretation Leads You Down the Wrong Rabbit Hole
After collecting the data, don’t jump to conclusions. When in doubt, doubt the data you have and remain detached from the information collected. The quest to create the next metric or method of training is either innovative or ego-driven. Allow the information to surface and don’t force a connection that isn’t really there.
The interpretation of the data is an honest process that reveals more truth when done properly, with less emotional attachment. If you have a hypothesis that is too important to you, bias will corrupt the process. A pledge to seeking truth, versus making science that fits your goal, is an unforgiving approach, but in the long run the results are there.
Research often uses double beam testing, and the claim of single beam with processing correction has confused coaches. Freelap claims that its magnetic cloud carries 10cm worth of error; at roughly 10 m/s of sprinting speed, that works out to plus or minus .01 seconds, while the typical single beam of Brower timing is nearly 10 times that. Anyone experiencing electromagnetic interference may have an issue with Freelap timing, but that has only happened a few times, in some very uncommon training facilities. Swift timing and the IVAR system use double beams, as does Biorun from Norway. If you are using an app or video approach, the technology is highly accurate, but slightly labor-intensive. Knowing your electronic timing system’s technology is huge, as hardware determines confidence in what is actually happening.
When running through conventional timing gates, the arms and legs reach out early and can trip the device prematurely, resulting in a faulty reading. No matter what statistical package or athlete management system you use to store your data, the records are only as good as the process and technology used to collect them. Using a short acceleration under 30 meters for advanced athletes is not a valid approach for maximal or peak velocity readings, so starting from a shaky foundation of data is a problem.
I don’t have an issue with using single beam timing, as long as you don’t take one finding and assume that the measure is accurate enough to reward a performance. There is enough value in a breadth of data to identify a trend in training, but only over a lot of measures. If a coach understands typical error and coefficient of variation, a single beam measurement can be useful.
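For coaches who want to put numbers on that understanding, here is a minimal sketch of one standard formulation, typical error as the standard deviation of trial-to-trial differences divided by the square root of two, with CV expressed against the grand mean; the trial times are invented.

```python
# Reliability math for repeated timing trials: typical error (TE) and
# coefficient of variation (CV). Times below are illustrative 10 m flys.

from math import sqrt
from statistics import mean, stdev

def typical_error(trial1, trial2):
    """TE = SD of the paired trial differences divided by sqrt(2)."""
    diffs = [b - a for a, b in zip(trial1, trial2)]
    return stdev(diffs) / sqrt(2)

def coefficient_of_variation(trial1, trial2):
    """TE expressed as a percentage of the grand mean of all trials."""
    return 100 * typical_error(trial1, trial2) / mean(trial1 + trial2)

day1 = [1.72, 1.80, 1.65, 1.91]  # seconds, trial 1 (made up)
day2 = [1.74, 1.77, 1.68, 1.88]  # same athletes, trial 2
print(f"TE: {typical_error(day1, day2):.3f} s")   # ~0.023 s
print(f"CV: {coefficient_of_variation(day1, day2):.1f} %")  # ~1.3 %
```

If the typical error is larger than the improvement you think you measured, the “PR” may be noise, which is exactly why a single beam reading should inform trends rather than reward individual performances.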
Rectifying misinterpretation starts with recognizing the limitations of the data and how far you can extrapolate it. What is probable and possible isn’t a true measure—it’s just a better form of guesswork. Even modeling, as mentioned before, is limited. How many times have we seen athletes test poorly and perform great on the field, even in the sport of track?
As I mentioned in the “decay of speed” section earlier, the added value of testing and training with concrete approaches is that it identifies true issues that could limit performance. I have found it helpful to look at the data, ask where general abilities show a similar relationship, and start from there. When physical abilities are sufficient, it becomes trial and error with lesser-known variables such as program design and other areas that are gray to sport science. Still, refer to straightforward research that does have merit, as much of the current literature is very applied and meaningful.
Learn to Be Disciplined
Collecting data is frustrating, and I don’t blame coaches for giving up and winging it with timing or reverting to a stopwatch. Trust me, I have experienced the “punch in the stomach” feeling when technology fails or you find out that what you have been doing is far from perfect.
Over the last few years, I have learned that a lot of what I did in the past was limited, even though it was considered best practice at the time. Don’t worry that, in a few years, the information you have will be obsolete or not interchangeable; just focus on getting better information so you can make wiser decisions. This article provides five very important modifications you can make immediately: The sooner you start, the quicker you will begin reaping the benefits.