Whoever has the data is king. I’m far from the first to write this, and the fact of the matter is that, in this data-driven age, it couldn’t be truer. The other thing that couldn’t be truer is that the king could be you.
With technology and information easier than ever to access, strength, performance, and sport coaches have zero excuse not to measure progress and track it over time. Athletes and consumers (our clients) are starting to expect it. If we, as coaches, do not begin delivering objective, meaningful data that shows their progress, we will be out of business and/or become the “behind the times” coach quicker than ever.
The challenge, of course, is not to just collect data for the sake of it, but to collect objective metrics that actually transfer to sport and that you can explain simply to the athlete. When you choose the right metrics, they serve not only as a motivator for the athlete, but also as proof that your training is worth continuing to pay for and/or that you deserve that promotion you’ve been gunning for.
Furthermore, if you own a gym or work as a supervisor to other coaches, the objective athlete results at the end of a training block serve as an amazing evaluation tool for you. Which coach’s program was most effective? Which coach’s athletes did not improve as they should have? It makes it easy to objectively assess how effective each coach’s training is.
The most difficult challenge as a supervisor, however, is that you likely need separate sets of metrics for athletes in different sports and/or demographics in order to evaluate your coaches fairly. For example, let’s say you set the expectation among all your coaches that you want all athletes to deadlift twice their body weight. Is this metric relevant in the same way to an athlete no matter their sport (golf vs. baseball vs. football)? What about their position (defensive back vs. pitcher vs. offensive lineman)?
You likely need to look at speed metrics with the deadlift at a lower relative load for a golfer than an offensive lineman (from the data we have seen). So, be sure you evaluate your coaches and their progress appropriately. Rewarding a coach for improving a metric that is not meaningful for competition performance is worse than not tracking at all.
Tracking objective metrics also opens the door to discuss what went right and what can be improved upon for the next training block objectively. If you do not require your coaches to test their athletes with metrics that transfer to sport, you’ll miss out on a massive opportunity for your organization and your athletes.
From a distance, the process looks extremely straightforward. Measure before the intervention, complete the intervention, and then measure after the intervention. Seems simple enough, right?
In your head right now, there are probably a few questions swirling around, like:
- How do you decide what to collect data on?
- What metrics apply to your sport?
- How do you go about collecting data?
- How do you know if what you are doing is worthwhile and going to be relevant?
- How much time does it take?
These are just a few of the questions that likely start to percolate in your mind when you think about the possibility of collecting data. If you take the five minutes to keep reading, I’ll answer them for you. It will change your career path, your athletes’ performance, and our field forever.
What’s the First Step?
The first step is to figure out what metric you want to test. For most people reading this, the easiest thing to do is take a look at some research, find a metric that correlates to sport-specific performance, and start tracking it. After you’ve collected some data, determine if you see the same findings as the researchers. Evaluate which athletes made bigger gains and try to figure out what other variables were at play. Making time for this kind of critical thought and evaluation of athlete performance, as a direct result of your training, is an essential element of a coach’s professional growth.
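If you want to check a metric against your own athletes, a simple correlation is the easiest place to start. Here is a minimal sketch, assuming your test-week numbers live in a CSV with one row per athlete; the file and column names are made up for illustration, not part of any specific system.

```python
# Minimal sketch: does a gym metric relate to a sport outcome in your data?
# Assumes a CSV with one row per athlete; names are illustrative only.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("test_week_results.csv").dropna(
    subset=["med_ball_throw_m", "swing_speed_mph"]
)

# Pearson correlation between the tracked metric and the sport outcome.
r, p = pearsonr(df["med_ball_throw_m"], df["swing_speed_mph"])
print(f"n = {len(df)}, r = {r:.2f}, p = {p:.3f}")
```

If the relationship you see roughly matches the research, keep tracking; if it doesn’t, that is exactly the kind of discrepancy worth digging into.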
Now, some of you reading this might think a lot of the research relevant to you doesn’t really tell you anything meaningful in terms of transference to sport. If that is the case, you get to ask the questions that you want and figure out the answer yourself. Do you think there is a better answer than you’ve been able to find out there? Do you think a specific theory that is dominant in your sport doesn’t exactly hold up?
For me, the first question I wanted to answer when I started out was, “Does golf fitness even work?” From there, it turned into seeing what physical markers would prove to be most important in improving swing speed. Since then, my questions have expanded in so many different directions relevant to golf performance that I have research and data collection planned out for the next 24 months—in my commercial training facility, not a research lab.
Take a second and think: It probably won’t take you long to come up with something in your sport that you think is awful and drives you up a wall every time you see it posted on social media. Is it core work, muscle activation, or perhaps training on BOSU balls? I know there’s something. There is no excuse in this day and age to be mad about it and bash others on social. Stop tweeting criticism and do something about it. Define your question, collect the data, and then share what you found with the world. This is a far more productive Twitter conversation that will move us forward.
When I started in the “golf fitness” world, I had two choices. Follow the theories, poorly done research, and supposed science-backed ways of doing things (even though there was no readily shared data) and hope for the best, while complaining on Twitter that there wasn’t any good research out there. Or, figure out for myself if what I was doing was working and then continually ask more questions, test them objectively, and share what I found. I chose the latter, and I hope you do, too. Your data collection and research won’t be perfect (mine certainly is not), but it will move your industry forward in a way that is positive for everyone involved, especially your athletes.
How Do I Know It Will Be Worthwhile?
As I mentioned above, everyone today has a question they want to answer or a theory that they will gladly bash on social media. Taking the step to actually go ahead and collect the data—to objectively find out if what you believe is correct—is hard, but necessary. Often, the barrier stopping someone from taking this step is that they aren’t sure it will be worth their time.
What if you do all the work, actually collect all the data, and then find out you were wrong? Wouldn’t that mean you aren’t as smart as you thought, and won’t people laugh at you? Won’t that hurt your career?
First of all, if people pay enough attention to what you do to criticize and judge it, you are doing something right. Second, if your data collection is sound and you do figure out you were wrong, that’s great! That is particularly true if the topic you chose to collect data on is important to your field (assuming you did step 1 correctly), because you now have objective information to share and use to educate others in your space.
Being afraid to be wrong is the single biggest flaw that coaches have in their thinking today. It is a terrible disease that stops them from actually collecting data and being willing to fail. You have to go into data collection with an open mind. You are not looking for the right answer to support what you think. You are looking for the truth, and it will often surprise you. At the end of the day, if you find truth, you’ve done more than 98% of the coaches out there in terms of furthering your field.
Personally, one of our greatest findings to date happened when I was wrong. I wanted to test a reduced-volume overspeed protocol against a high-volume protocol from the most popular overspeed company in the golf space. I thought their protocols were extremely excessive in volume requirements. I felt that having a golfer swing almost 12,000 times a year in addition to their normal practice and play increased the risk of overuse injuries already rampant in golf. I was 100% confident I was right, and they were wrong.
When all was said and done, my team and I completed a six-week and then an eight-week follow-up study on the two protocols and found there was no difference in the swing speed gains produced by either protocol. They both produced the exact same results. At first, I was disappointed, embarrassed, and depressed. I was wrong.
But then I realized what this meant. We had completed the first and only two studies looking at volume in overspeed training and had found that golfers could do 66% less work (9,000 fewer swings a year) and still see the same results. Wait—this was huge!
Out of a “no answer” to my question came the greatest answer possible. While I had thought the lower-volume protocol would get better results because athletes would not be as tired, it turned out my athletes could do less work and see the same results. This was a huge finding in the golf world, and I believe it has implications for other sports as well.
My hypothesis was wrong, and I had to eat my words a little bit, but the findings have changed the lives of thousands of golfers and hopefully saved even more from unnecessary injuries. I’d be happy to continue to be wrong like this the rest of my career.
So, in the end, if you pick a meaningful metric or variable to measure and track, it doesn’t matter what you find. The finding will be meaningful no matter what. You will either find that the metric helps your athletes, or it doesn’t. The answer in both these scenarios is hugely helpful.
How Do I Collect Data?
Now that you have your question figured out, how do you actually collect the data? The most critical step is to figure out your system. Once you have your data collection system figured out, it becomes a plug-and-play operation based on all the new questions that you have throughout the year.
The key to any system is consistency and quality. Let’s say you want to simply test how strong every athlete on your team is. There are 30 athletes, so you ask your assistant to help you test everyone. If the two of you give different directions (say, your assistant has each athlete test to failure while you go based off RPE), your numbers will be useless. This probably seems like common sense, but the quality of your data set is everything. Protect it with your life.
For the strength testing example above, you and/or your staff should give each athlete the same directions and cues. You should give them all the same number of warm-up sets, and there need to be predetermined, objective rules for determining good reps versus bad reps, etc. You should be able to give your system to the high school kid running his friends through a training program, and he should be able to follow the directions to a “T.”
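One simple way to make that happen is to write the protocol down in a structured form that anyone can run without interpretation. Here is a minimal sketch of what that might look like; every name and number below is a placeholder for illustration, not a recommendation.

```python
# Minimal sketch of a written test protocol anyone on staff can follow.
# All values are placeholders, not prescriptions.
TRAP_BAR_DEADLIFT_TEST = {
    "warm_up_sets": [
        {"pct_estimated_max": 0.50, "reps": 5},
        {"pct_estimated_max": 0.70, "reps": 3},
        {"pct_estimated_max": 0.85, "reps": 1},
    ],
    "test": "work up to a 3-rep max in no more than 3 attempts",
    "rest_between_attempts_min": 3,
    "cue_script": "Stand tall, brace, push the floor away. No other coaching.",
    "good_rep_rules": [
        "full lockout of hips and knees",
        "no visible loss of neutral spine",
        "bar keeps moving upward throughout the rep",
    ],
    "record": ["load_kg", "reps", "rater_initials", "date"],
}
```

Whether you store this in a spreadsheet, a binder, or a file like the one above matters far less than the fact that every coach reads from the same script.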
While this part can be a bit tedious, once it’s done, it’s done. This was my single biggest mistake early in my quest to figure out if “golf fitness” actually worked. I made the mistake of not standardizing all of my tests and having staff help me without establishing consistent and validated measurement systems.
While my current database is about 1,000 golfers large, it could have been 50% larger. I had to throw about 500 data points out due to three years of testing that I could not honestly say were done consistently. If I had kept those data points, it would have brought the integrity of my other 1,000 data points into question, not to mention that the conclusions we would be able to draw would be flawed.
After you are sure that your testing will be done consistently, you need to establish how it will be tracked. We use paper during the actual athlete testing (this allows us to get up to 10 athletes through more than 30 different metrics in under an hour) and then transfer those numbers to our database in Excel afterward. This is the system we designed to fit into our 60-minute training slot in our commercial setting. In different settings, you will likely need different formats. It doesn’t matter what that format is; it just matters that it works for you and is consistent.
Other than paper expenses, the cost of collecting and storing your data is pretty cheap. Every time you test, you will have to block an hour or so to enter the data (maybe longer, depending on the size of your collected sample), but you should consider it a critical part of creating your report card. The data doesn’t lie; interpretations might, but the absolute data does not. It will tell you very clearly (if you picked the right metrics) how you did as a coach and where you can improve in the next training block.
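Whatever format you settle on, keeping every number in one consistent, long-format table makes the later analysis almost automatic. Here is a minimal sketch of the kind of layout that works well; the column names and values are illustrative only, not our actual database.

```python
# Minimal sketch: "one row per athlete, per metric, per test week."
# Column names and values are illustrative, not a prescription.
import pandas as pd

rows = [
    # athlete, test_week, metric, value, rater
    ("A01", "2024-01", "trap_bar_3rm_kg", 120.0, "MF"),
    ("A01", "2024-01", "cmj_height_cm",    38.5, "MF"),
    ("A01", "2024-01", "swing_speed_mph", 101.2, "JS"),
    ("A02", "2024-01", "trap_bar_3rm_kg", 142.5, "MF"),
]
db = pd.DataFrame(rows, columns=["athlete", "test_week", "metric", "value", "rater"])

# From here, most questions are a filter plus a pivot.
speeds = db[db["metric"] == "swing_speed_mph"]
print(speeds.pivot(index="athlete", columns="test_week", values="value"))
```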
Why Collecting Data Will Make You a Better Coach
There are two reasons why I believe testing, collecting data, and evaluating it has made my team and me better coaches. First, it forces us and our athletes to be accountable to what we call “test weeks.” Every athlete in the gym knows when test weeks are, and they know that if they slack off during training between test weeks, we will see it and call them on it.
For my coaches, it serves as their report card and an opportunity to assess how effective their training has been. It holds them accountable to objective data and to their athletes. If the athlete doesn’t improve, I require the coach to figure out why and explain it to the athlete while they review their results. For me, it serves as a quarterly check-in and evaluation of our programs as a whole to see where we can improve, where we are doing well, and how we want to continue to evolve our programs. Our data also serves as an additional quality control team member.
It is simple to see how the data collection can essentially turn into a part of your quarterly reviews for coaches. What is probably the most powerful, however, is that after you have collected data consistently for two years, you can start to see year-over-year and longitudinal changes occurring. This is where research lab studies pale in comparison to what the private sector and university team settings can produce.
It is near-impossible to get a subject to agree to be studied for multiple years. It is not that hard to convince an athlete that if they want to play their sport for many years to come, they need to train consistently. Because you track metrics that transfer to sport, and you improve those metrics every test week, the athlete sees the improvements, becomes more motivated and bought in, and continues to train with you. The athlete looks forward to the test weeks throughout the year and is happy to participate in them because they want to compete with themselves and their peers to see who improved the most. This is your opportunity to collect longitudinal changes over time (and eventually share the results with your field) that are near-impossible to collect in traditional research labs.
As you collect the longitudinal data, you are evaluating the progress of the athletes as a whole, as well as each individual athlete over time, multiple times a year. It is hard to imagine that any coach who is in tune with their athletes’ objective progress over time would not get better by seeing which trends and traits respond better to different stimuli at different times.
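As a rough illustration of what that looks like in practice, here is a sketch that pulls year-over-year changes out of the same long-format table described earlier; the file and column names are again placeholders.

```python
# Minimal sketch: year-over-year change per athlete on one metric,
# using the long-format layout sketched earlier. Names are placeholders.
import pandas as pd

db = pd.read_csv("testing_database.csv")  # athlete, date, metric, value, ...
db["year"] = pd.to_datetime(db["date"]).dt.year

speed = db[db["metric"] == "swing_speed_mph"]
yearly = speed.groupby(["athlete", "year"])["value"].mean().unstack("year")

# Change from each year to the next, per athlete, plus a group average.
yoy = yearly.diff(axis=1)
print(yoy)
print(yoy.mean())
```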
Finally, collecting the data will make you a better coach because, inherently, you will have to teach other coaches who are not collecting data and evaluating it closely (the 98%) what it means and how they can improve their athletes. It is the age-old medical model of learning. Watch—Do—Teach.
How Many Data Points Do I Need?
This is a logistical question, and there will always be criticism of any research or data set. Traditionally, you should shoot for having an “n” of 30 (30 subjects). That being said, countless studies with fewer than 30 subjects are published in peer-reviewed journals. The honest answer is you want as many data points as you can possibly get.
When we released our initial data set, we had about 300–400 data points. People came out in droves on social media and with personal email attacks telling us the sample size was too small, and it was irresponsible to release the information. The next sample we released was over 600 data points. The criticism only got louder.
We are now close to 1,000, and the number of people still saying it is too small is shrinking. The really cool thing, though, is that the number of athletes and coaches who reach out and visit us from around the world because we continue to challenge the status quo and produce usable, meaningful data is rising at an incredible rate. It is now a huge driver of networking and teaching for us.
Work to collect as many data points as you can and then just report your honest findings. Especially if what your data shows runs counter to mainstream beliefs and how people make money, you will get people trying to poke holes in your numbers and even attack you on social media to some degree. If you collected the data honestly and are confident in the data set’s integrity, however, take the noise as a compliment that you are doing something right. They will quiet down eventually.
How Long Does It Take?
The answer to this question largely depends on how many athletes you work with in the sample you want to look at. As an example, at Par4Success we have about 100 golf members on whom we are able to collect longitudinal data, and we have an additional 200 or so new golfers who come for assessments throughout the year. We have collected about 1,000 data points in three years with three test weeks per year. Extrapolate that to your current population, and hopefully that helps to answer the question for you.
Now if you wanted to look at a specific element of training, and you have 30 athletes that you could split into two groups, you could easily run a training study in six weeks to see what shakes out. This is what we have done while looking at eccentric flywheel training, overspeed training, and other topics.
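If you split a squad into two groups like that, the comparison at the end can be as simple as a t-test on each group’s gains. Here is a minimal sketch with made-up numbers, not data from any of our studies.

```python
# Minimal sketch: compare six-week gains between two training groups.
# The numbers below are invented for illustration only.
import numpy as np
from scipy.stats import ttest_ind

group_a_gain = np.array([3.1, 1.8, 4.0, 2.2, 0.9, 2.7])  # e.g., mph gained
group_b_gain = np.array([2.9, 2.4, 1.1, 3.3, 2.0, 1.6])

# Welch's t-test does not assume equal variances between groups.
t, p = ttest_ind(group_a_gain, group_b_gain, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
# The p-value reported here is two-tailed; only use a one-tailed test if
# you prespecified the direction of the effect before collecting data.
```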
My recommendation would be to look at doing a combination of both, and over the course of a few years, you will have quite a large database to draw conclusions from and help move your industry forward. Encourage others in your space to do follow-up or similar studies to further your initial findings even more.
How Do I Make Sure My Data Is Usable, and How Do I Interpret It?
This simply goes back to the integrity of your data sample and making sure that you collect each data point in the same manner. At Par4Success, we have made training videos for every single test, and we require all new staff to watch them. Then we have a team review of all testing procedures before every single test week. This ensures that we test consistently and that our inter-rater reliability is solid.
In the beginning, when we cleaned up our data sets, we took the additional step of actually validating the inter- and intra-rater reliability for each test, but this is likely a bit of overkill for most of you reading this. Many of the tests you will perform (medicine ball throws, strength tests, vertical jump, etc.) probably already have published reliability values.
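If you do want to sanity-check your own raters, a quick look at how closely two coaches agree when scoring the same athletes is far better than nothing, with a full intraclass correlation (ICC) analysis being the more formal route. Here is a minimal sketch with made-up numbers.

```python
# Minimal sketch: quick inter-rater agreement check on one test.
# Two coaches measure the same 8 athletes; values are invented.
import numpy as np
from scipy.stats import pearsonr

coach_1 = np.array([38.5, 41.0, 35.2, 44.1, 39.8, 36.7, 42.3, 40.0])
coach_2 = np.array([38.1, 41.4, 35.0, 43.5, 40.2, 36.9, 42.0, 39.6])

r, _ = pearsonr(coach_1, coach_2)
mean_abs_diff = np.mean(np.abs(coach_1 - coach_2))
print(f"r = {r:.2f}, mean absolute difference = {mean_abs_diff:.2f}")
# A high correlation with a small average difference is reassuring;
# a proper ICC is the more rigorous check if you want to go further.
```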
The most time-consuming part of the process is the fun part: figuring out what your numbers show. Unless you have a stats and/or Excel expert on your team, you likely will need to find a friend or hire someone to help you with this. This is the most dangerous part of any data collection—the interpretation.
As I mentioned above, the data never lies. However, you can run enough formulas and look at the numbers from enough angles to have the numbers tell you anything you’d like them to. Ever heard of “creative accounting” in Ponzi schemes and the financial crisis of 2008? You don’t want to get caught up in “creative data interpretation.” Do you know when to look at “r” values for correlations, or when to choose a one-tailed versus a two-tailed t-test? If you don’t know what those terms mean, that is probably a good indication you need to find someone to help you with the data interpretation.
When you do find a stats expert to help you out, you will be able to tell them your questions, your hypothesis, and which variables you want to see the relationships between. They will be able to give you the stats answer. This is probably the hardest time to stay objective and make sure that your emotions or preconceived assumptions don’t influence how you look at the numbers. If you can keep emotions and preferences out of it, you are well on your way to finding the truth of what your training did or did not do.
Great Tools to Collect Data
Below, I have listed some of the best tools that we currently use to track and measure progress in training sessions and over time, and what we have found them to be helpful for. There are lots of different ways to utilize these tools, and I am excited for the future information and data that will come from their use.
Exxentric kBox and kPulley
The great thing about these two tools is their direct connection to any device via Bluetooth. They enable you to look at all sorts of power and speed metrics and track them over time. You can identify when power output drops in sets and objectively look for changes throughout your training block and yearly with any athlete.
The kPulley proved to be a game-changer for our rotary athletes, whose swing speed gains were 150% greater than those of a control group that did not use it. I believe the opportunities to explore other areas of transference to sport are huge with these two devices.
Assess2Perform Ballistic Ball
This is an amazing tool that we use in training sessions for immediate feedback on athletes’ efforts with medicine ball work. We haven’t objectively measured the impact of using the Ballistic Ball versus a traditional medicine ball in training yet, but that is on the docket. We are also looking to draw correlations between velocity and power numbers with medicine ball tests to swing speed in golf.
Our initial database focused on the distance of the medicine ball thrown. We have determined that distance with the Ballistic Ball is comparable to a regular medicine ball of the same weight, so we are looking at other metrics as well. The opportunities here for golf and other sports are huge for future data collection and studies.
Push Band and Assess2Perform Bar Sensei
We are in the early stages of looking at velocity-based work with our athletes, and both of these bands offer great options for athletes and their training. Athletes can easily take both on the road, and the improvements in this space are moving fast. It is going to be fun to see where both of these tools go in the future in terms of what metrics they track, and which ones are the most meaningful. We are still early in our use of these bands, but definitely would recommend some of the articles written on these products by Carl Valle for comparisons and to figure out which band is right for you.
Barbells
Tried and true, barbells let you track your athletes’ compound lifts. How strong is strong enough for your athletes? This is data we are working on in the golf space, but is there a clear relative strength standard for your sport? While these tools have been around forever, there are still niche opportunities in sports when looking for transference and how to continue to improve the way you prepare your athletes.
Others
Vertical jump tools, scales and tape measures, and a myriad of other tools are out there. Tracking changes in metrics and also determining relative power and strength numbers are great areas to look at for data tracking.
Collect the Data That Matters
At the end of the day, if you took nothing else from this article, please be sure to only collect data on things that move the needle on the field, the court, the course, or wherever your athletes play their sport. Don’t just track data for the sake of tracking.
If it doesn’t influence performance, stop tracking it and look at something else. Make sure that you don’t fall into the trap of “creative interpretation” of the data. Report what your numbers say and be reasonable with your conclusions. There’s no need to try to make every data point an industry-changing event.
Above all else, your curiosity and willingness to challenge the norms and fail while doing so will drive our industry, your career, and your athletes forward. But none of that will happen unless you commit to prioritizing the integrity of your data set. If you are able to do all this, your throne likely awaits…