William of Ockham was a 14th-century English philosopher who is famous for his ideas in the category of metaphysical nominalism. (Please, don’t stop reading. I swear there is some decent content here.) His most famous concept is “Entia non sunt multiplicanda praeter necessitatem,” which in English means, “More things should not be used than are necessary.”
While there is no indication that he wrote that exact phrase, he is given credit for the idea of not lending mental bandwidth to something that is not “self-evident, known by experience, or proved by an authority.” These three rules form a burden-of-proof spectrum, ranging from easiest to most challenging to prove. When developing a program (whether you’re periodizing, individualizing, or choosing exercises) or making on-the-fly changes to a program (based on accumulated workloads, players’ daily psychological testing, or undulations based on feedback from VBT or RPE), it’s critical that what you implement is bucketed into one of these categories.
If you’re unable to bundle your changes into one of these three categories, then there’s no point in implementing them and expecting the outcome you want. And don’t confuse this with the scientific process!
Self-Evident
I would characterize self-evident as two things:
- Something obvious: “Hafthor Bjornsson is strong” (yep, that checks out).
- Something that a person knows about themselves: “My body doesn’t respond well to barbell back squatting.” Okay, then I probably won’t barbell back squat this athlete.
I would describe a self-evident change as something intuitive. Assume, for example, an athlete has a major force production imbalance with one of their legs. Intuitively, as a coach, I would recommend a decrease in bilateral lower body work with an increase in unilateral lower body work. Also, possibly some correctives if the loading pattern for their squat jump is askew. Those of you who are thinking, that’s pretty obvious, my guy—that’s the whole point. Self-evident changes need to be extremely obvious to justify being in this category.
For those of you asking: Why would I place an emphasis on correctives and unilateral training? The athletes I work with squat 2x bodyweight; they don’t need single-leg work. I’m not saying you should inundate an athlete only with corrective exercises to fix the imbalance and never lift heavy. If the athlete is strong, they should continue to train, but with a unilateral emphasis until the imbalance is more manageable.
There’s also something to the idea of a self-fulfilling prophecy when working with athletes, and this is what makes coaching tricky. Each athlete has their own psychological makeup that, at the very least, we need to consider. How an athlete appraises a situation is critical to understanding the hormonal cascade that will ensue. If an athlete believes something is good or bad for them, it probably is. This self-evident category is perhaps best used for athletes who are in tune with their bodies and minds and know how they respond to demands; it’s not very applicable for youth and novice athletes.
Known by Experience
This category is similar to the self-evident classification regarding athletes’ experiences with their bodies. However, I would argue this is based more on a coach’s experience and their relationships with their athletes and can be applied more globally. I think of self-evident responses as a 1:1 ratio. Does the athlete have poor arm swing during a sprint? Work on front side mechanics; you probably won’t fix the problem as effectively if you don’t work on the exact issue. With knowing by experience, there are many different changes you can make based on one problem. The athlete isn’t sleeping well at night and has acquired a 12-hour sleep debt. What do you do? Well, if you don’t know your athlete’s nightly routine, this would be a good place to start. Another example is knowing that collegiate athletes are going to have a tremendous amount of stress during finals week. Having the wherewithal to program a de-load week or to schedule optional activities is knowing from experience.
Essentially, this means knowing that uncontrollable stressors—sport performance stress, school stress, family stress, financial stress, sleep stress—all affect athletes in a similar way as the stressors you program in the weight room or on the field. These uncontrollable stressors compound the acute stresses in your program because they are chronic and affect athletes continuously throughout the week(s). Knowing when individual athletes are more stressed (academic tests, a poor run of form, continuous weekly sleep debt, etc.) and responding appropriately based on your experience and your relationship with the athletes are what makes knowing by experience effective, though more challenging than self-evident responses.
Proved by an Authority
This category has the highest burden of proof and is mostly reserved for the hard sciences and using the scientific method to prove cause and effect. For example, HIIT can yield similar aerobic adaptations to moderate-intensity training, and can do so with less training volume (shameless plug). Having a proven, accurate, and reliable way to measure performance outcomes is the most effective way to satisfy this category. For example, I wanted to improve my vertical jump, so I embedded more plyometrics and jump training into my program. My LBM and fat mass stayed the same, and my vertical jump increased. I can prove the program caused the change.
So, how do these philosophical tenets reflect where strength and conditioning coaches and sports scientists are now in the 21st century? Great question!
Our Data Obsession
Currently, our industry is obsessed with data, linear regression models, correlations, and statistical significance. We search for these things as if they will tell us exactly when our athletes will get sick or injured, or which exercise will make inferior athletes better than the genetically gifted ones. We try to correlate random data points with every good and bad outcome (Good God! We’ve had ten soft tissue injuries on Tuesdays this year; therefore, to limit soft tissue injuries, we shouldn’t train on Tuesdays). We forget the first thing we learned in Stats 101: correlation does not imply causation. Now, before I get too carried away and make every sport science department in the country scream at their computer screens in disgust, let me be clear: data in athletic development is necessary.
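To make the Tuesday example concrete, here is a minimal sketch (entirely hypothetical numbers, pure Python) of why random monitoring metrics will appear to “significantly” correlate with an equally random outcome just by chance:

```python
import random
import math

random.seed(1)
n_athletes, n_metrics = 50, 40

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# 40 completely random "monitoring metrics" for 50 athletes,
# plus a random "injury risk" outcome unrelated to any of them
metrics = [[random.gauss(0, 1) for _ in range(n_athletes)] for _ in range(n_metrics)]
injuries = [random.gauss(0, 1) for _ in range(n_athletes)]

# |r| > 0.28 is roughly the two-tailed p < 0.05 cutoff for n = 50
spurious = [i for i, m in enumerate(metrics) if abs(pearson_r(m, injuries)) > 0.28]
print(f"{len(spurious)} of {n_metrics} random metrics look 'significant' by chance")
```

With 40 unrelated metrics at a 5% significance threshold, roughly two spurious “findings” are expected even though nothing here causes anything, which is exactly why a correlation alone can’t justify canceling Tuesday training.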
However, we’ve been ignoring the simple principle that Ockham described over 700 years ago. In the same vein, Leonardo da Vinci said, “Simplicity is the ultimate sophistication.” The more screens, tests, and data collected during a year, the more noise there is, and the harder it can be to find the signal you’re looking for. The key is that the data needs to provide (data analytics people, say it with me…) actionable insight. So how does this bring us to programming? Another excellent question.
The three most important programming steps:
1. Data Collection. Accurate and reliable testing of athletes on specific characteristics that are important to the sport they play or the goals you want them to accomplish (these play a varying role, but are required by all sports).
   - Sufficient mobility: ROM testing, mobility screens, etc.
   - Adequate motor control: OH squat testing, dissociation testing, etc.
   - Aerobic capacity: HR recovery testing, VO2 max, etc.
   - General strength: Repetition max testing, force plate testing, etc.
   - Power: Vertical jump testing, broad jump testing, force plate testing, etc.
   - Speed: 40-yard dash, 5-10-5, etc.
2. Periodized Program. A well-thought-out plan that varies based on the test results of each athlete.
   - Attacking weaknesses while solidifying strengths.
3. Retesting. This ensures the program accomplishes the athletes’ goals.
   - Our actionable insights and Ockham’s razor come into play here. If the data you collect does not change the way you program, coach, or interact with an athlete, or if the changes you make don’t alter the data or outcome, is the data worth collecting?
   - If the data you’re collecting causes a change in the way you program, then I would argue yes, it is worth collecting—it’s causing actionable changes to the way you’re programming for an athlete. However, if these changes do not show results, is the test viable for the athletic characteristic?
With these three programming steps in place, how can Ockham’s razor and actionable insight help us design effective programs?
Avoid Death by Data Collection
Just as Ockham described, “More things should not be used than are necessary.” So, if you’re collecting data on athletes and don’t take any actions after you collect data, what is the point of gathering it in the first place? For example, if you monitor an athlete’s power output on a force plate and don’t make changes to the program based on their data, why do you have them jump in the first place? Don’t just check the box to say that you monitor athletes. You’re wasting your time and, more importantly, the athlete’s time.
Attack Their Weakness and Solidify Their Strengths
In a well-designed program, each athlete will most likely perform similar movements with modifications based on training age, mobility deficiencies, and movement preferences. Particularly in professional sports, athletes do have a say in their program. If they don’t want to do the movement you have programmed, you need to explain why you included that exercise and then give them a choice on modifications that are similar to the original plan.
If an athlete needs to improve their force production based on force plate measures, there are hundreds of ways to accomplish this, and it becomes increasingly specific when you have other data points to consider for the athlete. Knowing by experience comes into play here. We need to consider many factors, and having the experience and the knowledge of the athlete’s future goal will be critical in the decision-making process. If this same athlete has an excellent rate of force development, adding additional mass could hinder their RFD. Therefore, it would be best if the athlete added as much LBM and as little fat mass as possible. Making sure that the athlete is aware of the goal—and giving examples of how they can achieve that goal nutritionally—is critical. However, if the athlete doesn’t need an extremely high RFD (think of offensive linemen in football), adding any kind of mass takes precedence. The key is to monitor progress and change course when the data you collect indicates it’s time to do so.
Monitor Adaptations with a Test and Retest Method
As stated earlier, we need to test and retest athletes to ensure that desired adaptations are occurring. The testing methods you choose must be self-evident or proved by an authority. You can effectively program for teams based on the goals set forth by the on-field coaching staff and then individualize the program based on the desires and needs of each athlete. When you’re initially testing your athletes, the tests must be reliable and accurate. If changing body composition is a goal, make sure you use a properly calibrated machine (underwater weighing and Bod Pod are the gold standards). If the initial testing is wrong or if force plate numbers were inflated, there is no way to show changes resulting from the program. Or, even worse, you reveal that your athletes got worse because of some incorrect numbers you initially gathered. Being able to show that adaptations occurred not only proves that the programming was effective but also demonstrates your inherent value as a practitioner.
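As a rough illustration of the test-retest idea, here is a short sketch with hypothetical countermovement jump numbers. It uses the common sports-science convention (from Hopkins’ work on measurement statistics) of treating 0.2 x the between-athlete SD as the smallest worthwhile change, so only improvements beyond normal squad variation count as real adaptation:

```python
import statistics

# Hypothetical pre/post countermovement jump heights (cm) for a squad
pre  = [38.1, 42.5, 35.0, 40.2, 44.8, 37.6, 41.0, 39.4]
post = [39.5, 43.1, 36.9, 40.0, 46.2, 38.8, 42.3, 40.1]

# Smallest worthwhile change: 0.2 x between-athlete SD of the baseline test
swc = 0.2 * statistics.stdev(pre)

changes = [b - a for a, b in zip(pre, post)]
mean_change = statistics.mean(changes)

# Count athletes whose improvement exceeds the smallest worthwhile change
improved = sum(1 for c in changes if c > swc)
print(f"SWC = {swc:.2f} cm, mean change = {mean_change:.2f} cm")
print(f"{improved} of {len(pre)} athletes improved beyond the SWC")
```

A comparison like this only means something if the baseline numbers were accurate in the first place; garbage in the `pre` list makes every downstream conclusion garbage too.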
You display even more value when you show that you accomplished the overall goal for the team (increasing team power output via force plate measures, for example) while also improving individual imbalances measured by the force plate, all while improving cardiovascular fitness demonstrated via a fitness test. (I’ll take the bonus in a lump sum, thanks!)
As a coach, if you ever find yourself saying, “This is the way that we have always done it,” take a step back and listen. Take an unbiased look at what someone is suggesting. See what their evidence suggests and try to understand why they think a change could be helpful. The best way to implement Ockham’s razor in your programming is to use it as a lens to analyze your current program, “Know yourself and seek self-improvement.” See where you can make improvements as a coach, whether it’s the data collection process, the implementation of individual training, or showing adaptations to on-field staff or even athletes. No competitive athlete I’ve worked with has ever been disappointed by seeing improvements in the data.
As with anything, Bruce Lee’s quote “absorb what is useful, discard what is useless, add what is uniquely your own” remains true. If something is useful, use it. If a training method comes out that shows improvement in every athletic quality at the same time, you best believe I’m going to try it out for myself. If something isn’t useful, discard it. If a test doesn’t measure what it says it measures, get rid of it—it’s a waste of time. Add what is uniquely your own. If an athlete believes something will improve their performance, let them have their placebo effect. Just remember, “More things should not be used than are necessary.”