Science, Dogma, and Effective Practice in S&C
By Andrew Langford
Summary
This article explores the intersection of science, cognitive biases, and practical application in the field of strength and conditioning (S&C). It discusses the limitations of human perception and reasoning, the role of scientific principles in reducing bias, and the dangers of pseudoscience and dogma in the profession. By drawing on analogies and real-world examples, it encourages practitioners to adopt a rational, evidence-informed approach grounded in physiology and sound training principles. The piece concludes by emphasizing the importance of foundational scientific knowledge, critical thinking, and experiential learning in developing effective and ethical S&C practice.
Introduction
The practice of strength and conditioning presents us with a challenge. We must navigate the difficult path between understanding and utilising current facts and training methodologies, while recognising that much of what we do has no evidence base in the given population or specific environment we are working in.
For instance, a training program that effectively increases maximum strength in a group of collegiate football athletes may have limited transferability when applied to youth basketball players or elderly populations. Individual variations in training response, due to age, sport demands, physiology, and recovery, highlight the importance of interpreting specific evidence in context, rather than universally.
So how do we overcome such a conundrum? In this article, we will explore the common issues and propose a theoretical framework for best practice in the profession.
Human Fallibilities
“Our brains were designed to understand hunting and gathering, mating and child-rearing: a world of medium-sized objects moving in three dimensions at medium speeds.” This quote by Richard Dawkins beautifully summarises the inherent limitations built into us humans.
While humans are fantastically and intricately adapted for life on Earth, by our very nature we struggle to fully understand all aspects of the world.
Adaptation has built into us rules-of-thumb, or heuristics, that enable us to function more efficiently in the world. We are able to take in vast quantities of information through our senses and create a model of the world that allows us to operate relatively effectively, with little perceived effort.
For example, in our stereoscopic vision, each eye receives photons from the outside world that stimulate retinal neurons, and the brain then converts these signals into what we perceive as images. We don’t have to consciously think about synthesising the different images from each eye or filling in the gaps created by our blind spots. Instead, our brain automatically pieces together the information and presents us with a detailed representation, or model, of the world.
Most of the time, this is amazingly accurate and allows us to operate effectively. However, it’s not perfect. Consider, for example, visual illusions such as the Necker Cube: a simple line drawing of a cube that almost magically seems to flip between two different orientations. You can find many brilliant examples of these illusions online, all demonstrating how easily our visual senses can be deceived.
But what does this have to do with S&C?
Well, unfortunately—but perhaps unsurprisingly—our visual system is not the only aspect that can deceive us. The human brain is filled with cognitive biases that greatly affect our decision-making. The work of Nobel prize-winning psychologist Daniel Kahneman and his colleagues has detailed many of these biases and their effects, a number of which directly influence our effectiveness as S&C practitioners.
Below are several key biases from Kahneman’s work relevant to S&C practitioners, with brief explanations of their importance:
- Availability Bias: People overestimate the likelihood of events based on how easily examples come to mind.
For example, an S&C practitioner might overemphasise exercises or methods that have become popular and frequently appear on social media, forgetting about better evidence-based alternatives.
- Anchoring Bias: Initial information disproportionately influences decisions, even if irrelevant.
When assessing an athlete’s goals, the practitioner might anchor on a metric that the athlete mentions (e.g., to squat 400 lbs) instead of considering the actual performance benefit of such an outcome.
- Confirmation Bias: Seeking or interpreting information to confirm pre-existing beliefs.
A practitioner might favour training methods they’re familiar with (e.g., Olympic lifting) and dismiss evidence supporting other approaches (e.g., loaded jumps and plyometrics) that could benefit the athlete more.
- Overconfidence Bias: Overestimating one’s knowledge or ability to predict outcomes.
An S&C coach might assume they can accurately predict an athlete’s progress or injury risk based on a flawed or incomplete model, leading to ineffective programming.
- Loss Aversion: People prioritise avoiding losses over potential future gains.
Sports coaches may resist changing ineffective routines because they fear losing current progress, even if a new programme offers greater benefits. Practitioners need to address this fear with evidence to promote better outcomes.
The Value of Science
“The first principle is that you must not fool yourself—and you are the easiest person to fool.” Richard Feynman
Science is not just the facts that we know about the world. While these are of course important, equally important are the principles and values of science. The whole point of science, and the reason it is rightly elevated above other ‘ways of knowing’, is that it attempts to eliminate bias. Science attempts to discover what is really true.
The scientific enterprise is therefore structured to ensure that bias does not creep into our findings and decision-making. The scientific method is designed to guard against self-deception, against being misled by beliefs that are mere traditions, dogmas passed down, or assertions by authority. In other words, it protects against subjectivism.
This is why, in research, we ideally use double-blind trials. This is why we use peer-review in academic publication. And this is why we use control groups and advanced statistical methods for assessment.
This rigorous approach is how we gather reliable facts about the world. Based on these, we can employ critical thinking, logic, rationality, and reason to make inferences about novel situations.
It is true that science is sometimes criticised for being wrong or misleading, but this generally comes down to bad science, not science itself.
The Threat of Pseudoscience and Dogma
One of the ways science gets misused is through the propagation of pseudoscience. This occurs when principles or facts from science are dubiously extrapolated to imply something unsupported. If this were merely a hypothesis, it would be open to testing and not necessarily problematic. But it becomes more insidious when pushed as 100% true, especially when complicated scientific concepts obscure what is actually happening.
A common example is the misuse of quantum theory, erroneously applied to unrelated areas using complicated quantum language to misleadingly explain them.
We see similar phenomena in sports performance all the time. A new training methodology, supplement, or device is proposed that promises seemingly magical benefits. The company proposing it then markets it as objective fact, quickly gaining a cult following that further propagates the myth.
Consider the rapid growth of whole-body vibration (WBV) training equipment in the early 2000s. Companies marketed these devices with unsubstantiated claims, such as “accelerated muscle activation,” “enhanced bone density,” and “equivalent strength gains to traditional resistance training in half the time.” The marketing materials were filled with pseudoscientific terminology about “reflexive muscle contractions” and “gravitational loading amplification.”
The proposed mechanism seemed vaguely plausible to those with a limited physiology background, and some early studies showed small acute increases in EMG activity during vibration exposure, which companies jumped on as proof of effectiveness.
However, when critically evaluated using our scientific principles, we soon notice it doesn’t all add up. The stretch reflex operates on a different time scale and mechanism than voluntary strength development. The high-frequency, low-amplitude contractions induced by vibrations have little resemblance to the coordinated, high-force contractions required for athletic performance. And when researchers conducted longer-term studies comparing WBV to traditional resistance training, the vibration groups consistently showed inferior strength and power gains.
Yet WBV gained widespread adoption in professional sport and fitness facilities, driven by the appeal of “high-tech” solutions and the promise of time-efficient training. Practitioners who might have been skeptical of other recovery modalities readily accepted vibration training because it seemed to align with established training principles, despite lacking robust evidence for performance enhancement.
This demonstrates how pseudoscience can infiltrate S&C when we fall for some of the previously mentioned biases inherent in humans.
A useful heuristic to apply to any hypothesis, following Hume, is that extraordinary claims require extraordinary evidence. When we have a strong underlying theory and rationale, we might accept a claim with modest evidence; if the claim contradicts current scientific understanding, we must demand much stronger evidence. This principle aligns closely with Bayesian reasoning, which gives it a mathematical formulation and is widely used in scientific research.
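To make this concrete, here is a minimal sketch of Bayesian updating in Python; the prior probabilities and likelihood ratios are invented purely for illustration. A claim consistent with established physiology becomes credible with modest evidence, while a claim with a very low prior barely moves unless the evidence is far stronger.

```python
# A minimal sketch of Bayesian updating in odds form:
# posterior odds = prior odds * likelihood ratio.
# All numbers are illustrative assumptions, not real data.

def posterior_prob(prior_prob: float, likelihood_ratio: float) -> float:
    """Update a prior probability given the strength of new evidence."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Claim that fits established physiology: modest evidence is persuasive.
print(posterior_prob(prior_prob=0.50, likelihood_ratio=3))    # ~0.75

# Extraordinary claim with a low prior: the same evidence barely moves it.
print(posterior_prob(prior_prob=0.01, likelihood_ratio=3))    # ~0.03

# Only far stronger evidence makes the extraordinary claim plausible.
print(posterior_prob(prior_prob=0.01, likelihood_ratio=300))  # ~0.75
```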
Much of what we see in the commercial fitness sphere strongly falls within the realm of pseudoscience. But I also fear that much of what we have adopted as standard practice in S&C may also be subject, although to a lesser extent, to this kind of dogma. For example, are Olympic lifts truly vastly superior to other training methodologies, or has the dogma become so embedded that questioning it is now almost heresy?
Cannonballs and Guided Missiles
I often like to present topics to practitioners in novel ways, as I find this to be the most effective tool to stimulate new thought. A good example of this is how we think about programming, and ultimately why effective programming is important. This is where my analogy of cannonballs and guided missiles comes in.
A cannonball is a projectile designed to perform the task of hitting a target. Being a simple, round object crudely fired in the general direction of that target, its success is fairly limited.
A guided missile, on the other hand, is a finely-tuned and programmed projectile that can intricately weave and dart through the air, homing in on its eventual target.
As we know, guided missiles are far more effective at hitting the target.
We can use this analogy to think about the purpose and effectiveness of programming in S&C. If we choose to use a generic, cookie-cutter approach to programming, we are effectively employing the cannonball approach. The programme may work to some extent, but its overall effectiveness is limited.
In comparison, if we craft the programme to cater to the individual athlete and the given circumstances, we can vastly improve its effectiveness. Using this guided missile approach, we can fine-tune the exercises, sets, reps, and intensities to dynamically respond to the athlete and current environment.
More precisely, we are constantly assessing and refining which adaptation we need to work on, and how our programme is going to elicit a purposeful outcome. This approach subtly shifts the emphasis for the practitioner away from thinking too much about what the ideal programme should look like and how it meets the typical expectations of sports or textbooks. Instead, the focus becomes the adaptation.
The Adaptation’s Eye View: Unweaving Athletic Performance
Strength and conditioning has developed to the point where whole textbooks are written about how to programme for a given sport or activity. Complex programming and periodisation models are presented, often assuming inherent superiority over other methods. Perhaps more confusingly, there are now hundreds of technological tools that practitioners can utilise, each with hundreds of their own metrics to analyse performance. The problem with this is that practitioners can become blinded by the metrics. The majority of the measures that we are able to collect with technological tools are outcome measures that don’t tell us much about how those outcomes are actually being produced by the body.
Consider, for example, using force plates to obtain the rate of force development from an Olympic lift. We have no direct insight into what specific aspect of physiology is producing the outcome measure. We know that it is a complex summation of muscle actions, including concentric, eccentric, and isometric contractions of the calves, quads, hamstrings, and glutes, as well as upper body musculature.
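To illustrate just how compressed such an outcome measure is, here is a minimal sketch of how an average RFD value might be derived from a force-time trace. The sampling rate, onset threshold, and synthetic force curve are all assumptions; commercial systems use more sophisticated onset detection and filtering.

```python
import numpy as np

# A minimal sketch of deriving an average early-phase RFD from a
# force-time trace. The sampling rate, onset threshold, and synthetic
# force curve are illustrative assumptions, not real data.

fs = 1000                                      # sampling frequency (Hz)
t = np.arange(0, 0.5, 1 / fs)                  # 0.5 s of samples
force = 800 + 1200 * (1 - np.exp(-t / 0.08))   # synthetic force trace (N)

onset_idx = int(np.argmax(force > 850))        # crude onset: first sample >850 N
window = int(0.2 * fs)                         # 0-200 ms analysis window

# Average RFD over the window: change in force / change in time.
rfd = (force[onset_idx + window] - force[onset_idx]) / 0.2
print(f"RFD 0-200 ms: {rfd:.0f} N/s")
```

However it is calculated, the result is a single number that summarises the combined output of many muscle actions without revealing their individual contributions.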
What we need to do as practitioners is break things down into what specific physiological quality is being stressed, and therefore, what adaptation is likely to occur.
For example, if we want to achieve a very high rate of force development (RFD), we can do so through a variety of methods, including the isometric mid-thigh pull (IMTP), Olympic lifts, drop jumps, and other plyometrics. However, each of these methods achieves RFD through different physiological mechanisms.
For example, consider the vertical jump: if one athlete achieves a height of 40 cm with a concentric-only squat jump, and another athlete achieves the same height on a drop jump, this tells us very different things about the underlying physiological qualities.
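One way to make that distinction concrete is to look at ground-contact time alongside jump height. A minimal sketch, using assumed flight and contact times:

```python
G = 9.81  # gravitational acceleration (m/s^2)

def height_from_flight_time(flight_time_s: float) -> float:
    """Estimate jump height from flight time: h = g * t^2 / 8."""
    return G * flight_time_s ** 2 / 8

# Both athletes reach ~0.40 m (a flight time of ~0.571 s), but only the
# drop jump has a ground-contact time, exposing a different quality:
# fast stretch-shortening-cycle function. The contact time is assumed.
flight_time = 0.571   # s, yields ~0.40 m for both jumps
contact_time = 0.18   # s, drop jump only (illustrative)

height = height_from_flight_time(flight_time)
rsi = height / contact_time  # reactive strength index

print(f"Jump height: {height:.2f} m, drop-jump RSI: {rsi:.2f}")
```

The same 40 cm thus resolves into different underlying profiles: one dominated by concentric force production, the other by reactive, elastic qualities.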
We should therefore strive to analyse all movements and outcomes in terms of their foundational physiological qualities, which will then give us clues as to what adaptations they will cause and what exercises, sets, and reps may best achieve a given long-term goal.
Over time, an athlete thus becomes a statistical description of the demands placed upon them. To see why, consider the following thought experiment. Take 10 athletes and randomly assign them to two different training methodologies:
- One for maximum force production.
- One for aerobic capacity.
If we test the athletes before and after the intervention, we can be fairly confident that the athletes who received the max force training will have improved the function of their high-threshold motor units, whereas the aerobic training group will have improved their mitochondrial density.
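A toy simulation of this thought experiment, with group sizes, true effects, and noise invented purely for illustration, might look like the following sketch:

```python
import random
import statistics

# A toy simulation of the thought experiment above. Group sizes, true
# effects, and noise are invented purely for illustration.

random.seed(42)

def pre_post_change(true_effect: float, noise_sd: float = 4.0) -> float:
    """One athlete's change score: true adaptation plus individual variability."""
    return random.gauss(true_effect, noise_sd)

athletes = list(range(10))
random.shuffle(athletes)          # random assignment guards against bias
max_force_group = athletes[:5]
aerobic_group = athletes[5:]

# Test everyone on a maximal-force measure. The max-force group's larger
# true effect stands in for improved high-threshold motor unit function;
# the aerobic group's near-zero effect reflects adaptation specificity.
strength_changes = [pre_post_change(12.0) for _ in max_force_group]  # % change
aerobic_changes = [pre_post_change(1.0) for _ in aerobic_group]      # % change

print(f"Max-force group mean change: {statistics.mean(strength_changes):+.1f}%")
print(f"Aerobic group mean change:   {statistics.mean(aerobic_changes):+.1f}%")
```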
This knowledge gives us the rationale for why we must view all of our training interventions primarily in terms of the adaptation we hope to elicit. If we cannot be sure of what our desired adaptation and associated performance outcome will be, then we probably should not be programming it.
Effective Modelling and Reductionism
Reductionism is often wielded as a derogatory term to criticise science. It is used to imply that all we want to do as scientists is break things down further and further into their constituent parts, and then explain complex wholes purely through these isolated components.
Of course, if we take this approach simplistically, we will quickly find that it does not give us the answers we are looking for. And so, the argument against the scientific enterprise appears to gain strength. However, nowhere within the principles of reductionism does it state that we must view things purely as the simple sum of their parts. Indeed, the beauty of effective science is to be able to break things down into their elements and then figure out the complex summation required to put them back together into a meaningful whole.
Viewed in this way, we should always aim to break things down into their constituent parts and then explain the whole in terms of how these parts interact, not simply as the sum of those parts.
The more complex the system we are analysing, the more challenging it becomes to piece the parts back together effectively. But that does not make it an aimless or fruitless task. On the contrary, this is the essence of effective modelling.
As with all models, our outcomes are only as good as the data we input. No matter how sophisticated or elegant the model, if we don’t use it wisely, the information it provides won’t be useful. It is therefore essential that whenever we are assessing something in S&C, we are mindful of what we are testing, and why we are testing it. We must shield ourselves from bias and ensure that our data collection methodologies are valid and reliable.
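As one concrete example of checking an input’s reliability, a common approach is to compute the typical error from test-retest data and treat changes smaller than it as potential measurement noise. The jump-height values below are invented:

```python
import statistics

# A minimal sketch of a test-retest reliability check using the typical
# error (SD of difference scores / sqrt(2)) and the coefficient of
# variation. The countermovement jump heights below are invented.

day1 = [38.2, 41.5, 36.9, 44.1, 39.8]  # CMJ height, day 1 (cm)
day2 = [39.0, 40.8, 37.5, 43.4, 40.6]  # CMJ height, day 2 (cm)

diffs = [b - a for a, b in zip(day1, day2)]
typical_error = statistics.stdev(diffs) / (2 ** 0.5)

grand_mean = statistics.mean(day1 + day2)
cv_percent = 100 * typical_error / grand_mean

print(f"Typical error: {typical_error:.2f} cm ({cv_percent:.1f}% CV)")
# A change smaller than the typical error may be measurement noise
# rather than a real adaptation.
```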
As S&C practitioners, everything we do is based on creating models. We analyse the sport, the athlete, the needs, and the goals, and then develop an associated intervention to suit those demands. Ultimately, the usefulness of any model is measured by its ability to make accurate predictions of the future. In S&C, this means making a prediction about how the athlete will develop.
In any real-world situation, this forecasting approach is inherently difficult. Just as weather forecasting is famously unreliable, so too are predictions about injury or sports performance outcomes. However, the more focused our desired outcome (such as a specific adaptation we hope to elicit), the more accurate our predictions can become.
For example, if we design a programme to increase maximal force output of the quadriceps and glutes, we could reasonably predict an increase in 1RM squat or IMTP performance. However, if we then infer that this will translate into an increase in striking power during a football kick, or long jump distance, our prediction becomes far less certain, as more interacting variables are now involved.
Navigating the Path of the Effective Practitioner: False Positives vs False Negatives
So, what is the best way to be an effective S&C practitioner?
My thesis is that—given the inherent subjective biases in humans—we need to utilise the facts, principles, and values of science in order to navigate the complex terrain of athletic performance.
To use the terminology of the statistician, we can consider ourselves as seeking the right balance between Type 1 and Type 2 errors.
- Type 1 errors, or false positives, occur when we think something has occurred, when in fact it hasn’t.
For example, consider an athlete who starts wearing compression garments during recovery periods and then sees an improvement in sprint performance. We may jump to the conclusion that the compression garments caused the results, instead of looking at other factors such as natural adaptation to training load, nutritional improvements, or even psychological placebo effects. Without rigorous, controlled evaluation, we risk adopting methods based on coincidental correlations rather than evidence-based causation.
- Type 2 errors, or false negatives, occur when we think something didn’t make a difference when actually it did.
For instance, consider an athlete following a plyometric training programme to enhance explosive power. If we rely solely on a general performance metric such as sprint time, which doesn’t show immediate improvement due to external factors like fatigue or weather conditions, we might incorrectly conclude that plyometric training was ineffective. Yet, if we had directly measured neuromuscular outcomes such as increased peak power, we might have recognised genuine improvements caused by the intervention. Thus, overly simplistic or inappropriate assessments can lead to dismissing effective methods prematurely.
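This trade-off can be made concrete with a toy simulation: lowering the evidence threshold catches more genuine effects (fewer false negatives) but admits more spurious ones (more false positives), and raising it does the reverse. All numbers below are invented:

```python
import random

# A toy simulation of the trade-off between error types. Effect sizes,
# noise, and thresholds are invented purely for illustration.

random.seed(1)

def observed_change(true_effect: float, noise_sd: float = 2.0) -> float:
    """One observed pre-to-post change: true signal plus noise."""
    return random.gauss(true_effect, noise_sd)

def error_rates(threshold: float, trials: int = 10_000):
    """False positive rate (no real effect, yet we act) and false
    negative rate (real effect of +1.5 units, yet we dismiss it)."""
    fp = sum(observed_change(0.0) > threshold for _ in range(trials)) / trials
    fn = sum(observed_change(1.5) <= threshold for _ in range(trials)) / trials
    return fp, fn

for threshold in (0.5, 1.5, 3.0):
    fp, fn = error_rates(threshold)
    print(f"threshold={threshold:.1f}  false positives={fp:.2f}  false negatives={fn:.2f}")
```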
In essence, if we stray too far towards favouring either false positives or false negatives, our effectiveness as practitioners suffers.
Some situations will likely warrant more of a shift towards one end of the spectrum than others. For example, in a medical setting, we likely want to err on the side of caution, perhaps accepting more false negatives, to ensure that we don’t cause any undue injury or negative side effects.
On the other hand, at the very pinnacle of sport, where the smallest of margins can mean the difference between winning and losing, we may have a rationale for experimenting, in a small way, with a soundly reasoned methodology that does not yet have research backing.
Of course, we want to know whether something truly works, and whether a particular method of training is effective. But unfortunately, as an applied science, S&C is often stuck in a grey area of ambiguity. For any given situation, we likely will not be able to find a peer-reviewed study telling us whether a particular protocol will work with our athlete in our unique circumstances.
So how do we navigate this?
The first step is to strengthen our foundational knowledge of science. The better our understanding of physics, chemistry, and biology, the better we are able to form rationales for how and why exercises and training programmes will cause adaptations to occur, and have specific performance outcomes.
We can then use our scientific principles and values to skeptically and rationally question and assess what we do. We can collect data and analyse it in an unbiased way, determining whether we have been effective and what the best future action should be.
This ‘first principles’ approach to S&C allows us to reduce the limitations of our specific knowledge and inherent biases, and to make assumptions based only on evidence and reason. The more our assumptions are grounded in objective facts and reason, the less likely we are to go astray.
We also hope that, through experience of working with different athletes and in different environments, we will develop an almost instinctive ability to do this. Indeed, when we see coaches with decades of experience, it is likely that they are either innately very good at this process or have developed the required skills over time (or, more likely, both).
Today, we can learn almost anything online for free, but deciding which information is good and which is bad can be a challenge. Staying motivated and on track can also be difficult.
This is where formal education pathways are hugely valuable. A well-structured degree, such as those accredited by the IUSCA, can instil these principles and values of science, along with the necessary factual knowledge in the underlying sciences.
But as we’ve alluded to, knowledge is only part of the battle in becoming an effective practitioner. We also need experience and the ability to put that knowledge into practice. Good internships and work experience opportunities can provide this, and the best degree programmes can too.
Additionally, the challenge for any S&C practitioner is to develop the intellectual humility to question our own methods while maintaining the confidence to act decisively when needed. This requires us to:
- Continuously update our foundational knowledge in the underlying sciences.
- Create hypotheses based on this knowledge and the values of science.
- Actively seek out evidence, even if it goes against our preferred methods.
- Remain skeptical of extraordinary claims, especially our own.
- Collect meaningful data on our interventions and analyse it honestly.
- Accept that uncertainty is inherent in our field, not a weakness to overcome.
The framework presented in this article (understanding cognitive biases, applying scientific principles, thinking in terms of adaptations, and balancing false positives against false negatives) provides a roadmap for navigating this complexity. But ultimately, the quality of our practice depends on our commitment to ‘good science’ and our willingness to let evidence, rather than dogma, guide our decisions.
In a field where the stakes are high and the answers are often unclear, this approach offers the best path forward for practitioner and athlete alike.