Introduction
Sports science is a relatively new discipline, but it can trace its roots back to Ancient Greece and Rome. The Greek physician Hippocrates, for example, was an early advocate of the benefits of exercise and nutrition, stating "eating alone will not keep a man well; he must also take exercise", along with "walking is man's best medicine". Over the years, sports science grew out of this early foundation; in 1927, the Harvard Fatigue Laboratory was set up—a place commonly seen as the birthplace of exercise physiology in the US. After World War II, this early interest in physical performance grew into more structured research, and in the 1960s and 1970s a number of university kinesiology, biomechanics, and exercise physiology departments developed. Alongside this, Eastern Bloc countries such as the Soviet Union and East Germany set up applied sports science programmes directly tied to performance outcomes; whilst the legacy of these programmes is troubling given the various abuses (including pharmacological) that took place, they nevertheless remain an example of an early nationalised approach to sports science. Moving into the modern era, a number of sub-fields of sports science have developed, including sport psychology, motor learning, skill acquisition, performance analysis, and now areas such as data science, AI, and performance modelling.
Sports science has really exploded over the last 30 years or so, and this rapid progress has come with plenty of criticism—some of it well deserved. Small sample sizes continue to plague research, as do concerns about the typical participants in studies—most are neither elite athletes nor female, which harms the generalisability of any results. This matters for the end users of sports science research—athletes, coaches, and practitioners—who are often making decisions based on research that may not apply to them and their contexts. As such, being aware of the limitations of sports science research is crucial, a point made by Dr Chris Stevens in a recent editorial in the International Journal of Sports Physiology and Performance. A similar critique, published in the same journal back in 2021, gives an overview of some of the issues (including statistical misunderstandings) common in applied sport science research.
Image: Female Athlete Performing a Hands on Hips Countermovement Jump on a VALD Force Deck
If you’re reading this article, then you’re already someone interested in applying sport science principles and research to your practice, whether as an athlete, coach, practitioner, parent, or some combination of these. What are some of the key issues we need to be aware of in this field, and how do we move forward? Fortunately for us, these questions were put to a group of leading sports scientists in an article in the Strength and Conditioning Journal, which makes for interesting reading.
What are the Biggest Issues Facing Sports Science Today?
Some of the biggest issues highlighted by the panel included:
Technology Overuse and Misapplication
There has been rapid growth in technological capability, with the ability to quickly and easily capture data. However, this rapid data capture demands interpretation—often from practitioners who lack the time, training, or support to do so effectively. Most don’t have access to a dedicated data scientist, or even someone with strong data skills; as a result, technology and data are often misapplied, with an overreliance on black-box algorithms creating a flood of metrics with unclear relevance. For the discerning user of sports science, in a field flooded with new technology—GPS, force plates, velocity-based training tools, HRV wearables, dashboards, markerless motion capture, AI analytics, to name a few—this makes for sobering reading. It is especially true when we consider that much of the “cutting edge” technology within sports science is relatively easy to acquire (reasonable cost, widely marketed), superficially easy to operate (plug-and-play solutions are often available), and yet hard to meaningfully interpret (many tools rely on black-box algorithms, meaning we often don’t fully know how the models make decisions).
As an example, it is possible to collect over 100 variables from a GPS package. Because this is relatively easy to capture, many people do—without an understanding of how they might use the data to inform decisions (and the limitations of the data itself), or what the data actually mean. This leads to the misapplication of data: if we’re putting bad (or poorly understood) data into a decision, then the decision we make is going to be poor. Similarly, if you can collect 100 pieces of data from a piece of technology but only 3 are relevant, then the massive data sets you collect become costly—in analysis time, storage space, and clunky decision-making. Finally, there is a cultural tendency to equate having numbers with being scientific; in reality, practitioners often don’t fully understand the data, or the data collection processes, on which their decisions rest.
This means that some crucial skills for the sports scientist and coach of the future to develop include:
- Be clear on what to measure, and why (i.e., becoming an essentialist)—will the data I collect influence my decisions?
- Develop tool-task alignment—use data to answer the questions you have, as opposed to using the data you have to drive your questions.
- Be clear on the limitations of the data collected—it’s fine to use data, so long as you understand it within context.
- Be able to rapidly analyse and interpret the data you do collect to provide actionable insights—upskilling in areas such as data literacy and the validity of the technology you use.
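To make the essentialist idea concrete, here is a minimal Python sketch of tool-task alignment: from a wide export (represented here as a list of dicts), keep only the identifiers and the handful of metrics agreed in advance to inform decisions. All field names are hypothetical illustrations, not taken from any specific vendor’s export.

```python
# Hypothetical sketch: reduce a wide GPS export to the few metrics
# agreed in advance to drive decisions. Field names are illustrative.

DECISION_METRICS = ("total_distance_m", "high_speed_running_m", "accel_count")
ID_FIELDS = ("athlete_id", "session_date")

def reduce_export(rows):
    """Strip a wide export down to identifiers plus decision metrics."""
    wanted = ID_FIELDS + DECISION_METRICS
    reduced = []
    for row in rows:
        # Fail loudly if an agreed metric is absent, rather than
        # silently making decisions on incomplete data.
        missing = [key for key in wanted if key not in row]
        if missing:
            raise KeyError(f"export is missing expected fields: {missing}")
        reduced.append({key: row[key] for key in wanted})
    return reduced
```

The point of the sketch is the direction of travel: the question (which metrics inform a decision?) is fixed first, and everything else in the export is deliberately discarded.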
Lots of Data – Poor Data Governance
With the increasing number of tools available to us, the sheer volume of data that can be collected is mind-blowing. For example, a well-funded elite sporting team could be collecting on-field performance data through both video and GPS, gym-based data through force plates and velocity-based training measures, and athlete wellbeing data via HRV, RPE, sleep, nutrition logs, and various surveys. It’s tempting to think that more data equates to better decisions, but, unfortunately, the signal-to-noise ratio is often poor—key insights get buried in irrelevant or low-quality metrics. This can slow down decision making (we have to spend longer going through the data), make it reactive (something pops up in the red zone and we have to respond), or make reaching a decision hard (paralysis by analysis). Alongside this, there are often only weak systems in place for managing, protecting, and using data at the organisational level. Typically, there are no clear policies for athlete consent to data collection, and there is ambiguity around who owns the data, as well as around data privacy and sharing. Ethically, it could also be argued that the over-collection of data without any clear purpose is troubling, as it fosters a surveillance mindset. As Sophia Nimphius highlights in the article, 20 years ago we felt that data would solve our problems—now, our biggest problems are linked to the sheer volume of data, alongside its accuracy, cleanliness, and meaningfulness.
How can we improve in this area? Firstly, we need to be clear on the questions we need answered, and then use technology that collects the data to answer those questions; this clarity ensures that only necessary data is collected. Next, we need athlete-centred data collection policies that emphasise consent, privacy, and purpose. For the sports scientist (and, to a lesser extent, the coach), upskilling in areas such as data hygiene, critical thinking, and applied decision making is important. Finally, an overall cultural shift from a data-first to a question-first approach is crucial.
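As a simple illustration of the data hygiene point, the sketch below checks a single athlete record against plausible ranges before it is allowed to inform a decision. The field names and ranges are illustrative assumptions, not established standards.

```python
# Hypothetical data-hygiene check: flag missing or implausible values
# in one athlete record before it feeds a decision.
# Field names and plausible ranges are illustrative assumptions.

PLAUSIBLE_RANGES = {
    "sleep_hours": (0.0, 14.0),
    "rpe": (0.0, 10.0),          # session RPE on a 0-10 scale
    "hrv_rmssd_ms": (5.0, 250.0),
}

def hygiene_flags(record):
    """Return a list of problems found in one athlete record."""
    flags = []
    for field, (lo, hi) in PLAUSIBLE_RANGES.items():
        value = record.get(field)
        if value is None:
            flags.append(f"{field}: missing")
        elif not (lo <= value <= hi):
            flags.append(f"{field}: {value} outside [{lo}, {hi}]")
    return flags
```

A check like this is deliberately dull: its job is simply to stop typos, sensor glitches, and half-completed surveys from quietly entering the decision-making process.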
Image: Track and Field Athlete Performing a Hands on Hips Countermovement Jump on a Hawkin Dynamics Dual Force Plate
Fragmented Communication
The number of disciplines within sports science continues to grow, and with them, so has the number of people in the performance team surrounding the athlete. On paper, this is good—increased expertise across areas should enhance performance. For this to be true, however, communication needs to be effective—and it rarely is. A key challenge is that groups often have their own language (for example, a skill acquisition specialist may talk about ecological dynamics, which isn’t intuitive to the coach or athlete); different priorities (a sports nutritionist may want to focus on an ergogenic aid, whilst a coach is more interested in reducing injury risk); and competing time pressures (the high performance manager might need data for an end-of-year report, whilst the applied biomechanist is busy working with the athlete and coach). This can lead to communication silos, which make it harder for everyone to move in the same direction.
How can we fix this? Shared language and mental models help; this requires clarity on how performance is improved, along with clarity on what key terms (e.g., power) mean to everyone. A focus on providing timely and relevant insights to athletes and coaches is also crucial. Finally, embedding sports science into coaching conversations ensures that everyone is working from the same page.
Where Next for Research?
The research article also outlines some key priority areas, as identified by the panel of experts, for future research. These include:
- Elite-athlete specific research – if a limitation of current sports science research is a lack of generalisability from research to practice, in part because study participants are not elite athletes, then the solution is simple (on paper): do more research on elites. However, research in elite athletes is difficult; they are busy, hard to control, and difficult to subject to randomised controlled trials. There are solutions, though; we can utilise n=1 research designs, for example, or longitudinal case studies.
- Better integration of practice-based evidence – a key driver of effective research is the ability to answer the questions that matter: those that come from elite athletes and their coaches. This sounds obvious, but a lot of research is generated by universities that sit outside elite sport. A solution is to merge the two, embedding researchers and/or academics into the performance environment so they can better understand—and therefore answer—the questions of those at the coalface.
- A more holistic understanding of long-term athlete development – the research that does exist in elite athletes is typically quite short-term in nature; it begins when they are already elite. We will be better placed to support future elite athletes if we have a more holistic understanding of the athlete journey, from pre-elite to retirement. This would allow the generation of training progression models based on a more in-depth mechanistic understanding of development, including psychosocial factors, which link strongly to the environment curated by the coach. In addition, the evidence for many long-term athlete development (LTAD) frameworks is still underdeveloped, limiting the validity of such models.
- Research in under-represented groups – the vast majority of research in the field of sport science is conducted in males, and then applied to females. By increasing the number of female participants in research studies, as well as those from other under-represented groups, we can better support all athletes.
- Interdisciplinary study – as sub-fields within sports science continue to grow, each unique area has become increasingly specialised. However, in elite sport, problems are messy and complex; they tend to be interdisciplinary in nature. Taking a more interdisciplinary lens to sports science research should help answer these messy performance problems—and therefore support athletes and coaches in further enhancing their performance.
From an applied perspective, the panel identified two key future trends:
- The rise of data science techniques should help identify meaningful trends over time, supporting decisions and acting as an early warning system for emerging issues.
- A better understanding of, and ability to assess changes in, neuromuscular function as a result of either training or fatigue would be hugely beneficial to applied practitioners.
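As an illustration of the first trend, a very simple early warning check can be built by comparing today’s value of a monitored metric against its rolling baseline and flagging large deviations. The 28-day window and 2-standard-deviation threshold below are illustrative choices, not established cut-offs.

```python
import statistics

# Minimal sketch of an "early warning" trend check: compare today's
# value of a monitored metric against its rolling baseline and flag
# values more than a chosen number of standard deviations away.
# Window length and threshold are illustrative, not validated cut-offs.

def z_flag(history, today, window=28, threshold=2.0):
    """Return (z_score, flagged) for today's value vs. a rolling baseline."""
    baseline = history[-window:]
    if len(baseline) < 2:
        return 0.0, False          # not enough history to judge
    mean = statistics.fmean(baseline)
    sd = statistics.stdev(baseline)
    if sd == 0:
        return 0.0, False          # flat baseline; z-score is undefined
    z = (today - mean) / sd
    return z, abs(z) > threshold
```

Even a crude check like this only earns its place if the flagged metric is one that actually changes a decision—the question-first principle from earlier applies just as much to alerts as to data collection.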
Image: Coach Looking Over Data and Trends Over Time for Athletes to Assist in Decision Making
Conclusion – What Should You Do With Your Data?
If you’re a coach working within an elite environment, then this article provides some key take-home messages that you can use to ensure you continue to be the best you can be. Elite sport is a strong user of sports science research, and so developing the knowledge and ability to critically analyse and interpret this research is important. In elite sport, research is only useful if it leads to better decisions. That requires a new mindset; one that prioritises questions before data, and values integration over siloed expertise. Coaches who develop data literacy, decision-making skills, and an interdisciplinary approach will not only keep up; they’ll lead.
Upskilling
In this article, I’ve highlighted a number of areas in which coaches and practitioners can upskill. Here are some of my favourite resources if you’re interested in learning more:
Data Hygiene
- Data Literacy Fundamentals (Coursera)
- Naked Statistics by Charles Wheelan
Critical Thinking
- The Demon-Haunted World by Carl Sagan
- Critical Thinking and Problem Solving (edX)
Applied Decision-Making
- How to Decide by Annie Duke
- Decision Making Masterclass
- Force Plates in Practice: What I Learned in my First Year by Jared Lovelace (Simplifaster)
- From the Inside Out: What Force Plate Testing Has Taught Me by Ryan McLaughlin (Simplifaster)
Data Ethics / Governance
- Getting Ahead of the Game – Australian Academy of Science
- Weapons of Math Destruction by Cathy O’Neil