Edward Lorenz was a famous American mathematician and meteorologist, and one of Lorenz’s research interests was attempting to predict both the weather and the longer-term climate. One day in 1961, Lorenz was in his lab carrying out calculations and working on a computer model aimed at predicting a weather system. This was tedious work, requiring him to input numbers from a printout for each of the 12 variables that the model contained. Lorenz had run these particular numbers before and was redoing the prediction model as a way to check it.
On the second run-through, Lorenz entered one value ever-so-slightly differently. In the original simulation, the number he used was 0.506127. In his new simulation, Lorenz rounded the number—almost imperceptibly—to 0.506. Lorenz then went down the hall to get a coffee, and when he returned, he was astounded by what he saw: the small rounding he had made had a dramatic effect on the weather scenario that played out in the prediction. The results of the second simulation—simulating two months’ worth of weather—were nothing like the first.
Lorenz was puzzled by this, assuming that there was an issue with the computer system. He checked the equipment but couldn’t spot anything obvious. Rechecking the data, Lorenz noticed that, early on in the simulation, the values from both simulations were the same. After about a week of simulated time, however, they started to differ—at first in just the last decimal place, then in the ones before it. The size of the disparity roughly doubled every four days or so, creating a large difference in simulation outcome after two weeks.
This was a stark reminder to Lorenz that minuscule changes within a complex system (like the weather) could have large downstream effects. In later speeches—and in popular media—this would be dubbed “the butterfly effect,” whereby one flap of a butterfly’s wings could create a tiny change in the atmosphere that may be enough to alter the course of the weather forever. This concept led to the development of chaos theory, the study of complex systems that suggests that, despite appearing random, there are underlying patterns and interconnectedness between aspects within such systems.
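Lorenz’s observation is easy to reproduce numerically. The sketch below is an illustration, not his original 12-variable weather model: it integrates the simplified three-variable system he later published, once with the full starting value 0.506127 and once with the rounded 0.506, and measures how far apart the two runs drift.

```python
# Illustrative sketch of sensitive dependence on initial conditions,
# using Lorenz's later three-variable system (not his 1961 weather model).

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations by one simple Euler step."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

def run(start, steps):
    """Integrate from `start` and record the whole trajectory."""
    trajectory = [start]
    state = start
    for _ in range(steps):
        state = lorenz_step(state)
        trajectory.append(state)
    return trajectory

def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

traj_a = run((0.506127, 1.0, 1.0), 20000)
traj_b = run((0.506, 1.0, 1.0), 20000)  # same run, one value rounded

gaps = [dist(p, q) for p, q in zip(traj_a, traj_b)]
print(f"initial gap: {gaps[0]:.6f}, largest gap: {max(gaps):.1f}")
```

The two trajectories begin barely a ten-thousandth apart, yet the gap between them grows by orders of magnitude—the same behavior Lorenz saw on his printout.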
From Typhoons to Torn ACLs
We see these theories applied in practice during hurricane season—predicting the path of a hurricane is important, as it allows governments to prepare residents within the path to evacuate if needed. This prediction is, of course, difficult. In large part this is because of chaos theory—the factors that influence the path of a hurricane include wind speed and direction, sea temperature, and humidity, and small changes in any one of these can have a large influence on the path of the hurricane.
As a result, predicting the path of a hurricane requires accurate projections for each of these variables, along with an understanding of how they interact. Scientists are getting pretty good at this:
- In 1954, agencies could only provide predictions up to one day in advance.
- By 1964, this had grown to three days.
- By 2001, hurricane tracks were predicted up to five days into the future.
In 2020, Hurricane Laura was first identified as a large mass of clouds off the west coast of Africa. Five days later, it was given its name, and three days after that, meteorologists predicted that it would hit land on August 27, at 2 a.m., in Cameron, Louisiana. On August 27, at 1 a.m. and less than a kilometer away from Cameron, Hurricane Laura did indeed make landfall.
Given the complexity involved in predicting the path of a hurricane, the accuracy of this prediction is quite remarkable. This is something scientists have been working on for decades—making small, incremental improvements that add up to major leaps forward. The question is: How have they done this?
Back in the 1970s, scientists relied on patterns seen in past hurricanes, essentially using prior performance as a predictor of future hurricane path. This was fairly useful—in the 1970s, meteorologists could typically predict the site of landfall for a hurricane to within around 500 miles—but not quite precise enough. Five hundred miles is quite a large margin for error, leading to a lot of people perhaps being unnecessarily warned.
Over time, meteorologists have begun to utilize more complex models that are dynamic in nature; the models change based on the data they receive. And they receive a lot of data, with more than 40 million different observations plugged into the models daily. This data is then used to create 50 different forecasts, in which the data is ever so slightly modified, allowing the scientists to understand the confidence of their predictions.
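The ensemble logic described above can be sketched in a few lines. The toy “model” below is hypothetical—a chaotic logistic map standing in for a real weather model—but the procedure is the forecasters’: perturb the observed inputs slightly, run many forecasts, and read confidence from the spread of the results.

```python
# Sketch of ensemble forecasting: 50 runs from slightly perturbed inputs.
# The "model" here is a toy chaotic map, not a real weather model.
import random
import statistics

random.seed(42)  # reproducible illustration

def toy_forecast(observation, days):
    """Hypothetical stand-in for a forecast model: iterate a logistic map
    in its chaotic regime, where tiny input differences grow quickly."""
    x = observation
    for _ in range(days):
        x = 3.9 * x * (1 - x)
    return x

observation = 0.506127           # the measured state of the system
ensemble = []
for _ in range(50):              # 50 forecasts, as the meteorologists run
    perturbed = observation + random.gauss(0, 1e-4)  # slight modification
    ensemble.append(toy_forecast(perturbed, days=14))

mean = statistics.mean(ensemble)
spread = statistics.stdev(ensemble)  # wide spread = low confidence
print(f"14-day forecast: {mean:.3f} +/- {spread:.3f} (n={len(ensemble)})")
```

A tight cluster of ensemble members means the forecast is robust to measurement error; a wide spread warns that the system is in a state where small uncertainties matter a great deal.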
In late 2019, researchers from the U.S.—led by lead author Ben Stern—used this hurricane example to put some of the practices seen in sports performance under the microscope, as sports injuries are also complex and dynamic in nature. An earlier 2016 paper, published in the prestigious British Journal of Sports Medicine, had argued that simplifying complex problems into basic units is highly reductionist. This approach is useful for linear relationships (for example, exploring the relationship between smoking and lung cancer, where the more cigarettes you smoke, and for longer, the greater your risk); it is less useful, however, for non-linear relationships, or relationships that are highly complex and multifactorial in nature.
Looking at ACL injuries, for example, we can see that the importance of a given risk factor differs between sports: in ballet dancers, fatigue is a key risk factor, while in basketball players, it is hip muscle weakness. But we can also expect to see variation between people in the same sport—Ballet Dancer A’s risk factors may be different than those of Ballet Dancer B. And yet, Stern and his coauthors wrote that we tend to completely ignore this in sport, instead focusing on, say, one “injury prediction test” and using this to inform future risk and interventions.
Meteorologists use 40 million data points daily to predict the path of a hurricane, while we might use one data point in a yearly injury screen to inform our practice over a 12-month period.
The Butterfly Effect and Injury Models
Stern and his colleagues instead suggest that we view the athlete in front of us as a highly complex human who can exhibit one of two separate states: a healthy state and an injured state. The athlete constantly moves toward one of these two end states—sometimes getting very close to the end destination (i.e., being injured)—but is mostly being pushed and pulled in both directions.
The factors that push or pull an athlete in a given direction are broad and varied, and we should cast the net widely here: aspects such as stress, previous injury history, and non-sport workload all contribute to increasing or decreasing the risk of injury. Each of these can be subject to two competing forces:
- Stress (which is destabilizing).
- Accommodation (which is stabilizing).
The athlete is constantly balancing both stressful and accommodating factors; how well they are able to do this determines how likely they are to become injured. As this balance is highly dynamic and ever-changing, it’s easy to see how basing injury risk off just one test at one point in time is likely to prove highly ineffective.
One way to make a better-informed decision around injury risk for a given athlete on a given day is to collect data more frequently. This is what the meteorologists did when improving their hurricane path predictions. While we obviously can’t expect to collect 40 million data points per day, we should probably do better than one data point per year.
In team sports, this is perhaps a bit more common. GPS systems and heart rate monitors are in wide use, allowing performance staff to have more data at their fingertips to inform decisions.
This data collection doesn’t have to be costly or high tech: a now-seminal 2015 paper demonstrated how subjective, self-reported measures (for example, rating of perceived exertion, or mood) were highly sensitive to changes in training load, more so than objective measures such as blood sampling or heart rate. Even just a conversation with athletes as to how they’re feeling—and observing how they move during warm-ups—can provide a useful data point for understanding how the athlete is presenting on that day.
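One simple way to turn such daily check-ins into a trackable signal—a hypothetical sketch, not a protocol from Stern’s paper or the 2015 study—is to compare each day’s self-report against the athlete’s own rolling baseline and flag unusually poor days for a follow-up conversation. The window length and threshold below are illustrative, not validated cut-offs.

```python
# Sketch: flag a daily wellness self-report that falls well below the
# athlete's own recent baseline. Window and threshold are illustrative.
import statistics

def wellness_flag(history, today, window=28, threshold=-1.5):
    """Return (flagged, z-score) for today's wellness score (e.g., a 1-10
    self-report) relative to the athlete's rolling baseline."""
    recent = history[-window:]
    baseline = statistics.mean(recent)
    spread = statistics.stdev(recent)
    if spread == 0:
        return False, 0.0  # no variation yet, nothing to compare against
    z = (today - baseline) / spread
    return z < threshold, z

# Four weeks of daily scores for one athlete, then a rough morning
scores = [7, 8, 7, 7, 6, 8, 7] * 4
flagged, z = wellness_flag(scores, today=4)
print(f"flag follow-up: {flagged} (z = {z:.2f})")
```

The point is not the arithmetic but the principle: each athlete is compared against their own history, so the same raw score can be unremarkable for one athlete and a warning sign for another.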
Data from a diverse range of potential injury determinants allows us to better understand the true injury risk of our athletes, so collecting information broadly is also important. In their paper, Stern and his colleagues recommend regular collection of self-report measures exploring aspects such as:
- Life stress, anxiety, and coping skills.
- An assessment of sleep quality and quantity.
- A nutrition log.
- Sport-specific performance tests (which can likely be embedded into a pre-session warm-up).
In a second paper, Stern and his colleagues introduce another important concept related to complexity and performance: that of state dependence. In a system—such as the human body—where state dependence exists, the interaction between variables is not static. For example, if variable 1 increases variable 2 by 50% on one occasion, a change in a different variable, variable 3, over time may mean that, in the future, variable 1 only increases variable 2 by 10%. The size of this change can be large. Sometimes, improvements in the same variables may be:
- Positively correlated (i.e., improvements in one lead to an improvement in the other).
- Negatively correlated (i.e., improvements in one lead to a worsening in the other).
- Non-correlated (i.e., there is no relationship).
Whether these variables are correlated or not depends on the overall state of the system; the relationship between them depends on other variables.
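State dependence can be made concrete with a rolling correlation: the same two variables, correlated over successive windows of time, can flip from positive to negative as the underlying state of the system changes. The data below is fabricated purely to illustrate the computation.

```python
# Sketch: the same pair of variables can correlate positively in one
# time window and negatively in another. Data is fabricated to illustrate.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Variable 1 tracks variable 2 early on, then moves against it
# after the system's state changes halfway through.
var1 = [1, 2, 3, 4, 5, 6, 5, 4, 3, 2]
var2 = [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]

early = pearson(var1[:5], var2[:5])  # first window: positive
late = pearson(var1[5:], var2[5:])   # later window: negative
print(f"early r = {early:.2f}, late r = {late:.2f}")
```

A single correlation computed across all the data would average these two regimes together and hide the relationship entirely—which is exactly why one-off snapshots of a state-dependent system mislead.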
Making the Connection with Your Athletes
When it comes to considering injury risk, we typically see that increases in psychosocial stress are associated with increased injury risk. But what happens if the athlete has well-established and effective coping mechanisms? A change in this variable changes the relationship between stress and injury.
This change is most likely stable, but there are much more transient changes in a variable that could modify this relationship; a few nights of poor sleep, for example, will likely increase the sensitivity of the athlete to stress, further increasing the risk of injury. This is a further reminder that a single piece of data that is collected can only serve as a snapshot of where the athlete is in time. What we need to know, given the complexity of humans, is how the data changes over time—having this knowledge will enable us to better understand what is truly happening in the athletes we work with.
Based on the work of Stern and his colleagues—as well as the underlying principles of chaos theory—we can develop some rules of thumb when it comes to working with athletes.
- We need to consider athletes as complex beings; just because we’ve seen a relationship between two variables before doesn’t mean we will see that relationship again. Furthermore, we need to remember that a multitude of different factors likely influence any apparent relationships.
- We need to move away from static, one-shot measurements (for example, pre-season screenings) if we want to better understand complex aspects such as injury risk and performance improvements. This isn’t to say that pre-season screenings aren’t useful—they can identify key issues to work with—but having more frequent data collection allows for more regular updates around how our athletes are tracking.
- As a wide variety of factors influence how an athlete responds to training, or how likely they are to become injured, collecting a diverse range of data sources likely improves our ability to “predict”—or, at least, make informed decisions—around how athletes are responding.
As such, it’s better to collect information from a diverse range of sources (e.g., sleep, stress, perceived recovery, movement velocity) than from very similar sources. Humans are complex biological beings; we can do better than just reducing them to a single number.
Since you’re here…
…we have a small favor to ask. More people are reading SimpliFaster than ever, and each week we bring you compelling content from coaches, sport scientists, and physiotherapists who are devoted to building better athletes. Please take a moment to share the articles on social media, engage the authors with questions and comments below, and link to articles when appropriate if you have a blog or participate on forums of related topics. — SF