There is an ever-increasing use of—and reliance on—data within sport. This primarily takes two forms:
- Coaches and practitioners collecting and analyzing data to inform their decision-making processes.
- Coaches and support staff using published (or even unpublished) research to guide their decisions.
This is largely a good thing. Being data-informed (as opposed to data-driven) often means that we can make better decisions, which in turn means that the athletes we work with perform better.
There is a dangerous flip side to this, however. As outlined in the now classic How to Lie with Statistics, numbers can be manipulated to support whichever message we desire. We can guard against this if we have the time and expertise to thoroughly vet the data we’re working with, but in today’s time-poor world, we’re often so rushed that we don’t have time to think—and unfortunately, critical thinking skills are in short supply.
Fortunately for us, Tim Harford of Undercover Economist and Cautionary Tales fame has written The Data Detective (published in Europe as How to Make the World Add Up). In this book, he strives to give us the confidence to use the information we gather to accurately scrutinize the world around us and make better decisions, while also escaping the flawed logic, cognitive biases, and emotions that poison our ability to think clearly. Harford does this through a number of key rules that, while designed for the real world, can be highly applicable to us in sport. Given the increased prevalence of data-derived and -informed approaches in sport, its publication could not have been timelier.
Search Your Feelings
In the Cautionary Tales episode “The Art Forger, the Nazi, and the Pope,” Harford tells us the story of Abraham Bredius. Bredius was a Dutch art collector and historian who was well established as an expert on the paintings of Johannes Vermeer; indeed, Bredius made his name in the 1880s by identifying works wrongly credited to Vermeer. Bredius was highly effective at spotting fake paintings, and in 1937, he published a book identifying 200 paintings incorrectly attributed to Rembrandt. Shortly afterward, Bredius was approached by a lawyer, who sought his opinion on a newly discovered painting called Christ at Emmaus, which was thought to be a Vermeer. Bredius had no doubts; he declared the painting to be Vermeer’s masterpiece, the finest work the artist had ever produced.
Only, it wasn’t. The painting was a forgery, painted only a few months before. Bredius, however, wasn’t the only one who had been fooled; soon after he identified it as a Vermeer, the painting was purchased by a museum for the modern-day equivalent of £10 million. With the benefit of hindsight, art experts today can clearly see that Christ at Emmaus is not a Vermeer, but almost a century ago, Bredius—the pre-eminent art expert of his day—was easily fooled.
The key question to ask ourselves here is: why? How can someone with so much expertise be so easily fooled? The answer, Harford writes, is that he almost wanted to be; he let his feelings get in the way of his analysis, making an impartial judgment hard to come by.
The lesson here for all of us to heed is that when analyzing data or receiving information, we need to be wary of our current feelings and opinions on the subject at hand. Harford writes, “we often find ways to dismiss evidence we don’t like. And the opposite is true, too.” This is termed motivated reasoning: a phenomenon where we use emotionally biased reasoning to reach the decision we most desire, as opposed to the one supported by the evidence. In essence, we believe what we want to believe.
I’ve made this mistake myself. When I was doing bobsled, a common training and testing method was to push a roll-bob—a metal frame on wheels—over a set distance, consisting of a run-in and a timed 30-meter section. One day, training at Loughborough, I broke all the existing national push records, which made me wonder whether I had measured the run-in distance correctly. In my head, there were two competing explanations for my performance:
- I was in very good shape.
- I had measured the distance incorrectly.
I went with the former because it’s what I wanted to believe. Sadly, I was wrong—instead of the 10-meter run-in distance, I had actually set it up for 15 meters.
Harford suggests that it isn’t necessary to become an emotionless processor of the information we receive; we merely need to acknowledge the role emotions play in how we understand and filter information, and take that into account when making our decisions. When I hold an opinion, I often ask myself, “what information would I need to see to make me change my mind?” Asking this before I get the information sets a target, and if that target is hit, I have to update my viewpoint. This doesn’t always mean that I change my mind, but perhaps I hold a given opinion less strongly or search for more information that might make me better informed.
And so, to Harford’s first key point—when presented with a new piece of information:
- Stop and think.
- Examine your emotions.
- Notice if you’re straining toward a particular conclusion.
Ponder Your Personal Experience
Visualization is a common technique utilized by many athletes as a way of practicing performance. A recent meta-analysis found that, across a number of studies exploring its effectiveness, visualization combined with practice enhanced performance to a greater extent than practice alone. Similar research was around when I was an athlete, so I started formal visualization training, setting aside time each day to do so.
For me, it didn’t work—it just became another thing to add to the to-do list, and I found it really hard to get a positive outcome from the time I was spending.
Here, what I was experiencing as an individual contrasted with what the research suggested I should experience.
This leads us to Harford’s second rule: we need to consider whether the information we’re receiving matches our personal experience, and if it doesn’t, we should explore the underlying data more closely. It’s easy to see how this could play out in sport; we collect a lot of data related to wellness, for example. Some of this data may suggest that an athlete is in a state where they can train and perform optimally, yet as the coach, we can see that they are struggling, even if we find it hard to quantify why we feel this.
Potentially, it’s something in the way they’re moving in their warm-up, their overall mood and affect, or something they’ve said. This is the value of the coach’s eye, whereby we build tacit knowledge through years of experience, allowing us to identify patterns and make decisions. This is in line with Harford’s suggestion: when our experiences don’t match the data, we need to look for the underlying reasons why.
This becomes increasingly important when we consider Goodhart’s Law, which states that when a measure becomes a target, it ceases to be a good measure.
Again, I have plenty of examples of this from my own career. In 2010, I became increasingly concerned that my performances were beginning to regress, so I started to carry out much more testing within training. I insisted on using timing gates for everything and measuring bar speed in my major lifts. My belief was that if I performed well in these tests, I’d perform well on the track. I did perform well in these training tests. I did not perform well on the track, having my worst season since I was 17. And this wasn’t all that unexpected based on how I had been feeling; training was a struggle, and I was increasingly fatigued.
Had I placed more weight on my feelings than on the data, perhaps I’d have had more easy days and been more recovered. The flip side is also true—many times before a personal best performance, I’ve had a bad training session that was reflected in my testing data. Rather than focus on the measure, I was able to focus on my feelings and experience; I knew I was in good overall shape, so I wasn’t too concerned. The key point here is that any data we collect, outside of actual competition performance, is just a proxy—and we should treat it as such.
If all we understand is the statistics, we understand little; we need to be curious about what we experience in the real world, not just what appears on a spreadsheet. Or, as Harford puts it, we need to combine the “bird’s-eye view” of data with the “worm’s-eye view” of experience.
Avoid Premature Enumeration
When we receive a piece of data, it’s crucial that we ask ourselves what it actually means. In my training data example, I was treating performance in a single test of a single physical quality as if it were competition performance, when the two are not the same thing at all.
This becomes increasingly important when we work with data that could be considered binary. Take sports injuries as an example: either you’re injured or you’re not. But we can define “injury” differently. If I have a niggle that is affecting, but not stopping, my ability to train, I might not consider myself injured, while another athlete would.
This becomes important when we’re interpreting injury data between two athletes, training groups, or sporting teams: have they all defined the outcome in the same way, or is there ambiguity in a given term? Similarly, the word injury is vague: an athlete who misses a day’s worth of training with a sore foot and an athlete who misses a whole year following surgery could both be said to have experienced an injury—but the magnitude of the difference between the two is huge.
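To make the definitional point concrete, here is a minimal sketch in Python (the athletes, numbers, and definitions are hypothetical, made up for this example and not drawn from Harford or any formal injury-surveillance consensus) showing how the same training diary produces different injury counts depending on the definition applied.

```python
# Hypothetical training-diary entries: days of training missed per complaint.
# 0 = athlete trained through a niggle; >0 = time lost.
complaints = {
    "Athlete A": [0, 0, 2],      # two niggles trained through, one 2-day absence
    "Athlete B": [365],          # season-ending injury following surgery
    "Athlete C": [0, 1, 0, 0],   # mostly niggles, one single-day absence
}

def count_injuries(days_missed, definition):
    """Count injuries under two example definitions:
    'any_complaint' - every reported issue counts as an injury
    'time_loss'     - only issues causing at least one missed day count
    """
    if definition == "any_complaint":
        return len(days_missed)
    if definition == "time_loss":
        return sum(1 for d in days_missed if d >= 1)
    raise ValueError(f"unknown definition: {definition}")

for athlete, days in complaints.items():
    print(
        athlete,
        "| any-complaint:", count_injuries(days, "any_complaint"),
        "| time-loss:", count_injuries(days, "time_loss"),
        "| days lost:", sum(days),
    )
```

Under the time-loss definition, Athlete A and Athlete B both register exactly one injury, yet the days lost differ enormously: precisely the ambiguity described above.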
Harford terms this premature enumeration—rushing to use numbers (or information) before we really understand what they’re supposed to mean. In the increasingly complex world of elite sport, it can often be difficult not just to quantify what is happening, but also to define it—paying attention to both will make us better able to utilize the data we’re provided with.
Step Back and Enjoy the View
If we collected sprint testing data every training session, we would see a lot of variation between sessions. This could be linked to a variety of aspects: accumulated fatigue, normal daily variation, the presence of training partners, motivation on a given day—the list goes on. Within a given training block—such as a week—we might see a general trend emerging; perhaps we are getting faster or slower.
Regular collection of data encourages us to focus on micro-trends; we become fixated on what we’ve achieved that week. Sometimes, it might be better to take a longer view. If we view our sprint testing data over a full year, the trend is much less sensitive to small, day-to-day variations, giving us a truer overall sense of where we are. If our sprint times are consistently trending downward over this extended period, the training we’re doing is likely effective. As such, being able to take a step back and view the data collected within the overall context in which we’re operating (which, in athletics, is yearly seasons) likely yields more informative insights.
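To illustrate the idea, here is a minimal sketch with entirely synthetic sprint times (the 52-week window, noise level, and rate of improvement are assumptions made for the example), comparing the noisy week-to-week reading with a season-long rolling average:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Synthetic 30m sprint times from weekly testing over a year:
# a slow genuine improvement of ~0.10 s buried in ±0.05 s session-to-session noise.
weeks = np.arange(52)
true_trend = 3.95 - 0.002 * weeks            # gradual real improvement
noise = rng.normal(0, 0.05, size=52)         # daily variation, fatigue, motivation...
times = pd.Series(true_trend + noise, index=weeks, name="sprint_30m_s")

week_to_week_change = times.diff()                             # noisy; sign flips constantly
season_trend = times.rolling(window=8, min_periods=4).mean()   # smoother picture

print("Share of weeks that look 'slower' than the last:",
      (week_to_week_change > 0).mean().round(2))
print("Change in 8-week rolling average from start to end of season (s):",
      round(season_trend.dropna().iloc[-1] - season_trend.dropna().iloc[0], 3))
```

Despite a genuine improvement, roughly half of the individual week-to-week comparisons look like a regression; the rolling average recovers the underlying downward trend.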
Upon getting information or data, Harford suggests taking a step back, adopting a wider lens of analysis, and attempting to put the information in the context of the bigger picture. It’s easy to become hyper-focused on a single metric or a short time period, but sports performance is a long game, and viewing things through this lens enables better decisions to be made.
Get the Back Story
If you toss a fair, standard coin, there is a 1 in 1,024 chance of getting 10 heads in a row. Here’s a video of Derren Brown, the British illusionist, achieving it. This isn’t a trick, per se; Brown does actually achieve the 10 consecutive heads from 10 tosses—it’s just that it took him nine hours of continuous coin tosses to achieve this run of 10. This is an important point: things that have a low probability can happen when you carry out the action many times. Something that has a one in a million probability of occurring likely will happen at some point during a million attempts.
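A quick simulation makes the point concrete (it assumes nothing more than a fair coin and a willingness to keep tossing): a 10-head streak is rare on any single attempt, yet it reliably appears if you simply keep going.

```python
import random

def tosses_until_streak(streak_len=10, seed=None):
    """Toss a fair coin until we see `streak_len` heads in a row;
    return how many tosses that took."""
    rng = random.Random(seed)
    run, tosses = 0, 0
    while run < streak_len:
        tosses += 1
        run = run + 1 if rng.random() < 0.5 else 0  # heads extends the run, tails resets it
    return tosses

print("P(10 heads in a row on one attempt):", 1 / 2**10)   # 1 in 1,024

trials = [tosses_until_streak(10, seed=i) for i in range(2000)]
print("Average tosses needed before a 10-head run appears:",
      sum(trials) / len(trials))   # theory: 2**11 - 2 = 2,046 tosses on average
```

Around two thousand tosses, on average, before the streak shows up—which is why Brown needed hours of filming, not luck.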
Remembering this is very important when it comes to placing the results of scientific studies into context. I’ve written about this before, when I explored p-hacking and HARKing as drivers of the replication crisis. As a massive oversimplification, scientific researchers often use something called a p-value to determine whether a result is statistically significant or not. The p-value tells us the probability of observing a result at least this extreme if the null hypothesis were true—essentially, how surprising the data would be if there were no real effect.
The typical cut-off is p = 0.05. If we use this threshold and the effect we’re testing doesn’t actually exist, there is a 5% chance of declaring a “significant” effect anyway. That is a 1 in 20 chance: if 20 studies in a given area each examined an effect that isn’t real, we would expect roughly one of them to report that it is.
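As a rough illustration of what that threshold means in practice (synthetic data and a standard two-sample t-test from SciPy, not any specific study design), we can simulate many “studies” of an effect that does not exist and count how often they come out significant:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_per_group = 5000, 20
false_positives = 0

for _ in range(n_studies):
    # Both "intervention" and "control" groups are drawn from the same distribution,
    # i.e., the true effect is exactly zero.
    intervention = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    _, p = stats.ttest_ind(intervention, control)
    if p < 0.05:
        false_positives += 1

print(f"'Significant' findings with no real effect: "
      f"{false_positives / n_studies:.1%}")   # ~5%, i.e., roughly 1 in 20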
This is generally fine when there are lots of studies on a given topic; we can look at the overall literature base, see what most studies find, and then make up our minds. It becomes more of an issue when there are fewer studies in this area. If there are only two studies, with conflicting results, it can be hard to understand which study is “correct.”
This becomes even more of an issue when publication bias comes into play. Publication bias is where studies that report an effect are much more likely to be published than those that report no effect. This is problematic because, out of 20 studies in which 19 show no effect and 1 shows an effect, the 1 showing an effect may well be published while the other 19 are not.
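Extending the same toy setup (again synthetic data; the 20-study literature is simply the illustrative number used above), the distortion a publication filter creates becomes easy to see:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def run_null_study(n_per_group=20):
    """One study of an effect that truly does not exist; return its p-value."""
    a = rng.normal(size=n_per_group)
    b = rng.normal(size=n_per_group)
    return stats.ttest_ind(a, b).pvalue

p_values = [run_null_study() for _ in range(20)]    # a 20-study literature
published = [p for p in p_values if p < 0.05]       # journals favor "effects"

print("Studies run:", len(p_values))
print("Studies likely to be published:", len(published))
print("Chance a 20-study null literature contains at least one 'positive':",
      round(1 - 0.95**20, 2))                       # ~0.64
```

In other words, a literature on a non-existent effect will, more often than not, contain at least one “positive” study—and if only that study is published, the record looks like evidence.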
Harford presents an example that demonstrates the ridiculousness of this situation. In 2011, the esteemed psychologist Daryl Bem published a paper that appeared to demonstrate that humans could predict the future. This is very interesting but also highly implausible; you and I both know that humans cannot predict the future. Bem’s methods were repeated in other studies, all of which failed to find evidence that humans can predict the future.
However, the journal that published the original research refused to publish these studies, stating that it did not publish replications. There is no suggestion that Bem fabricated his results, merely that his findings were chance occurrences that we would expect to happen less than 5% of the time—but, of course, if you refuse to publish what normally happens, what happens 5% of the time looks true!
Ask Who Is Missing
One of the first articles I ever wrote, back in 2015, was on understanding sports science for coaches and trainers. In that article, I wrote “the results of a study are only applicable to the type of people recruited to the study.” A common goal of research is to find results that are generalizable: if we take the intervention used in a given study and apply it to the broader population, do we get the same results?
Within sports science, this is a bit trickier. Arguably, we don’t want generalizable results, because we don’t work with the general population—we work with elite athletes. Such athletes, by their very definition, are rare, and this creates a problem for sports science researchers: how do you convince an elite athlete to try something that might not make them better, just for your study? This is obviously very difficult (I speak from experience), which is why most sports performance research is either observational in nature, or if interventions are utilized, the participants are often either university students or recreational or sub-elite athletes. This is important to keep in mind when evaluating information—are the participants in the research you’re utilizing similar to the ones you’re working with?
This is even more of an issue if you coach women. A 2019 paper identified that fewer than 20% of all sports science research papers included any female participants, and fewer than 5% consisted exclusively of female participants. Again, this causes problems when taking the results of a given study into practice, because male and female athletes can differ substantially in physiology and in their responses to training.
This is why Harford’s sixth rule—ask who is missing—is especially important within the sports performance research sphere, because the answer could well be the very people we’re most interested in. This doesn’t mean we have to discard all sports performance research, but as discussed above, we use science, and scientific research, as a method of informing our practice; it’s a starting point from which we experiment and adapt, as opposed to the answer to all our questions.
Demand Transparency When the Computer Says “No”
We live in the era of big data, and we use progressively more complex methods of analyzing this data to get answers that inform our decisions. An increasingly common approach is the use of machine learning, where we “teach” a computer to provide answers based on the information we put in. This creates an issue: we often get answers, but we don’t understand why the computer came up with these answers—which makes it very difficult for us to spot when a mistake has been made.
A great example of this comes from when researchers taught a machine learning model to differentiate between dogs and wolves in photographs. To do this, the researchers showed the computer a picture of either animal, allowed it to guess, and then told it whether it was right or wrong. Over time, the computer “learned” how to differentiate dogs from wolves.
In this case, the researchers could reverse-engineer how the computer was deciding between dog and wolf: if the picture had a white background (i.e., snow), the computer labelled the animal as a wolf; if it didn’t, it labelled it as a dog. This is a clever shortcut for telling the two sets of photographs apart, but it’s obviously not very useful in the real world—the solution the computer comes up with is not broadly applicable, because what it has “learned” is essentially to cheat.
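The actual dogs-and-wolves work involved image models and explanation tools well beyond a short snippet, but the underlying failure, a model latching onto a confounding feature, can be reproduced in miniature. Here is a toy sketch with entirely synthetic “photo” features, using scikit-learn’s logistic regression as a stand-in for the real model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 1000

# Label: 1 = wolf, 0 = dog.
is_wolf = rng.integers(0, 2, size=n)

# Feature 1: a genuinely informative cue (e.g., snout length), only weakly predictive.
snout = is_wolf * 0.5 + rng.normal(0, 1.0, size=n)

# Feature 2: background brightness ("snow"), a confound that almost always
# co-occurs with wolves in the training photos.
snow_prob = np.where(is_wolf == 1, 0.95, 0.05)
snowy_background = (rng.random(n) < snow_prob).astype(float)

X = np.column_stack([snout, snowy_background])
model = LogisticRegression().fit(X, is_wolf)

print("Learned weights [snout, snow]:", model.coef_.round(2))
# The snow weight dwarfs the snout weight: the model has "learned" the shortcut.
# Show it a dog photographed in snow and it will confidently call it a wolf.
dog_in_snow = np.array([[0.0, 1.0]])
print("P(wolf) for a dog on a snowy background:",
      model.predict_proba(dog_in_snow)[0, 1].round(2))
```

The headline accuracy of such a model looks excellent; the flawed reasoning only becomes visible once you can inspect how it decides—which is exactly what a black-box product prevents.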
This is becoming an increasingly big problem in sports performance due to the use of “black box” algorithms in proprietary equipment/software, such as those that measure training load. As companies develop products that promise to predict certain outcomes, they naturally want to protect their invention—as such, they don’t let us know how their models come up with a given answer, or why. This means we can’t evaluate them for ourselves; we must either blindly trust the machine or ignore it.
For reasonably mundane decisions, such as whether an athlete should train today or not, that’s mostly fine—but you can see how this could become an issue when it comes to selection. How can you explain to a player why they’re not selected if you don’t understand it yourself? What if the computer is wrong?
This is the crux of Harford’s seventh rule: do not simply trust that algorithms do a better job than humans. But similarly, also recognize that humans have their own biases and mental shortcuts—just because an algorithm is flawed doesn’t mean a human would do better. The key here is to strike a middle ground—understand that both computers and humans have bias, keep an open mind, and try to critically appraise all information you receive before acting on it.
Remember That Misinformation Can Be Beautiful Too
One of my favorite books is Information is Beautiful, by the data journalist David McCandless. As per the title, information can be beautiful—but so can misinformation. A cleverly designed infographic or catchy framing of a piece of information can draw us in, even if the underlying data and facts are not accurate or are being misrepresented. The ease of sharing infographics on social media, in turn, allows that misinformation to spread, and before we know it, the wrong message has been accepted.
Harford has some key guidelines for us to keep in mind when it comes to interpreting beautiful visualizations:
- Check our emotions: A well-designed graphic can invoke an emotional response, so acknowledging and being aware of this is crucial in our interpretation of what we’re seeing.
- Check our understanding: Do we know what the infographic purports to show? Is context being given? Can we access the source data?
- Consider the motive: While this isn’t the case with many of the infographics used by those in sports performance, someone might be trying to persuade us to think in a certain way, and given their meme-like qualities, infographics are almost uniquely effective at doing this.
Keep an Open Mind
Changing our mind is uncomfortable, particularly if we have forcefully set out our stall (publicly declared our position) previously. But the best thinkers update their beliefs based on the evidence—academic and real world—that they receive. This links into the work of Phil Tetlock, popularized in Superforecasting (which is a High Performance Library topic for another day).
Tetlock’s research suggests there are two key types of people: hedgehogs and foxes. Hedgehogs typically have one big idea, which they use to explain the world around them. They are specialists in their area of knowledge, are very confident in their predictions, don’t consider counterarguments, and don’t like to change their predictions once made. Foxes, on the other hand, tend to be multidisciplinary thinkers. They have broad knowledge across different areas, and they are more open to criticism or alternative views, more cautious in their predictions, and more likely to update their predictions as they get more information.
Foxes are much better at making predictions about the future because they’re consistently updating their mental model of the world as they receive new information. Being willing to change our mind, ultimately, makes us better thinkers. In summing up this chapter, Harford writes that very often we make mistakes not because the information we need is not available, but because we refuse to change our mind based on this information.
The Golden Rule – Be Curious
Harford’s final rule, the one that supersedes all his other “commandments,” is the golden one: Be curious. Look deeper and ask questions; undertake active inquiry as opposed to blind acceptance; seek out wider sources of information.
By following the key rules outlined by Harford in The Data Detective, we can all become more informed consumers of information and data, allowing us to make better-informed decisions to support the performance of the athletes and teams we work with. Given the abundance of information out there today, these are important lessons—and something we should all aim to get better at.