Data collection has been a part of the daily routine of strength and conditioning coaches since well before the invention of software-based spreadsheets. There has long been a focus on quantifying heights, times, distances, weights, and much more within a strength and conditioning program. Regardless of programming paradigm, the ability to objectively define improvements in performance is a central need of strength and conditioning coaches.
While handwritten records have largely given way to digital formats, the processes and procedures involved in testing and recording data have not experienced the same evolution. The logistics of data collection and recording in a large team environment, where supervision and coaching take precedence, create many issues for strength coaches. This raises a central question: How sure are we that the data we record in strength and conditioning, practice, competition, nutrition, rehabilitation, wellness, and recovery are accurate and reliable?
The quality of information we have available limits our ability as coaches to make truly informed decisions. To better understand this issue, it is worth diving into the field of measurement in research, statistics, and analytics to gain a mastery of foundational elements affecting the quality of the information you collect on athlete performance.
Your Methods Matter
Whether you are using advanced technologies, a pen and pad, or a simple questionnaire, the method to your measurements matters. The measurement of physical performance metrics such as speed, time, weight lifted, length, height, acceleration, distances covered, heart rate responses, and heart rate recovery is a crucial element when assessing activity and performance in many team-athletic populations. Although measurement is only one of the many aspects that strength coaches, sport scientists, and all allied health staff must give attention to, its impact has the potential to directly affect nearly every area of performance. Given this reality, it becomes paramount to establish a systematic approach when considering measurement of physical performance. Because of the objective and quantitative nature of the data associated with human performance in sports, a more classical view of measurement may be appropriate:
Measurement: The assignment of numbers to objects or events according to rules.
While the majority of information collected on performance is highly quantitative in nature, there are instances where critical analysis and understanding of the data collected becomes more subjective. For example, this may happen when practitioners evaluate the total distance of sprinting performed by an athlete during a single practice session, and subsequently try to establish a relative indicator of the intensity of that practice session.
There are many subjective factors that might affect the athlete’s perception of the session RPE that may not match intensity scales used by practitioners. Some of these factors include: the daily readiness of the athlete to physically perform, the physical preparedness levels of the athlete, the content of the drills performed, the nature of instruction given to the athlete, peer and authoritative feedback, and atmospheric conditions. In such cases, a second definition of measurement is applicable:
Measurement: The process of linking abstract concepts to empirical indicants.
In this instance, consider the abstract concept as the relative intensity level of the practice session given by practitioners to describe the physical effects on the individual athlete. Even in cases where more qualitative analysis of information occurs, a systematic approach to measurement is still necessary.
As an operational definition of measurement is established, it is necessary to discuss the desirable qualities of the measurement process and instrumentation. A primary quality lies within the idea of reliability, or the extent to which the experiment, test, or measuring procedure will yield the same results on repeated trials. Reliability is an element concerning the consistency and repeatability of the measurements performed by both technology measuring performance, such as GPS, accelerometers, and HR monitors, as well as manual measurements taken by practitioners, including RPE scales, ROM and orthopedic testing, sprint timing, and weightlifting maxes.
Proper data interpretation will ultimately determine the success of the application process, where collected information affects future decision-making when planning activities. However, the reliability of the tools and methods is a primary factor that affects the entire data analysis and application process for practitioners. Because of this factor, attention must be directed to issues that negatively affect the reliability of measurements and create measurement errors.
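As a minimal sketch of how test-retest reliability might be quantified in practice, the snippet below computes a Pearson correlation and the typical error of measurement between two hypothetical trials of a 10 m sprint test. The athlete times are invented for illustration; any real analysis would use your own recorded trials.

```python
import math

# Hypothetical 10 m sprint times (s) for the same six athletes on two test days.
trial1 = [1.82, 1.75, 1.90, 1.68, 1.79, 1.85]
trial2 = [1.80, 1.78, 1.92, 1.70, 1.76, 1.88]

def pearson_r(x, y):
    """Pearson correlation between paired trials (test-retest reliability)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def typical_error(x, y):
    """Typical error of measurement: SD of the trial differences / sqrt(2)."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    md = sum(diffs) / n
    sd = math.sqrt(sum((d - md) ** 2 for d in diffs) / (n - 1))
    return sd / math.sqrt(2)

print(f"test-retest r = {pearson_r(trial1, trial2):.3f}")
print(f"typical error = {typical_error(trial1, trial2):.3f} s")
```

A high correlation and a small typical error relative to meaningful performance changes suggest the measurement procedure is repeatable enough to act on.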
Practitioners must accept that measurement error will almost always be present during the data collection process, despite efforts to minimize its effects. They must outline a procedure that reduces non-random error, or error due to a systematic biasing effect on measurement instruments. For practitioners using high-tech tools, non-random error may stem from the calibration of GPS units, the connectivity of an HRM to the skin, an untrained intern incorrectly administering an Omegawave scan, or incorrect positioning of barbell velocity equipment. With low-tech interventions, non-random error could be due to improper wording of a question on a recovery survey, or peer influence on task knowledge.
Conversely, random error is inversely related to the reliability of the measurement instrumentation and is associated with unknown or unpredictable changes. Examples of random error for practitioners using high-tech tools include an HRM losing contact with a player's skin after contact with another player during a drill, GPS satellite spacing affecting local connectivity to athlete units, batteries dying in an electronic timing unit, or other equipment breaking during testing. There are few ways to predict exactly when and how it will happen, but practitioners can expect random error to occur at some point in the measurement process.
It is important to understand that measurements taken by tools such as GPS, accelerometers, tendo units, and HRM will yield an observed score for a particular aspect, such as peak speed, heart rate ranges, distances covered, or acceleration, in addition to many other parameters. Observed scores can also be taken when coaches manually time sprints, measure jump heights, or record the results of a maximum weightlifting test. The observed scores obtained by the instrumentation and tester are composed of a true score and an error score:
Observed Score = True Score + Error Score
The duty of practitioners is to minimize the error score within the measurement process by identifying factors that can be controlled. The true score may never be fully known; however, a structured and systematic approach to measurement of human performance may allow a clearer picture of the true score to be realized. When accounting for sources of error, consider the following elements:
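The observed-score equation above can be illustrated with a small simulation. The sketch below invents a "true" per-session sprint distance and adds a systematic bias (e.g., a miscalibrated GPS unit) plus random noise; all numbers are assumptions chosen for demonstration. It shows why averaging more trials reduces random error but does nothing about systematic bias.

```python
import random

random.seed(42)

TRUE_SCORE = 60.0          # hypothetical "true" sprint distance covered, in m
SYSTEMATIC_BIAS = -1.5     # e.g., a miscalibrated unit under-reading by 1.5 m
RANDOM_SD = 0.8            # unpredictable trial-to-trial noise (m)

def observed_score():
    """Observed Score = True Score + (systematic error + random error)."""
    return TRUE_SCORE + SYSTEMATIC_BIAS + random.gauss(0, RANDOM_SD)

trials = [observed_score() for _ in range(1000)]
mean_obs = sum(trials) / len(trials)

# Averaging many trials cancels the random error but NOT the systematic bias:
print(f"mean observed: {mean_obs:.2f} m (true score: {TRUE_SCORE} m)")
```

The averaged observation converges toward the true score plus the bias, which is why calibration and standardized procedures matter even when sample sizes are large.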
Consider the daily readiness of the individual athlete being monitored. If an athlete has incurred any level of acute or cumulative fatigue from prior practice or training sessions, the readings of that day may not give an accurate representation of normal performance standards.
Additionally, the motivation, mood, or intentions of an individual may cause errors in measurement readings. If athletes feel they must prove themselves in any manner while being monitored or measured, they may give an effort above and beyond their normal daily expenditure. The resulting data would not give an accurate reflection of the normal practice or training parameters of that individual athlete.
Previous practice, prior experiences, or a lack of either may also account for errors in measurement. If an athlete is experiencing a drill or activity for the first time, the resulting physical performance may not be a true indicator of the athlete’s actual ability level. The readings would then reflect a portion of the learning and adaptation process rather than the athlete’s true ability level within the activity. An example of this is the monitoring of maximum speed during a particular drill during practice where athletes are learning a new skill or technique. While a maximum speed reading will be obtained from the drill, it would not reflect the true ability of the athlete at that particular activity due to a lack of practice and a lack of prior experience with the drill.
Regarding the actual activity being monitored, clarity of directions for the activity being introduced will affect performance outcomes. If the athletes are supposed to run at a specific pace during a conditioning drill, but deviate from that speed because of unclear instructions, the interpretation of the information collected is based on a false premise of uniformity in running speeds. This element is also present in various movement screenings where uniformity of instruction may alter the performance of one athlete to the next, regardless of their capabilities.
After administering instructions, it is also important to monitor how well the procedures are followed. Consider giving additional emphasis to the quality and clarity of initial instruction; external feedback may ensure that athletes fall within the desired constructs of the activity. In order to perform repeated sprints within certain heart rate ranges while monitoring recovery markers, speed, or time, athletes must adhere to specific start-and-stop prompts so they don’t create an error in the measurement of the specifically designed task.
Consider also the uniformity and type of feedback and directions administered by practitioners during activities—i.e., when a group of practitioners are present during an activity and offer conflicting coaching or motivational reinforcement to the athlete or group of athletes being monitored. This may materialize with variances in measurement of the activity performed, such as varying ranges of speed, heart rates, or scoring on a movement screen.
When observing, scoring, or classifying data measured by high-tech tools and manual measurements alike, the competence or experience of the scorers and observers will also affect reliability. The practitioners must possess content knowledge of the physical activity performed, which necessitates an understanding of physiology, mechanics, technique, and tactics. Additionally, they would need experience with the proper use and care of the equipment, as well as in monitoring and classifying the parameters being measured during the activity.
If the practitioners in charge of live monitoring or post-activity data classification lack competency regarding population norms of the parameters being measured, view any interpretation offered by those practitioners with caution. For example, practitioners unfamiliar with the potential for error in readings of speed, heart rate, or human movement may portray erroneous readings as being accurate and valid.
These errors often compound. With heart rate monitors, for example, a caloric expenditure estimate that is partially based on heart rate will also be incorrect if reading issues arise and go unidentified. A practitioner reporting these figures to the sports nutritionist would then present values that are erroneously high. This is only one example of why observers and scorers must be both experienced and competent.
The scorer’s attention and dedication are also important, as oversights during the data collection and classification process will lead to erroneous readings. During real-time data collection in practice, training, or testing, scoring error may occur if the tools being used experience technical difficulties from uncharged batteries, excessive position alteration of straps and harnesses supporting the equipment, a change in location of the activity away from the predetermined activity space, or, in some cases, the unwillingness of the athlete to wear devices at all times during the activity.
Post-activity data summary and classification would require the practitioner to upload and review information measured during the activity. During the review, outlying values of all parameters must be identified to determine if they might be erroneous scores or if they are a reliable and accurate measurement. Making this determination requires content knowledge and measurement experience, as well as attention to the activity as it was measured in real time.
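As a minimal illustration of that review step, the snippet below flags per-athlete peak speeds that sit more than two standard deviations from the session mean. The values are hypothetical, and a flagged reading is only a candidate for manual review, not automatically an error.

```python
import statistics

# Hypothetical per-athlete peak speeds (m/s) uploaded from one session.
peak_speeds = [8.9, 9.1, 8.7, 9.3, 8.8, 15.2, 9.0, 8.6]

mean = statistics.mean(peak_speeds)
sd = statistics.stdev(peak_speeds)

# Flag values more than 2 SD from the session mean for manual review;
# the practitioner then decides whether each is a true score or an artifact.
flagged = [v for v in peak_speeds if abs(v - mean) > 2 * sd]
print("review these readings:", flagged)
```

The threshold of two standard deviations is an assumption for the sketch; the right cutoff depends on the parameter, the population norms, and what the practitioner observed in real time.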
Tools like GPS, HR monitors, accelerometers, bar velocity units, and other items used to measure physical performance can also act as a source of error during the measurement process. Basic calibration of both the software and hardware features of the equipment can ultimately affect the measurement and scoring process, which in turn affects classification of the activity. Initial calibration of software and hardware is a cornerstone of the setup and installation procedure, and it is also necessary to ensure that these parameters have been maintained during the course of normal use.
As the equipment is used daily, its actual setup on the athletes being monitored may differ if intertester reliability is low. Essentially, different practitioners setting up the equipment on the same athletes may not obtain the same scores for the same tests. Because of this, uniformity of instruction and technique of the allocation and fitting of the equipment is of paramount concern during this process. Any deviation from the adopted instructions and techniques may contribute to variances and the collection of unreliable information. A consistent and objective approach to the allocation and equipment-fitting process is a necessity for practitioners.
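One simple way to check intertester reliability is to have two practitioners measure the same athletes and compare their scores. The sketch below computes the mean bias and mean absolute difference between two hypothetical testers recording jump heights; the numbers are invented for illustration.

```python
# Hypothetical jump heights (cm) recorded for the same six athletes
# by two different testers fitting and reading the equipment.
tester_a = [41.2, 38.5, 45.0, 36.8, 40.1, 43.3]
tester_b = [40.8, 39.0, 44.1, 37.5, 40.0, 42.6]

diffs = [a - b for a, b in zip(tester_a, tester_b)]
mean_bias = sum(diffs) / len(diffs)                    # systematic disagreement
mean_abs_diff = sum(abs(d) for d in diffs) / len(diffs)  # typical disagreement

print(f"mean bias between testers: {mean_bias:+.2f} cm")
print(f"mean absolute difference:  {mean_abs_diff:.2f} cm")
```

If these disagreements are large relative to the changes you hope to detect, the fitting and scoring procedure needs further standardization before the data can be trusted.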
The Takeaways: Steps to Enhance Reliability
When approaching the measurement process, strength coaches, sport scientists, and all allied health and performance staff must acknowledge that enhancing the reliability of the entire process requires focusing on measurement at the individual level. As attention is given to each individual athlete and each data collection session, practitioners should strive for consistency in their efforts and apply the same approach to each successive measurement.
When focusing on the methodology used to enhance reliability, the standardization of procedures and protocols becomes the foundational focus of the process. Include all practitioners in group training sessions where working knowledge and mastery of the equipment, software, and technique is gained. As methods for implementation are developed, a written form of a standard operational procedure manual may be utilized as a step-by-step reference for the setup and use of the specific equipment being implemented by the practitioner. These written instructions can serve as the basis for:
Preparing the physical environment for data collection:
- Determine the physical area of activity to establish the proper setup of live-monitoring stations, test recorders, spotters, and equipment, according to manufacturer specifications. All equipment must be placed at safe distances from the athletes and activity to avoid collisions. For low-tech monitoring, the practitioner should be stationed in areas close enough to safely monitor all aspects of the activity.
- Think of the needs of the tools you are using. For example, if practitioners can choose a space for training activities, an area outside that is away from tall structures may be the most ideal setting, as technologies like GPS units rely on connectivity with satellites that may be affected at times during outdoor training activities.
Preparing the subject for data collection:
- Educate the athlete on what they need to do when wearing or using specific equipment. The athlete must understand that they are to work and practice in exactly the same manner as they have in the past, unless specifically directed otherwise by a coach. The testing and monitoring should not change their normal effort!
- Establish a dialogue with the athlete to help them understand the importance and benefit of assessing their performance. If any portion of the process becomes a burden and the athlete does not see a benefit, their motivation to adhere to it will diminish. Give strong emphasis to the benefits of measurement and direct dialogue away from negative notions of assessment. In some cases, measurement of performance fosters competition between athletes. Use this to your (and their) advantage.
- If the fitting of equipment is involved in your measurement or assessment process, give the athlete a simple reference for keeping the equipment secured during activity. A trained staff member should not only administer the fitting, but should also give instruction to the athlete as to how the equipment should stay in place and steps the athlete can take to address fitting issues. This applies to equipment that athletes wear, like GPS or HR monitors, as well as equipment the athletes might use, such as timers, jump mats, or bar velocity measuring units.
- Ensure that the dialogue between practitioners and athletes is in relatable and understandable terms for the athlete. We can’t expect all athletes to have a mastery of sampling rates in Hz for GPS monitors or the algorithm-specific positioning of triaxial accelerometers. What we can communicate is that these tools are also found in their cellphones, like the map application that gives directions or the sensor that flips the screen when they tilt the phone.
- Communicate to the athlete that practitioners will be present on the field or around the weight room during training or practice to assist with any equipment issues.
- Provide opportunities to foster autonomy when outfitting or setting up equipment in order to enhance motivation to participate in the measurement process. Getting the athlete involved in this process facilitates a feeling of ownership over their results.
- Give simple and relatable feedback to the athlete based upon measured performances. This engages the athlete with the process and further encourages a sense of ownership over performance metrics and accomplishment during activity.
Performing outfitting, adjustments, and collections of equipment:
- Outfit all units on athletes or equipment in the weight room according to manufacturer guidelines and standard operating procedures prior to activity, giving attention to the way that other equipment, such as sport-specific padding or specialized braces, fits around your devices.
- Ensure each piece of equipment is powered on according to manufacturer guidelines.
- Allocate staff members to be present during activities to assist athletes with all adjustments of equipment.
- Make efforts to collect individually worn pieces of equipment from each athlete in person post activity to allow for the communication of any equipment issues.
- Give attention to manufacturer guidelines in regard to keeping units powered on or turned off post activity and prior to data upload.
Live monitoring of activity:
- Determine which staff members will observe activity, be available for equipment adjustments, and manage live data monitoring devices.
- Identify parameters to be evaluated in real-time monitoring, as well as staff members that are to be notified regarding changes in the defined parameters.
- Define markers of activity by creating manual or automated time markers corresponding to specific drills or periods of practice and training.
Practices for data uploading, input, and analysis:
- Ensure that all units have been collected post activity and that they remain powered on or off according to manufacturer guidelines.
- Clean units prior to data upload, according to manufacturer recommendations. If required, connect each unit to its uploading station and ensure that the uploading dock itself is connected to a power source. This applies especially to equipment like heart rate monitors, accelerometers, and GPS units.
- If using technology specific to one athlete, like a GPS unit or heart rate monitor, check in the software package supplied by the manufacturer that each unit is assigned to the correct athlete before confirming the data upload.
- If necessary, have a staff member present during the actual upload process to ensure the upload occurs without interruption from software updates or computer error.
- After the data input or upload from the session is complete, normalize activity parameters to reflect the actual time and events of the activities performed. Additionally, manually review data to determine if outlying data points are present and attributable to error in measurement.
- Summarize information in accordance with the desired application of data. (This is the subject of a future blog—stay tuned!) Formulate a summary of activities using either offerings from existing manufacturer software or manually derived means such as an Excel spreadsheet.
- Send a digital or hard copy of data reports to necessary program members from sports performance staff, sports medicine staff, team nutritionists, and sports coaching staff.
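The summary and reporting steps above can be sketched with a manually derived rollup. The snippet below collapses a hypothetical flattened session export (one row per athlete per drill, with invented column names and values) into one summary line per athlete, the kind of figure that would go into a report for the performance, medicine, or nutrition staff.

```python
import csv
import io

# Hypothetical flattened session export: one row per athlete per drill.
raw = """athlete,drill,distance_m,peak_speed_ms
jones,tempo,420,6.1
jones,sprints,180,9.2
smith,tempo,410,5.9
smith,sprints,175,9.6
"""

# Roll the export up to one summary entry per athlete.
totals = {}
for row in csv.DictReader(io.StringIO(raw)):
    t = totals.setdefault(row["athlete"], {"distance": 0.0, "peaks": []})
    t["distance"] += float(row["distance_m"])
    t["peaks"].append(float(row["peak_speed_ms"]))

for athlete, t in totals.items():
    print(f"{athlete}: {t['distance']:.0f} m total, peak {max(t['peaks']):.1f} m/s")
```

The same rollup could be produced in manufacturer software or an Excel pivot table; the point is that the summary format is decided before data collection, not improvised afterward.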
Regardless of the type of technology used, these concepts will allow the data collected from athletes to paint the most accurate and reliable picture of performance possible. Data reliability is a central issue affecting our ability as coaches and sport scientists to make informed decisions, and it is an essential part of the foundation of performance analysis in sport.
- Carmines, E. & Zeller, R. (1979). Reliability and Validity Assessment. Thousand Oaks, CA: SAGE Publications, Inc.