One question I have always asked, but which seems increasingly relevant, is: "From a practical standpoint, why are we collecting physiological data?"
[Image: http://upload.wikimedia.org/wikipedia/commons/a/a4/Socrates_Louvre.jpg]
There are obvious answers to this question, such as "to track progress" or "to prescribe exercise," and whilst those answers are correct, they miss the nuances of the question. For the sake of this argument, I will consider data collection in the context of heart rate, but the same reasoning could just as easily be applied to any biomarker.
We use data in sport all the time. You can't turn on the television without being bombarded by statistics about your favorite sports stars or teams. You can't go to a live game without overhearing the person behind you talking about pass completion percentages or some other measure of physical performance. Even at a basic level, keeping score in any game or sport is the crux of competition. We are applying mathematics to physical performances to determine whether one person or team is 'better' than another.
But what would happen if, say, our ability to determine a high-jumper's 'score' had extreme levels of error associated with it? Instead of measuring a successfully cleared height of 180.1 cm, we recorded it as 178 cm ± 4 cm. The next jumper then cleared an actual height of 179 cm, but we could only record it as 176.8 cm ± 5 cm. How would we ever determine who jumped higher? Precision of measurement is what makes these distinctions possible.
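To make the problem concrete, here is a minimal sketch (using the illustrative numbers above) of why two measurements with overlapping uncertainty ranges cannot be ranked. The function name and print-out are my own for illustration.

```python
# A minimal sketch: when the uncertainty intervals of two measurements overlap,
# we cannot say which jump was actually higher.

def interval(measured_cm, error_cm):
    """Return the (low, high) range implied by a measurement plus/minus its error."""
    return measured_cm - error_cm, measured_cm + error_cm

jumper_a = interval(178.0, 4.0)   # true height 180.1 cm, recorded 178 cm +/- 4 cm
jumper_b = interval(176.8, 5.0)   # true height 179.0 cm, recorded 176.8 cm +/- 5 cm

# The intervals overlap if each one's low end sits below the other's high end.
overlap = jumper_a[0] <= jumper_b[1] and jumper_b[0] <= jumper_a[1]

print(f"Jumper A could be anywhere in {jumper_a} cm")
print(f"Jumper B could be anywhere in {jumper_b} cm")
print("Can we rank them?", "No - the intervals overlap" if overlap else "Yes")
```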
[Image: http://news.bbcimg.co.uk/media/images/62214000/jpg/_62214065_chicherovaa.jpg]
Then consider a situation where we are unable to measure the cleared height directly and instead approximate it from the magnitude of force recorded on a force plate under the take-off leg. We have mathematical models that can calculate the height cleared, but they must make assumptions about direction of force, speed of approach, air resistance, temperature, distance from the bar, neural coordination, force transfer, the technical ability of the jumper, and so on. Once all of these variables have been normalized, we use the formula to calculate the winner. This method would rightly be rejected by the competitive community and labeled 'only an estimation.'
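For readers curious what such an estimate looks like, here is a simplified sketch of one common approach: deriving the rise of the centre of mass from a vertical ground-reaction-force trace via the impulse-momentum theorem. Everything here (the function name, the sampling interval, the idea that vertical force alone determines the outcome) is an assumption for illustration; real high-jump performance also depends on approach speed, take-off angle and bar-clearance technique.

```python
G = 9.81  # gravitational acceleration, m/s^2

def estimated_rise_m(vertical_force_n, body_mass_kg, dt_s):
    """Estimate centre-of-mass rise from a vertical ground-reaction-force trace.

    vertical_force_n: force samples (N) recorded during the take-off
    body_mass_kg: athlete's body mass (kg)
    dt_s: time between samples (s)
    """
    # Net impulse = integral of (force - body weight) over the take-off phase
    net_impulse = sum((f - body_mass_kg * G) * dt_s for f in vertical_force_n)
    takeoff_velocity = net_impulse / body_mass_kg      # v = J / m
    return takeoff_velocity ** 2 / (2 * G)             # h = v^2 / (2g)

# Hypothetical usage, e.g. a 0.25 s take-off sampled at 1000 Hz:
# rise = estimated_rise_m(force_trace, body_mass_kg=72.0, dt_s=0.001)
```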
This situation is obviously hyperbole, but it illustrates my point. Measurement in exercise science must be precise, and we must recognize the limitations of each measurement given the assumptions being made.
So what about heart rate (HR)? Most HR monitors are considered to have a relatively high level of accuracy in detecting the actual rate of cardiac contraction. However, only yesterday I read an article about a high-profile coach who was monitoring athletes with HR data and using age-predicted maximum HR to determine relative training zones. Furthermore, I doubt the method of measuring resting HR was carefully controlled, adding further error to the calculation. Or are they using basal HR? How many times are they measuring HR? What are their criteria for accepting or rejecting recorded measurements? What are their criteria for ensuring the subject is in a normalized state when measurements are taken? How are they measuring maximum HR? Is it equation-derived or test-derived? If derived from a maximal test, how do they know it was maximal? Perhaps it was only peak HR, and under different conditions or a different stimulus the heart is actually capable of contracting at a higher rate.
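To show how quickly these errors matter in practice, here is a minimal sketch of a heart-rate-reserve (Karvonen-style) zone calculation driven by the common "220 minus age" estimate. The athlete's numbers and the ±10 bpm spread on predicted max HR are purely illustrative of the individual variation reported for age-based equations, not data from the article I mention.

```python
# A minimal sketch of error propagation into training zones.

def karvonen_zone(max_hr, resting_hr, low_frac, high_frac):
    """Return a (low, high) target HR band as fractions of heart-rate reserve."""
    reserve = max_hr - resting_hr
    return (resting_hr + low_frac * reserve, resting_hr + high_frac * reserve)

age, resting_hr = 25, 55          # hypothetical athlete
predicted_max = 220 - age         # age-predicted maximum HR

# Compare an "aerobic" band (say 70-80% of reserve) when the predicted max
# happens to be 10 bpm too low or too high for this particular athlete.
for true_max in (predicted_max - 10, predicted_max, predicted_max + 10):
    low, high = karvonen_zone(true_max, resting_hr, 0.70, 0.80)
    print(f"max HR {true_max}: zone {low:.0f}-{high:.0f} bpm")
```

A 10 bpm miss on maximum HR shifts every zone boundary by several beats per minute, and any error in the resting HR measurement shifts them again.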
[Image: http://cf.ltkcdn.net/exercise/images/std/118921-400x300-Heart_rate.jpg]
We also need to ask whether the thing we are measuring is actually the variable of interest. In the high-jump example above, we were measuring force production when what we really wanted to know was the height jumped. Why are we measuring HR? Is it because we are interested in how many times the heart beats in any given minute of training? Perhaps, but more likely we are interested in aerobic workload (VO2). If so, we are really trying to predict VO2 from the HR response. In most situations we assume a linear relationship between VO2 and HR as workload increases, but is that actually the case? For everyone? Regardless of training status or stage of development? Is this a reasonable assumption, or are there caveats that need to be considered?
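Here is a minimal sketch of what that linearity assumption looks like in practice: fit a straight line to paired HR and VO2 values from a submaximal test, then predict VO2 at a training HR. The sample numbers are hypothetical; if the true HR-VO2 relationship is not linear for a given athlete, or drifts with heat, fatigue or training status, the prediction inherits that error.

```python
# A minimal sketch of predicting VO2 from HR under a linearity assumption.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

# Hypothetical submaximal test: HR (bpm) vs measured VO2 (L/min)
hr  = [110, 125, 140, 155, 170]
vo2 = [1.6, 2.0, 2.5, 2.9, 3.4]

a, b = fit_line(hr, vo2)
training_hr = 162
print(f"Predicted VO2 at {training_hr} bpm: {a * training_hr + b:.2f} L/min")
```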
Why does this matter? Most coaches and support staff are interested in biomarkers or physiological measurement as a way to improve training or periodization strategies. Often this step forward is taken with elite athletes, where even small physical improvements can be significant for performance.
When we are working with such small margins, every source of error we introduce into the measurement, and every assumption we make about our method of measurement, makes our conclusions about the data less and less meaningful. There comes a point where we are willing to accept a certain degree of error so we can use the data to improve our training strategies, but everyone involved needs to be aware of the limitations of every measurement so that firm decisions or opinions aren't formed on unreliable data.
I urge coaches and athletes who use data to track physical performance to become intimately familiar with the sources of error and the assumptions behind every mode of measurement, and to consider all of those aspects before making training recommendations. Training recommendations based on poor data could be suboptimal or, worse, dangerous for an athlete. That is something we should all do our best to avoid.