Solid research is produced over a long period of time, with validation and verification against standards. When using measurement tools, there must be numbers that validate the data being collected; in short, numbers that make sure the data is "good." This has been a problem in many areas of the concussion realm, most notably with computerized concussion testing. However, last night I received an email with an abstract regarding the Head Impact Telemetry system, or HIT.
Before we go further, you will need to familiarize yourself with a couple of statistical terms: absolute error and root-mean-square error.
Absolute error is the amount of physical error in a measurement, period. The example I found: when using the metric side of a ruler, the absolute error of that device is +/- 1 mm.
Root-mean-square error (RMSE) is a frequently used measure of the differences between the values predicted by a model or estimator and the values actually observed. It aggregates the individual errors into a single number and is a good summary of accuracy, though it only holds for a particular variable, not between variables. In other words, RMSE shows us how closely the data match the model or validation standard. If this number is high, it can indicate either that the model was incorrect or that the data were collected incorrectly.
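To make these two terms concrete, here is a minimal sketch of how each is computed. The acceleration values below are made-up illustrative numbers, not data from the study discussed in this post:

```python
import math

# Hypothetical reference ("true") peak head accelerations and sensor
# readings, in g. Illustrative values only, not data from the HIT study.
reference = [50.0, 72.0, 95.0, 110.0]
measured = [46.0, 80.0, 90.0, 118.0]

# Absolute error: the magnitude of each individual measurement's error.
abs_errors = [abs(m - r) for m, r in zip(measured, reference)]

# RMSE: square each error, average the squares, then take the square root.
# This collapses all the per-impact errors into one summary of accuracy.
rmse = math.sqrt(
    sum((m - r) ** 2 for m, r in zip(measured, reference)) / len(reference)
)

print(abs_errors)  # per-impact absolute errors: [4.0, 8.0, 5.0, 8.0]
print(rmse)        # single overall accuracy figure: 6.5
```

Note that because the errors are squared before averaging, RMSE penalizes a few large misses more heavily than many small ones, which is why a high RMSE is a red flag for either the model or the data collection.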
Appearing in the online version of the Journal of Biomechanics, researchers from Wayne State (one of the notable centers for head impact testing) found that a difference in helmet size on the Hybrid III head model calls into question the validity of the HIT system (abstract):
On-field measurement of head impacts has relied on the Head Impact Telemetry (HIT) System, which uses helmet mounted accelerometers to determine linear and angular head accelerations. HIT is used in youth and collegiate football to assess the frequency and severity of helmet impacts. This paper evaluates the accuracy of HIT for individual head impacts. Most HIT validations used a medium helmet on a Hybrid III head. However, the appropriate helmet is large based on the Hybrid III head circumference (58 cm) and manufacturer’s fitting instructions. An instrumented skull cap was used to measure the pressure between the head of football players (n=63) and their helmet. The average pressure with a large helmet on the Hybrid III was comparable to the average pressure from helmets used by players. A medium helmet on the Hybrid III produced average pressures greater than the 99th percentile volunteer pressure level. Linear impactor tests were conducted using a large and medium helmet on the Hybrid III. Testing was conducted by two independent laboratories. HIT data were compared to data from the Hybrid III equipped with a 3-2-2-2 accelerometer array. The absolute and root mean square error (RMSE) for HIT were computed for each impact (n=90). Fifty-five percent (n=49) had an absolute error greater than 15% while the RMSE was 59.1% for peak linear acceleration.
As you have read above, even though the Hybrid III's size should have called for a large helmet, the validation tests for HIT were done with a medium helmet, producing greater pressures on the head and therefore possibly interfering with the impact sensing. When the researchers re-ran the tests with the properly sized helmet, the same exact blows produced significantly different outcomes.
This throws the HIT data collection into flux: if the "baseline" numbers used to calibrate and normalize the tool do not match the conditions of actual use (the real pressure of the helmet on a player's head), then all the data produced is flawed.
Does this mean the tool is not useful? No. What this boils down to is that the information provided by the tool is skewed and not reflective of actual use, meaning the numbers produced are not applicable to in vivo situations... that is, if this research is itself valid.
I reached out to different researchers in the concussion field who have had experience with the HIT system, looking for comment. As you can imagine, some took umbrage with what was published in the above article. One researcher commented (I cannot use their name at this time due to policies): "That paper is very bad science, and the entire test matrix was poorly constructed. If you want a place to start, read the very strange acknowledgement section..."
So I did some digging on the acknowledgement section, which to me seemed innocuous, but I came up with a frame of reference that perhaps gets to the above-quoted researcher's point. It was a paper, written in response to another paper, by Steve Rowson and Stefan Duma. The gist of it was that Virginia Tech (Rowson & Duma) disagreed with a paper by Albert King regarding the STAR rating, and noted that the flaws King perceived in STAR and HIT were based on a thesis, not peer-reviewed research.
Those of you who can get hold of the entire article, "On the accuracy of the Head Impact Telemetry (HIT) system used in football helmets" by Jadischke, Viano, Dau, King & McCarthy (2013), can read and decipher it for yourselves (I cannot reproduce more than the abstract here). This will undoubtedly play out with counter papers, finger-pointing, and dissection of the purported science, which all goes to my overall point about laboratory research and where we are in this ever-evolving concussion conundrum.
The timing of this paper, alongside the NOCSAE statement on third-party/aftermarket additions to helmets, makes it worth noting and thinking about.