HITs Takes a Hit… Maybe

1 Aug

Solid research is produced over a long period of time, with validation and verification against accepted standards.  When using a measurement tool there must be a set of numbers that validates the data being collected – in short, confirmation that the data is “good”.  This has been a problem with many things in the concussion realm, most notably computerized concussion testing.  However, last night I received an email with an abstract regarding the Head Impact Telemetry System, or HITs.

Before we go further, you will need to familiarize yourself with a couple of statistical terms: absolute error and root-mean-square error (RMSE).

Absolute error is the amount of physical error in a measurement, period.  The example I found: when using the metric side of a ruler, the absolute error of that device is +/- 1mm, since that is the finest division the tool can resolve.

Root-mean-square error (RMSE) is a frequently used measure of the differences between values predicted by a model or an estimator and the values actually observed.  It aggregates the individual prediction errors into a single summary of accuracy, which only holds true for a particular variable, not between variables.  In other words, RMSE shows us how accurate the data is compared to its model/validation.  If this number is high, it can indicate either that the model was incorrect or that the data was collected incorrectly.
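To make these two terms concrete, here is a minimal Python sketch comparing what a trusted reference device reports against what a tool under test reports. The measurement values are made up purely for illustration:

```python
import math

# Hypothetical paired measurements of one variable (e.g., peak head
# acceleration in g): what a trusted reference reports vs. what the
# tool under test reports. These numbers are invented for illustration.
observed  = [50.0, 62.0, 75.0, 80.0, 95.0]   # reference device
predicted = [48.0, 70.0, 71.0, 91.0, 88.0]   # tool under test

# Absolute error: the raw magnitude of error in each single measurement.
abs_errors = [abs(p - o) for p, o in zip(predicted, observed)]

# RMSE: square each error, average the squares, take the square root.
# Squaring penalizes large misses heavily, so a high RMSE flags either
# a bad model or badly collected data.
rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                 / len(observed))

print(abs_errors)      # [2.0, 8.0, 4.0, 11.0, 7.0]
print(round(rmse, 2))  # 7.13
```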

Appearing in the online version of the Journal of Biomechanics, researchers from Wayne State (one of the notable places for head impact testing) found that a difference in helmet size on the Hybrid III head model calls into question the validity of the HIT system (abstract):

On-field measurement of head impacts has relied on the Head Impact Telemetry (HIT) System, which uses helmet mounted accelerometers to determine linear and angular head accelerations. HIT is used in youth and collegiate football to assess the frequency and severity of helmet impacts. This paper evaluates the accuracy of HIT for individual head impacts. Most HIT validations used a medium helmet on a Hybrid III head. However, the appropriate helmet is large based on the Hybrid III head circumference (58 cm) and manufacturer’s fitting instructions. An instrumented skull cap was used to measure the pressure between the head of football players (n=63) and their helmet. The average pressure with a large helmet on the Hybrid III was comparable to the average pressure from helmets used by players. A medium helmet on the Hybrid III produced average pressures greater than the 99th percentile volunteer pressure level. Linear impactor tests were conducted using a large and medium helmet on the Hybrid III. Testing was conducted by two independent laboratories. HIT data were compared to data from the Hybrid III equipped with a 3-2-2-2 accelerometer array. The absolute and root mean square error (RMSE) for HIT were computed for each impact (n=90). Fifty-five percent (n=49) had an absolute error greater than 15% while the RMSE was 59.1% for peak linear acceleration.

As you have read above, even though the Hybrid III’s size should have called for a large helmet, the validation tests for HITs were done with a medium helmet, producing greater pressures on the head and therefore possibly interfering with the impact sensing.  And when the researchers re-ran the tests with the properly sized helmet, it produced significantly different outcomes for the same exact blows.
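For readers puzzling over the abstract’s numbers, here is a rough sketch of how per-impact error figures like those could be computed. This is one plausible reading of the metrics (relative error per impact against the reference accelerometer array, a 15% threshold, and RMSE across impacts); the acceleration values below are synthetic, not the paper’s data:

```python
import math

# Synthetic per-impact peak linear accelerations (in g) – NOT the paper's data.
# 'reference' stands in for the Hybrid III's 3-2-2-2 accelerometer array;
# 'hit' stands in for the helmet-mounted HIT reading of the same impact.
reference = [40.0, 55.0, 60.0, 72.0, 85.0, 90.0]
hit       = [36.0, 70.0, 48.0, 75.0, 120.0, 63.0]

# Per-impact error as a percentage of the reference value.
pct_errors = [100.0 * abs(h - r) / r for h, r in zip(hit, reference)]

# Share of impacts whose error exceeds the 15% threshold the abstract uses.
over_15 = sum(e > 15.0 for e in pct_errors) / len(pct_errors)

# RMSE across all impacts, on the same percentage scale.
rmse_pct = math.sqrt(sum(e ** 2 for e in pct_errors) / len(pct_errors))

print([round(e, 1) for e in pct_errors])  # [10.0, 27.3, 20.0, 4.2, 41.2, 30.0]
print(f"{over_15:.0%} of impacts exceed 15% error")  # 67% of impacts exceed 15% error
print(f"RMSE = {rmse_pct:.1f}%")                     # RMSE = 25.4%
```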

This throws the data collected by HITs into flux: if the “baseline” numbers used to calibrate and normalize the tool do not reflect actual end use (the real pressure of the helmet on a player’s head), then all of the data produced is flawed.

Does this mean the tool is not useful?  No.  What this boils down to is that the information provided by the tool is skewed and not reflective of actual use, meaning the numbers produced are not applicable to in vivo situations…  That is, if this research is itself valid.

I reached out to different researchers in the concussion field who have had experience with the HIT system, looking for comment.  As you can imagine, there was some umbrage with what has been published in the above article.  One researcher commented (I cannot use their name at this time due to policies): “That paper is very bad science, and the entire test matrix was poorly constructed.  If you want a place to start, read the very strange acknowledgement section…”

So I did some digging on the acknowledgement section – which to me seemed innocuous – and I came up with a frame of reference that perhaps gets to the above-quoted researcher’s point.  It was a paper, written in response to another paper, by Steve Rowson and Stefan Duma.  The gist of it was that Virginia Tech (Rowson & Duma) disagreed with the paper written by Albert King regarding the STAR rating system, and in it was a note on how King perceived the HIT system to be flawed.  This flaw (of STAR and HITs) pointed out by King was based on a thesis, not peer-reviewed research.

Those of you who can get a hold of the entire article, “On the accuracy of the Head Impact Telemetry (HIT) system used in football helmets” by Jadischke, Viano, Dau, King & McCarthy (2013), can read and decipher it for yourselves (I cannot reproduce more than the abstract here).  This will play out, undoubtedly, with counter-papers, finger-pointing, and dissection of the purported science, which all goes to my overall point about laboratory research and where we are in this ever-evolving concussion conundrum.

The timing of this article, along with the NOCSAE statement on third-party/aftermarket additions to helmets, makes it worth noting and thinking about.


5 Responses to “HITs Takes a Hit… Maybe”

  1. joe bloggs August 1, 2013 at 10:21 #

    I am no fan of helmet sensors or star ratings.

    Lots of engineers have had issues with sensors, whether used in civilian or military environments, due to the lack of consistency and clinically useful outcomes. The lab validation has now been brought into question. My greater concern was real-world validation, which I always found lacking. Does the sensor drift over time? How does one ensure proper calibration? Are the readings stable and repeatable under different environmental conditions? Do the models built from the readings actually inform us about what is going on in the brain? I could go on. Perhaps better and more refined sensors, coupled to better models tested in field conditions, might generate more ecologically valid inferences. On the other hand, one must start somewhere. We once measured length by the distance from a king’s elbow to his wrist. We do better now.

    Rating systems lull parents into the belief that a 5-star helmet will protect their child from concussion. The rating is lab-based, does not simulate real conditions, and does not account for maintenance or fit. Nonetheless, Duma made an attempt to independently rationalize a bogus system based on who could buy the most research and market their helmet regardless of its underpinning. Think of Riddell/UPMC’s laughably stupid Revolution research. It was only stupid because the public believed the prattle. It made NFL partners Riddell and ImPact (owners Collins, Lovell and Maroon) lots of money.

    It may appear that I am not a Duma fan. I don’t know him, and I question some of the promotion. Nonetheless, his recently released study, which seriously questioned NFL-manufactured research by Kevin Guskiewicz and Mickey Collins claiming your kids wouldn’t be safe on the football field unless they started tackle early, was really good work. Some of this NOCSAE blowback and the Wayne State paper (read: Viano, formerly of the disgraced NFL m-TBI committee and still on NFL-sponsored panels with Richard Ellenbogen) might just be the NFL putting the wood to a researcher who does not simply mouth edicts generated by Greg Aiello and Jeff Pash from Park Avenue.

    • concerned mom August 1, 2013 at 11:37 #

      I understand your concerns about sensors, but as imperfect as they may be, they at least provide some indication of the number of times players are hit in the head throughout a season. I’ve wondered if helmet sensors miss hits to the body that result in whiplash, but thought the development of mouth guard sensors could help answer that question going forward. Guess I would boil my view on sensors down to whether it’s better to have potentially flawed data on hits and impacts or no data at all (as long as the flawed data isn’t used to provide a false sense of security, I suspect we’re better off with it than without it).

      I also understand your criticism of the Star rating system, but think Duma’s work helped expose the convenient relationship between helmet manufacturers and NOCSAE (it doesn’t seem as though the pass/fail system based on standards set decades ago was that difficult to meet, and some may have been reluctant to change it due to liability concerns – MomsTEAM put out a very thought-provoking article on this). I think Duma even mentioned concerns about the conflicts of interest within NOCSAE during an interview last year.

      He’s obviously not the only researcher who has used HITS, and I can’t help but wonder why questions are being raised about it now, after it’s been around for such a long period of time.

      Speaking of timing, it seems as though this season was set to be a big year for some of the new helmet sensor manufacturers. Now, people may fear using any add-on sensors based on the NOCSAE release. The release might even put some start-up sensor companies out of business. Parents may not have access to affordable sensors which would allow them to have some idea of the number and force of impacts their kids sustain at practice/play.

      • Michael Hopper, ATC August 1, 2013 at 13:36 #

        Research is research. None of it can be fully validated, nor can it be fully discredited. The rating system put out by VT may not be perfect, but it is a place to start. What concerned me last night was reading how any aftermarket item would invalidate NOCSAE certification, because that is awfully vague. There was speculation that this included things such as chinstraps, which I’m sure many players switch out.

      • Dustin Fink August 1, 2013 at 15:22 #

        Mike, I think people and aftermarket companies are freaking out about the NOCSAE statement… Anything that does not change the helmet or helmet padding does not count. This would include chinstraps, mouth guards and skull caps. And as we have found out, if these companies want their products to be used, they only have to get them tested…

      • Concussion_Sci August 1, 2013 at 13:51 #

        I was at a conference at PSU last year when Dr. King presented much of the research that is published here. During the question period, Dr. Duma stood up and denounced the research and Dr. King in a very unprofessional manner. I understand why Dr. Duma would not be a fan of Dr. King’s research, and some of his concerns about bias are valid, but the manner in which he confronted Dr. King was rude and disrespectful. Jadischke’s research brings up valid criticism of the HITS system on many fronts, and it seems that Dr. Duma considers it a personal attack. Helmet sensors, and the HIT system specifically, have many validity issues, which the present paper addresses. In my opinion, mouthguard or mastoid patch sensors (behind the ear) are more valid and pose fewer liability issues for NOCSAE.
