New Sports Performance Technologies: The 4 Key Questions Teams Need to Ask

Published November 5, 2017

I’ve had some thoughts on this topic for a while, and then this article from a couple of scientists with, I believe, the San Antonio Spurs came out in the Strength and Conditioning Journal. It does a great job of summarizing the key issues, but I wanted to try to distill it down to a four-question framework that I use when evaluating a new sports performance technology for the teams I consult for.

So here are the four simple but broad questions that I ask of any new technology, in the order in which I ask them. To illustrate each one, I’ll use a single running example: a hypothetical portable tool that measures the forces acting on a baseball pitcher’s shoulder joint.

  1. “Does this technology measure anything?” This can be roughly mapped to the concept of reliability in psychometrics. Is your new tool consistently measuring something or just giving you noise? Ways to check this include repeated testing on the same player (e.g. for neurocognitive tools where the player’s underlying cognitive skills shouldn’t change massively day-to-day) or having multiple players run the same drill while checking for consistent results (e.g. a wearable that purports to measure steps, jumps, or other movements).

If we have a device that purports to measure shoulder force, we can first check whether it gives consistent readings across a series of fastballs of similar velocity from the same pitcher. If the answer is YES, then you can ask…
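
To make that first check concrete, here’s a minimal sketch in Python, with an invented device and invented numbers: take repeated readings from a controlled task and look at the spread.

```python
# Reliability sketch for our hypothetical shoulder-force tool: take
# repeated readings from one pitcher throwing fastballs at similar
# velocities and compute the coefficient of variation (CV).
# All numbers are made up for illustration.
import numpy as np

# Device readings (newtons) for ten ~94 mph fastballs from one pitcher
readings = np.array([812, 798, 825, 805, 790, 818, 801, 809, 795, 822])

cv = readings.std(ddof=1) / readings.mean() * 100
print(f"Mean force: {readings.mean():.0f} N, CV: {cv:.1f}%")

# Rough heuristic (an assumption, not a published cutoff): a CV in the
# low single digits on a controlled task suggests the tool is measuring
# *something* consistently; a CV of 20-30% is mostly noise.
```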

  2. “Does this technology measure what you hope it’s measuring and/or what it purports to measure?” This roughly corresponds to validity in psychometrics. If you have a tool measuring reaction time or trajectory tracking, how can you make sure it’s measuring that and not something else? Can you check recorded jump heights from accelerometers against external height measures? Does an “energy expended” measure exhibit within-player correlation with ratings of perceived exertion (RPE)?

Sticking with our hypothetical shoulder tool, we could ask whether its data correlates well with a gold-standard measurement of shoulder forces, such as a full biomechanical lab analysis. If the answer is YES…
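
Here’s a hedged sketch of that validation step, again with invented data: correlate the portable tool’s per-pitch readings against the lab’s, both pooled and within each pitcher, since a tool can separate pitchers from one another while still failing to track pitch-to-pitch changes within a single arm.

```python
# Validity sketch: device readings vs. a gold-standard biomechanics lab
# measurement on the same pitches. Data and column names are invented.
import pandas as pd
from scipy.stats import pearsonr

pitches = pd.DataFrame({
    "pitcher": ["A"] * 5 + ["B"] * 5,
    "device_force": [810, 795, 830, 805, 820, 760, 775, 750, 782, 768],
    "lab_force":    [805, 790, 828, 800, 815, 755, 770, 748, 780, 760],
})

# Pooled correlation across all pitches
r_all, p_all = pearsonr(pitches["device_force"], pitches["lab_force"])
print(f"Pooled r = {r_all:.2f} (p = {p_all:.4f})")

# Within-pitcher correlation, the harder and more useful test
for name, grp in pitches.groupby("pitcher"):
    r, _ = pearsonr(grp["device_force"], grp["lab_force"])
    print(f"Pitcher {name}: within-pitcher r = {r:.2f}")
```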

  3. “Can what this technology measures potentially translate to on-field value (i.e. wins)?” This is probably the most complex question and can require a lot of time and patience to properly test. Given the timeframe in which teams operate, it may not be possible to get a clear answer to this question, but we should still do our best. What it boils down to is this: devices give you data, but getting insights from that data is a totally separate question. Getting actionable insights is even harder. Making sure those actionable insights translate to more winning is harder still.

Say our new portable force measurement tool reliably and correctly measures the forces acting on a pitcher’s shoulder joint. That’s great. Now what can that information do for us? Could we potentially alter a pitcher’s mechanics to make him less injury-prone? Can we catch small biomechanical changes to identify minor health issues before they turn into big ones? Can we watch the forces build up in real time to better predict pitcher fatigue and on-field effectiveness, which might change our bullpen usage? If the answer is YES to at least something…
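
As an illustration of what one of those actionable insights might look like, here’s a sketch of a real-time cumulative-load alert. The load cap and the data are pure assumptions; in practice you’d derive a cap from each pitcher’s own history and outcomes.

```python
# Fatigue-monitoring sketch: flag the pitch at which cumulative shoulder
# load crosses a per-outing cap. The cap value is invented.
import numpy as np

def cumulative_load_alert(pitch_forces, load_cap=75_000.0):
    """Return the (0-based) pitch index at which cumulative load first
    exceeds `load_cap`, or None if it never does."""
    cumulative = np.cumsum(pitch_forces)
    over = np.nonzero(cumulative > load_cap)[0]
    return int(over[0]) if over.size else None

# Simulated outing: ~95 pitches at roughly 800 N of shoulder force each
rng = np.random.default_rng(0)
forces = rng.normal(loc=800, scale=25, size=95)
print(cumulative_load_alert(forces))  # pitch at which to get the pen up
```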

  4. “Is your organization positioned to translate this potential into wins?” This is arguably the most important question, and probably where I’ve seen teams make the most mistakes. Investing in a technology is one thing, but having the internal champions to use the technology regularly, and the administrative and player buy-in to translate your insights into actual on-field improvements, is critical. You need a plan for how you’re going to deploy any new tool, one that answers: Who’s going to use it? Who’s going to be responsible for its regular and continued use? How will you translate its data into communicable and actionable insights? And is your organization actually willing to act on those insights?

Sticking with our shoulder tool, let’s say the tool, remarkably, is reliable and accurate. Let’s say, even more remarkably, you’ve managed to develop a plan to use insights from its data to actually make your team win more. Unfortunately, that device still has no value for you if everyone involved in its success – in our case maybe the manager, pitching coach, pitchers, trainers, S&C staff, and analytics team – isn’t committed to making it happen.

[Image: sports teams buying new technologies with no internal champion or usage plan.]

These questions aren’t specific to the shoulder tool I outlined; you can ask them of any sports performance technology. Take a GPS chip: (1) Does it accurately measure and calculate a guy’s straight-line speed? Does it correctly identify sprints, decelerations, and directional changes? (2) Can you use the pre-calculated measures it spits out, or the raw data underlying them, to calculate a relevant workload or skill metric for your sport? (3) Can those metrics translate to on-field success? For example, can workload be modified to improve fitness and minimize fatigue and injuries? Or can you better identify players with the instincts to be where they need to be on the field, above and beyond traditional scouting? (4) Will your organization actually do that?
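
For the GPS example, question 1 is easy to prototype. Here’s a rough sketch that derives speed from raw latitude/longitude fixes and flags sprint samples; the 10 Hz sample rate, 7 m/s sprint threshold, and positions are all assumptions for illustration.

```python
# GPS sketch: instantaneous speed from consecutive fixes using an
# equirectangular approximation (fine at pitch/field scale).
import numpy as np

def speeds_from_fixes(lat_deg, lon_deg, hz=10):
    """Approximate speed (m/s) between consecutive GPS fixes."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    R = 6_371_000  # Earth radius in meters
    dx = np.diff(lon) * np.cos(lat[:-1]) * R
    dy = np.diff(lat) * R
    return np.hypot(dx, dy) * hz  # meters per sample * samples per second

def sprint_mask(speeds, threshold=7.0):
    """Boolean mask of samples at or above sprint speed."""
    return speeds >= threshold

# Ten fixes at 10 Hz, moving roughly due north at ~7.8 m/s
lat = 51.0 + np.arange(10) * 7e-6   # ~0.78 m of latitude per sample
lon = np.full(10, -0.1)
v = speeds_from_fixes(lat, lon)
print(v.round(1), sprint_mask(v))
```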

Another example, a cognitive testing tool: (1) Does it yield similar scores day after day in the same guy, or is it so sensitive to a hangover or how much sleep he got that you can’t trust it? (2) If it purports to measure, say, reaction time, does it measure that in a way that’s relevant to your sport? (3) Could you use it to make decisions about which guys to draft or sign, or could you identify a weakness and design drills to improve a player’s skills? (4) Will you?
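
And for the cognitive tool’s question 1, one quick check is whether day-to-day noise within a player is small relative to the spread between players. A sketch with invented reaction-time scores:

```python
# Test-retest sketch: compare within-player day-to-day variation to
# between-player differences. Scores are invented reaction times (ms).
import pandas as pd

scores = pd.DataFrame({
    "player": ["A"] * 5 + ["B"] * 5 + ["C"] * 5,
    "rt_ms":  [212, 215, 210, 214, 211,   # tight day to day
               238, 252, 231, 260, 226,   # noisy: can we trust it?
               205, 207, 204, 206, 205],
})

within = scores.groupby("player")["rt_ms"].std().mean()
between = scores.groupby("player")["rt_ms"].mean().std()
print(f"avg within-player SD: {within:.1f} ms; between-player SD: {between:.1f} ms")
# If within-player SD rivals the between-player spread, the tool can't
# reliably rank players or track changes in any one of them.
```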

And if you ever need any help asking or answering these questions, I’m always happy to talk.