Editor’s note: In the first part of his article, Ian Cooper explained how banks and pro sports teams tend to take a risk-averse approach to their analytics. Here, he explains why that may not always be the best idea.

In a highly physical sport like hockey, which has an 82-game season, a tight playoff schedule and playoff games that could theoretically go on until the end of time, fatigue plays a huge role. As a result, teams that have deep lineups tend to fare better, particularly in the postseason.

Fielding a winning hockey team isn’t just about finding the best players or giving lots of minutes to a handful of superstars; it’s about managing a complicated equation that maximizes the team’s overall performance during every minute played. The best hockey coaches not only understand who their best players are in the abstract; they also have a firm sense of when linemates, opponents or a “fresh set of legs” might make a lesser overall talent perform better.

Putting that thinking into practice is harder than it sounds because, like the bank I was working for almost 20 years ago, the only data we have is biased. (Read “Experimenting With Unbiased Analytics for Winners: Part 1” for Ian’s background story about building statistical models of customer behavior and profitability at a bank.)

So, for example, when my colleagues in the Department of Hockey Analytics and I declared our unbridled affection for Benoit Pouliot, who at the time was earning a “paltry” $1.3 million on a one-year contract, we based our view on Pouliot’s performance with limited ice time.

Benoit Pouliot during a pre-game skate as a Boston Bruin. Photo credit: Wikimedia Commons, Meowwcat.

Pouliot was very productive with the minutes he had and seemed to display a good balance of offensive and defensive play. This led us to wonder what he might do if given a bigger role.

We weren’t alone here—others in the hockey analytics community were seeing the same thing—and the Edmonton Oilers obviously saw Pouliot’s value as well, signing him this summer to a five-year, $20 million contract.

But here’s the problem. We really don’t know what would happen if Pouliot were given more ice time and tougher assignments (his linemates were actually pretty good last year). It’s possible his performance would remain strong, but it’s also possible it would taper off, perhaps even precipitously. After all, it’s one thing to do well when your legs are relatively fresh, other guys are drawing the opponent’s toughest player and you get the easy assignments. Being “the guy” is a different story.

By signing Pouliot to a big contract, the Oilers are essentially engaging in an expensive experiment. But is there a cheaper way to perform that experiment?

In my view, there is.

As was the case with my bank client, this would require a team to embrace experimentation and short-term risk. For example, rather than assume your highest scorers are the best players, or even try to guess at which guys you’re underutilizing, a team could pick a number of games during the season at random (randomizing guards against drawing too many weak or strong opponents, too many home or away games, and so on) and assign roles and ice time at random.

So, the fourth line might become the first; penalty killers might suddenly find themselves on the power play; shutdown players might find themselves in an offensive role.
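As a rough illustration only, the randomized design described above can be sketched in a few lines of Python. Everything here (the game numbering, the line labels, the function names) is my own assumption for the sketch, not anything drawn from a real team’s systems:

```python
import random

# An 82-game NHL regular season, numbered 1..82.
SEASON_GAMES = list(range(1, 83))
ROLES = ["first line", "second line", "third line", "fourth line"]

def pick_experiment_games(n_games, seed=None):
    """Select experiment games uniformly at random, so the sample
    isn't skewed toward strong/weak opponents or home/away splits."""
    rng = random.Random(seed)
    return sorted(rng.sample(SEASON_GAMES, n_games))

def randomize_lines(forward_lines, seed=None):
    """Shuffle the existing forward lines into new roles, e.g. the
    usual fourth line may draw first-line minutes for a night."""
    rng = random.Random(seed)
    shuffled = list(forward_lines)
    rng.shuffle(shuffled)
    return dict(zip(ROLES, shuffled))

games = pick_experiment_games(6, seed=42)
assignment = randomize_lines(["line A", "line B", "line C", "line D"], seed=7)
```

The key design point is that both the games and the role assignments are chosen by the random number generator, not by anyone’s hunch about who deserves a bigger role.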

I’ll admit the examples I’m proposing, while interesting to people who toil away in the world of observational data, are unlikely to be practical in the real world of professional sports. Banks have millions of customers, so randomly experimenting with a few thousand isn’t out of the question. Pro sports teams don’t have the luxury of millions of games to play with. Still, even a more conservative approach to experimentation would yield benefits. The important part is for teams to set up experiments where the outcome isn’t known in advance, which means the tinkering shouldn’t be designed solely to validate “hunches.”
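To see why sample size matters so much here, consider a toy Monte Carlo comparison. This is entirely my own sketch with made-up scoring rates: even when one player genuinely outperforms another, a handful of games is often too noisy to reveal it, whereas a bank-scale sample almost always does.

```python
import random

def detection_rate(p_better, p_worse, n_games, trials=2000, seed=0):
    """Fraction of simulated experiments in which the genuinely better
    player (modeled as scoring a point in any given game with probability
    p_better, vs. p_worse) actually records more points over n_games."""
    rng = random.Random(seed)
    detected = 0
    for _ in range(trials):
        better = sum(rng.random() < p_better for _ in range(n_games))
        worse = sum(rng.random() < p_worse for _ in range(n_games))
        if better > worse:
            detected += 1
    return detected / trials

# With invented rates of 0.6 vs. 0.5 points per game, a 10-game
# experiment separates the two players far less reliably than a
# 500-game one would.
small_sample = detection_rate(0.6, 0.5, n_games=10)
large_sample = detection_rate(0.6, 0.5, n_games=500)
```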

Taking this kind of agnostic approach would help many teams settle essential questions in a way that isn’t biased by a coach’s preconceptions or by the complicated political relationships that exist in every sports franchise: who their best players really are, which players are simply benefiting from better opportunities, and at what point fatigue or other factors mean even a better player shouldn’t be on the ice.

What each team chose to experiment with would be determined by its risk tolerance as much as anything else, but as long as a team truly embraced unbiased experimentation, it could learn a great deal of new information.

Intentionally putting games at risk is not for the faint of heart, nor for a team that expects to be “on the playoff bubble.” But if done properly, such an approach could yield valuable insights at little cost. For an elite team essentially assured of a playoff berth, such as Boston or Chicago, stepping back from a “must win every game” mentality and putting some regular-season games in play in order to learn key information may, in the end, position it for bigger returns in the playoffs.