When you think of science and scientists, you may well picture someone with crazy hair creating monsters from their household pets in their basement. As far as high-performance sport goes though, science is the bedrock that helps inform our decisions as coaches, athletes and sports engineers.

During the global Covid-19 pandemic, scientists even became media celebrities as their straight-talking, data-reinforced approach took centre stage at a time when the future seemed so uncertain for everyone.

Things sure have changed though - it wasn’t that long ago that scientists were being burned at the stake as heretics during the Renaissance. Now even cat food adverts on TV proclaim that ‘eight out of ten cats preferred it’ as some kind of scientific authority on the jelly and gravy preferences of our feline friends.

Back in the world of sport, we athletes often assume that for every set of wheels, shoes or clothing we buy, we'll eventually accumulate enough performance gains that we're able to bend the laws of time so much that we'll start a race next month and finish it last Wednesday...

Scientific findings aren’t always right

Now here’s the thing: science isn’t an absolute or a factual statement of certainty. People often assume it is, and part of the reason that science and scientists are fronted in the media is that they seemingly lend credibility and a bottom line to a claim or a viewpoint. However, what you may not realise is that science is actually a method, not an answer.

The UK Science Council defines science as:

The pursuit and application of knowledge and understanding of the natural and social world following a systematic methodology based on evidence.

One of my favourite examples of dodgy science, which I often show to my students, is how a Harvard law student called Tyler Vigen demonstrated the folly of bad data analysis. He showed that the number of people drowning in swimming pools each year correlated extremely well with the number of films the actor Nicolas Cage had appeared in. Watch ‘Con Air’ and swim laps at your peril...
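
To see how little a strong correlation can mean on its own, here's a minimal sketch in Python that computes a correlation coefficient between two unrelated yearly series. The figures are invented for illustration; they are not Vigen's actual data:

```python
# A minimal sketch of correlation without causation.
# Both yearly series below are invented for illustration.
from statistics import mean
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

pool_drownings = [98, 102, 104, 109, 112, 116, 120]  # hypothetical counts
cage_films = [1, 2, 2, 3, 3, 4, 4]                   # hypothetical counts

print(f"r = {pearson(pool_drownings, cage_films):.2f}")
# A high r only says the two lines drift together - it says nothing
# about one causing the other.
```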

Image Credit: Emma Pallant-Browne ©

Science isn’t always right and in some cases can be questioned further down the road. For example, back in 2005 the physiologist Ed Coyle published an eye-catching journal paper proposing that one of Lance Armstrong’s big performance gains between his World Championship-winning physiology of 1993 and his ascent to Tour de France dominance from 1999 was an increase in efficiency.

When challenged by other scientists later, Coyle eventually admitted he’d made a ‘minor’ error in one of the key calculations and the work was ultimately ridiculed – particularly once Armstrong’s use of performance-enhancing drugs came to light.

Essentially though, with any scientific study you lay out the groundwork for what you want to find out, state your methods, share the data and then draw meaningful conclusions, with enough disclosure to allow others to replicate what you’ve done or build on it in the future.

Ultimately, evidence based on real-world application is what you’re looking for when you see claims of extraordinary performance enhancement from a new product.

Beware training fads

Then there's the forum-rage-inducing subject of training fads. For example, if you frequent enough triathlon and cycling websites, you’ll occasionally see scientific studies get warped, bent and reshaped into ‘magic bullet’ training methods that we then try in our bid for self-improvement.

This happened a few years back when Tabata’s seminal work on the benefits of short-duration, high-intensity interval sessions became a new craze in the fitness industry. As a result, people attempted to replace hours of hard yards with just five minutes of frantic exercise, focusing too much on the session and not enough on the plan.

More recently, there’s been a rush of interest in the ‘80/20’ model of endurance training. This was proposed in an interesting journal paper by the sports scientist Stephen Seiler, which suggested that 80% of an endurance athlete’s training sessions should be low intensity, while the other 20% should be performed at a high intensity.
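
If you're curious how that split looks in practice, here's a rough sketch of my own (not Seiler's method) for auditing a training log, assuming you've already labelled each session as low or high intensity:

```python
# A rough sketch of checking a training log against the 80/20 model.
# The session labels are invented for illustration.
from collections import Counter

sessions = ["low", "low", "low", "high", "low",
            "low", "low", "high", "low", "low"]

counts = Counter(sessions)
for intensity in ("low", "high"):
    share = 100 * counts[intensity] / len(sessions)
    print(f"{intensity}: {share:.0f}% of sessions")
# Prints low: 80%, high: 20% - but hitting the ratio alone says
# nothing about whether the sessions themselves are well chosen.
```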

Image Credit: Dave Blow ©

The work itself was merely a retrospective evaluation of elite athletes’ training logs, but it has since been subject to a large-scale misunderstanding of the difference between correlation and causation (think Nicolas Cage again). As a result, many athletes have gone down the proverbial rabbit hole and changed their training without understanding the detail and limitations of the original study.

Still, all I’ve done so far is moan and say how misunderstood it all is. That’s not terribly helpful to you, so in the space of around 500 words I want to try and tell you what it took me 10 years, a large quantity of biscuits and an excellent book called ‘Bad Science’ by Ben Goldacre to get my head around.

How to draw real-world conclusions from scientific research

Thanks to excellent online databases like Google Scholar, you can now access a vast array of sports science studies. This information is mainly free and, as Sir Francis Bacon once said, “knowledge itself is power”.

Granted, these can read drier than a packet of Jacob’s Crackers, but they may give you ideas to try in your own training.

So, what should you look for when reviewing such studies?

Firstly, I always read the abstract and conclusion of a paper first. Pick out the aims and objectives of the study and make sure you have an idea of what the scientists set out to do in the first place. Look at the type, number and size of the study’s participants. Are they the same gender, age or level of experience as you? If so, great. If not, the findings may not transfer as well to you, if at all. Plus, if the number of participants was small, the findings are still useful but they may not extrapolate to a larger population.
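
On the sample-size point, a quick sketch shows why small studies leave so much wiggle room: the uncertainty around a study's average result shrinks only with the square root of the number of participants. The spread value here is an invented example:

```python
# Why small studies extrapolate poorly: the standard error of a mean
# shrinks only with the square root of the sample size.
from math import sqrt

sd = 10.0  # assumed spread of individual responses (illustrative)
for n in (8, 30, 100):
    se = sd / sqrt(n)
    print(f"n = {n:3d}: average known to roughly +/- {1.96 * se:.1f}")
# Going from 8 to 100 participants only narrows that margin by a
# factor of about 3.5, so an 8-person study leaves plenty of doubt.
```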

Then take a look at the study’s intervention time. Was the experiment run over one visit? A week? Six months? If the intervention wasn’t long, don’t assume that the proposed gains will keep coming beyond that timeframe, and consider the possibility that other methods may actually yield better results, given enough time to do so.

Remember, a good scientist always states the limitations of their study. Read the paper’s discussion section to see what their study couldn’t do or didn’t cover.

Finally, in a weird academic twist of irony, here’s a good paper about how to read a paper.

The power of critical thinking

Ultimately, most of this is not about learning to be a science sceptic, but about becoming what’s known as a ‘critical thinker’. You read the claims and expand your knowledge, but you look for the limitations. If a study says that drinking cherry juice will improve your marathon time by 10 minutes, have a good hard read before you start running for the fruit juice aisle of your local supermarket.

Remember, science is not an absolute – I once spent a few months strengthening my lungs with an inspiratory trainer on the basis of some well-conducted and well-cited scientific studies. At the end of it, I could blow up balloons like a champion but my cycling didn’t improve at all.

Image Credit: Dr Bryce Dyer ©

Then there’s the subject of data manipulation. Mark Twain once famously attributed to British Prime Minister Benjamin Disraeli the observation that “there are three kinds of lies: lies, damned lies and statistics”.

Statistics can be used to reinforce a conclusion, but in some cases they act as pseudoscience for a marketing team. For example, if we see a brand proclaiming its new bike is the fastest out there, we shouldn’t just accept the drag reduction it quotes at face value. The brand should also disclose how the testing was done, as well as how accurate, precise and variable the data was.

Ideally, there should be error bars or a +/- value alongside the data, to show that more than one test run was conducted and to indicate the statistical significance of the differences between the products tested. Companies often test things differently to ensure their own product comes out on top, but reputable brands will share their process too. Good science is ultimately a bit more thorough than just being one of those happy ‘eight out of ten cats’.
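
As a sketch of what an honest +/- value represents, here's how a mean and a spread emerge from repeated test runs. The drag figures are invented for illustration:

```python
# What an honest +/- value represents: repeated test runs summarised
# as a mean and a spread. All drag figures are invented.
from statistics import mean, stdev

bike_a_runs = [248.1, 249.3, 247.6, 248.8, 248.4]  # hypothetical drag, grams
bike_b_runs = [246.9, 249.0, 247.8, 248.5, 247.2]

for name, runs in (("Bike A", bike_a_runs), ("Bike B", bike_b_runs)):
    print(f"{name}: {mean(runs):.1f} +/- {stdev(runs):.1f}")
# If the two means sit well inside each other's spread, a claimed
# 'fastest bike' difference may be nothing more than test noise.
```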

Finally, when faced with experts or scientists, determine their credibility by ascertaining their credentials, field of expertise, motivations and disclosure of any conflicts of interest. If it’s me standing up and talking to you about the merits of snake oil, don’t believe a word of it.

Further reading