
Five sigma and all that


Why do particle physicists demand 99.9999% certainty before they believe a new discovery? And what do you do if you can’t be that sure?

Even when I’m close to overwhelmed with other work, I try to check the arXiv once a day, to scan through the hot new releases in high energy physics experiment and phenomenology. It’s fun, seeing what people are doing, and usually stress-free (unless I’m working on a paper myself that I am anxious might be scooped). I even do it on holiday sometimes, though this is “discouraged” by my family.

The notes released this way are mostly new research papers, simultaneously submitted to journals for review, or write-ups of talks at various conferences.

Many conference write-ups are pretty pointless rehashes of existing papers and known results, and they often appear so long after the conference in question that even if the talk itself contained exciting new physics, the results will be well known by the time the proceedings emerge. Every now and then, though, there’s a valuable exception, when someone who is an expert in a particular area of science takes the opportunity to collect their overall impression of the state of play. These contributions can take the form of entertaining rants, didactic treatises or high-minded visions, and for my taste (in contrast to a real scientific paper), the more colour and personality exhibited, the better.

Louis Lyons, a world expert in the application of statistical science to particle physics, produced one of these last week. It’s a very economical (six pages) skip through a series of issues which have taken up far too many hours of discussion time within and between big particle physics collaborations, and it is full of choice quotes.

Amongst the issues he zips through is the interpretation of “p-values”. These quantify the probability of obtaining data at least as extreme as those observed, assuming a given hypothesis is true. It is tempting to invert them, but wrong: if a p-value says “the probability of these data arising, given that the Standard Model of particle physics is correct, is very small”, that’s fair enough, but that is absolutely not the same thing as saying “given these data, the probability of the Standard Model being correct is very small”. I tried to describe this issue in this article about medical screening. Louis’ article also goes for a medical analogy:

If anyone still believes that P(A|B) = P(B|A) [probability of A given B = probability of B given A], remind them that the probability of being pregnant, given that the person is female, is ∼3%, while the probability of being female, given that they are pregnant, is considerably larger.
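
The inversion is easy to make explicit. Here is a minimal sketch in Python of Bayes’ theorem applied to Lyons’ example; the ~3% figure comes from the quote above, while the roughly 50/50 population split is my assumption for illustration:

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B), applied to Lyons' example.
# The ~3% figure is from the quote; the 50/50 population split is assumed.
p_pregnant_given_female = 0.03
p_female = 0.5
p_pregnant_given_male = 0.0

# Total probability of being pregnant, P(B), via the law of total probability:
p_pregnant = (p_pregnant_given_female * p_female
              + p_pregnant_given_male * (1 - p_female))

p_female_given_pregnant = p_pregnant_given_female * p_female / p_pregnant
print(p_female_given_pregnant)  # 1.0 -- "considerably larger" than 3%
```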

Another of Lyons’ themes in the article is to address the rather arbitrary “five sigma” (5σ) significance traditionally required by particle physicists to claim a discovery of a new particle or some other new effect. This was the threshold passed by the Higgs boson on 4 July 2012, and so while it has always been a big deal within the field of particle physics, it is now a bit more widely known. It corresponds to about a one-in-two-million chance that your result is just noise*, which seems a bit excessive and sometimes makes statisticians snigger. The justification for this value is a nice mixture of pragmatism and rigour.
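
For the curious, converting sigmas into a tail probability is a one-liner (a sketch assuming SciPy is available; whether you quote the one-sided or two-sided Gaussian tail is a matter of convention, and the two-sided number is where the one-in-two-million figure comes from):

```python
from scipy.stats import norm

# Probability of a Gaussian fluctuation beyond 5 standard deviations.
one_sided = norm.sf(5.0)        # ~2.9e-7, about 1 in 3.5 million
two_sided = 2.0 * norm.sf(5.0)  # ~5.7e-7, about 1 in 1.7 million

print(f"one-sided: 1 in {1 / one_sided:,.0f}")
print(f"two-sided: 1 in {1 / two_sided:,.0f}")
```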

The first reason given is

History: There are many cases of 3σ and 4σ effects that have disappeared with more data.

True, that.

Another reason is the so-called ‘Look elsewhere effect’, which is a rather fuzzy way of trying to account for the fact that if you make many measurements, there are likely to be some outliers - likely to be some unlikely events, as it were. This has the unwelcome effect of introducing some kind of need for judgement, with more than a whiff of subjectivity, since someone has to decide what is the “elsewhere” that has been studied. Is it a range of mass values in some distribution? Or is it all particle physics experiments ever that might have shown up a weird result? Or is it all experiments ever, whatever the field? Sometimes the answer seems obvious, others not so.
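
A toy Monte Carlo makes the effect concrete (my own sketch, not from Lyons’ note; choosing 100 bins as the “elsewhere” is exactly the kind of judgement call just described):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

n_bins = 100       # how many places we "looked": the assumed "elsewhere"
n_trials = 50_000  # pseudo-experiments containing pure noise, no signal

# Largest upward fluctuation anywhere in each pseudo-experiment:
max_fluct = rng.standard_normal((n_trials, n_bins)).max(axis=1)

local_p = norm.sf(3.0)               # chance of a >3 sigma excess in ONE bin
global_p = (max_fluct > 3.0).mean()  # chance of a >3 sigma excess ANYWHERE

print(f"local p-value:  {local_p:.1e}")   # ~1.3e-3
print(f"global p-value: {global_p:.1e}")  # ~1.3e-1, about 100 times larger
```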

The third reason is that it is an attempt to stay well clear of the difficult-to-quantify impact of systematic uncertainties. See here for some discussion of those.

The final reason comes back to the p-value business above, via Bayes’ theorem, of which more in a future post. Lyons says that 5σ incorporates what he calls a ‘Subconscious Bayes factor’, or alternatively an attempt to quantify the statement that ‘Extraordinary claims require extraordinary evidence.’
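
Some toy arithmetic shows why the prior matters here (the numbers below are entirely invented for illustration, not anything Lyons commits to):

```python
# Posterior odds = Bayes factor x prior odds. If your prior odds on a
# genuinely new effect are low, the evidence has to be correspondingly
# extraordinary before you should believe the claim.
prior_odds = 1e-6   # assumed: sceptical prior odds for new physics
target_odds = 10.0  # assumed: posterior odds at which you'd believe it
required_bayes_factor = target_odds / prior_odds

print(f"required Bayes factor: {required_bayes_factor:.0e}")  # 1e+07
```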

I confess that early in my career as a physicist I was rather cynical about sophisticated statistical tools, being of the opinion that “if any of this makes a difference, just get more data”. That is, if you do enough experiments, the confidence level will be so high that the exact statistical treatment you use to evaluate it is irrelevant. This is probably why I never paid enough attention to Louis’ lecture course that I sat through when I was a research student.

That is a fair enough standpoint if you have the luxury of unlimited data. However, in a race (for example the race between the Tevatron, ATLAS and CMS to find out whether the Higgs boson existed), a better statistical treatment can give you the edge over your rivals.

In other circumstances, more data may not be easily available and guidance might be urgently needed. In particle physics scenarios with incomplete or inconclusive data, the balance of probabilities will influence the direction of major experimental (and theoretical, though they are cheaper) efforts, and so understanding the available statistical treatments is well worth the investment. Outside of physics, the decisions being made could have more direct impact; medical diagnoses or political policies could hang in the balance.

In these cases, there’s a need to be clear-eyed about the limitations and advantages of the statistical treatment, to ask what the “elsewhere” you are looking at really is, and to accept that your level of certainty may never feasibly reach 5σ. In fact, if the claims being made aren’t extraordinary, one-in-two-million may indeed be overkill, as well as being unobtainable. And you have to factor in the consequences of acting, or failing to act, based on the best evidence available - evidence that should include a good statistical treatment of the data.

* meaning, a one-in-two-million chance that the data would be like this assuming the null hypothesis (in this case, no Higgs boson). Or, even more precisely, this. I realise I fell into the very trap I was trying to describe...

Jon Butterworth’s book, Smashing Physics, is out now. Some interesting events where you might be able to hear him talk about it etc are listed here. Also, Twitter.
