When researchers engage in questionable data analysis, there can be deadly consequences. For instance, consider a clinical trial in which more test subjects on the drug have heart attacks than those on the placebo, but the difference falls just short of the accepted threshold of statistical significance, P < 0.05.
As Neurobonkers points out today, there are often important numbers "hidden underneath this indication of statistical significance." So how do we sniff out studies that might be manipulating data because, say, the entity conducting the test has a vested interest in the outcome?
You need to look for a pattern. If a disproportionate number of a researcher's results sit just under the P < 0.05 threshold, that suggests something may be rotten in the state of Denmark. At that point, the raw data of the study in question should be requested and assessed for patterns that indicate p-hacking. This method was developed by Uri Simonsohn, who has used the technique to uncover a growing number of cases of research fraud.
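To make the idea concrete, here is a minimal sketch in Python of the kind of pattern check described above. It is an illustration of the intuition, not Simonsohn's actual p-curve procedure (which uses formal statistical tests); the threshold, bin width, function name, and example p-values are all assumptions made up for demonstration.

```python
# Sketch: flag a suspicious pile-up of p-values just under 0.05.
# A genuine effect tends to produce more very small p-values than
# borderline ones; a cluster just below the threshold is a red flag.

def near_threshold_ratio(p_values, threshold=0.05, width=0.01):
    """Split significant results into borderline vs. clearly small p-values.

    Returns (near, far): counts just below the threshold (e.g. 0.04-0.05)
    versus well below it (e.g. under 0.04). A hypothetical helper, not a
    standard library function.
    """
    significant = [p for p in p_values if p < threshold]
    near = sum(1 for p in significant if p >= threshold - width)
    far = len(significant) - near
    return near, far

# Illustrative (made-up) sets of reported p-values from two bodies of work:
honest = [0.001, 0.004, 0.01, 0.02, 0.03, 0.041]
hacked = [0.041, 0.043, 0.044, 0.046, 0.048, 0.049]

print(near_threshold_ratio(honest))  # → (1, 5): few borderline results
print(near_threshold_ratio(hacked))  # → (6, 0): almost all borderline
```

A real analysis would apply a proper test (for example, a binomial test on the bin counts) rather than eyeballing a ratio, but the logic is the same: honest data rarely stacks up right against the significance line.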