P-Value Controversy

One of the most common statistical measures that accompanies prediction results is being misused: its significance is over-hyped, and its over-representation may be leading to its demise. Yes, readers, we are talking about the p-value.

For those of you who are new to the concept of the p-value, let us begin right away. Before we dive into the p-value and its implications, we need to understand its reason to exist: hypothesis testing.

Suppose that, as an environmentally conscious individual, you want to figure out whether carbon dioxide is directly responsible for global warming. You start with a NULL hypothesis stating that carbon dioxide IS NOT responsible for global warming (essentially the opposite of what you assume to be true), and an ALTERNATE hypothesis stating that carbon dioxide is directly responsible for global warming. You then collect randomly sampled data from a population, run experiments and build models on the data, and run statistical tests (like a t-test or ANOVA) to judge how strong the evidence in your results is. From this we can reasonably guess that the null hypothesis is crafted to fail, and the statistical measure that decides whether or not the null hypothesis gets rejected is the p-value.
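
To make this concrete, here is a minimal sketch of such a test in Python. The data below is synthetic and purely illustrative (it is not real climate data), and SciPy's independent two-sample t-test simply stands in for whichever test a real study would use.

```python
# Illustrative sketch only: a two-sample t-test on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical temperature-anomaly samples for two CO2 regimes (made-up numbers)
low_co2_years = rng.normal(loc=0.20, scale=0.15, size=30)
high_co2_years = rng.normal(loc=0.45, scale=0.15, size=30)

# Null hypothesis: the two groups share the same mean (CO2 makes no difference)
t_stat, p_value = stats.ttest_ind(high_co2_years, low_co2_years)

print(f"t-statistic = {t_stat:.3f}, p-value = {p_value:.5f}")
# A small p-value is read as evidence against the null hypothesis.
```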

Now that we understand what the p-value is, we need to know why it matters so much. A p-value is almost universally included in research papers, and the value has three major implications (a small sketch of this decision rule follows the list):

1. Any result below 0.05 is considered a statistically significant result that is globally recognised and accepted, and the research project may be used as a base for other studies.

2. Any result above 0.05 is considered insufficient to effectively reject the null hypothesis and support our alternate hypothesis.

3. Results on the 0.05 margin are thought of as going either way; it is left to the reader to judge whether the experiment was conducted without bias and to decide whether they want to use the final results.
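
Sketched minimally below is that conventional rule, as stated in the three points above; the function name is illustrative, and the 0.05 cut-off is the very convention under discussion.

```python
# A minimal sketch of the conventional 0.05 decision rule described above.
def interpret_conventional(p_value: float, alpha: float = 0.05) -> str:
    if p_value < alpha:
        return "statistically significant: reject the null hypothesis"
    if p_value > alpha:
        return "insufficient evidence: fail to reject the null hypothesis"
    return "borderline: left to the reader's judgement"

for p in (0.03, 0.05, 0.20):
    print(p, "->", interpret_conventional(p))
```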

Well, you see, that's not quite true. Most respected journals and publications would not dare publish results with a p-value over 0.05. That means any study, on any subject, no matter how difficult the data collection was or how sparse the data was to begin with, needs to get its p-value under 0.05. This, as one might suspect, fosters manipulation: researchers in medicine, environmental studies, social reform and other fields of science where getting data is innately difficult are pressured to alter or tweak their findings to fit within the expected threshold, or risk their coveted findings never being published.

Just as this unspoken truth generates mixed feelings in you and me, so it does in the data science community, with many showing disdain for the over-hyped representation of the p-value in scholarly articles. Opinion varies across different camps of researchers, and a general consensus, whether on accepting the present state or on a possible solution, seems difficult if not improbable at the moment.

Though dilemma is the word of the hour, two main groups stand out with clear views on how to proceed. The first group is in favour of dropping the p-value completely from technical papers and articles, letting it succumb to the grave dug by the very people who now oppose it.

Another group rises in this dire situation with a solution, and, as is usual with change, it will take time to be accepted and seep into people's minds. Their proposal is to alter the threshold by which we bifurcate our results.

Under the new threshold, a finding with a p-value below 0.005 will be considered statistically significant. Results in the 0.005 to 0.05 range will be considered indicative of some relationship between the variables, but not enough to be labelled ‘statistically significant’; their interpretation will be left to readers, who can examine the original test or experiment and decide whether they wish to use the results in their own analyses. As before, any result with a p-value above 0.05 will be rejected.
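
Here is a comparable sketch, assuming the proposed 0.005 banding described above; the function name is illustrative, and the labels are paraphrases of the proposal rather than official terminology.

```python
# A sketch of the proposed re-banding: 0.005 becomes the bar for
# "statistically significant", and 0.005-0.05 is merely suggestive.
def interpret_proposed(p_value: float) -> str:
    if p_value < 0.005:
        return "statistically significant"
    if p_value <= 0.05:
        return "suggestive of a relationship; left to the reader to judge"
    return "rejected: insufficient evidence against the null hypothesis"

# Note how a result at p = 0.02, significant under the old 0.05 rule,
# is downgraded to merely suggestive under the proposal.
for p in (0.001, 0.02, 0.20):
    print(p, "->", interpret_proposed(p))
```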

Though it is not nearly as desirable a solution as many would have preferred or felt comfortable with, it is a start in some direction. Whether the chosen direction leads to fruition is left for time to decide, and perhaps in the future, with some data, we may be able to perform our own tests, armed with our knowledge of the p-value, to confirm whether this was a change in vain or whether the null hypothesis was successfully rejected.

For now, the p-value controversy remains a dilemma. Researchers have still not settled on an exact solution; hopefully, in the near future, scientists will come up with new methods that put the controversy to rest.

About the Author:

Piyush Daga is a data science fanatic and a firm believer that predicting the future with certainty is a fluke, but that the closest you can get to doing so is by asking the right questions of the correct data.

 
