13. Frequentist vs Bayesian#

13.1. Reviewing P-values#

What is the p-value telling you?

Answer

\(P(data | H_0)\), or the probability of the observed data given that the null hypothesis is true.[1]
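As a concrete sketch of \(P(data | H_0)\), consider a hypothetical coin-flip experiment: 60 heads in 100 flips, with the null hypothesis that the coin is fair. The numbers here are invented for illustration; the p-value is the probability of data at least this extreme under \(H_0\).

```python
from math import comb

# Hypothetical data: 60 heads in 100 flips of a possibly biased coin.
# Null hypothesis H0: the coin is fair (p = 0.5).
n, k = 100, 60

def binom_pmf(n, k, p=0.5):
    """Binomial probability of exactly k heads in n flips under H0."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Two-sided p-value: P(data at least this extreme | H0),
# i.e. 60 or more heads, or 40 or fewer.
p_value = sum(binom_pmf(n, i) for i in range(k, n + 1))       # upper tail
p_value += sum(binom_pmf(n, i) for i in range(0, n - k + 1))  # lower tail

print(f"p-value = {p_value:.4f}")
```

Note that the p-value says nothing about \(P(H_0 | data)\); that reversal is exactly what the next section addresses.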

13.2. Bayes’ theorem#

In elementary probability, you can switch the order of conditional probability by a simple expression:

\(P(A|B) = P(B|A) P(A) / P(B)\)
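A quick numerical check of this identity, using made-up numbers for a diagnostic-test scenario (event \(A\) = has the condition, event \(B\) = tests positive; all rates below are hypothetical):

```python
# Hypothetical rates, chosen only to illustrate Bayes' theorem:
p_b_given_a = 0.99   # P(B|A): test detects the condition when present
p_a = 0.01           # P(A): base rate of the condition
p_b_given_not_a = 0.05  # false-positive rate

# P(B) via the law of total probability
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes' theorem: P(A|B) = P(B|A) P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b
print(f"P(A|B) = {p_a_given_b:.3f}")
```

Even with a 99% detection rate, the low base rate drags \(P(A|B)\) down to about 1/6, which is why switching the order of the conditional matters.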

Bayes thought about what this means with regard to data and measurements. So far, we have been working with \(P(D|F)\). What if we want to switch that to \(P(F|D)\)? What does this mean, both mathematically and conceptually?

\(P(F|D) = P(D|F) P(F) / P(D)\)

What is the probability of the data anyway? Let’s make this proportional, since most model-fitting tools only need to know the locations of the maxima/minima, not the actual value.

\(P(F|D) \propto P(D|F) P(F)\)
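A minimal sketch of why the proportionality is enough: locate the maximum of the unnormalized posterior \(P(D|F)\,P(F)\) on a grid, never computing \(P(D)\). The data here (7 heads in 10 coin flips) and the flat prior are assumptions for illustration.

```python
from math import comb

# Hypothetical data: 7 heads in 10 flips; F is the coin's bias.
heads, flips = 7, 10
grid = [i / 100 for i in range(1, 100)]  # candidate values of F

def likelihood(f):
    """P(D|F): binomial likelihood of the observed flips."""
    return comb(flips, heads) * f**heads * (1 - f)**(flips - heads)

prior = 1.0  # flat prior P(F); any constant drops out of the argmax

# Unnormalized posterior P(F|D) ∝ P(D|F) P(F) -- no division by P(D)
posterior = [likelihood(f) * prior for f in grid]
best = grid[posterior.index(max(posterior))]
print(f"Maximum of the posterior at F = {best}")
```

The maximum lands at \(F = 0.7\) (the sample fraction of heads), and dividing every grid point by the same \(P(D)\) would not move it, which is exactly why fitting tools can ignore the normalization.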

Note

Here I inject my own opinion. For science, trust your data! You have measured it, so assume it is true. I see Bayesian statistics as a perfect match for science. There are, however, a few odd assumptions that you must make in a Bayesian framework, so let’s keep investigating the difference between frequentist and Bayesian statistics.

13.3. Detailed examples#

“Frequentism and Bayesianism: A Python-driven Primer”

We specifically covered nuisance parameters and the difference between confidence intervals and credible regions.

13.4. Further Reading#