How are Bayes factors actually Bayesian?

by Benny Wenz   Last Updated February 11, 2019 09:19 AM

I have been doing some linear-model analyses involving Bayes factors lately, and I have two probably very basic questions:

1) As far as I understand, a Bayes factor is simply a ratio of (marginal) likelihoods, i.e.

p(data|M1)/p(data|M2).

But that's not really Bayesian inference, is it? The whole point of Bayesian inference is to convert p(data|model) into p(model|data). Sure, people argue that, given equal prior probabilities for both models, the ratio above equals p(M1|data)/p(M2|data), but it still seems to me that the Bayes factor approach misses the point of Bayesian inference. In particular, since the really appealing thing about Bayesian modeling is that I get both prior and posterior distributions for every model coefficient, Bayes factor model comparison feels like it falls short of the full power of Bayesian models...?
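To make the conversion step concrete, here is a minimal coin-flip sketch (my own toy setup, not from any particular analysis): M1 fixes theta = 0.5, M2 puts a Uniform(0, 1) prior on theta, and the posterior odds are just the Bayes factor times the prior odds.

```python
import math

def marginal_m1(k, n):
    # p(data | M1): binomial likelihood of k heads in n flips at theta = 0.5
    return math.comb(n, k) * 0.5 ** n

def marginal_m2(k, n):
    # p(data | M2): binomial likelihood averaged over the Uniform(0, 1)
    # prior on theta, which integrates analytically to 1 / (n + 1)
    return 1.0 / (n + 1)

def posterior_prob_m1(k, n, prior_m1=0.5):
    # Convert the Bayes factor into p(M1 | data) via the prior odds
    bf12 = marginal_m1(k, n) / marginal_m2(k, n)
    prior_odds = prior_m1 / (1.0 - prior_m1)
    post_odds = bf12 * prior_odds
    return post_odds / (1.0 + post_odds)
```

So the Bayes factor alone is only the likelihood-ratio part; it becomes a posterior statement p(M1|data) only once I supply prior model probabilities, which is exactly the step the equal-priors convention hides.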

2) How is it possible in the first place that Bayes factors, being based on (unpenalized) likelihoods, can favor the null model? Shouldn't the likelihood always increase with a more complex model, i.e. with the non-null model?
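I tried to check this numerically with a toy one-observation normal example (again my own made-up setup): M0 fixes mu = 0, while M1 puts a N(0, tau^2) prior on mu. Integrating mu out spreads p(data|M1) over many possible datasets, so for data near zero the null can actually win, even though the maximized likelihood under M1 would never be smaller.

```python
import math

def normal_pdf(x, var):
    # Density of N(0, var) at x
    return math.exp(-x * x / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def bayes_factor_01(x, sigma2=1.0, tau2=1.0):
    # M0: mu = 0, so x ~ N(0, sigma2)
    m0 = normal_pdf(x, sigma2)
    # M1: mu ~ N(0, tau2); integrating mu out gives x ~ N(0, sigma2 + tau2)
    m1 = normal_pdf(x, sigma2 + tau2)
    return m0 / m1  # BF01 > 1 favors the null
```

For x = 0 this gives BF01 = sqrt(2) > 1 (null favored), while for x = 3 it drops well below 1, so the "penalty" apparently comes from averaging the likelihood over the prior rather than maximizing it. Is that the right way to think about it?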

Hope some of you can shed a little light on my mind. Cheers, Benny
