I spend a fair bit of time whining about how actuaries basically ignore the overfitting problem when building their models. So I was pretty pumped to see an actuary address this issue head-on for stochastic reserving (pdf pg 29). I haven’t studied stochastic reserving models, but I was disappointed with this:
Let’s start with a collection of normal distributions with the mean, µ, being uniformly distributed between 100 and 200. The standard deviation, σ, is uniformly distributed between 25 and 50. Pick a random parameter set (µ, σ) from this collection. Then pick a random training sample, x1, x2, x3, x4, and a random holdout sample, x5, from a normal distribution with mean µ and standard deviation σ.
Dude… assuming normal?
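For concreteness, here is a minimal sketch of the simulation setup quoted above. The function name and structure are my own; the paper only describes the draws, not any code.

```python
import random

def simulate_one(rng: random.Random):
    """One replication of the quoted setup (hypothetical naming)."""
    # Draw a parameter set (mu, sigma) from the stated uniform distributions.
    mu = rng.uniform(100, 200)
    sigma = rng.uniform(25, 50)
    # Training sample x1..x4 and holdout x5, all from N(mu, sigma).
    train = [rng.gauss(mu, sigma) for _ in range(4)]
    holdout = rng.gauss(mu, sigma)
    return mu, sigma, train, holdout

rng = random.Random(0)  # fixed seed for reproducibility
mu, sigma, train, holdout = simulate_one(rng)
```

Note that every layer here is a distributional assumption: uniform priors on (µ, σ) and, crucially, normality of the data itself.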
I expect a common response to these simulations would be that the Bayesian models assumed I “knew” the correct prior distribution. My, typically Bayesian, response to that would be that if your prior distribution truly reflects your prior beliefs, you should believe the posterior result.
An immensely unsatisfying response. I get that it is frustrating when people criticize the fundamental assumptions behind somewhat sophisticated techniques. If you believe that overparameterization is real, the story goes, you must also reject the normal assumption and therefore you must reject a TON more theory on stochastic reserving, not just this little part. Fair enough, I say.
But what an incredibly weak defense.