Would you like all your posteriors in one plot?
In response to a DM question, here we practice a few different ways you can combine the posterior samples from your Bayesian models into a single plot.
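The basic move can be sketched like this (a hypothetical example assuming two already-fitted brms models, `fit1` and `fit2`, each containing a coefficient named `b_x`; the model names and the coefficient are placeholders, not from the post):

```r
library(brms)
library(tidyverse)

# stack the posterior draws from both models, labeling each draw by its
# model of origin, then overlay the two densities in a single ggplot
bind_rows(
  as_draws_df(fit1) %>% mutate(model = "fit1"),
  as_draws_df(fit2) %>% mutate(model = "fit2")
) %>%
  ggplot(aes(x = b_x, fill = model)) +
  geom_density(alpha = 1/2)
```

The long (stacked) data format is what lets ggplot2 map the model label to an aesthetic like `fill` or `color`.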
Written by A. Solomon Kurz
In many instances, partial pooling leads to better estimates than simple averages do, a phenomenon sometimes called Stein's paradox. In 1977, Efron and Morris published a great paper discussing it. In this post, I'll walk through Efron and Morris's baseball example and then link it to contemporary Bayesian multilevel models.
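The flavor of shrinkage Efron and Morris describe can be sketched with a toy James–Stein calculation (not their exact analysis; the group means and the known sampling variance `sigma2` below are made-up values for illustration):

```r
# made-up group means and an assumed known sampling variance
z      <- c(.40, .38, .36, .33, .31, .29, .27, .25)
sigma2 <- .001
k      <- length(z)
z_bar  <- mean(z)

# James-Stein shrinking factor: how strongly each observed mean
# is pulled toward the grand mean
c_js <- 1 - ((k - 3) * sigma2) / sum((z - z_bar)^2)

# the partially pooled (shrunken) estimates
z_js <- z_bar + c_js * (z - z_bar)
```

The closer `c_js` is to zero, the more the individual estimates collapse toward the grand mean.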
There’s more than one way to fit a Bayesian correlation in brms. Here we explore a few.
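One such approach (a minimal sketch, assuming the brms and MASS packages are installed) treats the two variables as a multivariate outcome and reads the correlation off the residual correlation:

```r
library(brms)

# simulate bivariate normal data with a true correlation of .6
set.seed(1)
sigma <- matrix(c(1, .6, .6, 1), nrow = 2)
d <- as.data.frame(MASS::mvrnorm(100, mu = c(0, 0), Sigma = sigma))
names(d) <- c("x", "y")

# intercept-only multivariate model; set_rescor(TRUE) requests the
# residual correlation, which here is the correlation of interest
fit <- brm(
  bf(mvbind(x, y) ~ 1) + set_rescor(TRUE),
  data = d
)

# summarize the posterior for the correlation parameter
posterior_summary(fit, variable = "rescor__x__y")
```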
In this post, we'll show how Student's \(t\)-distribution can produce better correlation estimates when your data have outliers. As is often the case, we'll do so as Bayesians.
The purpose of this post is to demonstrate the advantages of Student's \(t\)-distribution for regression with outliers, particularly within a Bayesian framework.
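In brms terms, the switch amounts to swapping the likelihood (a minimal sketch with simulated data and one gross outlier, not the post's own example):

```r
library(brms)

# toy data: a clean linear trend plus one extreme outlier
set.seed(1)
d <- data.frame(x = 1:20)
d$y <- 2 + 0.5 * d$x + rnorm(20, sd = 1)
d$y[20] <- -20

# the Student-t likelihood has heavy tails, so the outlier exerts
# less pull on the regression line
fit_robust <- brm(y ~ x, data = d, family = student())

# compare with the conventional Gaussian likelihood
fit_gauss <- brm(y ~ x, data = d, family = gaussian())
```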
You too can make sideways Gaussian density curves within the tidyverse. Here’s how.
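The core trick (a minimal sketch using only the tidyverse, not the post's full workflow) is to compute the density by hand and map it to the x-axis, so the curve runs vertically:

```r
library(tidyverse)

# evaluate the standard-normal density over a grid, then put density on x
# and the variable on y: the curve comes out sideways
tibble(y = seq(-4, 4, length.out = 200)) %>%
  mutate(d = dnorm(y, mean = 0, sd = 1)) %>%
  ggplot(aes(x = d, y = y)) +
  geom_path()
```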
This is an early draft of my first attempt at explaining the connection between meta-analyses and the Bayesian multilevel model. Enjoy!
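The key idea can be sketched in brms (assuming a hypothetical data frame `d` with effect sizes `yi`, their known standard errors `sei`, and a `study` identifier; these names are placeholders):

```r
library(brms)

# a random-effects meta-analysis is a multilevel model in which each
# study-level effect size is measured with a known standard error,
# supplied via se(), and studies get their own varying intercepts
fit_ma <- brm(
  yi | se(sei) ~ 1 + (1 | study),
  data = d
)
```

The population-level intercept is then the meta-analytic average effect, and the standard deviation of the varying intercepts is the between-study heterogeneity.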
Stop using fee-based meditation apps! Let me tell you why.
The purpose of this post is to give readers a sense of how I used bookdown to make my first ebooks. I propose there are three fundamental skill sets you need basic fluency in before playing with bookdown: (a) R and RStudio, (b) scripts and R Markdown files, and (c) Git and GitHub.