What? One of Tristan Mahr’s recent Twitter threads almost broke my brain.
Wait: when people talk about treating overdispersion by using random effects, they sometimes put a random intercept on each row?
Context In one of my recent Twitter posts, I got pissy and complained about a vague power-analysis statement I saw while reviewing a manuscript submitted to a scientific journal.
Context Someone recently posted a thread on the Stan forums asking how one might make item-characteristic curve (ICC) and item-information curve (IIC) plots for an item-response theory (IRT) model fit with brms.
tl;dr When your MCMC chains are a mess, you might have to set your initial values manually. If you’re a fancy pants, you can use a custom function.
Orientation This post is the second and final installment of a two-part series. In the first post, we explored how one might compute an effect size for two-group experimental data with only \(2\) time points.
Background This post is the first installment of a two-part series. The impetus is a project at work. A colleague had longitudinal data for participants in two experimental groups, which they examined with a multilevel growth model of the kind we’ll explore in the next post.
Purpose In the contemporary longitudinal data analysis literature, 2-timepoint data (a.k.a. pre/post data) get a bad rap. Singer and Willett (2003, p. 10) described 2-timepoint data as only “marginally better” than cross-sectional data and Rogosa et al.
tl;dr When you have a time-varying covariate you’d like to add to a multilevel growth model, it’s important to break that variable into two. One part of the variable will account for within-person variation.
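The within/between split that post describes is usually done with person-mean centering: the between-person part is each participant's mean of the covariate, and the within-person part is each observation's deviation from that mean. Here is a minimal sketch in plain Python, assuming that centering scheme; the participant ids and covariate values are made up for illustration.

```python
# Person-mean centering of a time-varying covariate x:
# between-person part = each person's mean of x,
# within-person part  = each row's deviation from that person's mean.
from collections import defaultdict

rows = [
    ("a", 4.0), ("a", 6.0), ("a", 5.0),  # person "a", three time points
    ("b", 1.0), ("b", 3.0),              # person "b", two time points
]

# Between-person part: each person's mean.
sums = defaultdict(float)
counts = defaultdict(int)
for pid, x in rows:
    sums[pid] += x
    counts[pid] += 1
person_mean = {pid: sums[pid] / counts[pid] for pid in sums}

# Within-person part: deviation of each row from that person's mean.
decomposed = [(pid, x, person_mean[pid], x - person_mean[pid]) for pid, x in rows]

for pid, x, m, d in decomposed:
    print(pid, x, m, d)
```

In the growth model, the person means enter as a level-2 (between-person) predictor and the deviations as a level-1 (within-person) predictor; by construction the deviations sum to zero within each person.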
A colleague reached out to me earlier this week with a plotting question. They had fit a series of Bayesian models, all containing a common parameter of interest. They knew how to plot their focal parameter one model at a time, but were stumped on how to combine the plots across models into a seamless whole.
[edited on January 18, 2021]
tl;dr Sometimes a mathematical result is strikingly contrary to generally held belief even though an obviously valid proof is given. Charles Stein of Stanford University discovered such a paradox in statistics in 1955.