\[
\begin{aligned}
\alpha_{ij} \mid \lambda_{ij}, \tau_{ij}^2 &\sim \lambda_{ij} \, \text{Normal}(0, \tau_{ij}^2) + (1 - \lambda_{ij}) \, \delta_0 \\[0.5em]
\lambda_{ij} &\sim \text{Bern}(\theta) \\[0.5em]
\theta &\sim \text{Beta}(1, 1) \\[0.5em]
\tau_{ij}^2 &\sim \text{Inverse-Gamma}(1/2, s/2) \\[0.5em]
\Sigma &\sim \text{Inverse-Wishart}(S, \nu)
\end{aligned}
\]
- where \(s = 1/2\), \(S = I\), and \(\nu = p\) (a prior-sampling sketch follows this list)
- Benefits:
- Yields posterior over all possible graphs (structure learning)
- Yields graded evidence in favor of or against \(\mathcal{H}_0: \alpha_{ij} = 0\) (see the Bayes-factor sketch after this list)
- Model-averages over all \(2^{p^2}\) candidate graphs: \(p(\alpha_{ij} \mid y) = \sum_{k=1}^{2^{p^2}} p(\alpha_{ij} \mid y, \mathcal{M}_k) \cdot p(\mathcal{M}_k \mid y) \hspace{1em} \text{vs.} \hspace{1em} p(\alpha_{ij} \mid y, \mathcal{M})\)
- Downsides:
- Computationally very expensive: the model space grows as \(2^{p^2}\), so the sampler has an enormous space of graphs to explore
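
A minimal sketch of the prior, assuming the hyperparameters above (\(s = 1/2\)) and plain NumPy; the function name and dimensions are illustrative, not from any package:

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_prior_alphas(p, s=0.5, rng=rng):
    """Draw a p x p matrix of alpha_ij from the spike-and-slab prior:
    theta ~ Beta(1, 1), lambda_ij ~ Bern(theta),
    tau2_ij ~ Inverse-Gamma(1/2, s/2),
    alpha_ij ~ lambda_ij * Normal(0, tau2_ij) + (1 - lambda_ij) * delta_0.
    """
    theta = rng.beta(1.0, 1.0)                 # shared inclusion probability
    lam = rng.binomial(1, theta, size=(p, p))  # edge indicators lambda_ij
    # Inverse-Gamma(a, b) via reciprocal of Gamma(shape=a, scale=1/b)
    tau2 = 1.0 / rng.gamma(shape=0.5, scale=2.0 / s, size=(p, p))
    slab = rng.normal(0.0, np.sqrt(tau2))      # slab: Normal(0, tau2_ij)
    return lam * slab                          # spike: exact zeros (delta_0)

print(np.round(draw_prior_alphas(p=5), 2))    # zeros mark excluded edges
```

Because the spike is a point mass at zero, prior draws contain exact zeros; and because \(\theta\) is itself random, the expected density of the graph is learned rather than fixed.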
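And a sketch of how the graded evidence for \(\mathcal{H}_0: \alpha_{ij} = 0\) falls out of posterior draws of \(\lambda_{ij}\) (assumed to come from some sampler, not shown here): the posterior inclusion probability, set against the marginal prior odds, gives an inclusion Bayes factor. Under the Beta(1, 1) hyperprior the marginal prior inclusion probability is \(E[\theta] = 1/2\), so the prior odds are 1.

```python
import numpy as np

def inclusion_evidence(lambda_draws, prior_prob=0.5):
    """Posterior inclusion probability and inclusion Bayes factor for one edge.

    lambda_draws : 0/1 posterior draws of lambda_ij from the sampler
    prior_prob   : marginal prior P(lambda_ij = 1); E[theta] = 1/2 under Beta(1, 1)
    """
    pip = lambda_draws.mean()                     # posterior inclusion probability
    posterior_odds = pip / (1.0 - pip)
    prior_odds = prior_prob / (1.0 - prior_prob)  # = 1 for prior_prob = 1/2
    return pip, posterior_odds / prior_odds       # BF_10; BF_01 is its reciprocal

# Illustrative draws for a single edge (stand-in for real MCMC output):
rng = np.random.default_rng(1)
draws = rng.binomial(1, 0.85, size=4000)
pip, bf_10 = inclusion_evidence(draws)
print(f"PIP = {pip:.3f}, inclusion BF_10 = {bf_10:.2f}")
```

The model-averaged posterior \(p(\alpha_{ij} \mid y)\) from the last benefit comes from the same output: pooling the \(\alpha_{ij}\) draws across all visited graphs (zeros included) realizes the sum over \(\mathcal{M}_k\), with each graph weighted by how often the sampler visits it.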