This package provides functions for diagnostic and prognostic meta-analyses. It estimates univariate, bivariate and multivariate models, and allows the aggregation of previously published prediction models with new data.
This package predicts discrete class labels from a selected subset of high-dimensional features, such as gene expression levels. The data are modeled with hierarchical Bayesian models using heavy-tailed t distributions as priors. When a large number of features are available, one may wish to use only a subset, typically the features most strongly correlated with the response in the training cases. Such a feature selection procedure, however, biases subsequent inference, since selection exaggerates the apparent relationship between the response and the selected features. This package provides a way to avoid this bias and yield better-calibrated predictions for future cases when the F-statistic is used to select features.
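The selection bias described above is easy to demonstrate with a short simulation (a purely illustrative Python sketch, not part of the package): even when no feature carries any signal, the features that screen best on the training data look strongly associated with the response.

```python
import numpy as np

# Illustrative sketch: pure-noise features, yet the top-screened ones
# appear predictive. Absolute correlation is used here as a monotone
# stand-in for the two-group F-statistic mentioned in the description.
rng = np.random.default_rng(0)
n, p, k = 50, 2000, 5              # cases, features, features kept

X = rng.standard_normal((n, p))    # features carry no signal at all
y = rng.standard_normal(n)         # response independent of X

# Sample correlation of each feature with the response.
r = (X - X.mean(0)).T @ (y - y.mean()) / (n * X.std(0) * y.std())

top = np.argsort(np.abs(r))[-k:]   # keep the k "strongest" features

print(np.abs(r).mean())            # typical |r| is small
print(np.abs(r[top]).mean())       # selected |r| is much larger
```

The selected features' correlations are several times the average, which is exactly the exaggeration the package corrects for.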
Bayesian variable selection, model choice, and regularized estimation for (spatial) generalized additive mixed regression models via SSVS with spike-and-slab priors.
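The spike-and-slab idea behind SSVS can be pictured with a minimal sketch (illustrative only; the package's actual sampler and hyperparameters differ): each coefficient comes from either a narrow "spike" concentrated at zero or a diffuse "slab", governed by a binary inclusion indicator.

```python
import numpy as np

# Hypothetical prior draw under a spike-and-slab mixture: the
# indicator gamma flags inclusion; excluded coefficients sit in the
# near-zero spike, included ones in the wide slab.
rng = np.random.default_rng(3)
p, w = 10000, 0.2                        # coefficients, inclusion prob

gamma = rng.random(p) < w                # inclusion indicators
beta = np.where(gamma,
                rng.normal(0.0, 5.0, p),     # slab: genuine effects
                rng.normal(0.0, 0.01, p))    # spike: effectively zero

print(np.abs(beta[~gamma]).max())        # spike draws are tiny
print(np.std(beta[gamma]))               # slab draws are spread out
```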
Bayesian quantile regression using the asymmetric Laplace distribution; both continuous and binary dependent variables are supported. The package implements the methods of Yu & Moyeed (2001), Benoit & Van den Poel (2012) and Al-Hamzawi, Yu & Benoit (2012). To speed up the calculations, the Markov chain Monte Carlo core of each algorithm is programmed in Fortran and called from R.
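The link this approach exploits can be checked in a few lines (an illustrative Python sketch; the function names are mine): maximizing an asymmetric-Laplace likelihood with skewness tau is equivalent to minimizing the quantile-regression "check" loss rho_tau, so the log-density differs from the negative check loss only by a constant.

```python
import numpy as np

def check_loss(u, tau):
    # Quantile-regression check loss rho_tau(u) = u * (tau - 1{u < 0})
    return u * (tau - (u < 0))

def ald_logpdf(u, tau, sigma=1.0):
    # Asymmetric Laplace log-density, location 0, scale sigma
    return np.log(tau * (1 - tau) / sigma) - check_loss(u, tau) / sigma

u = np.linspace(-3, 3, 7)
tau = 0.25

# The sum is the same constant at every u: the likelihood and the
# check loss induce the same optimization problem.
diff = ald_logpdf(u, tau) + check_loss(u, tau)
print(np.allclose(diff, diff[0]))
```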
A package providing several functions related to the Potts model.
The bayespack package provides an R interface to the Fortran BAYESPACK integration routines written by Alan Genz. Given an unnormalized posterior distribution, the BAYESPACK routines approximate the normalization constant and the mean and covariance of the posterior. Posterior means of additional functions of the parameters can be calculated as well.
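What the routines compute can be mimicked in one dimension with plain quadrature (a toy Python sketch, not the Fortran code, which uses far more sophisticated integration rules): integrate an unnormalized posterior to recover its normalizing constant and mean.

```python
import numpy as np

# Toy unnormalized posterior: an unscaled N(1, 0.5^2) density, so the
# true normalizing constant is sqrt(2*pi)*0.5 and the true mean is 1.
def unnorm_post(theta):
    return np.exp(-0.5 * ((theta - 1.0) / 0.5) ** 2)

theta = np.linspace(-4.0, 6.0, 100001)
f = unnorm_post(theta)
dx = theta[1] - theta[0]

Z = f.sum() * dx                      # normalizing constant
mean = (theta * f).sum() * dx / Z     # posterior mean

print(Z, mean)
```

Posterior means of further functions of the parameters follow the same pattern: integrate g(theta) against the unnormalized posterior and divide by Z.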
Objective Bayes variable selection in linear regression and probit models.
The package contains two variants of Bayesian Bootstrap Predictive Mean Matching for multiply imputing missing data. The first variant is a variable-by-variable imputation that combines sequential regression with Predictive Mean Matching (PMM) and has been extended to unordered categorical data. The Bayesian Bootstrap allows approximately proper multiple imputations to be generated. The second variant is also based on PMM, but focuses on imputing several variables at the same time. This variant is recommended when the missing-data pattern resembles a data fusion situation, or any other missing-by-design pattern in which several variables share an identical missing-data pattern. Both variants can also be run as single-imputation versions when the analysis objective is purely descriptive.
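The PMM core that both variants build on can be sketched for a single variable (an illustrative Python toy, without the Bayesian bootstrap or the sequential/multivariate machinery): regress the variable on its predictors using the observed cases, then replace each missing value with the observed value whose predicted mean is closest.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=0.5, size=n)   # true values, for checking

miss = rng.random(n) < 0.3            # ~30% of y set missing
obs = ~miss
y_obs = y.copy()
y_obs[miss] = np.nan

# 1. Regress y on x using observed cases only.
b1, b0 = np.polyfit(x[obs], y_obs[obs], 1)
yhat = b0 + b1 * x                    # predicted means for all cases

# 2. For each missing case, copy the observed donor whose predicted
#    mean is closest (1-nearest-donor matching).
y_imp = y_obs.copy()
donors = np.where(obs)[0]
for i in np.where(miss)[0]:
    j = donors[np.argmin(np.abs(yhat[donors] - yhat[i]))]
    y_imp[i] = y_obs[j]

print(np.isnan(y_imp).sum())          # no missing values remain
```

Because every imputation is a real observed value, PMM preserves the marginal distribution better than filling in the regression predictions themselves.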
Laplace's Demon is a complete environment for Bayesian inference.
Bayesian bandwidth estimation for Nadaraya-Watson type multivariate kernel regression with Gaussian error density.
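The estimator whose bandwidths are being chosen is a kernel-weighted local average of the responses. A one-dimensional sketch (illustrative Python; the bandwidth h is fixed here, whereas the package treats it as a parameter with a posterior):

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    # Gaussian-kernel weights of each data point for each query point,
    # then the weighted average of the responses.
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(2)
x = rng.uniform(-3, 3, 400)
y = np.sin(x) + rng.normal(scale=0.2, size=400)

grid = np.linspace(-2, 2, 9)
fit = nadaraya_watson(grid, x, y, h=0.3)
print(np.abs(fit - np.sin(grid)).max())   # close to the true curve
```

Too small an h makes the fit noisy, too large an h oversmooths; the Bayesian approach puts a posterior over h instead of picking a single value by cross-validation.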