Selection properties of Type II maximum likelihood (empirical Bayes) linear models with individual variance components for predictors

Research output: Contribution to journal › Article › Academic › peer-review

10 Citations (Scopus)

Abstract

Maximum likelihood (ML) estimation in the linear model overfits when the number of predictors (M) exceeds the number of objects (N). One possible solution is the relevance vector machine (RVM), a form of automatic relevance determination that has gained popularity in the pattern recognition and machine learning community through the well-known textbook of Bishop (2006). RVM assigns an individual precision to the weight of each predictor; these precisions are then estimated by maximizing the marginal likelihood (type II ML, or empirical Bayes). We investigated the selection properties of RVM both analytically and experimentally in a regression setting. We show analytically that RVM selects a predictor when its absolute z-ratio (|least-squares estimate|/standard error) exceeds 1 in the case of orthogonal predictors and, for M = 2, that this still holds for correlated predictors when the other z-ratio is large. RVM selects the stronger of two highly correlated predictors. In experiments with real and simulated data, RVM is outcompeted by other popular regularization methods (LASSO and/or PLS) in terms of prediction performance. We conclude that type II ML is not the general answer in high-dimensional prediction problems. In extensions of RVM aimed at stronger selection, improper priors (based on the inverse gamma family) have been assigned to the inverse precisions (variances), with parameters estimated by penalized marginal likelihood. We critically assess this approach and suggest a proper variance prior related to the Beta distribution that gives similar selection and shrinkage properties and allows a fully Bayesian treatment.
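The |z-ratio| > 1 selection rule for orthogonal predictors can be illustrated with a minimal NumPy sketch of type II ML with one precision per predictor. This is not the paper's code: it uses MacKay-style fixed-point updates for the per-predictor precisions, assumes the noise variance is known, and constructs a noise-free toy response so that each z-ratio equals the chosen coefficient (with unit noise variance, the standard error of each orthonormal predictor's least-squares estimate is 1). The function name and pruning threshold are illustrative choices.

```python
import numpy as np

def rvm_type2_ml(X, y, noise_var, n_iter=500, prune=1e6):
    """Sketch of type II ML (empirical Bayes) with an individual precision
    alpha_i per predictor weight, via MacKay-style fixed-point updates.
    Noise variance is assumed known here for simplicity."""
    N, M = X.shape
    beta = 1.0 / noise_var                 # noise precision
    alpha = np.ones(M)                     # per-predictor weight precisions
    active = np.ones(M, dtype=bool)
    for _ in range(n_iter):
        Xa = X[:, active]
        A = np.diag(alpha[active])
        Sigma = np.linalg.inv(beta * Xa.T @ Xa + A)   # posterior covariance
        mu = beta * Sigma @ Xa.T @ y                  # posterior mean
        gamma = 1.0 - alpha[active] * np.diag(Sigma)  # well-determinedness
        alpha[active] = gamma / mu**2                 # fixed-point update
        active = alpha < prune     # diverging precision => predictor pruned
    w = np.zeros(M)
    Xa = X[:, active]
    Sigma = np.linalg.inv(beta * Xa.T @ Xa + np.diag(alpha[active]))
    w[active] = beta * Sigma @ Xa.T @ y
    return w, active

# Toy setting: orthonormal predictors, noise-free response, unit noise
# variance assumed by the model, so z_i equals the coefficient b_i.
rng = np.random.default_rng(0)
X, _ = np.linalg.qr(rng.standard_normal((50, 4)))  # orthonormal columns
b = np.array([3.0, 0.5, 2.0, 0.2])
y = X @ b
z = X.T @ y                                        # z-ratios (SE = 1 here)
w, active = rvm_type2_ml(X, y, noise_var=1.0)
print("|z|-ratios:", np.round(np.abs(z), 2))
print("selected:  ", active)
```

In this decoupled orthogonal case the updates have a closed-form fixed point: a predictor is kept iff z_i² > 1, and its weight shrinks to b_i − 1/b_i, so predictors 1 and 3 (|z| = 0.5 and 0.2) are pruned while predictors 0 and 2 survive with shrunken weights.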
Original language: English
Pages (from-to): 1205-1212
Journal: Pattern Recognition Letters
Volume: 33
Issue number: 9
Publication status: Published - 2012

Keywords

  • gene-expression data
  • variable selection
  • elastic net
  • regression
  • regularization
  • shrinkage
  • chemometrics
  • networks
  • genome
  • lasso
