Dr. Nathan Wycoff (Massive Data Institute, Georgetown Univ.)

10/10/2022

Abstract: 

ℓ1 penalties have the useful property of inducing sparsity in the optimization problems they define without rendering those problems intractable, but the cost is bias: the lasso and related methods tend to underestimate the magnitudes of coefficients. Global-local priors such as the horseshoe, on the other hand, allow the regularization strength to be learned parameter by parameter. In this presentation, we'll discuss the marriage of such adaptive penalties with nonsmooth penalties, beginning with the novel aspects of the implied optimization problem. We'll then introduce the Sparse Bayesian Lasso, which brings the qualitative properties of the lasso to variational Bayesian inference. We'll close with future work on low-rank penalties for matrices as well as applications to forced human migration in Iraq and Ukraine.
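As a concrete illustration of the bias mentioned above (notation introduced here rather than drawn from the talk): with an orthonormal design matrix X, the lasso objective

    \min_{\beta} \; \tfrac{1}{2}\|y - X\beta\|_2^2 + \lambda \|\beta\|_1

admits the closed-form soft-thresholding solution

    \hat{\beta}_j = \operatorname{sign}(\hat{\beta}_j^{\mathrm{OLS}}) \, \max\!\left(|\hat{\beta}_j^{\mathrm{OLS}}| - \lambda, \; 0\right),

so every surviving coefficient is pulled toward zero by the full amount λ. Global-local priors like the horseshoe can be viewed as replacing the single global λ with per-coefficient scales λ_j learned from the data, which is the adaptivity described above.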