Abstract: Stein's method is a powerful technique for deriving fundamental theoretical results on approximating and bounding distances between probability measures, such as the central limit theorem. Recently, it was found that the key ideas in Stein's method, despite being designed as a purely theoretical technique, can serve as a basis for practical and scalable computational methods for learning and using large-scale, intractable probabilistic models. We will give an overview of these developments of Stein's method in machine learning, focusing on two important tools: 1) kernel Stein discrepancy (KSD), which provides a computational tool for approximating and evaluating (via goodness-of-fit tests) distributions with intractable normalization constants, and 2) Stein variational gradient descent (SVGD), a deterministic sampling algorithm for finding concise particle-based approximations to intractable distributions that combines the advantages of Monte Carlo, variational inference, and numerical quadrature methods.
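To make the SVGD idea in the abstract concrete, here is a minimal sketch of the particle update in NumPy, assuming an RBF kernel with the common median-bandwidth heuristic and a standard-normal target; all names and settings here are illustrative choices, not part of the talk itself:

```python
import numpy as np

def rbf_kernel(X, h):
    """RBF kernel matrix K[j, i] = exp(-||x_j - x_i||^2 / h) and its
    gradient w.r.t. the first argument x_j, for all pairs (j, i)."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / h)
    gradK = (2.0 / h) * (X[None, :, :] - X[:, None, :]) * K[:, :, None]
    return K, gradK

def svgd_step(X, score, eps=0.1):
    """One SVGD update: x_i <- x_i + eps * (1/n) * sum_j
    [ k(x_j, x_i) * score(x_j) + grad_{x_j} k(x_j, x_i) ],
    where score(x) = grad log p(x) of the (unnormalized) target."""
    n = X.shape[0]
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    h = np.median(d2) / np.log(n + 1) + 1e-8  # median heuristic bandwidth
    K, gradK = rbf_kernel(X, h)
    # Attractive term (kernel-smoothed score) plus repulsive term
    # (kernel gradient), which keeps the particles spread out.
    phi = (K @ score(X) + gradK.sum(axis=0)) / n
    return X + eps * phi

# Illustrative run: target is a standard normal, so score(x) = -x.
rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=1.0, size=(100, 1))  # badly initialized particles
for _ in range(500):
    X = svgd_step(X, lambda x: -x)
```

After the loop, the particle cloud should have drifted from its initial location (mean 5) toward the target's mean 0 with spread near 1; only the score function (gradient of the log-density) is needed, so the normalization constant of the target never enters the update, which is the property the abstract highlights.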
Bio: Qiang Liu is an assistant professor of computer science at UT Austin. He works at the intersection of statistical inference and machine learning & artificial intelligence.