A brief overview of statistical learning theory
Lecturer: Michael Muehlebach (MPI-IS)
Time: – (Zurich time)
Notes: Click here to download!
Recording: Click here to view! (only for ETH members)
Abstract: The lecture will summarize the main ideas of statistical learning theory. We will revisit the standard generalization bounds that characterize the difference between true and empirical risk. We will critically discuss the underlying assumptions and show examples where these are violated. We will also discuss the dependence of the bounds on the number of parameters, which is important for understanding the success of overparametrization in today’s machine learning practice.
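As a concrete illustration of the kind of bound the abstract refers to (a textbook example, not necessarily the one covered in the lecture): for a finite hypothesis class $\mathcal{H}$ and a loss bounded in $[0,1]$, Hoeffding's inequality combined with a union bound gives, with probability at least $1-\delta$ over an i.i.d. sample of size $n$,

$$
R(h) \;\le\; \widehat{R}_n(h) \;+\; \sqrt{\frac{\ln|\mathcal{H}| + \ln(1/\delta)}{2n}}
\qquad \text{for all } h \in \mathcal{H},
$$

where $R(h)$ is the true risk and $\widehat{R}_n(h)$ the empirical risk. The $\ln|\mathcal{H}|$ term is the simplest instance of the dependence on model complexity (replaced by VC dimension or Rademacher complexity for infinite classes), which is precisely the dependence whose role under overparametrization the lecture discusses.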
- von Luxburg, U., & Schölkopf, B. (2011). Statistical Learning Theory: Models, Concepts, and Results. In: Handbook of the History of Logic, Volume 10: Inductive Logic (pp. 651–706). Amsterdam: Elsevier. DOI: 10.1016/b978-0-444-52936-7.50016-1.