On this page, you will find an overview of the lectures and the materials that we plan to cover in the seminar. For each week, we also list recommended preparatory reading to complete before the lecture.

In case you want to add the lectures to your calendar, we are providing an ICS file to which you can subscribe (e.g., File → New Calendar Subscription in the default Calendar app on macOS).

— Lecture 0 —

# Introduction

Lecturer: Bernhard Schölkopf (MPI-IS), Michael Muehlebach (MPI-IS)
Date: September 22, 2021
Time: 17:30–18:00 (Zurich time)

#### Abstract:

Brief meeting, discussion of course schedule, exam.

— Lecture 1 —

# A brief overview of statistical learning theory

#### Abstract:

The lecture will summarize the main ideas of statistical learning theory. We will revisit the standard generalization bounds that characterize the difference between true and empirical risk. We will critically discuss the underlying assumptions and show examples where these are violated. We will also discuss the dependence of the bounds on the number of parameters, which is important for understanding the success of overparametrization in today’s machine learning practice.

• von Luxburg, U., & Schölkopf, B. (2011). Statistical Learning Theory: Models, Concepts, and Results. In: Handbook of the History of Logic, Volume 10: Inductive Logic (pp. 651–706). Amsterdam: Elsevier. DOI: 10.1016/b978-0-444-52936-7.50016-1.
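To make the flavor of these bounds concrete, here is a minimal sketch (not part of the recommended reading) of the classic Hoeffding-based uniform-convergence bound for a finite hypothesis class: with probability at least 1 − δ, true and empirical risk differ by at most √(log(2|H|/δ)/(2n)) for every hypothesis simultaneously. The toy dataset and threshold class are invented for the example.

```python
import math
import random

def empirical_risk(h, data):
    """Fraction of examples h misclassifies (0-1 loss)."""
    return sum(1 for x, y in data if h(x) != y) / len(data)

def hoeffding_bound(num_hypotheses, n, delta):
    """With probability >= 1 - delta, |true risk - empirical risk| is at most
    this quantity simultaneously for every hypothesis in a finite class."""
    return math.sqrt(math.log(2 * num_hypotheses / delta) / (2 * n))

# Toy task: labels come from a threshold at 0.5; the class has 101 thresholds.
random.seed(0)
data = [(x, int(x > 0.5)) for x in (random.random() for _ in range(1000))]
hypotheses = [lambda x, t=t: int(x > t) for t in (i / 100 for i in range(101))]

best = min(hypotheses, key=lambda h: empirical_risk(h, data))
eps = hoeffding_bound(len(hypotheses), len(data), delta=0.05)
print(f"empirical risk of best hypothesis: {empirical_risk(best, data):.3f}")
print(f"uniform-convergence bound eps:     {eps:.3f}")
```

Note that the bound depends on the size of the class, not on which hypothesis minimizes the empirical risk; the lecture discusses when such worst-case reasoning becomes loose, e.g. under overparametrization.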

— Lecture 2 —

# A brief overview of game theory

#### Abstract:

The lecture will summarize key ideas in game theory. Game theory provides a means for modelling interactions between machine learning algorithms and their environment. We will revisit zero-sum games and von Neumann’s minimax theorem and introduce the concept of Nash equilibria. We will then discuss repeated games and adaptive decision-making algorithms (follow the leader, follow the perturbed leader, multiplicative weights).

• Karlin, A. R., & Peres, Y. (2017). Game theory, alive. American Mathematical Society. ISBN: 978-1-4704-1982-0. PDF version available online. [Chapter 2: Section 2.1–2.3; Chapter 18: Section 18.1–18.3]
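As a small taste of the connection between adaptive decision-making and the minimax theorem, here is a sketch (not course material) of both players running multiplicative weights in a 2×2 zero-sum game; the payoff matrix and step size are invented for the example. In zero-sum games, the time-averaged strategies of no-regret players approach a Nash equilibrium.

```python
import math

# Zero-sum game with row-player payoffs A; its minimax solution is
# p* = (3/7, 4/7) for the row player, q* = (2/7, 5/7) for the column
# player, with game value 1/7.
A = [[3, -1],
     [-2, 1]]

def normalize(w):
    s = sum(w)
    return [wi / s for wi in w]

def multiplicative_weights(A, rounds=20000, eta=0.002):
    """Both players run multiplicative weights against each other; the
    time-averaged mixed strategies approximate a Nash equilibrium."""
    n, m = len(A), len(A[0])
    wr, wc = [1.0] * n, [1.0] * m
    avg_r, avg_c = [0.0] * n, [0.0] * m
    for _ in range(rounds):
        p, q = normalize(wr), normalize(wc)
        avg_r = [a + pi / rounds for a, pi in zip(avg_r, p)]
        avg_c = [a + qi / rounds for a, qi in zip(avg_c, q)]
        # Expected payoff of each pure action against the opponent's mix.
        row_pay = [sum(A[i][j] * q[j] for j in range(m)) for i in range(n)]
        col_pay = [sum(A[i][j] * p[i] for i in range(n)) for j in range(m)]
        wr = [w * math.exp(eta * g) for w, g in zip(wr, row_pay)]   # row maximizes
        wc = [w * math.exp(-eta * g) for w, g in zip(wc, col_pay)]  # column minimizes
    return avg_r, avg_c

p_bar, q_bar = multiplicative_weights(A)
print("row:", [round(x, 3) for x in p_bar], "column:", [round(x, 3) for x in q_bar])
```

The individual iterates typically cycle around the equilibrium; it is the averages that converge, which is one way to prove the minimax theorem.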

— Lecture 3 —

# Introduction to Causality

#### Abstract:

The two fields of machine learning and graphical causality arose and developed separately. However, there is now cross-pollination and increasing interest in both fields to benefit from the advances of the other. In this lecture, we review fundamental concepts of causal inference and relate them to crucial open problems of machine learning, including transfer and generalization, thereby assaying how causality can contribute to modern machine learning research. This also applies in the opposite direction: we note that most work in causality starts from the premise that the causal variables are given. A central problem for AI and causality is, thus, causal representation learning, the discovery of high-level causal variables from low-level observations.

• Schölkopf, B. et al. (2021). Towards causal representation learning. arXiv:2102.11107.

— Lecture 4 —

# New results in adaptive decision-making

#### Abstract:

This lecture will provide an introduction to (non-statistical) online learning and multi-armed bandits. We will discuss the multiplicative weights algorithm Hedge and its partial-information counterpart EXP3, as well as some applications to learning in games.

• Hazan, E. (2019). Introduction to Online Convex Optimization. arXiv:1909.05207v1. [Chapter 6.2]
• Sessa, P. G., et al. (2019). No-Regret Learning in Unknown Games with Correlated Payoffs. Available online.
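The move from full to partial information can be sketched in a few lines (again, an illustration rather than course material): EXP3 is Hedge run on importance-weighted reward estimates, built only from the reward of the arm actually pulled. The arm means, exploration rate, and horizon below are invented for the example.

```python
import math
import random

def exp3(reward_fns, rounds=5000, gamma=0.1, seed=0):
    """EXP3: multiplicative weights on importance-weighted reward estimates.
    Only the pulled arm's reward (in [0, 1]) is observed each round."""
    rng = random.Random(seed)
    k = len(reward_fns)
    w = [1.0] * k
    pulls = [0] * k
    for _ in range(rounds):
        total = sum(w)
        # Mix the weight distribution with uniform exploration of rate gamma.
        p = [(1 - gamma) * wi / total + gamma / k for wi in w]
        arm = rng.choices(range(k), weights=p)[0]
        r = reward_fns[arm](rng)
        # Importance-weighted estimate r / p[arm] keeps the update unbiased.
        w[arm] *= math.exp(gamma * (r / p[arm]) / k)
        pulls[arm] += 1
    return pulls

# Three Bernoulli arms with means 0.2, 0.5, 0.8 (arm 2 is best).
arms = [lambda rng, m=m: float(rng.random() < m) for m in (0.2, 0.5, 0.8)]
pulls = exp3(arms)
print("pulls per arm:", pulls)
```

The uniform-exploration floor guarantees every arm keeps being sampled, which is what makes the importance-weighted estimates well behaved.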

— Lecture 5 —

# A brief overview of dynamics and control

Lecturer: Michael Muehlebach (MPI-IS)
Date: October 27, 2021
Time: 16:15–18:00 (Zurich time)

#### Abstract:

The lecture will summarize the basics of dynamical systems and control theory. We will discuss discrete-time and continuous-time dynamical systems, introduce the concept of equilibria and Lyapunov stability. An important aspect of the lecture will be to emphasize the difference between noise and structural (epistemic) uncertainty and show how uncertainty can be reduced with feedback. We will also discuss connections to game theory and generalization (Lecture 1).

• Strogatz, S. H. (2015). Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering. 2nd edition. Boca Raton: CRC Press. DOI: 10.1201/9780429492563. [Chapter 2]
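To fix intuition for equilibria and stability, here is a minimal simulation sketch (not course material; the matrices are invented examples). For a discrete-time linear system x_{k+1} = A x_k, the origin is an equilibrium, and it is asymptotically stable exactly when all eigenvalues of A lie strictly inside the unit circle; equivalently, a quadratic Lyapunov function exists.

```python
# Discrete-time linear system x_{k+1} = A x_k, simulated in pure Python.

def step(A, x):
    """One step of x_{k+1} = A x_k (matrix-vector product)."""
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def norm(x):
    return sum(xi * xi for xi in x) ** 0.5

def simulate(A, x0, steps=50):
    x = x0
    for _ in range(steps):
        x = step(A, x)
    return x

# Eigenvalues of A_stable have modulus sqrt(det) = sqrt(0.32) < 1.
A_stable = [[0.5, 0.2], [-0.1, 0.6]]
# A_unstable has eigenvalue 1.1 outside the unit circle.
A_unstable = [[1.1, 0.0], [0.0, 0.9]]

x_s = simulate(A_stable, [1.0, 1.0])
x_u = simulate(A_unstable, [1.0, 1.0])
print(f"stable system:   |x_50| = {norm(x_s):.2e}")
print(f"unstable system: |x_50| = {norm(x_u):.2e}")
```

The lecture goes well beyond this linear picture, in particular to the distinction between noise and structural (epistemic) uncertainty and to how feedback reduces the latter.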

— Lecture 6 —

# TBA

Lecturer: Chris Russell (Amazon)
Date: November 3, 2021
Time: 16:15–18:00 (Zurich time)

TBA

— Lecture 7 —

# TBA

Lecturer: Constantinos Daskalakis (MIT)
Date: November 10, 2021
Time: 16:15–18:00 (Zurich time)

TBA

— Lecture 8 —

# TBA

Lecturer: Nathan Kutz (University of Washington)
Date: November 17, 2021
Time: 16:15–18:00 (Zurich time)

TBA

— Lecture 9 —

# TBA

Lecturer: Georg Martius (MPI-IS)
Date: November 24, 2021
Time: 16:15–18:00 (Zurich time)

TBA

— Lecture 10 —

# TBA

Lecturer: Dominik Janzing (Amazon)
Date: December 1, 2021
Time: 16:15–18:00 (Zurich time)

TBA

— Lecture 11 —

# TBA

Lecturer: Lester Mackey (Stanford, Microsoft Research New England)
Date: December 8, 2021
Time: 16:15–18:00 (Zurich time)

TBA

— Lecture 12 —

# TBA

Lecturer: Manuel Gomez Rodriguez (MPI-SWS)
Date: December 15, 2021
Time: 16:15–18:00 (Zurich time)

TBA