Today we’re talking to Matt Levine. Matt is a PhD student in computing and mathematical sciences at Caltech, and he focuses on improving the prediction and inference of physical systems by blending together both mechanistic modeling and machine learning. This episode is one of my favorites: we go p...
Today we’re chatting with Junhong Shen, a PhD student at Carnegie Mellon. Junhong and her team are working on the generalizability of NAS algorithms across a diverse set of tasks. Today we'll be talking about DASH, a NAS algorithm that places task diversity at its center. In order to implem...
Today we're speaking with Marius Lindauer, and it is certainly one of my favorite episodes! As you’ll hear, Marius is full of ideas for where AutoML systems can and should go. These ideas are crystallized in a blog post, published here: https://www.automl.org/rethinking-automl-advancing-from-a-...
Today Ankush Garg is talking to Mehdi Bahrami about his recent project: BERT-Sort. BERT-Sort is an example of how large language models can add useful context to tabular datasets and to AutoML systems. Mehdi is a Member of Research Staff at Fujitsu and, as he describes, he began using AutoML syste...
In today's episode, we’re talking to Lars Kotthoff about the fascinating origin story of AutoML (as he sees it), and how it emerged from the SAT community. In talking to many of you, it became clear that this origin story is one that a lot of people have some vague sense about, but not a ver...
Today we’re talking to Cedric Kulbach about online learning, the challenges of doing it properly, why it is so promising, how it’s connected to evolutionary strategies, and recent advances in the field that can help to unlock these promises. We then discuss the close connection between online learn...