There are so many great foundation models in many different domains - but how do you choose one for your specific problem? And how can you best finetune it? Sebastian Pineda has an answer: Quicktune can help select the best model and tune it for specific use cases. Listen to find out when this will ...
Designing algorithms by hand is hard, so Chris Lu and Matthew Jackson talk about how to meta-learn them for reinforcement learning. Many of the concepts in this episode are relevant to meta-learning approaches as a whole, though: "how expressive can we be and still perform well?", "how can we get...
AutoML can be a tool for good, but there are pitfalls along the way. Rahul Sharma and David Selby tell us about how AutoML systems can be used to give us false impressions about explainability metrics of ML systems - maliciously, but also by accident. While this episode isn't talking about a new exc...
In today's episode, we're introducing the very special Theresa Eimer to the show. Theresa will be taking over the hosting of many of the future episodes. Theresa has already recorded multiple episodes and we are stoked to air those shortly. We also spend a few moments explaining my relative absence i...
Today we're talking with Nick Erickson from AutoGluon. We discuss AutoGluon's fascinating origin story, its unique point of view, the science and engineering behind some of its unique contributions, Amazon's Machine Learning University, AutoGluon's multi-layer stack ensembler in all its detail, their...
Today we're talking with Joseph Giovanelli about his work on integrating logic and argumentation into AutoML systems. Joseph is a PhD student at the University of Bologna. More recently, he was in Hannover working on ethics and fairness with Marius’ team. The paper he published presents his framework, ...