Designing algorithms by hand is hard, so Chris Lu and Matthew Jackson talk about how to meta-learn them for reinforcement learning. Many of the concepts in this episode apply to meta-learning approaches as a whole, though: "how expressive can we be and still perform well?", "how can we get...
AutoML can be a tool for good, but there are pitfalls along the way. Rahul Sharma and David Selby tell us how AutoML systems can be used to give us false impressions about explainability metrics of ML systems - maliciously, but also by accident. While this episode isn't talking about a new exc...
In today's episode, we're introducing the very special Theresa Eimer to the show. Theresa will be taking over the hosting of many of the future episodes. Theresa has already recorded multiple episodes and we are stoked to air those shortly. We also spend a few moments explaining my relative absence i...
Today we're talking with Nick Erickson from AutoGluon. We discuss AutoGluon's fascinating origin story, its unique point of view, the science and engineering behind some of its unique contributions, Amazon's Machine Learning University, AutoGluon's multi-layer stack ensembler in all its detail, their...
Today we're talking with Joseph Giovanelli about his work on integrating logic and argumentation into AutoML systems. Joseph is a PhD student at the University of Bologna. More recently, he was in Hannover working on ethics and fairness with Marius’ team. The paper he published presents his framework, ...
Today we're talking with Caitlin Owen, a post-doc at the University of Otago, about her work on error decomposition. She recently published a paper titled "Towards Explainable AutoML Using Error Decomposition" about how a more granular view of the components of error can lead to the construction of bette...