Today we're talking with Nick Erickson from AutoGluon.
We discuss:
- AutoGluon's fascinating origin story and its unique point of view
- the science and engineering behind some of its unique contributions
- Amazon's Machine Learning University
- AutoGluon's multi-layer stack ensembler in all its detail
- their feature preprocessing pipeline and feature type inference
- their adaptive approach to early stopping
- controlling for inference speed
- the different multi-modal architectures
- the ML culture at Amazon
- the unique challenges of time series
- the role of competitions
- the decision to reject hyperparameter optimization
- benchmarking in AutoML
- what the research community can do to help industry along
- AutoGluon's relationship with pre-trained tabular models like TabPFN
- whether the rise of LLMs is likely to affect AutoGluon
- what's stopping more people from adopting AutoML solutions
- AutoGluon Cloud
- the dream and reality of an auto-benchmarking tool
- how to contribute to the project
- and many, many other topics
This was one of my favorite episodes. Nick, thank you for joining!
You can follow Nick on Twitter here: @innixma.
And you can follow AutoGluon on GitHub here: https://github.com/autogluon.
Some more resources on AutoGluon:
- The original AutoGluon Paper: "AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data": https://arxiv.org/abs/2003.06505
- AutoML Fall School 2022 AutoGluon presentation, a good way to understand the philosophy behind AutoGluon: https://www.youtube.com/watch?v=VAAITEds-28
- AutoGluon multi-modal paper: https://dl.acm.org/doi/abs/10.1145/3534678.3542616