Triple Descent and a Fine-Grained Bias-Variance Decomposition
Speaker: Jeffrey Pennington, Google Brain
Title: Triple Descent and a Fine-Grained Bias-Variance Decomposition
Abstract: Classical learning theory suggests that the optimal generalization performance of a machine learning model should occur at an intermediate model complexity, striking a balance between simpler models that exhibit high bias and more complex models that exhibit high variance of the […]
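The abstract's notion of balancing bias against variance can be made concrete with a small Monte Carlo experiment. The sketch below is illustrative only and not from the talk: it fits polynomials of increasing degree (a stand-in for model complexity) to repeated noisy draws of a fixed target, then estimates the squared bias and the variance of the resulting predictors. All names and parameter values are invented for the example.

```python
# Minimal sketch (not from the talk) of the classical bias-variance
# decomposition: as polynomial degree grows, squared bias falls while
# variance across independent training sets rises.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 40)      # fixed design points
f = np.sin(np.pi * x)               # true target function
sigma = 0.3                         # label-noise scale
n_trials = 200                      # independent training sets

for degree in (1, 3, 9, 15):
    preds = np.empty((n_trials, x.size))
    for t in range(n_trials):
        y = f + sigma * rng.standard_normal(x.size)  # fresh noisy labels
        coeffs = np.polyfit(x, y, degree)            # least-squares fit
        preds[t] = np.polyval(coeffs, x)
    mean_pred = preds.mean(axis=0)
    bias2 = np.mean((mean_pred - f) ** 2)   # squared bias of avg predictor
    variance = np.mean(preds.var(axis=0))   # variance over training sets
    print(f"degree={degree:2d}  bias^2={bias2:.4f}  variance={variance:.4f}")
```

Running it shows the textbook picture the abstract starts from: low-degree fits have large bias and small variance, high-degree fits the reverse, with expected error minimized in between.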
Generalization bounds for rational self-supervised learning algorithms, or “Understanding generalization requires rethinking deep learning”
https://youtu.be/aVB1qFPeEmo
Speakers: Boaz Barak and Yamini Bansal, Harvard University Dept. of Computer Science
Title: Generalization bounds for rational self-supervised learning algorithms, or “Understanding generalization requires rethinking deep learning”
Abstract: The generalization gap of a learning algorithm is the expected difference between its performance on the training data and its performance on fresh unseen test samples. […]
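The generalization gap defined in the abstract is straightforward to estimate empirically. The sketch below is an illustrative example, not the speakers' method: it fits a least-squares linear model on synthetic data and reports the gap between its loss on held-out samples and its loss on the training set. All names and sizes are invented for the example.

```python
# Minimal sketch (not the speakers' method) of an empirical
# generalization gap: test loss minus training loss for a model
# fit by ordinary least squares on synthetic linear data.
import numpy as np

rng = np.random.default_rng(1)
d, n_train, n_test = 50, 100, 10_000
w_true = rng.standard_normal(d)

def make_data(n):
    X = rng.standard_normal((n, d))
    y = X @ w_true + 0.5 * rng.standard_normal(n)  # noisy linear labels
    return X, y

X_tr, y_tr = make_data(n_train)
X_te, y_te = make_data(n_test)

w_hat, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)  # least-squares fit

train_loss = np.mean((X_tr @ w_hat - y_tr) ** 2)
test_loss = np.mean((X_te @ w_hat - y_te) ** 2)
print(f"train MSE={train_loss:.3f}  test MSE={test_loss:.3f}  "
      f"gap={test_loss - train_loss:.3f}")
```

Because the model has half as many parameters as training samples, it fits the training data noticeably better than fresh samples, so the printed gap is positive; that expected difference is exactly the quantity the talk's bounds aim to control.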