Underspecification in ML
13 Jan 2021

Researchers at Google published a paper on underspecification in deep learning. Depending on the random initial weights, training produces different models. They show empirically that although all these models perform equivalently during training and testing, their real-world performance can differ dramatically.
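A minimal sketch of the idea (not the paper's actual setup): train several identically configured models that differ only in their random seed, then compare their accuracy on held-out test data against their accuracy on a crudely shifted version of that data. The dataset, model, and shift below are placeholders chosen for illustration.

```python
# Sketch: identical pipelines, different seeds -> similar test accuracy,
# potentially divergent accuracy under a distribution shift.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Crude stand-in for a real-world shift: perturb the test features.
rng = np.random.default_rng(0)
X_shifted = X_test + rng.normal(scale=1.0, size=X_test.shape)

for seed in range(5):
    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                          random_state=seed).fit(X_train, y_train)
    print(f"seed={seed}  "
          f"test acc={model.score(X_test, y_test):.3f}  "
          f"shifted acc={model.score(X_shifted, y_test):.3f}")
```

The point is only that the seed is an arbitrary choice the training objective does not constrain, yet it can matter once the data distribution changes.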
MIT Technology Review published a blog post about it.
[ML, deeplearning, statistics]