[ICLR 2022 Workshop] ML Evaluation Standards

The field of ML is undergoing massive growth, and it is becoming apparent that it needs self-reflection to ensure that efforts are directed towards real progress. Recently, a growing number of papers at top conferences on the topic of ML evaluation have presented evidence of unreliable findings and unsupported empirical claims across several subfields, including computer vision, recommender systems, reinforcement learning, natural language processing, and hyperparameter optimization. Such papers highlight the need for more scientific rigour and careful evaluation, both by researchers themselves and by reviewers. It seems, then, that researchers are interested in having these discussions, while the best path forward remains unclear. To this end, we are organizing a workshop on "Setting ML Evaluation Standards to Accelerate Progress" at ICLR 2022.

