Does anyone else think that the evaluation of meta-learning approaches for few-shot classification is not very reasonable?

Follow the full discussion on Reddit.
Meta-learning for few-shot classification (N-way-K-shot) usually uses the same number of query examples during both training and testing. For example, in a 5-way-1-shot classification task on the miniImageNet dataset, each training episode has 1 example per class in the support set and 15 examples per class in the query set, and the testing episodes use exactly the same split. But to be realistic, shouldn't we evaluate with more query examples per class? Of course, I know the results would not look as good as the current ones. Moreover, the way the ways and shots are set during the training phase does not seem rigorous, either.
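For concreteness, here is a minimal sketch of how such an episode is typically sampled. The `sample_episode` function name, the dict-of-lists `dataset` layout, and the toy data are assumptions for illustration, not any particular library's API; the point is that the query-set size is just a sampling parameter, so raising it at evaluation time (as the post suggests) is trivial to do.

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, n_query=15):
    """Sample one N-way-K-shot episode with n_query queries per class.

    dataset: a dict mapping class label -> list of examples
    (a hypothetical in-memory layout, used here only for illustration).
    Returns (support, query), each a list of (example, episode_label) pairs.
    """
    # Pick N distinct classes for this episode.
    classes = random.sample(sorted(dataset), n_way)
    support, query = [], []
    for episode_label, cls in enumerate(classes):
        # Draw K support + n_query query examples without replacement.
        examples = random.sample(dataset[cls], k_shot + n_query)
        support += [(x, episode_label) for x in examples[:k_shot]]
        query += [(x, episode_label) for x in examples[k_shot:]]
    return support, query

# Toy stand-in for miniImageNet-style data: 20 classes x 600 examples each.
data = {c: [f"img_{c}_{i}" for i in range(600)] for c in range(20)}

# Standard protocol: the same n_query=15 at both meta-train and meta-test time.
support, query = sample_episode(data, n_way=5, k_shot=1, n_query=15)

# The post's suggestion amounts to raising n_query for the test episodes only.
support, query = sample_episode(data, n_way=5, k_shot=1, n_query=100)
```

Nothing in the sampler forces test-time `n_query` to equal the train-time value; keeping them equal is a convention, which is exactly what the question is challenging.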

