In RPG video games, min-maxing means pouring everything into the stats that matter while ignoring the rest. Borrowing from this concept, I've developed a framework that optimizes the accuracy of smaller LLMs on NLP tasks by imparting knowledge from a larger model to a smaller one through prompting alone. The inspiration stems from how nuanced prompt labeling can be, especially when we need to account for smaller models' limitations in following directions and understanding. The biggest roadblocks are:
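The transfer described above can be sketched as follows. This is a minimal, hypothetical illustration, not the author's implementation: both model calls are stand-in stubs (a real setup would replace `large_model_label` and the final prompt consumer with actual LLM API calls). The idea is that the larger model produces high-quality labels, which are packed into a few-shot prompt that the smaller model can follow.

```python
# Hypothetical sketch: distilling labeling guidance from a "large" model
# into a few-shot prompt for a "small" model. Both models are stubs here.

def large_model_label(text: str) -> str:
    # Stub for a capable model that produces high-quality labels.
    return "positive" if "love" in text or "great" in text else "negative"

def build_fewshot_prompt(unlabeled: list) -> str:
    # Label a handful of examples with the large model, then pack them
    # into a prompt template the small model can imitate.
    lines = ["Classify the sentiment as positive or negative."]
    for text in unlabeled:
        lines.append("Text: %s\nLabel: %s" % (text, large_model_label(text)))
    lines.append("Text: {input}\nLabel:")
    return "\n".join(lines)

prompt_template = build_fewshot_prompt([
    "I love this phone",
    "The battery is great",
    "Terrible screen",
])

# The small model would receive this filled-in prompt at inference time.
print(prompt_template.format(input="What a great camera"))
```

At inference the smaller model never needs the larger model again; it only sees the distilled prompt, which is what keeps the approach cheap.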