PyTorch-LIT - Infer Large Models That Don't Even Fit in Main Memory

Follow the full discussion on Reddit.
Deep learning models are rapidly growing in size and complexity, and inference on end-user devices is becoming increasingly impractical. GPT-J with 6B parameters, for example, requires about 24 GB of RAM just to load its weights in full precision, more than most consumer systems have available; even a capable GPU like the RTX 2060 with 6 GB of memory can't hold GPT-J in half precision, making direct inference impossible.
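These numbers follow from simple arithmetic: each parameter occupies 4 bytes in full precision (fp32) and 2 bytes in half precision (fp16). A minimal back-of-the-envelope sketch (the `param_memory_gib` helper is illustrative, and it counts weights only, ignoring activations, the KV cache, and framework overhead):

```python
def param_memory_gib(n_params: float, bytes_per_param: int) -> float:
    """Memory needed just to hold the model weights, in GiB."""
    return n_params * bytes_per_param / 1024**3

gptj_params = 6e9  # GPT-J has ~6 billion parameters

# fp32: 6e9 * 4 bytes = 24e9 bytes, roughly 22.4 GiB (~24 GB)
print(f"fp32: {param_memory_gib(gptj_params, 4):.1f} GiB")

# fp16: 6e9 * 2 bytes = 12e9 bytes, roughly 11.2 GiB -- still
# well above the 6 GiB of an RTX 2060
print(f"fp16: {param_memory_gib(gptj_params, 2):.1f} GiB")
```

Note that this is a lower bound: actual inference also needs memory for activations and intermediate buffers, so the real footprint is higher still.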
