Comments
There's unfortunately not much to read here yet...
Follow the full discussion on Reddit.
Hey guys, we have an internal tool that preps our models for inference by compiling them to ONNX/TensorRT and quantizing them to INT8/FP16. It also benchmarks them for accuracy loss and latency. It's kind of like GitHub Actions for your model. We are considering releasing it as a standalone product — would anyone be interested?
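The post doesn't describe the tool's internals, but the benchmarking step it mentions — comparing a quantized model against the full-precision original for latency and accuracy loss — can be sketched in plain Python. Everything below (the `benchmark` and `compare` helpers, the toy threshold "models") is a hypothetical illustration, not the actual tool:

```python
import time

def benchmark(model, inputs, labels):
    """Run a model callable over a dataset; return (mean latency in ms, accuracy)."""
    correct = 0
    start = time.perf_counter()
    for x, y in zip(inputs, labels):
        if model(x) == y:
            correct += 1
    elapsed_ms = (time.perf_counter() - start) * 1000 / len(inputs)
    return elapsed_ms, correct / len(inputs)

def compare(fp32_model, quantized_model, inputs, labels):
    """Report the quantized model's latency speedup and accuracy loss vs. FP32."""
    base_ms, base_acc = benchmark(fp32_model, inputs, labels)
    quant_ms, quant_acc = benchmark(quantized_model, inputs, labels)
    return {
        "speedup": base_ms / quant_ms if quant_ms else float("inf"),
        "accuracy_loss": base_acc - quant_acc,
    }

# Toy stand-ins for an FP32 model and a quantized version that
# disagrees on one borderline input (simulating quantization error).
fp32 = lambda x: x > 0
quantized = lambda x: x > 1
inputs, labels = [-1, 1, 2], [0, 1, 1]
report = compare(fp32, quantized, inputs, labels)
```

A real pipeline would run this over a held-out validation set with the ONNX Runtime or TensorRT engine as the `model` callable; the structure of the comparison stays the same.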
Having trouble keeping up with everything that's going on in Machine Learning? That's where we help. We send out a weekly digest highlighting the Best of Machine Learning.
Discover the best guides, books, papers, and news in Machine Learning, once per week.