We at Plumerai have built the fastest and most memory-efficient deep learning inference software for Arm Cortex-M microcontrollers. It achieves 40% lower latency and requires 42% less RAM than TensorFlow Lite for Microcontrollers with Arm’s CMSIS-NN kernels, while retaining the same accuracy. It also greatly outperforms all other deep learning inference software for Arm Cortex-M.