There are so many cool visualization techniques for CNNs out there, but the code implementations in a lot of repositories/libraries seem limited to a few models such as VGG or AlexNet. I'm sure all of us would love to visualize and understand our own models. So I created timm-vis, a library that lets you visualize your image classification models with just a few function calls. So far, I've implemented filter and activation visualizations, maximally activated patches, saliency maps, synthetic image generation, adversarial attacks, feature inversion, and deep dream. If you're interested in the project and want to try these methods on your own models, I encourage you to go through details.ipynb in the repository. I'd love to hear your thoughts, feedback, and suggestions.
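To give a flavor of one of the techniques listed above, here is a minimal sketch of a vanilla saliency map in plain PyTorch: the gradient of the top class score with respect to the input pixels. This is a generic illustration, not timm-vis's actual API; the `saliency_map` helper and the tiny stand-in classifier are hypothetical (any timm model would work the same way).

```python
import torch
import torch.nn as nn

def saliency_map(model, image):
    """Vanilla saliency: gradient of the top class score w.r.t. the input.

    image: tensor of shape (1, C, H, W). Returns an (H, W) heat map.
    """
    model.eval()
    image = image.clone().requires_grad_(True)
    scores = model(image)        # (1, num_classes)
    scores.max().backward()      # backprop the top class score
    # Max absolute gradient over channels -> (H, W) importance map
    return image.grad.abs().max(dim=1)[0].squeeze(0)

# Tiny stand-in classifier (hypothetical; swap in e.g. timm.create_model(...))
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
image = torch.randn(1, 3, 32, 32)
heatmap = saliency_map(model, image)
print(heatmap.shape)  # torch.Size([32, 32])
```

Brighter regions of the resulting heat map indicate pixels whose perturbation most changes the predicted class score, which is what makes saliency maps useful for sanity-checking what a model attends to.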