Follow the full discussion on Reddit.
I am trying to build an ML model that maps misspellings and character substitutions back to the words they are supposed to be. For example, Chese or ch3se to Cheese. But these are very custom, so I can't use a pretrained transformer or standard pre-processing. What I am struggling with is that the model is very accurate, as high as 97%, but after I save it the model performs very poorly. I know it is not an overfitting issue, and I have narrowed it down to the feature hashing. I have about 9,000 variations in my training data set, and I can't go beyond maybe 800 features before processing takes over 12 hours and I kill it before it finishes. Any help would be appreciated; I believe I need to capture the information in the training set more efficiently somehow. Please ask any clarifying questions too. I have tried this with both RFC and TensorFlow and get the same results. Thanks!
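A minimal sketch of one way to set this up, assuming a scikit-learn pipeline (the words, labels, and parameter values below are illustrative, not taken from the post). `HashingVectorizer` over character n-grams is stateless: it needs no fitting, the same string always hashes to the same feature vector before and after the model is saved, and transforming thousands of short strings takes seconds rather than hours.

```python
# Sketch: character n-gram feature hashing for spelling normalization.
# Assumes scikit-learn; training words/labels here are hypothetical.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import HashingVectorizer

vectorizer = HashingVectorizer(
    analyzer="char_wb",    # character n-grams inside word boundaries
    ngram_range=(2, 4),    # 2-grams through 4-grams
    n_features=2**12,      # 4096 hashed dims; fixed memory, no vocabulary to fit
    alternate_sign=False,  # keep counts non-negative
)

# Hypothetical training variants mapped to their canonical words.
train_words = ["Chese", "ch3se", "Cheeze", "Bred", "br3ad", "Breadd"]
train_labels = ["Cheese", "Cheese", "Cheese", "Bread", "Bread", "Bread"]

# transform() only hashes; there is no fit step, so the mapping from
# string to features is identical at train time and after reloading.
X = vectorizer.transform(train_words)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, train_labels)

print(clf.predict(vectorizer.transform(["Chese"])))
```

Because the hashing trick carries no learned state, a mismatch between the features computed at training time and those computed after loading (a common cause of a model that scores well in-session but poorly once saved) cannot arise from the vectorizer itself; only the classifier needs to be serialized.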