Weights and biases approach -1 or 1

Follow the full discussion on Reddit.
Hi, I've been writing my own C code to build neural networks, mainly for a project and to mess around. It works pretty well, but I find that the weights and biases of the finished network are practically all 1, -1, or thereabouts. Is this normal? Again, the network works pretty well; it just seems a bit weird to me. I'm using it to predict hand-written characters.

I'm using sigmoid as my activation function (I tried ReLU but it didn't predict too well, maybe because I train with a pretty small sample pool?). Weights and biases are randomly initialized with values between -1 and 1 and are capped during training so that they don't exceed -1 or 1. I'm thinking this might be the issue. The problem is that if I don't cap them, they grow and grow the closer you get to the first hidden layer: the last hidden layer's weights and biases would stay between -1 and 1, but the earlier layers' values kept getting bigger, so I ended up capping them to solve it. I believe this is common practice, but maybe the way I'm capping them is the problem (essentially, if a weight or bias exceeds -1 or 1 after having been corrected, I set it to -1 or 1 respectively).

I'm not sure what other info I should be providing; as far as I know it's a pretty basic neural network, not doing anything too fancy.
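For reference, here is a minimal C sketch of the scheme described above: random initialization in [-1, 1], sigmoid activation, and a hard cap that clamps a weight or bias back to -1 or 1 after the gradient correction. The function names (`rand_weight`, `clamp_unit`, `update_weight`) and the learning-rate/gradient values are placeholders for illustration, not the poster's actual code.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Random initialization in [-1, 1], as described in the post. */
double rand_weight(void)
{
    return 2.0 * ((double)rand() / (double)RAND_MAX) - 1.0;
}

/* Sigmoid activation. */
double sigmoid(double x)
{
    return 1.0 / (1.0 + exp(-x));
}

/* The capping described above: if a weight or bias leaves [-1, 1]
 * after the correction, set it to -1 or 1 respectively. */
double clamp_unit(double w)
{
    if (w > 1.0)  return 1.0;
    if (w < -1.0) return -1.0;
    return w;
}

/* One gradient-descent correction for a single weight, followed by
 * the hard cap. `grad` is dLoss/dw, `lr` is the learning rate. */
double update_weight(double w, double grad, double lr)
{
    w -= lr * grad;
    return clamp_unit(w);
}

int main(void)
{
    double w = rand_weight();
    /* A deliberately large gradient to show the cap kicking in. */
    double w_next = update_weight(w, -50.0, 0.1);
    printf("w = %f -> %f (activation %f)\n", w, w_next, sigmoid(w_next));
    return 0;
}
```

With a hard cap like this, any weight whose updates keep pushing it outward ends up pinned at the boundary, which would be consistent with the finished network's values clustering around -1 and 1.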
