Comments
Follow the full discussion on Reddit.
Right now I have a 10,000-sample set that was originally 24x24 pixels; I scaled the images up to 256x256. The quality of the outputs seems to deteriorate after day 3 or 4.

The arguments listed in run_training.py are: dataset, data_dir, result_dir, and resume_pkl (all directory/path arguments); num_gpus (set by the hardware); mirror_augment (which I have set to false and see no need to turn on); and metrics, which I have off. That leaves config, gamma, total_kimg, image_snapshot_ticks, and network_snapshot_ticks as the arguments I might want to change. I understand the last three may just be settings for how often I'd like outputs — am I correct in believing they do not fundamentally alter the learning rate of either network? I think I may need to play with the config and gamma settings.

Issues I am getting: mode collapse, and new textures losing fidelity to the training set. Any help with fine-tuning would be great!
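On the question about the last three arguments: total_kimg sets how long training runs, while the snapshot ticks only control how often images and checkpoints are written out; they don't enter the optimizer. Here is a toy sketch (hypothetical training loop, not the actual StyleGAN2 code) illustrating why snapshot frequency is orthogonal to the weight updates:

```python
# Toy illustration (NOT StyleGAN2 code): snapshot ticks only gate
# side effects (saving images/checkpoints). They never appear in the
# weight-update rule, so changing them cannot change what is learned.
def train(total_steps, snapshot_ticks, lr=0.01):
    w = 1.0            # stand-in for the network weights
    snapshots = []
    for step in range(1, total_steps + 1):
        grad = 2 * w           # stand-in gradient (minimising w**2)
        w -= lr * grad         # update rule ignores snapshot_ticks
        if step % snapshot_ticks == 0:
            snapshots.append((step, w))  # side effect only
    return w, snapshots

# Identical final weights no matter how often we snapshot:
w_a, _ = train(100, snapshot_ticks=5)
w_b, _ = train(100, snapshot_ticks=50)
assert w_a == w_b
```

By the same logic, image_snapshot_ticks and network_snapshot_ticks are safe to tune freely for disk/monitoring purposes; config and gamma are the ones that actually change training dynamics.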