Reward function as a way to represent multiple targets

Follow the full discussion on Reddit.
I've been assigned a problem at work with multiple targets, and I've been thinking about the best way to design a model that would optimize towards all of them. An idea that occurred to me is to create a reward function that would "encapsulate" all these targets, in such a way that the higher the reward, the better the outcome is across all the targets.
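One common way to sketch this idea is weighted scalarization: each target metric gets a weight reflecting its importance and a direction (whether higher or lower is better), and the reward is their weighted sum. This is a minimal illustrative sketch, not a prescription; the function name, weights, and metrics below are assumptions for the example, and in practice you would also want to normalize metrics onto comparable scales before combining them.

```python
import numpy as np

def scalar_reward(targets, weights, directions):
    """Collapse several target metrics into one scalar reward.

    targets:    raw metric values, one per objective
    weights:    relative importance of each objective
    directions: +1 if higher is better, -1 if lower is better
    """
    t = np.asarray(targets, dtype=float)
    w = np.asarray(weights, dtype=float)
    d = np.asarray(directions, dtype=float)
    # Flip "lower is better" metrics so every term rewards improvement,
    # then take the weighted sum as the single reward to optimize.
    return float(np.sum(w * d * t))

# Hypothetical example: maximize accuracy, minimize latency (ms, scaled).
# A higher reward means a better outcome on both targets combined.
reward = scalar_reward(targets=[0.9, 0.2],
                       weights=[0.7, 0.3],
                       directions=[+1, -1])
```

The main design decision is the weights: they encode the trade-off between targets, and a single scalar reward can only express one such trade-off at a time, which is why multi-objective settings sometimes explore a range of weightings instead.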

