Transformer-based encoder-decoder models have become indispensable for seq2seq tasks such as summarization and translation. Recently, there has been a lot of research on different pre-training objectives for transformer-based encoder-decoder models, e.g., T5, BART, PEGASUS, ProphetNet, and MARGE. However, the model architecture has stayed largely the same.
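As a concrete illustration of such a model in use, the sketch below applies a pre-trained encoder-decoder checkpoint to summarization with the Hugging Face transformers library. The choice of the `facebook/bart-large-cnn` checkpoint and the example input text are assumptions for illustration, not taken from the article itself.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load a pre-trained encoder-decoder checkpoint (BART fine-tuned for summarization).
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

# Hypothetical input document to be summarized.
text = (
    "Transformer-based encoder-decoder models have become indispensable "
    "for seq2seq tasks such as summarization and translation."
)

# The encoder reads the input once; the decoder then generates the summary
# auto-regressively, here with beam search.
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```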