[Adapt] [Seminar] Masked Sequence to Sequence Pre-training for Language Generation

张盛瑶 sophie_zhang at sjtu.edu.cn
Wed May 29 09:40:14 CST 2019


Dear Adapters,
    This week, I will give a talk on the paper “MASS: Masked Sequence to Sequence Pre-training for Language Generation”, which was accepted at ICML 2019. In this paper, the authors propose MAsked Sequence to Sequence pre-training (MASS) for encoder-decoder based language generation. Unlike previous pre-training and fine-tuning models such as BERT and GPT, which pre-train only an encoder or only a decoder, MASS jointly trains the encoder and decoder and achieves better performance on language generation tasks. Notably, MASS reaches state-of-the-art accuracy (a BLEU score of 37.5) on unsupervised English-French translation.
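
    For a quick preview of the core idea before the talk, below is a minimal Python sketch of the MASS masking scheme (my own illustration, not the authors' code; the function name and [MASK] token are placeholders I chose): a contiguous fragment of the sentence is masked on the encoder side, and the decoder is trained to predict exactly that fragment.

import random

MASK = "[MASK]"

def mass_example(tokens, mask_ratio=0.5):
    """Build (encoder_input, decoder_input, decoder_target) for one sentence.

    mask_ratio controls the fragment length; the paper reports that
    masking roughly 50% of the sentence works well.
    """
    n = len(tokens)
    k = max(1, int(n * mask_ratio))      # length of the masked fragment
    start = random.randint(0, n - k)     # random fragment start position
    fragment = tokens[start:start + k]

    # Encoder input: the sentence with the fragment replaced by [MASK].
    enc_input = tokens[:start] + [MASK] * k + tokens[start + k:]

    # Decoder input: the fragment shifted right by one (teacher forcing);
    # the decoder target is the fragment itself.
    dec_input = [MASK] + fragment[:-1]
    dec_target = fragment
    return enc_input, dec_input, dec_target

enc, dec_in, dec_out = mass_example("the cat sat on the mat".split())
print(enc, dec_in, dec_out, sep="\n")

    Because the encoder never sees the fragment and the decoder sees only the fragment, the two components must be trained jointly, which is what distinguishes MASS from encoder-only or decoder-only pre-training.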



Time: Wed 5:00pm
Venue: SEIEE 3-414

Best regards, 
Sophie

