[Adapt] [Seminar][Making Pre-trained Language Models Better Few-shot Learners]

黄姗姗 798508656 at qq.com
Wed Dec 1 07:46:50 CST 2021


Hi Adapters,

I’ll introduce an ACL 2021 paper from Danqi Chen's group: Making Pre-trained Language Models Better Few-shot Learners. Inspired by GPT-3, this paper presents LM-BFF—better few-shot fine-tuning of language models—a suite of simple and complementary techniques for fine-tuning language models on a small number of annotated examples. If you are interested in prompt learning or few-shot learning, I hope you can learn something from this talk.
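To give a flavor of the prompt-based formulation the paper builds on: each input is wrapped in a cloze template, and each class label is mapped to a single "label word" that a masked language model is asked to fill in. Below is a minimal sketch of that idea (my own illustration; the template and label-word mapping are hypothetical, not the paper's automatically searched ones):

```python
# Hypothetical cloze template and label-word mapping for binary
# sentiment classification (illustration only, not LM-BFF's actual
# automatically generated template/mapping).
TEMPLATE = "{sentence} It was [MASK]."
LABEL_WORDS = {"positive": "great", "negative": "terrible"}

def build_prompt(sentence: str) -> str:
    """Wrap a raw input sentence in the cloze template."""
    return TEMPLATE.format(sentence=sentence)

def verbalize(label: str) -> str:
    """Map a class label to the word the masked LM should predict."""
    return LABEL_WORDS[label]

prompt = build_prompt("No reason to watch.")
# → "No reason to watch. It was [MASK]."
# A masked LM then scores candidate label words ("great" vs.
# "terrible") at the [MASK] position to classify the input.
```

Framing classification as filling in the mask lets the pre-trained masked-LM head do the prediction directly, which is what makes fine-tuning effective with only a handful of labeled examples.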




Time: Wed 4:00pm

Venue: 3-414 (tentative)

Best,
Shanshan

