[Adapt] [seminar] Parameter-efficient Fine-tuning For Pre-trained Language Models

任思宇 rsy0702 at 163.com
Wed Nov 30 09:01:08 CST 2022


Hi Adapters,


Large language models with hundreds of millions of parameters have demonstrated strong performance across a wide variety of natural language processing tasks. However, this sheer number of parameters also makes training and serving challenging in resource-constrained environments. Parameter-efficient fine-tuning (PEFT) is a relatively new and timely research area in the NLP community that aims to alleviate this inefficiency. In today's talk, I will introduce some representative PEFT methods and show how they improve model efficiency.
I hope you find this talk useful, especially if your computation budget is limited.
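
To give a concrete flavor of the idea before the talk, here is a minimal sketch of one representative PEFT method, a LoRA-style low-rank update, written in PyTorch. It is my own illustration, not code from the talk: the class name LoRALinear and the hyperparameters r and alpha are assumptions. The pre-trained weight is frozen, and only a rank-r correction B @ A is trained, so the trainable parameter count per layer drops from d_in * d_out to r * (d_in + d_out).

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        # Illustrative LoRA-style wrapper (a sketch, not the talk's code).
        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            self.base.weight.requires_grad_(False)   # freeze pre-trained weight
            if self.base.bias is not None:
                self.base.bias.requires_grad_(False)
            # Trainable low-rank factors: A maps down to rank r, B maps back up.
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at step 0
            self.scale = alpha / r

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Frozen pre-trained path plus the scaled low-rank correction.
            return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

Wrapping, say, each attention projection of a Transformer this way leaves the backbone untouched while only the small A and B matrices receive gradients.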


Time: Wed 4:00 pm
Venue: SEIEE 3-414
Best Regards,
Roy