[Adapt] [Seminar]

英卡尔·波拉提 enkarrrr at sjtu.edu.cn
Tue May 9 20:25:43 CST 2023


Hi, Adapters

This week I will give a talk about few-shot prompt order sensitivity. When primed with only a handful of training samples, very large pretrained language models such as GPT-3 show results competitive with fully supervised, fine-tuned pretrained language models. However, the order in which those samples are provided can make the difference between near state-of-the-art and random-guess performance.
In this talk, I will introduce this phenomenon in detail and show how to use the generative nature of language models to construct an artificial development set, and then, based on entropy statistics of the candidate permutations over this set, identify performant prompts.
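As a rough illustration of the idea, the sketch below ranks candidate orderings of the few-shot examples by the entropy of the labels they induce on a probe set: orderings that yield a balanced label distribution are preferred over degenerate ones that collapse onto a single label. This is only a minimal sketch of the general approach, not the exact method from the talk; `predict` is a hypothetical placeholder for querying a language model with the ordered prompt plus a probe input.

```python
import itertools
import math
from collections import Counter

def label_entropy(predictions):
    """Entropy of the predicted-label distribution over the probe set."""
    counts = Counter(predictions)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def select_prompt_order(train_examples, probe_inputs, predict, top_k=4):
    """Score every permutation of the few-shot examples by predicted-label
    entropy on the probe inputs and return the top_k highest-entropy
    orderings. `predict(perm, x)` is a stand-in for the language model's
    label prediction given the ordered prompt `perm` and input `x`."""
    scored = []
    for perm in itertools.permutations(train_examples):
        preds = [predict(perm, x) for x in probe_inputs]
        scored.append((label_entropy(preds), perm))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [perm for _, perm in scored[:top_k]]
```

In practice the probe set would itself be generated by the language model rather than hand-labeled, and exhaustive permutation search is only feasible for a handful of examples; a sampled subset of permutations is the natural fallback.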

Hope you will enjoy it,
Enkar

