[Adapt Seminar] Annotation Artifacts in Natural Language Inference Data
Shanshan Huang (黄姗姗)
798508656 at qq.com
Tue Nov 12 21:42:11 CST 2019
Hi Adapters,
Nowadays, full-network pre-training, as in BERT, has led to a series of breakthroughs in language representation learning. On some reading comprehension tasks, deep models now perform better than humans. Have machines already surpassed humans in reasoning ability, or have they merely picked up tricks for answering multiple-choice questions?
In this seminar, I will introduce some research on annotation artifacts in datasets.
Related papers:
Probing Neural Network Comprehension of Natural Language Arguments
Annotation Artifacts in Natural Language Inference Data
Tackling the Story Ending Biases in The Story Cloze Test
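The core idea behind these papers is that crowdsourced examples contain giveaway cues: certain hypothesis words correlate strongly with gold labels, so a model can score well without ever reading the premise. A minimal sketch of that diagnostic, using a hypothetical toy dataset (the example sentences and counts are illustrative, not drawn from SNLI):

```python
# Sketch: detect annotation artifacts by counting how often each
# hypothesis word co-occurs with each label, ignoring the premise.
# The toy hypothesis/label pairs below are invented for illustration.
from collections import Counter

examples = [
    ("a man is sleeping", "contradiction"),
    ("nobody is outside", "contradiction"),
    ("a person is outdoors", "entailment"),
    ("someone is moving", "entailment"),
    ("a man is waiting for a friend", "neutral"),
    ("a woman is sad", "neutral"),
]

def word_label_counts(examples):
    """Count co-occurrences of each hypothesis word with each label."""
    counts = Counter()
    for hypothesis, label in examples:
        for word in set(hypothesis.split()):
            counts[(word, label)] += 1
    return counts

counts = word_label_counts(examples)
# Words such as "nobody" appearing only under "contradiction" are the
# kind of cue a premise-blind (hypothesis-only) classifier can exploit.
print(counts[("nobody", "contradiction")])  # 1
print(counts[("nobody", "entailment")])     # 0
```

On real NLI data, the same counting (usually normalized into PMI between word and label) surfaces cues like negation words predicting "contradiction", which is how hypothesis-only baselines reach well above chance.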
Time: Wed 4:30pm
Venue: SEIEE 3-414
Best regards,
Shanshan