[Adapt] [Seminar] Probing the syntactic and semantic abilities of language models

廖千姿 liaoqz at sjtu.edu.cn
Tue Nov 5 22:36:17 CST 2019


Hi Adapters, 

Models like RNNs and Transformers have achieved impressive results on a variety of language processing tasks, suggesting that they induce non-trivial properties of language. 

In this seminar, I will present several papers that investigate to what extent these models learn to track abstract hierarchical syntactic structure and whether they capture the semantic meaning of a sentence. The papers probe the syntactic and semantic abilities of language models by linking their inner workings to linguistic theory, offering an impressive fusion of linguistics and neural network research. 
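To give a concrete taste of the methodology (a minimal sketch of mine, not code from the papers), a targeted agreement probe can be as simple as checking whether a pretrained language model assigns higher probability to the verb form that agrees with the hierarchically determined subject, even when a misleading "attractor" noun sits in between. The snippet below assumes the HuggingFace transformers library and GPT-2, and the example sentence is illustrative; note that the Gulordava et al. paper goes further by using nonce ("colorless green") sentences so the model cannot fall back on lexical co-occurrence cues.

# Minimal sketch of a targeted syntactic (agreement) probe.
# Assumes the HuggingFace `transformers` library and GPT-2;
# the example sentence is illustrative, not taken from the papers.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def next_word_logprob(prefix: str, word: str) -> float:
    """Log-probability the model assigns to `word` right after `prefix`."""
    prefix_ids = tokenizer.encode(prefix, return_tensors="pt")
    word_ids = tokenizer.encode(" " + word)  # leading space: GPT-2 BPE convention
    with torch.no_grad():
        logits = model(prefix_ids).logits
    # Score the first subword of the candidate verb at the last prefix position.
    log_probs = torch.log_softmax(logits[0, -1], dim=-1)
    return log_probs[word_ids[0]].item()

# The plural attractor "the pilots" intervenes between the singular
# head noun "author" and the verb; a model tracking hierarchical
# structure should still prefer the singular verb form.
prefix = "The author that greeted the pilots"
correct = next_word_logprob(prefix, "is")
wrong = next_word_logprob(prefix, "are")
print(f"log P(is)  = {correct:.3f}")
print(f"log P(are) = {wrong:.3f}")
print("agreement correct:", correct > wrong)

Running this over a large set of such minimal pairs, and varying the number and type of attractors, is essentially how these papers quantify a model's syntactic abilities.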


Related papers: 
Colorless Green Recurrent Networks Dream Hierarchically (https://arxiv.org/abs/1803.11138) 
Quantity doesn't buy quality syntax with neural language models (https://arxiv.org/abs/1909.00111) 
Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items (https://arxiv.org/abs/1808.10627) 


Hope you gain a fresh perspective from the talk :)) 


Time: Wed 4:30pm 

Venue: SEIEE 3-414 



Best regards, 

Eve 