[Adapt] [seminar] [Constructing Multi-Modal Dialogue Dataset by Replacing Text with Semantically Relevant Images]

Yufei Wang (王宇飞) arthur-w at sjtu.edu.cn
Wed Dec 22 10:54:02 CST 2021


Hi Adapters, 

I'll introduce an ACL 2021 paper, Constructing Multi-Modal Dialogue Dataset by
Replacing Text with Semantically Relevant Images. In this paper, the authors build a new multi-modal dialogue dataset from existing dialogue and image captioning datasets. I will explain how they create the dataset and how they evaluate it. Hope you will enjoy it.

Time: 4pm Wednesday 
Venue: SEIEE 3-414

Best regards,
Arthur
