[Adapt] [Seminar] What Does it Mean for a Language Model to Preserve Privacy?

xjc0365 xjc0365 at sjtu.edu.cn
Wed Apr 12 10:52:25 CST 2023


Hi Adapters,

Natural language reflects our private lives and identities, making its privacy concerns as broad as those of real life. Since the emergence of ChatGPT, people have started to pay much more attention to these privacy issues.

Today I’m going to talk about some privacy risks that language models pose to their training data, and introduce some techniques for preserving privacy. Following the paper “What Does it Mean for a Language Model to Preserve Privacy?”, I’ll also explain why preserving privacy in language models is still challenging: there is a mismatch between the narrow assumptions made by popular data protection techniques and the broadness of natural language and of privacy as a social norm.
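
To give a concrete feel for what a “narrow assumption” looks like, below is a minimal sketch of pattern-based data sanitization (my own toy illustration in Python, not code from the paper; the patterns and example text are made up):

    import re

    # A toy pattern-based scrubber: it can only remove secrets that show up
    # as fixed surface patterns (here, email addresses and phone numbers).
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+(\.[\w-]+)+"),
        "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3,4}[-.\s]?\d{4}\b"),
    }

    def sanitize(text: str) -> str:
        """Replace every matched pattern with a placeholder tag."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    example = ("Write to bob@example.com or call 555-123-4567. "
               "By the way, my neighbour in flat 4B just lost her job.")
    print(sanitize(example))
    # -> Write to [EMAIL] or call [PHONE]. By the way, my neighbour in
    #    flat 4B just lost her job.

The structured identifiers are removed, but the context-dependent secret about the neighbour is left untouched, which is exactly the kind of mismatch between formal techniques and privacy as a social norm that the paper is concerned with.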

Hope you find this talk interesting!

Best regards,
Jingchun