[Adapt] Evaluating Conversational Recommender Systems via User Simulation

Zitong Li (李子通) autsky_jadek at sjtu.edu.cn
Wed Nov 16 10:11:27 CST 2022


Hi Adapters, 

Conversational information access is an emerging research area. Currently, end-to-end systems are evaluated by human assessment, which is very time- and resource-intensive at scale and has thus become a bottleneck for progress. I will talk about the paper "Evaluating Conversational Recommender Systems via User Simulation". As an alternative, the authors propose automated evaluation by means of simulating users. Their user simulator aims to generate the responses a real human would give, by considering both individual preferences and the general flow of interaction with the system. They evaluate their simulation approach on an item recommendation task by comparing three existing conversational recommender systems. They show that preference modeling and task-specific interaction models both contribute to more realistic simulations, and can help achieve high correlation between automatic evaluation measures and manual human assessments.
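To make the idea concrete, here is a minimal sketch of evaluating a recommender against simulated users. All names (`SimulatedUser`, `evaluate`, the accept/reject response logic) are illustrative assumptions for this sketch, not the paper's actual simulator, which additionally models the flow of interaction.

```python
import random

class SimulatedUser:
    """Toy simulated user defined only by a set of preferred items."""

    def __init__(self, preferred_items):
        self.preferred = set(preferred_items)

    def respond(self, recommendation):
        # A real simulator would generate natural-language responses;
        # here we reduce the response to accept/reject.
        return "accept" if recommendation in self.preferred else "reject"

def evaluate(recommender, users, max_turns=5):
    """Automatic measure: fraction of simulated users satisfied
    within the turn budget."""
    successes = 0
    for user in users:
        for _ in range(max_turns):
            item = recommender()
            if user.respond(item) == "accept":
                successes += 1
                break
    return successes / len(users)

# Hypothetical setup: a catalog of 20 items and users who each like two items.
catalog = list(range(20))
users = [SimulatedUser({i, i + 1}) for i in range(10)]
rng = random.Random(0)
score = evaluate(lambda: rng.choice(catalog), users)
```

Scores like this, computed for several systems, can then be correlated with human assessments of the same systems, which is how the authors validate the simulator.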

I hope you find the talk interesting and helpful.

Time: Wed 4:00pm
Venue: SEIEE 3-414
Best wishes,
Zitong
