The talk starts at 12:15.
Please note that, due to COVID-19, participants can watch the streamed talk on Teams via the link below.
Speaker: Qinghua (Sylvia) Liu
Location: Teams: Click here to join the meeting
Title: Pseudo-Mallows for Preference Learning and Personalized Recommendation
Abstract: We propose the Pseudo-Mallows model, a distribution over the set of all permutations of n items, as an approximation to the posterior distribution arising from a Mallows likelihood. This matters for recommender systems, which must learn personal preferences from the highly incomplete data that users provide. The Pseudo-Mallows distribution is a product of univariate discrete Mallows-like distributions, constrained to remain in the space of permutations, and it depends on the order of the n items used to determine the factorization sequence.
In a variational setting, we optimise the variational order parameter by minimising a marginalised KL divergence. We propose an approximate algorithm for this discrete optimisation and conjecture a particular form of the optimal variational order that depends on the data; empirical evidence and some theory support this conjecture. Sampling from the Pseudo-Mallows model allows much faster preference learning than alternative MCMC-based methods when the data take the form of partial rankings of the items or clicks on some of them.
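As a rough illustration of how sampling from such a factorized distribution over permutations might look, the sketch below draws a permutation item by item in a given order, assigning each item one of the remaining ranks. The footrule-style univariate weights, the function name, and the parameter names are assumptions made for illustration only, not the exact form used in the talk.

```python
import numpy as np

def sample_pseudo_mallows(rho, alpha, order, rng):
    """Draw one permutation from an illustrative pseudo-Mallows-style
    factorization (hypothetical form, for illustration only).

    rho   : consensus rank of each of the n items (values 1..n)
    alpha : concentration parameter (larger -> samples closer to rho)
    order : the order in which items are assigned ranks
    """
    n = len(rho)
    ranks = np.zeros(n, dtype=int)
    available = list(range(1, n + 1))  # ranks not yet assigned
    for j in order:
        # Univariate Mallows-like weights over the remaining ranks,
        # here using a footrule-style distance |r - rho_j|.
        w = np.exp(-alpha * np.abs(np.array(available) - rho[j]))
        w = w / w.sum()
        r = rng.choice(available, p=w)
        ranks[j] = r
        available.remove(r)  # enforce the permutation constraint
    return ranks

rng = np.random.default_rng(0)
rho = np.array([1, 2, 3, 4, 5])
sample = sample_pseudo_mallows(rho, alpha=2.0, order=range(5), rng=rng)
```

Because each draw is a single sequential pass with no Markov chain to converge, this style of sampler is much cheaper per permutation than MCMC, which is the efficiency gain the abstract refers to.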
Through simulation studies and an offline case study on real-life data, we demonstrate that the Pseudo-Mallows model learns personal preferences well and produces recommendations far more efficiently than the MCMC-based Bayesian Mallows method, while maintaining similar accuracy.