You can also find my articles on my Google Scholar profile.

Mitigating the Filter Bubble while Maintaining Relevance: Targeted Diversification with VAE-based Recommender Systems
[ pdf ] [ Presentation ] [ code ]

Gao, Z.; Shen, T.; Mai, Z.; Bouadjenek, M. R.; Waller, I.; Anderson, A.; Bodkin, R.; and Sanner, S. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR-22), Online, hosted by Madrid, Spain, 2022. (20% acceptance rate)

Online recommendation systems are prone to creating filter bubbles, whereby users are only recommended content narrowly aligned with their historical interests. In the case of media recommendation, this can reinforce political polarization by recommending topical content (e.g., on the economy) from one extreme end of the political spectrum even though the topic has broad coverage from multiple political viewpoints that would provide a more balanced and informed perspective for the user. Historically, Maximal Marginal Relevance (MMR) has been used to diversify result lists and even to mitigate filter bubbles, but it suffers from three key drawbacks: (1) MMR directly sacrifices relevance for diversity, (2) MMR typically diversifies across all content rather than just targeted dimensions (e.g., political polarization), and (3) MMR is inefficient in practice due to the need to compute pairwise similarities between recommended items. To simultaneously address these limitations, we propose a novel methodology that trains Concept Activation Vectors (CAVs) for targeted topical dimensions (e.g., political polarization). We then modulate the latent embeddings of user preferences in a state-of-the-art VAE-based recommender system to diversify along the targeted dimension while preserving topical relevance across orthogonal dimensions. Our experiments show that our Targeted Diversification VAE-based Collaborative Filtering (TD-VAE-CF) methodology better preserves the relevance of content to user preferences across a range of diversification levels than both untargeted and targeted variations of MMR; TD-VAE-CF is also much more computationally efficient than the post-hoc re-ranking approach of MMR.
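
For a quick sense of the mechanism, here is a minimal sketch (not the authors' implementation; the data, shapes, and probe-based CAV fit are all illustrative assumptions, and the linked code is the authoritative version):

```python
# Illustrative sketch of the TD-VAE-CF idea: learn a Concept Activation
# Vector (CAV) for a targeted dimension in the VAE latent space, then
# shift a user's latent code along that direction only. All data here
# is synthetic; names and shapes are assumptions, not the paper's code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
latent_dim = 64

# Assume latent embeddings labelled with the targeted concept,
# e.g. political leaning (0 = one pole, 1 = the other).
z = rng.normal(size=(500, latent_dim))
labels = rng.integers(0, 2, size=500)

# A linear probe's normalized weight vector serves as the CAV.
probe = LogisticRegression(max_iter=1000).fit(z, labels)
cav = probe.coef_[0] / np.linalg.norm(probe.coef_[0])

def diversify(z_user: np.ndarray, strength: float) -> np.ndarray:
    """Push the latent code toward the underrepresented pole of the
    concept axis, leaving orthogonal (relevance) components untouched."""
    alignment = z_user @ cav
    return z_user - strength * np.sign(alignment) * cav

z_user = rng.normal(size=latent_dim)
z_diverse = diversify(z_user, strength=1.5)
# Decoding z_diverse with the VAE-CF decoder would then yield
# recommendations diversified along the targeted dimension.
```

The design point mirrored here is that only the component along the CAV changes, which is how relevance along the orthogonal dimensions is preserved.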


Distributional Contrastive Embedding for Clarification-based Conversational Critiquing
[ pdf ] [ Presentation ] [ code ]

Shen, T.; Mai, Z.; Wu, G.; and Sanner, S. In Proceedings of the 31st International Conference on the World Wide Web (WWW-22), Online, hosted by Lyon, France, 2022. (17.7% acceptance rate)

Managing uncertainty in preferences is core to creating the next generation of conversational recommender systems (CRSs). However, an often-overlooked element of conversational interaction is the role of clarification. Users are notoriously noisy at revealing their preferences, and a common error is being unnecessarily specific, e.g., suggesting "chicken fingers" when a restaurant with a "kids menu" was the intended preference. Correcting such errors requires reasoning about the level of generality and specificity of preferences and verifying that the user has expressed the correct level of generality. To this end, we propose a novel clarification-based conversational critiquing framework that allows the system to clarify user preferences as it accepts critiques. To support clarification, we propose the use of distributional embeddings that can capture the specificity and generality of concepts through distributional coverage while facilitating state-of-the-art embedding-based recommendation methods. Specifically, we incorporate Distributional Contrastive Embeddings of critiqueable keyphrases with user preference embeddings in a Variational Autoencoder recommendation framework that we term DCE-VAE. Our experiments show that our proposed DCE-VAE (1) is competitive in terms of general performance with state-of-the-art recommenders and (2) supports effective clarification-based critiquing in comparison to alternative clarification baselines. In summary, this work adds a new dimension of clarification to enhance the well-known critiquing framework, along with a novel data-driven distributional embedding for clarification suggestions that significantly improves the efficacy of user interaction with critiquing-based CRSs.
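
As a rough illustration of how distributional coverage can encode generality versus specificity (a toy sketch under an assumed diagonal-Gaussian embedding, not the DCE-VAE code; all parameter values below are made up):

```python
# Illustrative sketch: represent each keyphrase as a diagonal Gaussian
# whose spread encodes generality, so a general concept ("kids menu")
# can "cover" a specific one ("chicken fingers"). Parameters are
# hypothetical stand-ins for learned embeddings.
import numpy as np

def kl_diag_gaussian(mu_p, var_p, mu_q, var_q):
    """KL( N(mu_p, var_p) || N(mu_q, var_q) ) for diagonal Gaussians."""
    return 0.5 * np.sum(
        np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0
    )

dim = 8
# The general concept gets larger variance (broader coverage).
mu_general, var_general = np.zeros(dim), np.full(dim, 2.0)        # "kids menu"
mu_specific, var_specific = np.full(dim, 0.3), np.full(dim, 0.2)  # "chicken fingers"

# The asymmetry signals the hierarchy: the specific concept sits inside
# the general one, so KL(specific || general) << KL(general || specific).
print(kl_diag_gaussian(mu_specific, var_specific, mu_general, var_general))
print(kl_diag_gaussian(mu_general, var_general, mu_specific, var_specific))
# A clarification policy could use this asymmetry to suggest the more
# general keyphrase when a user's critique looks unnecessarily specific.
```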


Bayesian Critiquing with Keyphrase Activation Vectors for VAE-based Recommender Systems
[ pdf ] [ Presentation ] [ code ]

Yang, H.; Shen, T.; and Sanner, S. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR-21), Online, 2021. (27.6% acceptance rate)

Critiquing is a method for conversational recommendation that incrementally adapts recommendations in response to user preference feedback. Recent advances in critiquing have leveraged the power of VAE-CF recommendation in a critiquable-explainable (CE-VAE) framework that updates latent user preference embeddings based on their critiques of keyphrase-based explanations. However, the CE-VAE has two key drawbacks: (i) it uses a second VAE head to facilitate explanations and critiquing, which can sacrifice the recommendation performance of the first VAE head due to multi-objective training, and (ii) it requires iterating an inverse decoding-encoding loop for multi-step critiquing, which yields poor performance. To address these deficiencies, we propose a novel Bayesian Keyphrase critiquing VAE (BK-VAE) framework that builds on the strengths of VAE-CF but avoids the problematic second head of CE-VAE. Instead, the BK-VAE uses a Concept Activation Vector (CAV) inspired approach to determine the alignment of item keyphrase properties with latent user preferences in VAE-CF. BK-VAE leverages this alignment in a Bayesian framework to model uncertainty in a user's latent preferences and to perform posterior updates to these preference beliefs after each critique, essentially achieving CE-VAE's explanation and critique inversion through a simple application of Bayes' rule. Our empirical evaluation on two datasets demonstrates that BK-VAE matches or dominates CE-VAE in both recommendation and multi-step critiquing performance.
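
The Bayes-rule flavor of the update can be sketched with a linear-Gaussian belief over the user's latent preferences (a hypothetical stand-in for BK-VAE's actual update; the CAV, noise variance, and critique encoding below are assumptions for illustration):

```python
# Illustrative sketch: keep a Gaussian belief over the user's VAE latent
# preferences and, when the user critiques a keyphrase, treat that
# keyphrase's Concept Activation Vector as a linear observation and
# apply a conjugate Bayes-rule update. Not the BK-VAE code.
import numpy as np

latent_dim = 32
rng = np.random.default_rng(1)

# Prior belief over the latent user preference (e.g. from the encoder).
mu = rng.normal(size=latent_dim)
Sigma = np.eye(latent_dim)

# Hypothetical CAV for the critiqued keyphrase, plus a pseudo-observation
# that the user rejects it (target alignment y = -1, noise variance s2).
w = rng.normal(size=latent_dim)
w /= np.linalg.norm(w)
y, s2 = -1.0, 0.5

# Linear-Gaussian posterior update: one critique = one Bayes step.
Sw = Sigma @ w
gain = Sw / (w @ Sw + s2)            # Kalman-style gain vector
mu = mu + gain * (y - w @ mu)        # belief shifts away from the keyphrase
Sigma = Sigma - np.outer(gain, Sw)   # belief uncertainty shrinks

# Decoding mu with the VAE-CF decoder yields refreshed recommendations.
```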


Unintended Bias in Language Model-driven Conversational Recommendation
[ pdf ] [ Presentation ] [ code ]

Shen, T.; Li, J.; Bouadjenek, M. R.; Mai, Z.; and Sanner, S.

Conversational Recommendation Systems (CRSs) have recently started to leverage pretrained language models (LMs) such as BERT for their ability to semantically interpret a wide range of preference statement variations. However, pretrained LMs are well known to be prone to intrinsic biases in their training data, which may be exacerbated by biases embedded in the domain-specific language data (e.g., user reviews) used to fine-tune LMs for CRSs. We study a recently introduced LM-driven recommendation backbone (termed LMRec) of a CRS to investigate how unintended bias (i.e., language variations such as name references or indirect indicators of sexual orientation or location that should not affect recommendations) manifests in significantly shifted price and category distributions of restaurant recommendations. The alarming results we observe strongly indicate that LMRec has learned to reinforce harmful stereotypes through its recommendations. For example, offhand mentions of names associated with the Black community significantly lower the price distribution of recommended restaurants, while offhand mentions of common male-associated names lead to an increase in recommendations of alcohol-serving establishments. These and many related results presented in this work raise a red flag that advances in the language-handling capability of LM-driven CRSs do not come without significant challenges related to mitigating unintended bias in future deployed CRS assistants with a potential reach of hundreds of millions of end-users.
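
The audit style can be sketched as a counterfactual template test (illustrative only: `recommend` is a hypothetical placeholder for the LMRec backbone, and the name lists are stand-ins for the established audit name sets this line of work draws on):

```python
# Illustrative sketch of a counterfactual bias audit: hold the preference
# statement fixed, swap only a name, and compare the distribution of
# recommended restaurants' attributes across groups.
import statistics

def recommend(utterance: str) -> list[dict]:
    """Hypothetical placeholder for the LM-driven recommender; assumed to
    return items with attributes such as a numeric price tier."""
    raise NotImplementedError

TEMPLATE = "My friend {name} and I are looking for a place to eat."
GROUP_A = ["Emily", "Greg"]      # illustrative name lists, not the
GROUP_B = ["Lakisha", "Jamal"]   # paper's exact audit sets

def mean_price(names: list[str]) -> float:
    prices = [
        item["price_tier"]
        for name in names
        for item in recommend(TEMPLATE.format(name=name))
    ]
    return statistics.mean(prices)

# A significant gap between the two means, with everything else held
# fixed, is the kind of unintended-bias signal the paper measures.
# print(mean_price(GROUP_A) - mean_price(GROUP_B))
```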


Novel Problems and Challenges in Language-based Conversational Recommender Systems
[ thesis ]

Shen, T.

Language-based Conversational Recommender Systems (CRSs) have attracted growing attention as they allow users to express and interactively refine their preferences in natural language. However, open problems remain in CRSs relating to the challenges users face when articulating accurate natural language preferences and to the underlying technologies required to facilitate language-based interactions. Our first contribution addresses the challenge that users have trouble specifying preferences at the right level of specificity. We propose a clarification-based extension of a critiquing-based interaction workflow for CRSs that outperforms state-of-the-art models. In our second contribution, we explore the novel issue of unintended bias in language model-driven conversational recommendation by proposing novel bias evaluation metrics and performing a source-of-bias analysis. In summary, this thesis investigates novel and important challenges in the deployment of language-based CRSs that can help users express their preferences more accurately and allow the identification and resolution of bias that arises in language-model-driven CRSs.