Fair Neighbors
Recommenders for C- and P-fairness in exposure
Description
We address demographic bias in neighborhood-learning models for collaborative filtering recommendations. Despite their superior ranking performance, these methods can learn neighborhoods that inadvertently foster discriminatory patterns. Little work exists in this area, highlighting an important research gap. A notable yet solitary effort, the Balanced Neighborhood Sparse Linear Method (BNSLIM), aims to balance neighborhood influence across different demographic groups. Yet BNSLIM is hampered by computational inefficiency, and its rigid balancing approach often degrades accuracy. To address these shortcomings, we introduce two novel algorithms. The first, an enhancement of BNSLIM, incorporates the Alternating Direction Method of Multipliers (ADMM) to optimize all similarities concurrently, greatly reducing training time. The second, Fairly Sparse Linear Regression (FSLR), induces controlled sparsity in neighborhoods to reveal correlations among different demographic groups, achieving comparable efficiency while being more accurate. We evaluate both algorithms using standard exposure metrics alongside a new metric for disparities in user coverage. Our experiments span several applications, including a novel exploration of bias in course recommendations by teachers' country development status. The results show that our algorithms impose fairness more effectively than BNSLIM and other well-known fairness approaches.
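As background for the neighborhood-learning family that BNSLIM and FSLR build on, the sketch below shows a plain SLIM-style model: for each item, a sparse vector of item-item similarities is learned by L1-penalized coordinate descent, with the diagonal fixed to zero so an item cannot recommend itself. The interaction matrix `R`, the regularization weight `lam`, and the iteration count are illustrative assumptions; the actual algorithms add fairness-balancing terms and, in our first method, an ADMM solver, none of which is reproduced here.

```python
import numpy as np


def soft_threshold(x, lam):
    """Soft-thresholding operator used by L1 coordinate descent."""
    return np.sign(x) * max(abs(x) - lam, 0.0)


def slim_item_weights(R, lam=0.1, n_iter=50):
    """Learn a sparse item-item weight matrix W (diag(W) = 0) so that R @ W ~ R.

    This is a minimal, unconstrained SLIM-style baseline: one Lasso problem
    per target item, solved by cyclic coordinate descent. No fairness terms.
    """
    n_items = R.shape[1]
    W = np.zeros((n_items, n_items))
    col_sq = (R ** 2).sum(axis=0)  # per-column squared norms
    for j in range(n_items):
        w = np.zeros(n_items)
        residual = R[:, j].copy()  # residual = r_j - R @ w (w starts at 0)
        for _ in range(n_iter):
            for k in range(n_items):
                if k == j or col_sq[k] == 0.0:
                    continue  # keep the self-similarity at zero
                # correlation of column k with the partial residual
                rho = R[:, k] @ residual + col_sq[k] * w[k]
                w_new = soft_threshold(rho, lam) / col_sq[k]
                residual += R[:, k] * (w[k] - w_new)
                w[k] = w_new
        W[:, j] = w
    return W


# Toy binary interaction matrix: 4 users x 4 items.
R = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
W = slim_item_weights(R, lam=0.1)
scores = R @ W  # recommendation scores; rank unseen items per user
```

Fairness-aware variants such as BNSLIM constrain how the nonzero entries of each learned neighborhood (each column of `W`) are distributed across demographic groups, which is what makes the per-item problems coupled and motivates the ADMM reformulation.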