Fairness in Rankings and Recommenders
Presented at the 2020 International Conference on Extending Database Technology (EDBT)
Description
Algorithmic systems driven by large amounts of data are increasingly being used in all aspects of society. Such systems offer enormous opportunities: they accelerate scientific discovery in domains ranging from personalized medicine to smart weather forecasting; they automate tasks; they help improve our lives through personal assistants and recommendations; and they have the potential to transform society through open government, to name just a few of their benefits.
Often, such systems are used to assist or even replace human decision-making in diverse domains. Examples include software systems used in school admissions, housing, pricing of goods, credit score estimation, job applicant selection, sentencing decisions in courts, and surveillance. A prominent case is the COMPAS software used in US courts to assist bail and sentencing decisions through a risk assessment algorithm that predicts future crime. The ubiquitous use of such systems creates potential threats of economic loss, social stigmatization, or even loss of liberty. For instance, a well-known study by ProPublica found that in COMPAS the false positive rate for African American defendants (i.e., people labeled “high-risk” who did not re-offend) was nearly twice as high as that for white defendants. Another well-known study showed that names predominantly used by men and women of color are much more likely to trigger ads suggestive of arrest records.
Data-driven systems are also employed by search and recommendation engines, social media tools, and news outlets, among others. Recent studies report that social media has become the main source of online news, with more than 2.4 billion internet users, nearly 64.5% of whom receive breaking news from social media rather than from traditional outlets. To a great extent, such systems therefore play a central role in shaping our experiences and influencing how we perceive the world. Here, too, there are many reports questioning the output of such systems. For instance, a well-known study of image search results showed evidence of stereotype exaggeration in the images returned for professional occupations. There are also many other threats, including fake news, abusive content, echo chambers, and filter bubbles.
Fairness in rankings and recommenders
In this tutorial, we pay special attention to the concept of fairness in rankings and recommender systems. By fairness, we typically mean a lack of bias. It is not correct to assume that insights derived from computations on data are unbiased simply because the data was collected automatically or the processing was performed algorithmically. Bias may come from the algorithm, reflecting, for instance, the commercial or other preferences of its designers, or from the data itself, for example when a survey contains biased questions. In this tutorial, we review a number of definitions of fairness that aim at addressing discrimination and bias amplification and at ensuring transparency. We organize these definitions around the notions of individual and group fairness. We also present methods for achieving fairness in rankings and recommendations, taking a cross-type view and distinguishing between pre-processing, in-processing, and post-processing approaches. We conclude with a discussion of the open research directions that arise.
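To make the attention-based (exposure) view of group fairness more concrete, the sketch below computes the average position-based exposure each group receives in a single ranking and the ratio between two groups. This is a minimal illustration rather than any specific method covered in the tutorial; the function names (group_exposure, exposure_disparity) are ours, and the logarithmic position discount is just one common choice of attention model.

```python
import math
from collections import defaultdict

def group_exposure(ranking, groups):
    """Average position-based exposure per group for a single ranking.

    ranking: list of item ids, best item first.
    groups:  dict mapping item id -> group label.
    The exposure of rank i uses the common logarithmic discount 1 / log2(i + 1).
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for i, item in enumerate(ranking, start=1):
        g = groups[item]
        totals[g] += 1.0 / math.log2(i + 1)
        counts[g] += 1
    return {g: totals[g] / counts[g] for g in totals}

def exposure_disparity(ranking, groups, g1, g2):
    """Ratio of average exposures between two groups (1.0 = parity)."""
    exp = group_exposure(ranking, groups)
    return exp[g1] / exp[g2]

# Toy example: items a-f, group A ranked above group B.
ranking = ["a", "b", "c", "d", "e", "f"]
groups = {"a": "A", "b": "A", "c": "A", "d": "B", "e": "B", "f": "B"}
print(group_exposure(ranking, groups))            # per-group average exposure
print(exposure_disparity(ranking, groups, "A", "B"))  # ~1.8: A gets more attention
```

A ratio close to 1 indicates that, under this attention model, the two groups receive comparable exposure on average; values far from 1 signal a group-level disparity in the ranking.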
Outline
- Motivation & Background
- Motivating examples illustrating the need for fair rankings and recommendations
- Tutorial overview and focus
- Modeling fairness
- Individual / Group fairness
- Fairness in ranked outputs: Attention-based, Probability-based, Pairwise comparisons
- Fairness in recommender systems: Data items, Users, Groups of users, Item providers
- Relationship between fairness and diversity, recommendation independence, transparency, and feedback loops
- Ensuring fairness
- Pre-processing approaches: Transform data so that any underlying bias and discrimination are removed
- In-processing approaches: Modify or introduce algorithms so that they produce fair rankings and recommendations
- Post-processing approaches: Modify the output of existing algorithms (see the sketch after this outline)
- Open research challenges
- Critical comparison of existing work
- Open problems
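As an illustration of the post-processing family listed above, the following sketch greedily re-ranks an existing result list so that every prefix contains at least a minimum share of protected-group items, in the spirit of constrained re-ranking methods (e.g., FA*IR) but deliberately simplified. The function name rerank_with_min_share and the exact prefix constraint are illustrative assumptions, not the tutorial's algorithm.

```python
import math

def rerank_with_min_share(ranked_items, is_protected, min_share):
    """Greedy post-processing re-ranker (illustrative sketch).

    ranked_items: items in the order produced by the original, utility-based ranker.
    is_protected: predicate marking protected-group items.
    min_share:    minimum fraction of protected items required in every prefix.

    At each position we take the next best item overall, unless doing so would
    violate the prefix constraint, in which case we take the next best
    protected item instead.
    """
    protected = [x for x in ranked_items if is_protected(x)]
    others = [x for x in ranked_items if not is_protected(x)]
    result, n_protected = [], 0
    while protected or others:
        k = len(result) + 1  # position being filled
        need_protected = n_protected < math.floor(min_share * k)
        if protected and (need_protected or not others):
            result.append(protected.pop(0))
            n_protected += 1
        else:
            result.append(others.pop(0))
    return result

# Toy example: d, e, f belong to the protected group and were ranked last.
print(rerank_with_min_share(["a", "b", "c", "d", "e", "f"],
                            lambda x: x in {"d", "e", "f"}, 0.4))
# -> ['a', 'b', 'd', 'c', 'e', 'f']
```

Because it only reorders the output, a re-ranker like this can be layered on top of any existing system, which is precisely what distinguishes post-processing from the pre-processing and in-processing approaches in the outline.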
Presenters
Material
Feel free to download the slides of the tutorial here.