Fairness and Explanations in Group Recommender Systems: Towards Trustworthiness and Transparency

Reference: ProyExcel_00257

Start date: 02/12/2022

End date: 31/12/2025

Funding body: CONSEJERÍA DE UNIVERSIDADES, INVESTIGACIÓN E INNOVACIÓN

Funding received: 99,015 €

Principal investigators: Luis Martínez, Rosa Mª Rodríguez

Objectives:

The purpose of this project is to develop novel algorithms and computational tools that boost explanation, fairness, and their synergy in GRSs, so as to produce a more impactful outcome for end users in real-world scenarios. This purpose is broken down into the following specific objectives:

GOAL 1: To develop post-hoc explanation models for GRSs based on LIME and SHAP, local model-agnostic methods.

GOAL 2: To study and develop models that improve C-fairness, P-fairness, and two-sided fairness in GRSs.

GOAL 3: To develop post-hoc, local model-agnostic models for explaining C-, P-, and two-sided fairness in GRSs.

GOAL 4: To test and deploy the developed computational tools in a real-world GRS scenario.
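As an illustration of the model-agnostic attribution behind GOAL 1, the sketch below computes exact Shapley values for a hypothetical toy item-scoring function (the feature names, weights, and interaction bonus are invented for illustration; SHAP approximates this same computation efficiently for real recommendation models):

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley values: each feature's average marginal
    contribution to the coalition value over all orderings."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        contrib = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                # Shapley weight for coalitions of size k
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                contrib += w * (value(set(subset) | {f}) - value(set(subset)))
        phi[f] = contrib
    return phi

# Hypothetical scoring function for one recommended item:
# each present feature adds its weight, plus a small bonus
# when both features are present together.
WEIGHTS = {"genre_match": 2.0, "popularity": 1.0}

def item_score(present):
    base = sum(WEIGHTS[f] for f in present)
    bonus = 0.5 if present == set(WEIGHTS) else 0.0
    return base + bonus

phi = shapley_values(list(WEIGHTS), item_score)
# The attributions sum to the full item score (efficiency property).
```

The interaction bonus is split evenly between the two features, so the attribution for "genre_match" exceeds its raw weight; this is the kind of local, per-item explanation the project targets for group recommendations.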

Description:

Today, most social media networks use automated tools to recommend content or products and to rank, curate, and moderate posts. Recommender Systems (RSs), and in particular Group Recommender Systems (GRSs), a specific kind of RS that recommends items to a group of users, are likely to become even more ubiquitous. These automated content-governance tools are receiving growing attention because neither the algorithms nor the decision-making processes behind the platforms are sufficiently transparent, with a negative impact on domains such as fair job opportunities, fair e-commerce, and news exposure. Two key requirements for building and maintaining users' trust in Artificial Intelligence (AI) systems and guaranteeing this transparency are Fairness and Explainability. However, beyond some previous attempts to boost both aspects in traditional (individual) RSs, they have hardly been explored in GRSs. This project, planned to run for three years, aims to address this challenge by developing novel algorithms and computational tools that boost explanation, fairness, and the synergy between them in GRSs, through a disruptive multidisciplinary research approach that: 1) extensively brings SHAP and LIME, state-of-the-art post-hoc explanation approaches in AI, into RS and GRS contexts; 2) bridges explanation and fairness in RSs and GRSs, introducing an explanation paradigm shift from "why were the recommendations generated?" to "how fair are the generated recommendations?"; and 3) deploys and studies explanation and fairness in a real-world GRS scenario. The ultimate goal is to guarantee higher user trust and independence of the RS output from any socio-demographic feature of the user.
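To make the group-recommendation setting and the C-fairness question concrete, here is a minimal sketch with invented predicted ratings. It contrasts two classic aggregation strategies: "average" can select an item that one member strongly dislikes, while "least misery" protects the worst-off member, which a simple C-fairness measure (the min/max satisfaction ratio across members) makes visible:

```python
# Hypothetical predicted ratings: user -> {item: score}
ratings = {
    "alice": {"film_a": 5.0, "film_b": 2.0},
    "bob":   {"film_a": 1.0, "film_b": 3.0},
    "carol": {"film_a": 4.0, "film_b": 3.5},
}

def aggregate(ratings, item, strategy):
    """Group score for an item: mean of member scores ('average')
    or the minimum member score ('least_misery')."""
    scores = [r[item] for r in ratings.values()]
    return sum(scores) / len(scores) if strategy == "average" else min(scores)

def recommend(ratings, strategy):
    """Pick the item with the highest group score under the strategy."""
    items = next(iter(ratings.values())).keys()
    return max(items, key=lambda it: aggregate(ratings, it, strategy))

def c_fairness(ratings, item):
    """Toy C-fairness measure: ratio of the least to the most
    satisfied member for the chosen item (1.0 = perfectly even)."""
    scores = [r[item] for r in ratings.values()]
    return min(scores) / max(scores)
```

Under "average" the group gets film_a even though bob rates it 1.0, while "least_misery" picks film_b, whose satisfaction is spread far more evenly; explaining such trade-offs to group members is precisely the synergy of fairness and explanation this project pursues.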

No members are registered for this project.

No publications are registered for this project.

No activities are registered for this project.