The Influence of Explanations on Consumer Trust in Recommendation Systems
Files
- BarbolaniDaMontauto_47751700_2023.pdf (UCLouvain restricted access, Adobe PDF, 2.01 MB)
- BarbolaniDaMontauto_47751700_2023_APPENDIX1.pdf (UCLouvain restricted access, Adobe PDF, 1.23 MB)
- BarbolaniDaMontauto_47751700_2023_APPENDIX2.pdf (UCLouvain restricted access, Adobe PDF, 286.92 KB)
- BarbolaniDaMontauto_47751700_2023_APPENDIX3.pdf (UCLouvain restricted access, Adobe PDF, 45.96 KB)
- BarbolaniDaMontauto_47751700_2023_APPENDIX4.pdf (UCLouvain restricted access, Adobe PDF, 68.94 KB)
Abstract
Over the last few decades, recommendation systems, tools designed to suggest items to users, have become integral to many platform-based business models: they drive content consumption on platforms like Spotify, YouTube, and Netflix, and revenue generation on e-commerce platforms such as Amazon. While earlier research concentrated on system performance, emphasizing algorithmic efficiency, the recent demand for transparency has given rise to eXplainable Artificial Intelligence (XAI). This research examines explainability within recommendation systems, particularly those centred on music. The study distinguishes three primary types of information for explanations: the data the system utilizes (WHAT), the rationale behind the recommendations (HOW), and the metrics the system optimizes (WHY). An experimental approach was employed to discern the influence of these information types on user trust and intention to use the system. The roles of user-centred variables such as perceived transparency, privacy concerns, and technical knowledge were also assessed. The findings underscore the essential role of explanations in fostering transparency in algorithmic decisions, thereby enhancing user trust and use intention. Different information types had dissimilar impacts on user trust: explanations about algorithmic decision logic were notably more influential than those about optimized metrics. Furthermore, users with stronger privacy concerns were markedly more receptive to explanations, whereas users with more advanced technical familiarity with AI and recommendation systems relied on their own expertise when evaluating the system, mitigating the effects of explanations. Overall, this study highlights the significance of offering explanations in the user's algorithmic experience (AX).
By shedding light on the multifaceted dynamics of human-agent interactions, it provides valuable insights to practitioners across sectors reliant on recommendation systems. Acknowledging and acting upon these findings can enable businesses to fine-tune their recommendation systems and thereby boost their overall performance.