Practical AI Transparency: Revealing Datafication and Algorithmic Identities

Ana Pop Stefanija, Jo Pierson: Practical AI Transparency: Revealing Datafication and Algorithmic Identities. In: Journal of Digital Social Research, vol. 2, no. 3, 2020.


How does one do research on algorithms and their outputs when confronted with inherent algorithmic opacity and black box-ness, as well as with the limitations of API-based research and the data access gaps imposed by platforms’ gate-keeping practices? This article outlines the methodological steps we undertook to manoeuvre around these obstacles. It is a “byproduct” of our investigation into datafication and the way algorithmic identities are produced for personalisation, ad delivery and recommendation. Following Paßmann and Boersma’s (2017) suggestion to pursue “practical transparency” by focusing on particular actors, we experiment with different avenues of research. We develop and employ an approach of letting the platforms speak and making the platforms speak. In doing so, we also use non-traditional research tools, such as transparency and regulatory tools, and repurpose them as objects of/for study. Empirically testing the applicability of this integrated approach, we elaborate on the possibilities it offers for the study of algorithmic systems, while remaining cognizant of its limitations and shortcomings.
