Author:
Dagmar Gesmann-Nuissl, Stefanie Meyer
Abstract
Recommender systems that support us in our everyday lives are becoming increasingly precise in matching recommendations to users’ needs – with the result that users often follow these recommendations. This is mainly due to the filtering methods and algorithms employed. In this paper, we look specifically at recommender systems on gaming platforms. These platforms consist of different components: a shopping component, a streaming component, and a social media component. Considered individually, the recommender systems of these components have certain characteristics in terms of the machine learning and filtering methods they use; combining them on one platform mixes these characteristics. As a result, it is unclear which of the information collected about the user is lost at any given time and disappears into obscurity, and which information is used to generate recommendations. The frequently discussed “black box” problem is exacerbated at this point and becomes a “black hole.” With the interests of platform users, platform operators, and software developers in mind, we examine the legal provisions that have been established to address this opacity: transparency obligations. Drawing on the Digital Services Act and the Artificial Intelligence Act, we present legally sound approaches to the “black hole” problem and derive practical suggestions for their implementation.
Publisher
Springer International Publishing