DOI: 10.1007/s00146-021-01270-5
Published 12 September 2021 in AI & Society

Examining embedded apparatuses of AI in Facebook and TikTok

Justin Grandinetti

Abstract

In popular discussions, the nuances of AI are often abridged as “the algorithm”, as the specific arrangements of machine learning (ML), deep learning (DL) and automated decision-making on social media platforms are typically shrouded in proprietary secrecy punctuated by press releases and transparency initiatives. What is clear, however, is that AI embedded on social media functions to recommend content, personalize ads, aggregate news stories, and moderate problematic material. It is also increasingly apparent that individuals are concerned with the uses, implications, and fairness of algorithmic systems. Perhaps in response to concerns about “the algorithm” by individuals and governments, social media platforms utilize transparency initiatives and official statements, in part, to deflect official regulation. In the following paper, I draw from transparency initiatives and statements from representatives of Facebook and TikTok as case studies of how AI is embedded in these platforms, with attention to the promotion of AI content moderation as a solution to the circulation of problematic material and misinformation. This examination considers the complexity of embedded AI as a material-discursive apparatus, predicated on discursive techniques—what is seeable, sayable, and knowable in a given time period—as well as material arrangements—algorithms, datasets, users, platforms, infrastructures, moderators, etc. As such, the use of AI as part of the immensely popular platforms Facebook and TikTok demonstrates that AI does not exist in isolation, instead functioning as a human–machine ensemble reliant on strategies of acceptance via discursive techniques and the changing material arrangements of everyday embeddedness.

Related Scholarly Articles

Experiencing Algorithms: How Young People Understand, Feel About, and Engage With Algorithmic News Selection on Social Media

Joëlle Swart

1 April 2021

The news that young people consume is increasingly subject to algorithmic curation. Yet, while numerous studies explore how algorithms exert power in citizens’ everyday life, little is known about how young people themselves perceive, learn about, and deal with news personalization. Considering the interactions between algorithms and users from a user-centric perspective, this article explores how young people make sense of, feel about, and engage with algorithmic news curation on social media and when such everyday experiences contribute to their algorithmic literacy. Employing in-depth interviews in combination with the walk-through method and think-aloud protocols with a diverse group of 22 young people aged 16–26 years, it addresses three current methodological challenges to studying algorithmic literacy: first, the lack of an established baseline about how algorithms operate; second, the opacity of algorithms within everyday media use; and third, limitations in technological vocabularies that hinder young people in articulating their algorithmic encounters. It finds that users’ sense-making strategies of algorithms are context-specific, triggered by expectancy violations and explicit personalization cues. However, young people’s intuitive and experience-based insights into news personalization do not automatically enable them to verbalize these, nor does having knowledge about algorithms necessarily stimulate users to intervene in algorithmic decisions.

The Society of Algorithms

M. Fourcade J. Burrell

27 May 2021

The pairing of massive data sets with processes—or algorithms—written in computer code to sort through, organize, extract, or mine them has made inroads in almost every major social institution. This article proposes a reading of the scholarly literature concerned with the social implications of this transformation. First, we discuss the rise of a new occupational class, which we call the coding elite. This group has consolidated power through their technical control over the digital means of production and by extracting labor from a newly marginalized or unpaid workforce, the cybertariat. Second, we show that the implementation of techniques of mathematical optimization across domains as varied as education, medicine, credit and finance, and criminal justice has intensified the dominance of actuarial logics of decision-making, potentially transforming pathways to social reproduction and mobility but also generating a pushback by those so governed. Third, we explore how the same pervasive algorithmic intermediation in digital communication is transforming the way people interact, associate, and think. We conclude by cautioning against the wildest promises of artificial intelligence but acknowledging the increasingly tight coupling between algorithmic processes, social structures, and subjectivities.

Towards Leveraging AI-based Moderation to Address Emergent Harassment in Social Virtual Reality

Samaneh Zamanifard Guo Freeman Lingyuan Li + 2 others

19 April 2023

Extensive HCI research has investigated how to prevent and mitigate harassment in virtual spaces, particularly by leveraging human-based and Artificial Intelligence (AI)-based moderation. However, social Virtual Reality (VR) constitutes a novel social space that faces both intensified harassment challenges and a lack of consensus on how moderation should be approached to address such harassment. Drawing on 39 interviews with social VR users with diverse backgrounds, we investigate the perceived opportunities and limitations for leveraging AI-based moderation to address emergent harassment in social VR, and how future AI moderators can be designed to enhance such opportunities and address limitations. We provide the first empirical investigation into re-envisioning AI’s new roles in innovating content moderation approaches to better combat harassment in social VR. We also highlight important principles for designing future AI-based moderation incorporating user-human-AI collaboration to achieve safer and more nuanced online spaces.

Deceptive AI Ecosystems: The Case of ChatGPT

Ştefan Sarkadi Yifan Xu Xiao Zhan

18 June 2023

ChatGPT, an AI chatbot, has gained popularity for its capability in generating human-like responses. However, this feature carries several risks, most notably due to its deceptive behaviour such as offering users misleading or fabricated information that could further cause ethical issues. To better understand the impact of ChatGPT on our social, cultural, economic, and political interactions, it is crucial to investigate how ChatGPT operates in the real world where various societal pressures influence its development and deployment. This paper emphasizes the need to study ChatGPT "in the wild", as part of the ecosystem it is embedded in, with a strong focus on user involvement. We examine the ethical challenges stemming from ChatGPT’s deceptive human-like interactions and propose a roadmap for developing more transparent and trustworthy chatbots. Central to our approach is the importance of proactive risk assessment and user participation in shaping the future of chatbot technology.

Excavating awareness and power in data science: A manifesto for trustworthy pervasive data research

Michael Zimmer Katie Shilton Matthew J. Bietz + 5 others

1 July 2021

Frequent public uproar over forms of data science that rely on information about people demonstrates the challenges of defining and demonstrating trustworthy digital data research practices. This paper reviews problems of trustworthiness in what we term pervasive data research: scholarship that relies on the rich information generated about people through digital interaction. We highlight the entwined problems of participant unawareness of such research and the relationship of pervasive data research to corporate datafication and surveillance. We suggest a way forward by drawing from the history of a different methodological approach in which researchers have struggled with trustworthy practice: ethnography. To grapple with the colonial legacy of their methods, ethnographers have developed analytic lenses and researcher practices that foreground relations of awareness and power. These lenses are inspiring but also challenging for pervasive data research, given the flattening of contexts inherent in digital data collection. We propose ways that pervasive data researchers can incorporate reflection on awareness and power within their research to support the development of trustworthy data science.


Citing Articles

0 citations

No articles cite this paper.