DOI: 10.7759/cureus.35029
Published on February 1, 2023 in Cureus

ChatGPT Output Regarding Compulsory Vaccination and COVID-19 Vaccine Conspiracy: A Descriptive Study at the Outset of a Paradigm Shift in Online Search for Information

N. Salim H. Harapan M. Barakat + 6 authors

Abstract

Background: On the verge of a revolutionary approach to gathering information, ChatGPT (an artificial intelligence (AI)-based language model developed by OpenAI and capable of producing human-like text) could be the prime driver of a paradigm shift in how humans acquire information. Despite concerns about the effect of such a promising tool on the future quality of education, this technology will soon be incorporated into web search engines, making it necessary to evaluate its output. Previous studies showed that dependence on some sources of online information (e.g., social media platforms) was associated with higher rates of vaccination hesitancy. Therefore, the aim of the current study was to describe the output of ChatGPT regarding coronavirus disease 2019 (COVID-19) vaccine conspiracy beliefs and compulsory vaccination.

Methods: The current descriptive study was conducted on January 14, 2023 using ChatGPT from OpenAI (OpenAI, L.L.C., San Francisco, CA, USA). The output was evaluated by two authors, and the degree of agreement regarding correctness, clarity, conciseness, and bias was assessed using Cohen's kappa.

Results: The ChatGPT responses were dismissive of conspiratorial ideas about severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) origins, labeling them as non-credible and lacking scientific evidence. Additionally, ChatGPT responses were unequivocally opposed to COVID-19 vaccine conspiracy statements. Regarding compulsory vaccination, ChatGPT responses were neutral, citing the following as advantages of this strategy: protecting public health, maintaining herd immunity, reducing the spread of disease, cost-effectiveness, and legal obligation; on the other hand, it cited the following as disadvantages: ethical and legal concerns, mistrust and resistance, logistical challenges, and limited resources and knowledge.

Conclusions: The current study showed that ChatGPT could be a source of information to challenge COVID-19 vaccine conspiracies. For compulsory vaccination, ChatGPT resonated with the divided opinion in the scientific community toward such a strategy; nevertheless, it detailed the pros and cons of this approach. As it currently stands, judicious use of ChatGPT could serve as a user-friendly source of COVID-19 vaccine information that challenges conspiracy ideas with clear, concise, and non-biased content. However, ChatGPT content cannot be used as an alternative to original, reliable sources of vaccine information (e.g., the World Health Organization [WHO] and the Centers for Disease Control and Prevention [CDC]).
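To make the agreement measure concrete: below is a minimal sketch of how Cohen's kappa can be computed for two raters' labels. The ratings shown are hypothetical placeholders, not the study's actual evaluation data.

```python
# Minimal sketch: Cohen's kappa for two raters' binary judgments
# (e.g., "correct" vs. "incorrect" labels on ChatGPT responses).
# The example labels below are illustrative, not the study's data.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items the raters label identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from the marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings of ten responses for "correctness" (1 = correct).
rater_1 = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]
rater_2 = [1, 1, 0, 0, 1, 1, 0, 1, 1, 1]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")  # kappa = 0.74
```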

Related Scientific Articles

User Experience Design for Automatic Credibility Assessment of News Content About COVID-19

Dimitrios Karvelas Jens Rauenbusch Konstantin Schulz + 3 others

29 April 2022

The increasingly rapid spread of information about COVID-19 on the web calls for automatic measures of credibility assessment [18]. If large parts of the population are expected to act responsibly during a pandemic, they need information that can be trusted [20]. In that context, we model the credibility of texts using 25 linguistic phenomena, such as spelling, sentiment, and lexical diversity. We integrate these measures in a graphical interface and present two empirical studies to evaluate its usability for credibility assessment on COVID-19 news. Raw data for the studies, including all questions and responses, has been made available to the public under an open license: https://github.com/konstantinschulz/credible-covid-ux. The user interface prominently features three sub-scores and an aggregation for a quick overview. Besides, metadata about the concept, authorship, and infrastructure of the underlying algorithm is provided explicitly. Our working definition of credibility is operationalized through the terms of trustworthiness, understandability, transparency, and relevance. Each of them builds on well-established scientific notions [41, 65, 68] and is explained orally or through Likert scales.

In a moderated qualitative interview with six participants, we introduce information transparency for news about COVID-19 as the general goal of a prototypical platform, accessible through an interface in the form of a wireframe [43]. The participants' answers are transcribed in excerpts. Then, we triangulate inductive and deductive coding methods [19] to analyze their content. As a result, we identify the rating scale, sub-criteria, and algorithm authorship as important predictors of usability.

In a subsequent quantitative online survey, we present a questionnaire with wireframes to 50 crowdworkers. The question formats include Likert scales, multiple choice, and open-ended types. This way, we aim to strike a balance between the known strengths and weaknesses of open vs. closed questions [11]. The answers reveal a conflict between transparency and conciseness in the interface design: users tend to ask for more information but do not necessarily make explicit use of it when given. This discrepancy is influenced by the capacity constraints of human working memory [38]. Moreover, a perceived hierarchy of metadata becomes apparent: the authorship of a news text is more important than the authorship of the algorithm used to assess its credibility. From the first to the second study, we notice an improved usability of the aggregated credibility score's scale. That change is due to the conceptual introduction before seeing the actual interface, as well as the simplified binary indicators with direct visual support. Sub-scores need to be handled similarly if they are supposed to contribute meaningfully to the overall credibility assessment. By integrating detailed information about the employed algorithm, we are able to dissipate the users' doubts about its anonymity and possible hidden agendas. However, the overall transparency can only be increased if other, more important factors, like the source of the news article, are provided as well. Knowledge about this interaction enables software designers to build useful prototypes with a strong focus on the most important elements of credibility: the source of the text and the algorithm, as well as the distribution and composition of the algorithm.

All in all, the understandability of our interface was rated as acceptable (78% of responses being neutral or positive), while transparency (70%) and relevance (72%) still lag behind. This discrepancy is closely related to the missing article metadata and more meaningful visually supported explanations of credibility sub-scores. The insights from our studies lead to a better understanding of the amount, sequence, and relation of information that needs to be provided in interfaces for credibility assessment. In particular, our integration of software metadata contributes to the more holistic notion of credibility [47, 72] that has become popular in recent years. Besides, it paves the way for a more thoroughly informed interaction between humans and machine-generated assessments, anticipating the users' doubts and concerns [39] in early stages of the software design process [37]. Finally, we make suggestions for future research, such as proactively documenting credibility-related metadata for Natural Language Processing and Language Technology services and establishing an explicit hierarchical taxonomy of usability predictors for automatic credibility assessment.
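As a rough illustration of the paper's idea of deriving credibility sub-scores from linguistic phenomena and aggregating them, the sketch below computes two toy measures (lexical diversity and a dictionary-based spelling score) and a weighted aggregate. The paper uses 25 measures; these two, the tiny vocabulary, and the equal weights are invented for illustration.

```python
# Illustrative sketch only: two toy linguistic sub-scores and a simple
# weighted aggregation, loosely mirroring the idea of combining linguistic
# phenomena into an overall credibility indicator. The measures, vocabulary,
# and weights below are invented, not the paper's actual 25 measures.

def lexical_diversity(text: str) -> float:
    """Type-token ratio: unique words / total words, in [0, 1]."""
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def spelling_score(text: str, vocabulary: set) -> float:
    """Fraction of words found in a reference vocabulary."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    return sum(t in vocabulary for t in tokens) / len(tokens) if tokens else 0.0

def aggregate(sub_scores: dict, weights: dict) -> float:
    """Weighted mean of sub-scores as a single credibility indicator."""
    total = sum(weights.values())
    return sum(sub_scores[k] * weights[k] for k in sub_scores) / total

vocab = {"the", "vaccine", "is", "safe", "and", "effective", "for", "most", "people"}
text = "The vaccine is safe and effective for most people."
scores = {"lexical_diversity": lexical_diversity(text),
          "spelling": spelling_score(text, vocab)}
weights = {"lexical_diversity": 0.5, "spelling": 0.5}
print(scores, "aggregate:", round(aggregate(scores, weights), 2))
```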

Artificial intelligence and increasing misinformation

Eric D. Achtyes P. Whybrow S. Monteith + 3 others

October 26, 2023

With the recent advances in artificial intelligence (AI), patients are increasingly exposed to misleading medical information. Generative AI models, including large language models such as ChatGPT, create and modify text, images, audio and video information based on training data. Commercial use of generative AI is expanding rapidly and the public will routinely receive messages created by generative AI. However, generative AI models may be unreliable, routinely make errors and widely spread misinformation. Misinformation created by generative AI about mental illness may include factual errors, nonsense, fabricated sources and dangerous advice. Psychiatrists need to recognise that patients may receive misinformation online, including about medicine and psychiatry.

Artificial Intelligence (AI) in Action: Addressing the COVID-19 Pandemic with Natural Language Processing (NLP)

Chih-Hsuan Wei Alexis Allot Qingyu Chen + 4 others

October 9, 2020

The COVID-19 (coronavirus disease 2019) pandemic has had a significant impact on society, both because of the serious health effects of COVID-19 and because of public health measures implemented to slow its spread. Many of these difficulties are fundamentally information needs; attempts to address these needs have caused an information overload for both researchers and the public. Natural language processing (NLP), the branch of artificial intelligence that interprets human language, can be applied to address many of the information needs made urgent by the COVID-19 pandemic. This review surveys approximately 150 NLP studies and more than 50 systems and datasets addressing the COVID-19 pandemic. We detail work on four core NLP tasks: information retrieval, named entity recognition, literature-based discovery, and question answering. We also describe work that directly addresses aspects of the pandemic through four additional tasks: topic modeling, sentiment and emotion analysis, caseload forecasting, and misinformation detection. We conclude by discussing observable trends and remaining challenges.
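As a small, self-contained illustration of one of the surveyed tasks, the sketch below ranks a toy document collection against a query using TF-IDF and cosine similarity, a basic information-retrieval setup; the corpus and query are invented and not drawn from the review.

```python
# Minimal sketch of TF-IDF document ranking with scikit-learn.
# The tiny corpus is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "COVID-19 vaccines reduce severe disease and hospitalization.",
    "Topic modeling reveals themes in pandemic-related tweets.",
    "Named entity recognition extracts drug and gene mentions.",
]
query = ["Do vaccines prevent severe COVID-19 outcomes?"]

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(corpus)
query_vector = vectorizer.transform(query)

# Rank documents by cosine similarity to the query, best match first.
similarities = cosine_similarity(query_vector, doc_vectors).ravel()
for idx in similarities.argsort()[::-1]:
    print(f"{similarities[idx]:.2f}  {corpus[idx]}")
```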

Online Information of Vaccines: Information Quality, Not Only Privacy, Is an Ethical Responsibility of Search Engines

M. Goldman P. Bannister Tania Vanzolini + 9 others

August 11, 2020

The fact that Internet companies may record our personal data and track our online behavior for commercial or political purposes has emphasized aspects related to online privacy. This has also led to the development of search engines that promise no tracking and better privacy. Search engines also play a major role in spreading low-quality health information, such as that of anti-vaccine websites. This study investigates the relationship between search engines' approach to privacy and the scientific quality of the information they return. We analyzed the first 30 webpages returned when searching "vaccines autism" in English, Spanish, Italian, and French. The results show that not only "alternative" search engines (DuckDuckGo, Ecosia, Qwant, Swisscows, and Mojeek) but also other commercial engines (Bing, Yahoo) often return more anti-vaccine pages (10–53%) than Google.com (0%). Some localized versions of Google, however, returned more anti-vaccine webpages (up to 10%) than Google.com. Health information returned by search engines has an impact on public health and, specifically, on the acceptance of vaccines. The issue of information quality when seeking information to make health-related decisions also affects the ethical dimension represented by the right to informed consent. Our study suggests that designing a search engine that is privacy-savvy and avoids the filter bubbles that can result from user tracking is necessary but not sufficient; mechanisms should also be developed to test search engines from the perspective of information quality (particularly for health-related webpages) before they can be deemed trustworthy providers of public health information.
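The study's central measurement, the share of anti-vaccine pages among the first 30 results per engine, can be sketched as below; the labeled results are hypothetical stand-ins for the manually coded search engine result pages.

```python
# Sketch: percentage of anti-vaccine pages per search engine.
# The (engine, stance) pairs below are hypothetical placeholders for
# manually labeled search results, not the study's data.
from collections import defaultdict

labeled_results = [
    ("google.com", "pro"), ("google.com", "neutral"), ("google.com", "pro"),
    ("bing", "anti"), ("bing", "pro"), ("bing", "neutral"),
    ("duckduckgo", "anti"), ("duckduckgo", "anti"), ("duckduckgo", "pro"),
]

counts = defaultdict(lambda: {"anti": 0, "total": 0})
for engine, stance in labeled_results:
    counts[engine]["total"] += 1
    counts[engine]["anti"] += stance == "anti"  # bool counts as 0/1

for engine, c in sorted(counts.items()):
    print(f"{engine}: {100 * c['anti'] / c['total']:.0f}% anti-vaccine")
```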

How search engines disseminate information about COVID-19 and why they should do better

Aleksandra Urman R. Ulloa M. Makhortykh

May 11, 2020

Access to accurate and up-to-date information is essential for individual and collective decision making, especially in times of emergency. On February 26, 2020, two weeks before the World Health Organization (WHO) officially declared the COVID-19 emergency a "pandemic," we systematically collected and analyzed search results for the term "coronavirus" in three languages from six search engines. We found that different search engines prioritize specific categories of information sources, such as government-related websites or alternative media. We also observed that source ranking within the same search engine is subject to randomization, which can result in unequal access to information among users.
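One simple way to quantify the ranking randomization the authors observed is to compare the result lists returned by repeated identical queries, for example via top-k overlap, as in the sketch below; the URL lists are hypothetical, and the study's actual collection pipeline is not reproduced here.

```python
# Sketch: measure ranking stability across two runs of the same query.
# The domain lists are invented for illustration.

def overlap_at_k(run_a: list, run_b: list, k: int) -> float:
    """Fraction of shared results among the top k of two ranked lists."""
    return len(set(run_a[:k]) & set(run_b[:k])) / k

run_1 = ["who.int", "cdc.gov", "news-site.example", "blog.example"]
run_2 = ["who.int", "news-site.example", "cdc.gov", "altmedia.example"]

for k in (2, 4):
    # 1.0 means identical top-k sets; lower values indicate randomization.
    print(f"overlap@{k} = {overlap_at_k(run_1, run_2, k):.2f}")
```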

