This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Social Media’s AI Ethics, Digital Literacy, and AI Trust: Could These Lead to Positive Health Behavior?
Citations: 0
Authors: 4
Year: 2026
Abstract
Aim/Purpose: This study investigated the mediating roles of Artificial Intelligence (AI) ethics and AI trust in the relationship between digital literacy and positive health behavior among Thai working-age individuals. The research sought to address a gap in the existing literature by integrating these constructs within the context of social media use for health-related purposes.

Introduction/Background: Social media is now a primary source of health information in Thailand, with AI-driven recommendation algorithms tailoring content to user profiles and behaviors. While such personalization can improve relevance, it also raises concerns about misinformation, selective exposure, and over-reliance on automated systems. Digital literacy, defined as the ability to locate, evaluate, and use digital content effectively, enables users to navigate such environments more critically. Similarly, AI ethics, which encompasses accountability, transparency, fairness, and security, can influence how individuals engage with AI-mediated platforms. AI trust, the willingness to rely on AI recommendations, may encourage adoption but could also reduce active health decision-making when it is excessive or uncritical. Despite substantial research on these constructs in other contexts, empirical evidence from Thailand remains limited. This study addresses that gap by examining their direct and indirect relationships with positive health behavior among Thai working-age adults.

Methodology: A quantitative, cross-sectional research design was employed. Data were obtained from 420 Thai working-age individuals through a structured online questionnaire administered via Google Forms. A multi-stage sampling procedure combining cluster and quota sampling was applied. First, Bangkok districts were stratified into three zones: inner, middle, and outer.
Four districts were then randomly selected from each zone, followed by the recruitment of 35 participants from each selected district through quota sampling. Inclusion criteria required Thai nationality, current residence in one of the selected districts, and active use of social media. Four instruments were used for measurement. Digital literacy was assessed with a 14-item scale. Perceptions of AI ethics were measured with a 16-item scale comprising four dimensions: accountability, responsibility, explainability, and security. AI trust was evaluated as a unidimensional construct with an 11-item scale covering functionality, benefits, and credibility. Positive health behavior was measured with a scale comprising four domains: nutrition, physical activity, relaxation, and preventive behavior. All items were rated on a five-point Likert scale ranging from 1 to 5. Content validity was established through expert evaluation by five domain specialists using the Item-Objective Congruence index. Confirmatory factor analysis was conducted to validate the measurement model for the three latent constructs: AI ethics, AI trust, and positive health behavior. Construct validity was confirmed prior to hypothesis testing. Structural equation modeling was then employed to examine the direct and indirect relationships among digital literacy, AI ethics, AI trust, and positive health behavior.

Findings: The structural equation model showed an acceptable fit to the empirical data (RMSEA = .07, SRMR = .06, TLI = .96, CFI = .98, PNFI = .63), satisfactory internal consistency (CR = .92–.95), and convergent validity (AVE = .75–.82). The structural model showed meaningful explanatory power across the endogenous constructs (R² = .64–.99). All hypothesized direct and indirect effects were statistically significant. Digital literacy and AI ethics both exhibited positive, statistically significant direct effects on positive health behavior.
In contrast, AI trust had a statistically significant negative direct effect on positive health behavior (β = -.49, p < .01), indicating that excessive reliance on AI systems may discourage proactive health engagement. Digital literacy was positively associated with both AI ethics and AI trust, and AI ethics was strongly and positively associated with AI trust (β = .84, p < .01). Mediation analysis further revealed that AI trust significantly mediated the relationship between digital literacy and positive health behavior (β = -.40, p < .01), as well as between AI ethics and positive health behavior (β = -.41, p < .01), highlighting a paradoxical role of AI trust in health-related behaviors.

Contribution/Impact on Society: The findings highlight the importance of enhancing digital literacy and fostering perceptions of ethical AI on social media to support healthier behaviors in the workplace and beyond. The study suggests that over-reliance on AI systems, even when they are perceived as ethical, may reduce active engagement in health-promoting behaviors. This underscores the need for balanced digital engagement and critical evaluation skills.

Recommendations: Governments and private organizations should collaborate to integrate digital literacy and AI ethics education into public health promotion initiatives. Health-related content on social media should be accompanied by transparency measures and user empowerment strategies to ensure informed decision-making.

Research Limitation: The study focused exclusively on Thai working-age individuals in the Bangkok Metropolitan Area, which may limit the generalizability of the findings to other regions, age groups, or populations whose social, cultural, and digital environments differ substantially from those examined here.
Future Research: Future studies should examine additional mediators and moderators, such as mindfulness, locus of control, and health literacy, to better understand how trust in AI translates into positive health behavior. Expanding the research to more diverse populations and contexts would also enhance the applicability of the findings.
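The mediation effects reported above follow the standard product-of-coefficients logic: the indirect effect of a predictor on positive health behavior through AI trust is the product of the predictor-to-mediator path (a) and the mediator-to-outcome path (b), estimated while controlling for the predictor. As a minimal sketch of that logic (not the study's actual model or estimates), the example below simulates standardized data with an assumed literacy → trust → behavior chain and recovers a, b, and the indirect effect a·b with ordinary least squares; all variable names and coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 420  # sample size matching the study

# Simulated standardized variables with an assumed causal chain:
# digital literacy -> AI trust -> positive health behavior.
# The path coefficients (0.6, 0.5, -0.4) are invented for illustration.
literacy = rng.normal(size=n)
trust = 0.6 * literacy + rng.normal(scale=0.8, size=n)
behavior = 0.5 * literacy - 0.4 * trust + rng.normal(scale=0.7, size=n)

def ols(y, predictors):
    """Least-squares slopes for y ~ predictors (intercept added, then dropped)."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a = ols(trust, [literacy])[0]                 # path a: literacy -> trust
c_prime, b = ols(behavior, [literacy, trust]) # direct effect c' and path b
indirect = a * b                              # mediated (indirect) effect

print(f"a = {a:.3f}, b = {b:.3f}, direct c' = {c_prime:.3f}, indirect a*b = {indirect:.3f}")
```

With a positive a-path and a negative b-path, the product a·b is negative, which mirrors the paper's finding of a negative indirect effect of digital literacy on health behavior through AI trust. In practice, the significance of such indirect effects is usually assessed with bootstrapped confidence intervals rather than a single point estimate.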
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,687 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,591 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,114 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,867 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations