OpenAlex · Updated hourly · Last updated: 16.04.2026, 17:41

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Authors' Reply to ‘Methodological Considerations for Surveys of Dental Students' Knowledge and Attitudes Towards Artificial Intelligence in Oral Cancer Diagnosis’

2026 · 0 citations · 24 authors · Oral Diseases

Abstract

We thank Mourão and colleagues (Mourão et al. 2026) for their comments on our article and for the methodological considerations relevant to the interpretation of our multi-country survey of dental students' knowledge, attitudes and perceptions regarding artificial intelligence (AI) tools in the diagnosis of oral cancer (OC) and oral potentially malignant disorders (OPMDs) (Brailo et al. 2026). We welcome the opportunity to clarify aspects of our methodological approach and to further contextualise the findings of our study.

Mourão et al. rightly note that differences in response rates and the relatively small size of some country subgroups may increase the risk of non-response and selection bias (Mourão et al. 2026). As reported in our original paper, the overall response rate was 42.7%, with substantial variation between countries, and some national subgroups were limited in size (Brailo et al. 2026). These issues were explicitly acknowledged as limitations in the discussion, and we agree that country-level comparisons should therefore be interpreted with caution, particularly when subgroup sizes are small (Brailo et al. 2026). However, the primary aim of the study was to provide an initial multicentre overview of dental students' perceptions of AI applications in the diagnosis of OC and OPMDs across several European settings, rather than to support definitive population-level inference (Brailo et al. 2026). In that context, despite the variability in response rates, the inclusion of students from eight universities in six countries allowed us to identify recurring patterns across settings, most notably the limited formal training in AI and the strong demand for greater curricular integration (Brailo et al. 2026).

We also appreciate the emphasis placed by Mourão et al. on questionnaire validation and cross-language measurement equivalence in multinational surveys (Mourão et al. 2026). We agree that these are important considerations, especially when the principal aim is robust cross-country comparison. As described in our paper, no validated instrument specifically addressing this topic was available at the time of study design (Brailo et al. 2026). The questionnaire was therefore developed specifically for this study, supported by previously published surveys on related themes, refined iteratively through collaboration among the international author group, and reviewed by final-year dental students for clarity and comprehensibility before dissemination (Brailo et al. 2026). This approach was intended to support content relevance and face validity in an emerging field, while recognising that it does not replace full psychometric validation. Accordingly, we view this study as an initial step that can inform future instrument development (Tsang et al. 2017). In particular, the present findings may help identify which domains are most stable and educationally relevant, thereby supporting future work on internal consistency, construct validity, factor structure and cross-cultural adaptation across languages and settings.

The suggestion to report confidence intervals for descriptive estimates is well taken (Mourão et al. 2026). Confidence intervals can enhance the interpretation of survey findings by providing information about the precision of the estimates (Sim and Reid 1999). In the present study, the emphasis was placed on descriptive proportions and comparative analyses to explore potential differences between groups (Brailo et al. 2026). Nonetheless, we acknowledge that including confidence intervals for key proportions could strengthen the statistical presentation of similar surveys in the future.

Finally, we appreciate the recommendation to consider established reporting frameworks such as the Checklist for Reporting Results of Internet E-Surveys (CHERRIES) (Eysenbach 2004; Mourão et al. 2026). As research on the role of AI in dental education continues to expand, the use of standardised methodological and reporting frameworks will help improve transparency, comparability and reproducibility across studies. We hope that our study contributes to the ongoing discussion on how best to understand and address educational needs related to the responsible use of AI technologies in the diagnosis of OC and OPMDs, while also providing a foundation for the future validation of the questionnaire.

Ana Andabak Rogulj: writing – original draft. Molly Harte: writing – review and editing, visualization, project administration. Vlaho Brailo: writing – original draft, conceptualization. Ivana Škrinjar: writing – review and editing. Giovanni Lodi: writing – review and editing. Danica Vidović Juras: writing – original draft. Marcio Diniz Freitas: writing – review and editing. Jean-Christophe Fricain: writing – review and editing. José López López: writing – review and editing. Luis Monteiro: writing – review and editing. Niccolò Lombardi: writing – review and editing. Elena Varoni: writing – review and editing. Raj Ariyaratnam: writing – review and editing. Yunpeng Li: writing – review and editing. Ali Abdullah Alqarni: writing – review and editing. Faleh Tamimi: writing – review and editing. Richeal Ní Ríordáin: writing – review and editing.

The authors declare no conflicts of interest. Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.
