This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
$$\mathcal {EAT}$$ : explainable attentive transformers for identifying the factors influencing dental visits to enhance dental data completeness
Citations: 1
Authors: 7
Year: 2025
Abstract
Our proposed method effectively reduced the feature space, thereby improving focus and reducing training and inference time without compromising accuracy. This fusion-based model provides valuable insights for healthcare providers, enabling the development of targeted interventions tailored to specific population needs. Understanding the factors contributing to irregular dental visits can guide evidence-based strategies to overcome barriers and improve overall oral health outcomes.
Related Works
"Why Should I Trust You?"
2016 · 14,333 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,696 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,221 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,640 citations
Artificial intelligence in healthcare: past, present and future
2017 · 4,414 citations