This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Advancing AI for India: A Survey on Hierarchical Reasoning and Modular Enhancements in Large Language Models
0
Citations
2
Authors
2025
Year
Abstract
Large Language Models (LLMs) have changed how computers read and write text. Trained on vast amounts of data, they can provide clear answers and perform many different tasks. Rather than simply scaling these models up, researchers have recently begun adding tools, retrieval techniques, and step-by-step reasoning. Examples include Retrieval-Augmented Generation (RAG), tool-using agents such as Toolformer and ReAct, and planning techniques such as Chain-of-Thought (CoT) and Tree-of-Thought (ToT). Mixture of Experts (MoE) architectures and efficient fine-tuning and quantization techniques such as QLoRA and AWQ make models lighter and faster. Hierarchical Reasoning Models (HRMs) add multiple levels of reasoning that operate at different speeds. This paper gives a clear explanation and comparison of these ideas. It also examines real application cases across several Indian domains, including local languages, healthcare, law, education, agriculture, and governance. The study covers the benefits and drawbacks of the different strategies. The key findings are: (1) LLMs remain essential as the fundamental language engine; (2) tools and retrieval help reduce errors; and (3) hierarchical planning improves model reasoning but increases system complexity. We conclude with suggestions for adapting AI to Indian languages and with open research questions.
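To make the RAG idea mentioned in the abstract concrete, here is a minimal toy sketch: before answering, relevant documents are retrieved and prepended to the prompt so the model can answer from evidence rather than memory. The corpus, the word-overlap scoring, and the prompt template are illustrative assumptions, not the paper's method; a real system would use dense embeddings and an actual LLM call.

```python
# Toy Retrieval-Augmented Generation (RAG) sketch.
# Assumptions: word-overlap retrieval stands in for a dense retriever,
# and the final prompt would be sent to an LLM (omitted here).

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_rag_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved context so the model answers from evidence."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical two-document corpus for illustration only.
corpus = [
    "PM-KISAN provides income support to farmer families in India.",
    "The Indian Constitution was adopted on 26 November 1949.",
]
prompt = build_rag_prompt(
    "What support does PM-KISAN give to farmer families?", corpus
)
print(prompt)
```

The same grounding step is what the abstract credits with error reduction: the generator is conditioned on retrieved text instead of relying only on parametric memory.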
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,336 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,207 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,607 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,476 citations