OpenAlex · Updated hourly · Last updated: 08.04.2026, 10:42

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

PENTLLM: A Pentagon-Based Multi-Expert Large Language Model Architecture for Medical Research Applications

2026 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access
Open full text at publisher

0 citations · 1 author · Year: 2026

Abstract

Background: Artificial Intelligence-based technologies, in particular Large Language Models (LLMs), are becoming increasingly popular as their capabilities in text generation and image analysis improve. However, single LLMs still lack the capability to conduct comprehensive academic medical research that is free of bias and hallucinations. Objective: This paper introduces PENTLLM (Pentagon Large Language Model), a novel multi-expert AI architecture that combines five specialized LLM experts with a central NEXUS synthesis step. It provides a comprehensive academic module for medical research, including statistical analysis, research proposal analysis, manuscript peer review, systematic reviews, and meta-analysis, alongside tools for CV writing, document processing, and coding, with self-learning through persistent memory. Methods: The PENTLLM architecture consists of five LLM experts (STEM, Language, Knowledge, Reasoning, Creative) that work in parallel; their responses are synthesized and then unified through a central NEXUS model. The PENTLLM NEXUS system comprises 98 Python modules, 145 API endpoints, and 69,637 lines of code. It integrates agentic AI, LangChain, LangGraph, metacognition, and hypermemory to enable workflow orchestration, and it is equipped with a clinical RAG module that grounds responses in uploaded clinical guidelines and policies. PENTLLM also includes comprehensive tools for office documents. All trials were conducted on a desktop workstation with an Intel Core Ultra 9 285K (24 cores), 48 GB RAM, and an NVIDIA RTX 5090 (32 GB VRAM) running Windows 11; Ollama with locally running 32B-parameter models was used. Results: PENTLLM demonstrated robust performance across academic medical research tasks, and its multi-model architecture exhibited superior outcomes on various academic tasks compared to single-model approaches.
In the peer review task, Pentagon mode successfully generated comprehensive qualitative reviewer commentary (431 words covering 8 review dimensions) where single-model approaches failed to produce any detailed output; overall peer-review scores, however, remained the same across both. Notably, Pentagon mode required only 38 seconds more than a single model. Similarly, in literature review tasks, Pentagon mode showed a higher quality score (0.74 vs 0.69) and lower hallucination risk (0.25 vs 0.30), with zero detected biases, compared to a single model. When tested on systematic reviews, Pentagon mode produced fewer statistical reporting errors (3 vs 7), showed lower hallucination risk (0.36 vs 0.42), and produced more comprehensive output (44,615 vs 37,057 characters; 61 vs 57 references) with comparable processing time (296 s vs 301 s). Its self-learning memory system yielded personalized responses based on user preferences and interaction history. We noted that Pentagon mode's parallel execution was constrained by single-GPU VRAM capacity, which resulted in sequential model loading. Conclusion: PENTLLM represents a significant advancement in AI-assisted medical research, providing a privacy-preserving platform spanning research proposal development and analysis through peer review and publication. The multi-model architecture is superior in performance, with little noticeable difference in output time compared to single models. Full paper to follow… Patent pending at the United States Patent and Trademark Office (USPTO).
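The five-expert fan-out with central synthesis described in the abstract can be sketched in miniature as follows. Only the five expert roles come from the abstract; every function name, the stub expert call, and the toy synthesis logic are illustrative assumptions (the actual PENTLLM implementation is not public, and a real deployment would replace `query_expert` with calls to locally hosted 32B models, e.g. via Ollama).

```python
from concurrent.futures import ThreadPoolExecutor

# The five specialized expert roles named in the abstract.
EXPERTS = ["STEM", "Language", "Knowledge", "Reasoning", "Creative"]

def query_expert(role: str, prompt: str) -> str:
    """Stand-in for a call to a locally hosted expert LLM.
    Here it just tags the prompt with the expert's role."""
    return f"[{role}] draft answer to: {prompt}"

def nexus_synthesize(responses: list[str]) -> str:
    """Toy central synthesis step: concatenates the expert drafts.
    The real NEXUS model would reconcile and unify them."""
    return "\n".join(responses)

def pentagon_query(prompt: str) -> str:
    # Fan the prompt out to all five experts in parallel, then
    # unify their responses through the central synthesis step.
    with ThreadPoolExecutor(max_workers=len(EXPERTS)) as pool:
        responses = list(pool.map(lambda role: query_expert(role, prompt), EXPERTS))
    return nexus_synthesize(responses)

if __name__ == "__main__":
    print(pentagon_query("Summarize the evidence on therapy X for condition Y."))
```

Note that with stub experts the threads return instantly; the abstract's observation about single-GPU VRAM forcing sequential model loading is exactly the case where this parallel fan-out degrades to serial execution in practice.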

Topics

Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare · Topic Modeling