This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Evaluating Generative AI as a Triage Tool in Aligned Yet Divergent Investment Decision-Making
Citations: 0
Authors: 2
Year: 2026
Abstract
This study explores whether generative artificial intelligence (AI) can exhibit decision-making behavior aligned with that of human experts. A total of 200 startup projects were assessed across four key dimensions, with each project evaluated in parallel by investors and by generative AI models. The AI models aligned with human evaluators in overall score levels and moderately predicted human ratings, yet differed substantially in their score distributions; contrasts between the top 20% and bottom 80% segments across all three models further revealed a distinctly two-tier alignment structure. Two indicators demonstrate the practical impact: human labor time decreased by 94–99.6%, and monetary cost per report fell by a factor of 350–550. The results show that AI captures general evaluative logic but misses expert-level nuance, exhibiting bounded, gradual, and stratified alignment with expert evaluators. Bounded alignment reflects AI's structural limits, gradual alignment describes dimension-specific convergence with human judgment, and stratified alignment captures tiered patterns of human–AI co-evaluation.
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,366 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,244 citations
"Why Should I Trust You?"
2016 · 14,255 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,122 citations