4 Pith papers cite this work.
Citing papers explorer
- Text-Attributed Knowledge Graph Enrichment with Large Language Models for Medical Concept Representation
  MedCo builds and text-enriches a medical knowledge graph using LLMs and EHR data, then fuses text and graph signals via joint LoRA-tuned LLaMA and heterogeneous GNN training to improve EHR clinical predictions on MIMIC datasets.
- EviCare: Enhancing Diagnosis Prediction with Deep Model-Guided Evidence for In-Context Reasoning
  EviCare uses deep model-guided evidence to enhance LLM in-context reasoning for accurate diagnosis prediction from EHRs, outperforming baselines by 20.65% on average and by 30.97% for novel diagnoses on MIMIC datasets.
- Efficient and Effective Internal Memory Retrieval for LLM-Based Healthcare Prediction
  The K2K framework enables internal memory retrieval in LLMs for healthcare outcome prediction, achieving state-of-the-art results on four benchmarks.
- Scientific Knowledge-driven Decoding Constraints Improving the Reliability of LLMs
  SciDC turns flexible scientific knowledge into standardized decoding constraints via LLMs, delivering a 12% average accuracy gain over vanilla generation on tasks including formulation design, tumor diagnosis, and retrosynthesis.