Ophthalmic AI Disease‑Detection & Explanation Framework

Apr 17, 2025 · 1 min read

Visual assets will be released after the framework reaches public beta.

Unified segmentation, classification, and language generation deliver explainable eye‑disease reports for clinicians and patients.

Highlights

  • NN-MOBILENET with uncertainty maps for micro‑lesion detection (first sketch below)
  • Seven binary ResNeXt classifiers fused for multi‑label output (second sketch below)
  • LLaVA‑Med‑13B generates bilingual lay summaries (third sketch below)
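
The post does not spell out how the uncertainty maps are produced. The sketch below assumes Monte Carlo dropout over the segmentation backbone, with `model` standing in for NN-MOBILENET and the sample count chosen arbitrarily:

```python
import torch

def mc_dropout_uncertainty(model: torch.nn.Module, image: torch.Tensor,
                           n_samples: int = 20):
    """Mean lesion-probability map plus a per-pixel predictive-variance map."""
    model.train()  # keep dropout layers active at inference time
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(model(image))
                             for _ in range(n_samples)])  # (S, B, 1, H, W)
    return probs.mean(dim=0), probs.var(dim=0)  # prediction, uncertainty map
```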
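How the seven binary heads are fused is likewise unspecified. This sketch assumes independent sigmoid scores thresholded at 0.5, with torchvision's `resnext50_32x4d` standing in for whichever ResNeXt variant the framework actually uses:

```python
import torch
import torchvision

def make_binary_resnext() -> torch.nn.Module:
    # One binary classifier per disease; resnext50_32x4d is an assumption,
    # as the post only says "ResNeXt".
    model = torchvision.models.resnext50_32x4d(weights="IMAGENET1K_V1")
    model.fc = torch.nn.Linear(model.fc.in_features, 1)
    return model

@torch.no_grad()
def fuse_binary_heads(models, image: torch.Tensor, threshold: float = 0.5):
    """Fuse seven independent binary scores into one multi-label prediction."""
    scores = torch.cat([torch.sigmoid(m(image)) for m in models], dim=1)  # (B, 7)
    return scores, scores > threshold  # per-disease probability and label mask
```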
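The real LLaVA-Med prompt is not published, and the post does not name the second language; the helper below is purely illustrative prompt assembly, with Korean assumed for the bilingual output and a 0.5 cut-off carried over from the fusion sketch:

```python
def build_summary_prompt(scores: dict[str, float], threshold: float = 0.5) -> str:
    # Illustrative only: feeds the classifier scores into a lay-summary request.
    detected = [d for d, p in scores.items() if p >= threshold] or ["no disease"]
    table = ", ".join(f"{d}: {p:.2f}" for d, p in scores.items())
    return (
        "You are explaining a fundus exam to a patient. "
        f"Classifier scores -- {table}. Detected: {', '.join(detected)}. "
        "Write a short, plain-language summary in English, then the same "
        "summary in Korean."
    )
```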

Current Milestones

  • 100k IRB‑approved fundus images curated
  • Performance metrics under internal review

Tech Stack

PyTorch 2.2, mmsegmentation, LoRA‑finetuned LLaVA‑Med, FastAPI + React dashboard
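
A minimal sketch of how the FastAPI layer might hand results to the React dashboard. The `/analyze` route, the payload shape, and the `run_pipeline` stub are all hypothetical, not the project's actual API:

```python
from fastapi import FastAPI, File, UploadFile

app = FastAPI(title="Ophthalmic AI API")

def run_pipeline(image_bytes: bytes) -> dict:
    # Hypothetical stand-in for the real chain:
    # segmentation -> seven-way classification -> LLaVA-Med summary.
    return {"findings": {}, "lay_summary": ""}

@app.post("/analyze")
async def analyze(fundus: UploadFile = File(...)):
    # The React dashboard would POST a fundus photo and render the report.
    report = run_pipeline(await fundus.read())
    return {"filename": fundus.filename, **report}
```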

Next Steps

  1. Multi‑centre clinical validation
  2. ONNX/TensorRT edge deployment
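
The edge-deployment step would start with something like the export below; the input resolution and opset version are assumptions, and TensorRT would then consume the resulting .onnx file:

```python
import torch

def export_onnx(model: torch.nn.Module, path: str = "classifier.onnx") -> None:
    # Export one classifier head to ONNX for downstream TensorRT conversion.
    model.eval()
    dummy = torch.randn(1, 3, 512, 512)  # assumed fundus input resolution
    torch.onnx.export(
        model, dummy, path,
        input_names=["fundus"], output_names=["logits"],
        dynamic_axes={"fundus": {0: "batch"}, "logits": {0: "batch"}},
        opset_version=17,
    )
```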
Authors

Dongkun Lee
Ph.D. AI Researcher | XR Simulation | Explainable AI | Anomaly Detection
I am an AI researcher with a Ph.D. in Computer Science from KAIST, specializing in Generative AI for XR simulations and anomaly detection in safety-critical systems.
My work focuses on Explainable AI (XAI) to enhance transparency and reliability across smart infrastructure, security, and education.
By building multimodal learning approaches and advanced simulation environments, I aim to improve operational safety, immersive training, and scalable content creation.