Explainable Road Hazard Detection via Pixel-wise Uncertainty Analysis
Dec 11, 2024
·
1 min read

This project introduces an explainable anomaly detection framework for road environments, focusing on the safety of autonomous driving systems. By leveraging pixel-wise logit variance and uncertainty estimation, the system effectively identifies road hazards such as debris, potholes, and unexpected obstacles.
Key Contributions:
- Uncertainty-aware semantic segmentation to highlight abnormal regions in road scenes.
- Iterative background refinement to enhance the precision of hazard boundaries.
- Practical deployment on dashcam video datasets for real-world evaluation.
- A novel metric for assessing pixel-level risk in unseen environments.
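The core idea of the pixel-wise logit-variance approach can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the Monte Carlo dropout setup, and the threshold value are all assumptions for demonstration.

```python
# Minimal sketch: score each pixel by the variance of its logits across
# T stochastic forward passes (e.g. MC dropout), then threshold the
# normalised score to obtain a hazard mask. Names are illustrative.
import numpy as np

def anomaly_score_from_logits(logits, threshold=0.5):
    """logits: array of shape (T, H, W, C) from T stochastic passes.
    Returns a per-pixel uncertainty map and a binary hazard mask."""
    # Variance over the T passes, averaged across classes: high variance
    # marks pixels the model is unsure about (potential road hazards).
    variance = logits.var(axis=0).mean(axis=-1)            # (H, W)
    # Normalise to [0, 1] so a single threshold is meaningful.
    score = (variance - variance.min()) / (np.ptp(variance) + 1e-8)
    mask = score > threshold                               # binary hazard mask
    return score, mask

# Toy example: 8 passes over a 4x4 image with 3 classes; one unstable pixel.
rng = np.random.default_rng(0)
logits = np.zeros((8, 4, 4, 3))
logits[:, 2, 2, :] = rng.normal(0.0, 5.0, size=(8, 3))    # noisy pixel
score, mask = anomaly_score_from_logits(logits)
print(mask[2, 2], mask[0, 0])  # the unstable pixel is flagged; stable ones are not
```

The iterative background highlighting step in the paper then refines this mask by progressively separating confident background pixels from the uncertain regions; that refinement loop is omitted here.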
Publications:
- **Road anomaly segmentation based on pixel-wise logit variance with iterative background highlighting.**
IEEE International Conference on Robotics and Automation (ICRA), 2023.

Authors
Ph.D. AI Researcher | XR Simulation | Explainable AI | Anomaly Detection
I am an AI researcher with a Ph.D. in Computer Science from KAIST, specializing in generative AI for XR simulations and anomaly detection in safety-critical systems.
My work focuses on Explainable AI (XAI) to enhance transparency and reliability across smart infrastructure, security, and education.
By building multimodal learning approaches and advanced simulation environments, I aim to improve operational safety, immersive training, and scalable content creation.