How do we detect policy failures during a rollout without a priori knowledge of potential failures?
FAIL-Detect raises an alarm (red border) when the policy fails and stays silent when the policy succeeds.
Recent years have witnessed impressive robotic manipulation systems driven by advances in imitation learning and generative modeling, such as diffusion- and flow-based approaches. As robot policy performance increases, so does the complexity and time horizon of achievable tasks, inducing unexpected and diverse failure modes that are difficult to predict a priori. To enable trustworthy policy deployment in safety-critical human environments, reliable runtime failure detection during policy inference becomes essential. However, most existing failure detection approaches rely on prior knowledge of failure modes and require failure data during training, which limits their practicality and scalability. In response to these limitations, we present FAIL-Detect, a modular two-stage approach for failure detection in imitation learning-based robotic manipulation. To identify failures given only successful training data, we frame the problem as sequential out-of-distribution (OOD) detection. We first distill policy inputs and outputs into scalar signals that correlate with policy failures and capture epistemic uncertainty. FAIL-Detect then applies conformal prediction (CP), a distribution-free framework for uncertainty quantification with statistical guarantees, to these signals to obtain detection thresholds. Empirically, we thoroughly investigate both learned and post-hoc scalar signal candidates on diverse robotic manipulation tasks. Our experiments show that learned signals are the most consistently effective, particularly when using our novel flow-based density estimator. Furthermore, our method detects failures more accurately and faster than state-of-the-art (SOTA) failure detection baselines. These results highlight the potential of FAIL-Detect to enhance the safety and reliability of imitation learning-based robotic systems as they progress toward real-world deployment.
We propose a two-stage approach to failure detection: (i) distill policy inputs and outputs into scalar signals that correlate with policy failures and capture epistemic uncertainty, and (ii) apply conformal prediction to these signals to obtain detection thresholds with statistical guarantees. A minimal sketch of stage (ii) follows.
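To make the second stage concrete, here is a minimal sketch of split-conformal calibration on scores from successful rollouts. The function names (`calibrate_cp_band`, `raise_alarm`) and the per-timestep band construction are our illustrative assumptions, not the exact implementation from the paper.

```python
import numpy as np

def calibrate_cp_band(calib_scores: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    """Per-timestep conformal threshold from successful-rollout scores.

    calib_scores: (n_rollouts, T) array of scalar failure-detection scores
    computed on held-out *successful* calibration rollouts.
    Returns a (T,) array of thresholds; exceeding the threshold at time t
    flags the rollout as out-of-distribution with roughly (1 - alpha)
    per-timestep coverage (finite-sample-corrected quantile).
    """
    n = calib_scores.shape[0]
    # Standard split-CP quantile level with the (n + 1) correction.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(calib_scores, q_level, axis=0)

def raise_alarm(score_t: float, band: np.ndarray, t: int) -> bool:
    """Stage-two runtime check: alarm when the score leaves the CP band."""
    return score_t > band[t]
```

In deployment, stage (i) produces `score_t` from the current observation and action, and the rollout is flagged as a failure at the first timestep where `raise_alarm` fires.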
Table 1: Overview of score methods evaluated in this work.
We focus on the following research questions:
- Which scalar signals, learned or post-hoc, best indicate policy failure without access to any failure data?
- How accurately and how quickly can FAIL-Detect flag failures compared to SOTA baselines?
FAIL-Detect achieves high failure detection accuracy with fast detection times, outperforming SOTA baselines (details in the paper).
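For reference, here is one plausible way to compute these two quantities from per-rollout alarm traces; the balanced-accuracy averaging and the definition of detection time as the first alarm step are our assumptions, not necessarily the paper's exact protocol.

```python
import numpy as np

def evaluate_detector(alarms_fail, alarms_succ):
    """alarms_fail / alarms_succ: lists of boolean (T,) arrays, one per
    rollout, True where the detector raised an alarm at that timestep.

    Accuracy balances the true-positive rate on failed rollouts against
    the true-negative rate on successful ones; detection time is the
    first alarm step on failed rollouts (counting only detected ones)."""
    tpr = np.mean([a.any() for a in alarms_fail])
    tnr = np.mean([not a.any() for a in alarms_succ])
    det_times = [int(np.argmax(a)) for a in alarms_fail if a.any()]
    avg_time = float(np.mean(det_times)) if det_times else float("nan")
    return {"accuracy": 0.5 * (tpr + tnr), "detection_time": avg_time}
```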
We found that learned scores (i.e., scores produced by networks trained to minimize specific objectives) are more effective than post-hoc scores. In particular, our novel flow-based density score, logpZO, is the most successful and robust across tasks (see the sketch below).
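As an illustration of a learned score, below is a hedged sketch of a flow-based density estimator in the spirit of logpZO: a flow-matching model is trained on embeddings from successful rollouts, and at runtime the embedding is pushed back to the Gaussian prior, whose (negated) log-density serves as the OOD score. The network size, the Euler integration, and scoring with the prior log-density alone are simplifying assumptions on our part.

```python
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """Small MLP v_theta(x, t) trained with flow matching on success data."""
    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x, t], dim=-1))

def flow_matching_loss(v: VelocityField, x1: torch.Tensor) -> torch.Tensor:
    """Conditional flow matching: straight paths from noise x0 to data x1."""
    x0 = torch.randn_like(x1)
    t = torch.rand(x1.shape[0], 1)
    xt = (1 - t) * x0 + t * x1          # linear interpolation path
    target = x1 - x0                    # constant velocity along the path
    return ((v(xt, t) - target) ** 2).mean()

@torch.no_grad()
def ood_score(v: VelocityField, x: torch.Tensor, steps: int = 20) -> torch.Tensor:
    """Push the embedding back to the prior with Euler steps (t: 1 -> 0)
    and return the negative Gaussian log-density of the terminal point.
    Higher score = less likely under success data = more failure-like."""
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((x.shape[0], 1), 1.0 - i * dt)
        x = x - dt * v(x, t)
    logp = -0.5 * (x ** 2).sum(dim=-1)  # up to an additive constant
    return -logp
```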
Figure 3 (Robomimic - Square): Qualitative results of failure detection scores overlaid with CP bands.
FAIL-Detect's alarms correlate strongly with observable failure indications in the environment.
Figure 4: Physical interpretation of logpZO, the most successful and robust learned score method.
@article{xu2025can,
title={Can We Detect Failures Without Failure Data? Uncertainty-Aware Runtime Failure Detection for Imitation Learning Policies},
author={Xu, Chen and Nguyen, Tony Khuong and Dixon, Emma and Rodriguez, Christopher and Miller, Patrick and Lee, Robert and Shah, Paarth and Ambrus, Rares and Nishimura, Haruki and Itkina, Masha},
journal={arXiv preprint arXiv:2503.08558},
year={2025}
}