Robots capable of traversing flights of stairs play an important role in both indoor and outdoor applications, and accurately identifying a staircase is one of the vital technical functions of such robots. This paper presents a vision-based ascending-stair detection algorithm that uses RGB-Depth (RGB-D) data and is built on an interpretable model. The method consists of four steps: 1) pre-processing the RGB images for line extraction by applying dilation and the Canny edge detector, followed by the probabilistic Hough line transform; 2) defining regions of interest (ROIs) via K-means clustering; 3) training an initial support vector machine (SVM) model on three extracted features (gradient, continuity factor, and deviation cost); and 4) building an interpretable model for stair classification by determining the decision-boundary conditions. The developed method was evaluated on our dataset and achieved 85% sensitivity and 94% specificity. When the same model was tested on a different test set, sensitivity and specificity decreased slightly to 80% and 90%, respectively. By shifting the boundary conditions using only a small subset of the new dataset, without rebuilding the model, performance improved to 90% sensitivity and 96% specificity. The presented method is also compared with existing SVM- and neural-network-based methods.
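The four-step pipeline can be sketched roughly as follows. This is a minimal illustration using OpenCV and scikit-learn, not the paper's implementation: the pre-processing parameters, the number of K-means clusters, and the feature formulas standing in for gradient, continuity factor, and deviation cost are all assumptions made for illustration.

```python
# Minimal sketch of the four-step pipeline, assuming OpenCV and scikit-learn.
# All parameter values and the feature formulas below are illustrative
# assumptions, not the paper's exact implementation.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC


def extract_line_segments(rgb_image):
    """Step 1: dilation + Canny edge detection + probabilistic Hough transform."""
    gray = cv2.cvtColor(rgb_image, cv2.COLOR_BGR2GRAY)
    dilated = cv2.dilate(gray, np.ones((3, 3), np.uint8), iterations=1)
    edges = cv2.Canny(dilated, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                               minLineLength=40, maxLineGap=10)
    # Each segment is (x1, y1, x2, y2).
    return segments.reshape(-1, 4) if segments is not None else np.empty((0, 4), int)


def cluster_rois(segments, n_clusters=3):
    """Step 2: group line segments into regions of interest with K-means
    on the segment midpoints (clustering criterion assumed here)."""
    midpoints = np.column_stack([(segments[:, 0] + segments[:, 2]) / 2,
                                 (segments[:, 1] + segments[:, 3]) / 2])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(midpoints)
    return [segments[labels == k] for k in range(n_clusters)]


def roi_features(roi_segments, depth_image):
    """Step 3: three features per ROI. These formulas are stand-ins for the
    paper's gradient, continuity factor, and deviation cost."""
    xs = ((roi_segments[:, 0] + roi_segments[:, 2]) // 2).astype(int)
    ys = ((roi_segments[:, 1] + roi_segments[:, 3]) // 2).astype(int)
    depths = depth_image[ys, xs].astype(float)
    # Gradient: depth change along the vertical image direction.
    gradient = np.polyfit(ys, depths, 1)[0] if len(ys) > 1 else 0.0
    # Continuity factor: regularity of vertical spacing between edges.
    gaps = np.diff(np.sort(ys))
    continuity = gaps.std() / (gaps.mean() + 1e-6) if len(gaps) > 0 else 0.0
    # Deviation cost: spread of segment orientations within the ROI.
    angles = np.arctan2(roi_segments[:, 3] - roi_segments[:, 1],
                        roi_segments[:, 2] - roi_segments[:, 0])
    deviation = float(np.std(angles))
    return [gradient, continuity, deviation]


def train_initial_model(X, y):
    """Steps 3-4: fit a linear SVM on labelled ROI features (X: n_samples x 3,
    y: 1 = stair, 0 = non-stair); its weights and intercept give the
    decision-boundary conditions that can later be read off or shifted."""
    clf = SVC(kernel="linear")
    clf.fit(X, y)
    return clf.coef_[0], clf.intercept_[0]
```

With a linear decision boundary over only three features, the learned hyperplane translates directly into threshold-style conditions, which is consistent with the abstract's point that the boundary conditions can be shifted on a small subset of new data without retraining the model.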
See the ICRA 2022 poster: ICRA2022_Poster_FINAL