
Adaptive Robotics & Technology Lab


Texas A&M University College of Engineering

Publications

Small and rural towns’ perception of autonomous vehicles: insights from a survey in Texas

Muhammad Usman, Wei Li, Jiahe Bian, Andong Chen, Xinyue Ye, Xiao Li, Bahar Dadashova, Chanam Lee, Kiju Lee, Sivakumar Rathinam & Marcia Ory

Transportation Planning and Technology, 47:2, 200-225, DOI: 10.1080/03081060.2023.2259373

September 2023

People’s perceptions of Autonomous Vehicles (AVs) are critical to understanding the role of AVs in future transportation systems. Most previous work on AV perceptions is based on large cities or metropolitan areas. This study provides a unique perspective on the perceived impacts of AVs in small and rural communities through an online survey in Central Texas (n = 1153). Our questionnaires gathered basic socio-demographic characteristics and AV impact variables identified from the literature. We used summary statistics and ordered logistic regression models to reveal the perceived impacts of AVs. Residents of small and rural communities, particularly older adults (65+ years), were more enthusiastic about the development of AVs than the national average. Our findings reveal that being an employed, married male with a higher income increases the likelihood of accepting the impacts of AVs, suggesting further research to explore feasible approaches to utilizing AVs in small, rural communities.

Adaptive Centroidal Voronoi Tessellation With Agent Dropout and Reinsertion for Multi-Agent Non-Convex Area Coverage

Kangneoung Lee & Kiju Lee

IEEE Access

January 8, 2024


Voronoi diagrams are widely used for area partitioning and coverage control. Nevertheless, their utilization in non-convex domains often necessitates additional computational procedures, such as diffeomorphism application, geodesic distance calculations, or the integration of local markers. Extending these techniques across diverse non-convex domains proves challenging. This paper introduces the adaptive centroidal Voronoi tessellation (aCVT) algorithm, which combines iterative centroidal Voronoi tessellation (iCVT) with an innovative agent dropout and reinsertion strategy. This integration aims to enhance area coverage control in non-convex domains while maintaining adaptability across varied environments without the need for complex computational processes. The efficacy of this approach is validated through simulations involving non-convex domains with disjoint target areas, obstacles, and shape constraints for both homogeneous and heterogeneous agents. Additionally, the aCVT algorithm is extended for real-time coverage control scenarios. Performance metrics are employed to assess the distribution of partitioned Voronoi regions and the overall coverage of the target areas. Results demonstrate improved performance compared to methods that do not incorporate the agent dropout and reinsertion strategy.
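To make the iterative CVT step concrete, here is a minimal Lloyd-style sketch on a discretized non-convex domain (represented as a list of free grid cells). This is our own illustration, not the paper's aCVT algorithm: the empty-region branch is only a stand-in for the dropout-and-reinsertion strategy, and all names are assumptions.

```python
import random

def lloyd_cvt(free_cells, agents, iters=20):
    """Iterative centroidal Voronoi tessellation on a discretized domain.

    free_cells: list of (x, y) grid cells inside the (possibly non-convex) domain.
    agents: list of (x, y) generator positions, updated in place per iteration.
    """
    regions = [[] for _ in agents]
    for _ in range(iters):
        # Assign every free cell to its nearest agent (discrete Voronoi regions).
        regions = [[] for _ in agents]
        for cx, cy in free_cells:
            k = min(range(len(agents)),
                    key=lambda i: (agents[i][0] - cx) ** 2 + (agents[i][1] - cy) ** 2)
            regions[k].append((cx, cy))
        for i, cells in enumerate(regions):
            if cells:
                # Lloyd update: move the agent to the centroid of its region.
                agents[i] = (sum(c[0] for c in cells) / len(cells),
                             sum(c[1] for c in cells) / len(cells))
            else:
                # Crude stand-in for dropout/reinsertion: an agent whose region
                # is empty is reinserted at a random free cell.
                agents[i] = random.choice(free_cells)
    return agents, regions
```

Because only cells inside the domain are ever assigned, the centroids stay biased toward the free space even when the domain is L-shaped or has holes, which is the property the aCVT work builds on.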

Integrated system architecture with mixed-reality user interface for virtual-physical hybrid swarm simulations

C. Zheng, A. Jarecki, and K. Lee

Scientific Reports

September 7, 2023

https://doi.org/10.1038/s41598-023-40623-6

System architecture for virtual-physical hybrid swarm simulation

This paper introduces a hybrid robotic swarm system architecture that combines virtual and physical components and enables human–swarm interaction through mixed reality (MR) devices. The system comprises three main modules: (1) the virtual module, which simulates robotic agents, (2) the physical module, consisting of real robotic agents, and (3) the user interface (UI) module. To facilitate communication between the modules, the UI module connects with the virtual module using Photon Network and with the physical module through the Robot Operating System (ROS) bridge. Additionally, the virtual and physical modules communicate via the ROS bridge. The virtual and physical agents form a hybrid swarm by integrating these three modules. The human–swarm interface based on MR technology enables one or multiple human users to interact with the swarm in various ways. Users can create and assign tasks, monitor real-time swarm status and activities, or control and interact with specific robotic agents. To validate the system-level integration and embedded swarm functions, two experimental demonstrations were conducted: (a) two users playing planner and observer roles, assigning five tasks for the swarm to allocate the tasks autonomously and execute them, and (b) a single user interacting with the hybrid swarm consisting of two physical agents and 170 virtual agents by creating and assigning a task list and then controlling one of the physical robots to complete a target identification mission.
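The three-module, topic-based communication pattern described above can be sketched with a toy publish/subscribe bridge. This is a hypothetical stand-in for the actual Photon Network and ROS bridge links; the class, topic names, and message fields are all our own illustration.

```python
from collections import defaultdict

class Bridge:
    """Toy pub/sub bridge: modules publish status messages on named topics,
    and subscribed modules (e.g., the UI module) receive them via callbacks."""

    def __init__(self):
        self.subs = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subs[topic].append(callback)

    def publish(self, topic, msg):
        for cb in self.subs[topic]:
            cb(msg)

# The UI module subscribes once and receives status from both the virtual
# and physical modules over the same topic, mirroring the hybrid-swarm idea.
bridge = Bridge()
log = []
bridge.subscribe("swarm/status", log.append)
bridge.publish("swarm/status", {"agent": "v001", "kind": "virtual", "task": 3})
bridge.publish("swarm/status", {"agent": "p01", "kind": "physical", "task": 3})
```

The design point is that the UI module never distinguishes transport: virtual and physical agents look identical once their messages arrive on a common topic.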

ARMoR: Amphibious Robot for Mobility in Real-World Applications

M. Hammond and K. Lee

IEEE/ASME International Conference on Advanced Intelligent Mechatronics

June 2023

This paper presents a mobile robot for amphibious surface locomotion called ARMoR. The locomotion system of ARMoR consists of two wheel-and-leg transformable mechanisms and a customizable balancing tail. A spherical body chassis containing the electronic components joins the wheels and the tail. The combination of the chassis design and transformable wheels allows ARMoR to safely navigate various environments, including diverse terrains and water surfaces. The robot is controlled and operated using an embedded microprocessor interfacing with sensing, communication, and power modules, including a Global Positioning System (GPS) receiver, camera, Inertial Measurement Unit (IMU), wireless communication module, and batteries. ARMoR was tested for its locomotion capabilities on concrete, dirt, grass, rocky surfaces, low brush, stairs, and water. On concrete, dirt, and grass, ARMoR operated in the wheeled mode; on the other surfaces, the wheels transformed into the legged configuration, enabling the robot to traverse challenging surface conditions effectively. ARMoR successfully traversed all terrains, and its traversal speeds were measured.

https://art.engr.tamu.edu/wp-content/uploads/sites/170/2023/09/IEEEAIM2023_MHamoond_L.mp4

Computerized Block Games for Automated Cognitive Assessment: Development and Evaluation Study

X. Cheng, G. Gilmore, A. Lerner, and K. Lee

JMIR Serious Games

May 16, 2023

DOI: 10.2196/40931


Background: Cognitive assessment using tangible objects can measure fine motor and hand-eye coordination skills along with other cognitive domains. Administering such tests is often expensive, labor-intensive, and error prone owing to manual recording and potential subjectivity. Automating the administration and scoring processes can address these difficulties while reducing time and cost. e-Cube is a new vision-based, computerized cognitive assessment tool that integrates computational measures of play complexity and item generators to enable automated and adaptive testing. The e-Cube games use a set of cubes, and the system tracks the movements and locations of these cubes as manipulated by the player.

Objective: The primary objectives of the study were to validate the play complexity measures that form the basis of developing the adaptive assessment system and evaluate the preliminary utility and usability of the e-Cube system as an automated cognitive assessment tool.

Methods: This study used 6 e-Cube games, namely, Assembly, Shape-Matching, Sequence-Memory, Spatial-Memory, Path-Tracking, and Maze, each targeting different cognitive domains. In total, 2 versions of the games, the fixed version with predetermined sets of items and the adaptive version using the autonomous item generators, were prepared for comparative evaluation. Enrolled participants (N=80; aged 18-60 years) were divided into 2 groups: 48% (38/80) of the participants in the fixed group and 52% (42/80) in the adaptive group. Each was administered the 6 e-Cube games; 3 subtests of the Wechsler Adult Intelligence Scale, Fourth Edition (WAIS-IV; Block Design, Digit Span, and Matrix Reasoning); and the System Usability Scale (SUS). Statistical analyses at the 95% significance level were applied.

Results: The play complexity values were correlated with the performance indicators (ie, correctness and completion time). The adaptive e-Cube games were correlated with the WAIS-IV subtests (r=0.49, 95% CI 0.21-0.70; P<.001 for Assembly and Block Design; r=0.34, 95% CI 0.03-0.59; P=.03 for Shape-Matching and Matrix Reasoning; r=0.51, 95% CI 0.24-0.72; P<.001 for Spatial-Memory and Digit Span; r=0.45, 95% CI 0.16-0.67; P=.003 for Path-Tracking and Block Design; and r=0.45, 95% CI 0.16-0.67; P=.003 for Path-Tracking and Matrix Reasoning). The fixed version showed weaker correlations with the WAIS-IV subtests. The e-Cube system showed a low false detection rate (6/5990, 0.1%) and was determined to be usable, with an average SUS score of 86.01 (SD 8.75).

Conclusions: The correlations between the play complexity values and performance indicators supported the validity of the play complexity measures. Correlations between the adaptive e-Cube games and the WAIS-IV subtests demonstrated the potential utility of the e-Cube games for cognitive assessment, but a further validation study is needed to confirm this. The low false detection rate and high SUS scores indicated that e-Cube is technically reliable and usable.
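Confidence intervals of the kind reported above (e.g., r=0.49, 95% CI 0.21-0.70) are conventionally obtained from the Fisher z-transform of the Pearson correlation. As a generic illustration of that standard computation (not the study's own analysis code), a minimal version is:

```python
import math

def pearson_ci(x, y, z_crit=1.96):
    """Pearson r with an approximate 95% CI via the Fisher z-transform.

    z_crit = 1.96 corresponds to a two-sided 95% interval; requires n > 3.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / math.sqrt(sxx * syy)
    z = math.atanh(r)              # Fisher z-transform
    se = 1.0 / math.sqrt(n - 3)    # approximate standard error of z
    lo = math.tanh(z - z_crit * se)
    hi = math.tanh(z + z_crit * se)
    return r, lo, hi
```

Note the asymmetry of the resulting interval around r, which matches the asymmetric CIs quoted in the Results paragraph.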

Consensus decision-making in artificial swarms via entropy-based local negotiation and preference updating

Chuanqi Zheng and Kiju Lee

Swarm Intelligence

May 15, 2023

https://doi.org/10.1007/s11721-023-00226-3

This paper presents an entropy-based consensus algorithm for a swarm of artificial agents with limited sensing, communication, and processing capabilities. Each agent is modeled as a probabilistic finite state machine with a preference for a finite number of options defined as a probability distribution. The most preferred option, called the exhibited decision, determines the agent’s state. The state transition is governed by internally updating this preference based on the states of neighboring agents and their entropy-based levels of certainty. Swarm agents continuously update their preferences by exchanging the exhibited decisions and the certainty values among the locally connected neighbors, leading to consensus towards an agreed-upon decision. The presented method is evaluated for its scalability over the swarm size and the number of options and its reliability under different conditions. Adopting classical best-of-N target selection scenarios, the algorithm is compared with three existing methods: the majority rule, the frequency-based method, and the k-unanimity method. The evaluation results show that the entropy-based method is reliable and efficient in these consensus problems.
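The two ingredients described above, an entropy-based certainty and a neighbor-driven preference update, can be sketched as follows. The update rule here is our own illustrative choice (pull the distribution toward each neighbor's exhibited decision in proportion to that neighbor's certainty), not the paper's exact rule:

```python
import math

def entropy_certainty(p):
    """Certainty as 1 minus the normalized Shannon entropy of preference p."""
    h = -sum(q * math.log(q) for q in p if q > 0)
    return 1.0 - h / math.log(len(p))

def update(p, neighbor_info, eta=0.5):
    """One preference update from neighbors' (exhibited_decision, certainty)
    pairs: nudge the distribution toward each neighbor's decision, weighted
    by that neighbor's certainty, then renormalize."""
    p = list(p)
    m = max(len(neighbor_info), 1)
    for decision, certainty in neighbor_info:
        for k in range(len(p)):
            target = 1.0 if k == decision else 0.0
            p[k] += eta * certainty * (target - p[k]) / m
    s = sum(p)
    return [q / s for q in p]
```

An agent's exhibited decision is simply the argmax of its preference; a fully undecided agent (uniform preference) has certainty 0 and therefore exerts no pull on its neighbors, which is the mechanism that lets confident agents dominate local negotiation.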

Adaptive Mixed-Reality Sensorimotor Interface for Human-Swarm Teaming: Persons with Limb Loss Case Study and Field Experiments

C. Zhao, C. Zheng, L. Roldan, T. Shkurti, A. Nahari, W. Newman, D. Tyler, K. Lee, and M. Fu

Field Robotics

February 2023

α-SWAT

This paper presents the design, evaluation, and field experiment of the innovative Adaptable Human-Swarm Teaming (α-SWAT) interface developed to support military field operations. Human-swarm teaming requires collaboration between a team of humans and a team of robotic agents for strategic decision-making and task performance. α-SWAT allows multiple human team members with different roles, physical capabilities, or preferences to interact with the swarm via a configurable, multimodal user interface (UI). The system has an embedded task allocation algorithm for the rapid assignment of tasks created by the mission planner to the swarm. The multimodal UI supports swarm visualization via a mixed reality display or a conventional 2D display, human gesture inputs via a camera or an electromyography device, tactile feedback via a vibration motor or implanted peripheral nerve interface, and audio feedback. In particular, the UI system interfacing with the implanted electrodes through a neural interface enables gesture detection and tactile feedback for individuals with upper limb amputation to participate in human-swarm teaming. The multimodality of α-SWAT’s UI adapts to the needs of three different roles of the human team members: Swarm Planner, Swarm Tactician Rear, and Swarm Tactician Forward. A case study evaluated the functionality and usability of α-SWAT to enable a participant with limb loss and an implanted neural interface to assign three tasks to a simulated swarm of 150 robotic agents. α-SWAT was also used to visualize live telemetry from 40 veridical robotic agents for multiple simultaneous human participants at a field experiment.
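The abstract mentions an embedded task allocation algorithm but does not specify it. Purely as an illustration of the rapid-assignment idea, here is a simple greedy allocator (all names and the nearest-agent heuristic are our assumptions, not α-SWAT's actual method):

```python
import math

def greedy_allocate(agents, tasks):
    """Assign each task to the nearest still-unassigned agent.

    agents: list of (name, (x, y)) pairs; tasks: list of (name, (x, y)) pairs.
    Assumes len(tasks) <= len(agents). Returns {task_name: agent_name}.
    """
    free = dict(agents)  # agents not yet assigned
    plan = {}
    for t_name, (tx, ty) in tasks:
        best = min(free, key=lambda n: math.hypot(free[n][0] - tx, free[n][1] - ty))
        plan[t_name] = best
        del free[best]
    return plan
```

A greedy pass like this runs in milliseconds even for hundreds of agents, which is the kind of responsiveness a mission planner assigning tasks through the UI would need.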

α-WaLTR: Adaptive Wheel-and-Leg Transformable Robot for Versatile Multiterrain Locomotion

Chuanqi Zheng, Siddharth Sane, Kangneoung Lee, Vishnu Kalyanram, and Kiju Lee

IEEE Transactions on Robotics

December 2022 (Early Access)

DOI: 10.1109/TRO.2022.3226114
https://art.engr.tamu.edu/wp-content/uploads/sites/170/2022/12/IEEETRO2022_Low.mp4

Adaptability is a fundamental yet challenging requirement for mobile robot locomotion. This article presents α-WaLTR, a new adaptive wheel-and-leg transformable robot for versatile multiterrain mobility. The robot has four passively transformable wheels, where each wheel consists of a central gear and multiple leg segments with embedded spring suspension for vibration reduction. These wheels enable the robot to traverse various terrains, obstacles, and stairs while retaining the simplicity of the primary control and operation principles of conventional wheeled robots. The chassis dimensions and the center-of-gravity location were determined by a multiobjective design optimization process aimed at minimizing the weight and maximizing the robot’s pitch angle for obstacle climbing. Unity-based simulations guided the selection of the design variables associated with the transformable wheels. Following the design process, α-WaLTR with an embedded sensing and control system was developed. Experiments showed that the spring suspension on the wheels effectively reduced vibrations when operated in the legged mode and verified that the robot’s versatile locomotion capabilities were highly consistent with the simulations. The system-level integration with an embedded control system was demonstrated via autonomous stair detection, navigation, and climbing capabilities.

Vision-based Ascending Staircase Detection with Interpretable Classification Model for Stair Climbing Robots

Kangneoung Lee, Vishnu Kalyanram, Chuanqi Zheng, Siddharth Sane, and Kiju Lee

IEEE International Conference on Robotics and Automation (ICRA), 2022

May 25, 2022

Robots capable of traversing flights of stairs play an important role in both indoor and outdoor applications. Accurately identifying a staircase is one of the vital technical functions in these robots. This paper presents a vision-based ascending stair detection algorithm using RGB-Depth (RGB-D) data based on an interpretable model. The method follows four steps: 1) pre-processing of RGB images for line extraction by applying dilation and Canny filters followed by the probabilistic Hough line transform, 2) defining the regions of interest (ROIs) via K-means clustering, 3) training the initial model based on a support vector machine (SVM) using three extracted features (i.e., gradient, continuity factor, and deviation cost), and 4) building an interpretable model for stair classification by determining the decision boundary conditions. The developed method was evaluated on our dataset, and the results showed 85% sensitivity and 94% specificity. When the same model was tested on a different test set, the sensitivity and specificity slightly decreased to 80% and 90%, respectively. By shifting the boundary conditions using only a small subset of the new dataset, without rebuilding the model, performance improved to 90% sensitivity and 96% specificity. The presented method is also compared with existing SVM- and neural-network-based methods.
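The three features in step 3 are named but not defined in the abstract. As a rough sketch of how such features could be computed from detected line segments (our own hypothetical definitions, not the paper's), one might measure a mean line gradient, a continuity factor as the fraction of similarly sloped lines, and a deviation cost as the variance of vertical spacing between lines:

```python
import math

def line_features(segments):
    """Hypothetical feature extraction from Hough line segments.

    segments: list of (x1, y1, x2, y2). Returns (mean_gradient,
    continuity_factor, deviation_cost); needs at least two segments.
    """
    slopes = [math.atan2(y2 - y1, x2 - x1) for x1, y1, x2, y2 in segments]
    mean_grad = sum(slopes) / len(slopes)
    # Fraction of lines whose slope stays close to the mean (stair treads
    # image as a stack of nearly parallel lines).
    continuity = sum(1 for s in slopes if abs(s - mean_grad) < 0.1) / len(slopes)
    # Variance of the vertical gaps between line midpoints: regular stair
    # spacing gives a low deviation cost.
    mids = sorted((y1 + y2) / 2 for x1, y1, x2, y2 in segments)
    gaps = [b - a for a, b in zip(mids, mids[1:])]
    mg = sum(gaps) / len(gaps)
    deviation = sum((g - mg) ** 2 for g in gaps) / len(gaps)
    return mean_grad, continuity, deviation
```

A low-dimensional feature vector like this is what makes the downstream classifier interpretable: the decision boundary conditions can be read and shifted directly, as the abstract's boundary-shifting result exploits.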

ICRA 2022 poster: ICRA2022_Poster_FINAL

GA-SVM based Facial Emotion Recognition using Facial Geometric Features

X. Liu, X. Cheng, and K. Lee

IEEE Sensors Journal

DOI: 10.1109/JSEN.2020.3028075

This paper presents a facial emotion recognition technique using two newly defined geometric features, landmark curvature and vectorized landmark. These features are extracted from facial landmarks associated with individual components of facial muscle movements. The presented method combines support vector machine (SVM) based classification with a genetic algorithm (GA) for the multi-attribute optimization problem of feature and parameter selection. Experimental evaluations were conducted on the extended Cohn-Kanade (CK+) dataset and the Multimedia Understanding Group (MUG) dataset. For 8-class CK+, 7-class CK+, and 7-class MUG, the validation accuracies were 93.57%, 95.58%, and 96.29%, and the test accuracies were 95.85%, 97.59%, and 96.56%, respectively. Overall precision, recall, and F1-score were about 0.97, 0.95, and 0.96. For further evaluation, the presented technique was compared with a convolutional neural network (CNN), one of the widely adopted methods for facial emotion recognition. The presented method showed slightly higher test accuracy than the CNN for 8-class CK+ (95.85% (SVM) vs. 95.43% (CNN)) and 7-class CK+ (97.59% vs. 97.34%), while the CNN outperformed on the 7-class MUG dataset (96.56% vs. 99.62%). Compared to CNN-based approaches, this method employs less complicated models and thus shows potential for real-time machine vision applications in automated systems.
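The abstract does not give the formula behind the landmark curvature feature. One standard way to compute a curvature value at a landmark from its two neighbors is the Menger curvature (the reciprocal of the circumradius of the three points); this is offered only as an illustration, not as the paper's definition:

```python
import math

def landmark_curvature(p_prev, p, p_next):
    """Menger curvature at point p given its two neighboring landmarks.

    Equals 4*Area / (a*b*c) for the triangle through the three points,
    i.e., 1/R of the circle through them; 0 for collinear points.
    """
    ax, ay = p_prev
    bx, by = p
    cx, cy = p_next
    area2 = abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay))  # 2 * area
    a = math.dist(p, p_next)
    b = math.dist(p_prev, p_next)
    c = math.dist(p_prev, p)
    if a * b * c == 0:
        return 0.0
    return 2.0 * area2 / (a * b * c)  # 4*Area/(abc)
```

Applied along a landmark contour (e.g., an eyebrow or lip outline), such values quantify local bending, which is the kind of geometric signal the paper's curvature feature captures.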

Pre-print author copy (peer-reviewed and accepted): XLiuIEEESensors2020



© 2016–2026 Adaptive Robotics & Technology Lab