Research

I am a member of the University of Virginia Dependable Systems and Analytics Group (UVA-DSA). Our research focuses on the design and validation of Resilient Cyber-Physical Systems (CPS) with applications in surgical robotics, medical devices, and autonomous systems.

My research focuses on improving the safety of robot-assisted surgery (RAS) by developing generalizable, objective, and transparent methods for the analysis of surgical data. I am developing a formal framework for generalized and hierarchical modeling of surgical activities and have created the COntext and Motion Primitive Aggregate Surgical Set (COMPASS). This dataset supports fine-grained activity recognition and runtime context inference, which can assist surgeons during training and live procedures through error detection, skill assessment, and autonomy.

Projects


COMPASS: a formal framework and aggregate dataset for generalized surgical procedure modeling

This project proposed a formal framework for the modeling and segmentation of minimally invasive surgical tasks using a unified set of motion primitives (MPs) to enable more objective labeling and the aggregation of different datasets. We created the COntext and Motion Primitive Aggregate Surgical Set (COMPASS), which includes six dry-lab surgical tasks from three publicly available datasets (JIGSAWS, DESK, and ROSMA) with kinematic and video data as well as context and MP labels.
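As an illustrative sketch (not the COMPASS tooling itself), segmenting a task into motion primitives can be viewed as collapsing per-frame MP labels into contiguous segments. The function name and the example labels below are hypothetical:

```python
from itertools import groupby

def segment_labels(frame_labels):
    """Collapse per-frame motion primitive labels into
    (start_frame, end_frame, label) segments."""
    segments = []
    idx = 0
    for label, group in groupby(frame_labels):
        n = len(list(group))
        segments.append((idx, idx + n - 1, label))
        idx += n
    return segments

# Hypothetical frame-level MP labels for a short clip
demo = ["Touch", "Touch", "Grasp", "Grasp", "Grasp", "Pull", "Release"]
print(segment_labels(demo))
# → [(0, 1, 'Touch'), (2, 4, 'Grasp'), (5, 5, 'Pull'), (6, 6, 'Release')]
```

Segment-level representations like this make it straightforward to compare labelings across datasets that were annotated at different frame rates.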

Dataset available on GitHub




MIDAS: Multi-modal Image and Data Acquisition System

This project is developing a system for the simultaneous, synchronized acquisition of data from multiple sensors and modalities: master console video via video capture, master console manipulator kinematics via magnetic tracking, wrist accelerometers via smart watches, and RAVEN II surgical robot kinematics.



Analysis of executional and procedural errors in dry-lab robotic surgery experiments

This project created a rubric for identifying task- and gesture-specific Executional and Procedural errors and used it to evaluate dry-lab demonstrations of the Suturing and Needle Passing tasks in the JIGSAWS dataset. We labeled video data for erroneous gestures and analyzed kinematic data to identify parameters that distinguish errors.

Code and error labels available on GitHub
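One family of kinematic parameters used in this kind of analysis is trajectory-based features such as path length. As a small illustrative sketch (the function name is hypothetical, not from the released code):

```python
import math

def path_length(positions):
    """Total Cartesian path length of a tool-tip trajectory,
    given as a list of (x, y, z) samples."""
    return sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))

# Hypothetical tool-tip samples for one gesture
traj = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
print(path_length(traj))
# → 2.0
```

Comparing such features between error-labeled and error-free gestures can reveal which parameters separate the two classes.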



Reactive autonomous camera system

This project created a robotic arm for camera manipulation that operates independently of our lab's RAVEN II surgical robot. In manual control mode, it accepts commands from foot pedals. In autonomous control mode, it uses an MRCNN to localize and classify objects of interest within the view of the stereoscopic camera and applies a set of control rules to keep those objects in view.
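A control rule of this kind can be as simple as steering the camera so that a detected object's bounding-box center stays near the image center. The sketch below illustrates the idea under assumed conventions (normalized offsets, a deadband, and unit pan/tilt steps); it is not the system's actual controller:

```python
def camera_command(bbox, frame_w, frame_h, deadband=0.1):
    """Centering rule: return (pan, tilt) steps in {-1, 0, 1} that
    drive the bounding-box center toward the image center.
    bbox is (x0, y0, x1, y1) in pixels."""
    x0, y0, x1, y1 = bbox
    # normalized offset of the box center from the image center
    cx = (x0 + x1) / 2 / frame_w - 0.5
    cy = (y0 + y1) / 2 / frame_h - 0.5
    pan = 0 if abs(cx) < deadband else (1 if cx > 0 else -1)
    tilt = 0 if abs(cy) < deadband else (1 if cy > 0 else -1)
    return pan, tilt

# Object near the bottom-right of a 640x480 frame: pan and tilt positive
print(camera_command((600, 400, 700, 500), 640, 480))
# → (1, 1)
# Object already centered: no motion commanded
print(camera_command((300, 220, 340, 260), 640, 480))
# → (0, 0)
```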

Code available on GitHub




Acknowledgements

This research is supported by funding from the National Science Foundation.