Quantifying the Quality of Haptic Interfaces
Shape-Changing Haptic Interfaces
Generating Clear Vibrotactile Cues with Magnets Embedded in a Soft Finger Sheath
Salient Full-Fingertip Haptic Feedback Enabled by Wearable Electrohydraulic Actuation
Cutaneous Electrohydraulic (CUTE) Wearable Devices for Pleasant Broad-Bandwidth Haptic Cues
Modeling Finger-Touchscreen Contact during Electrovibration
Perception of Ultrasonic Friction Pulses
Vibrotactile Playback for Teaching Sensorimotor Skills in Medical Procedures
CAPT Motor: A Two-Phase Ironless Motor Structure
4D Intraoperative Surgical Perception: Anatomical Shape Reconstruction from Multiple Viewpoints
Visual-Inertial Force Estimation in Robotic Surgery
Enhancing Robotic Surgical Training
AiroTouch: Naturalistic Vibrotactile Feedback for Large-Scale Telerobotic Assembly
Optimization-Based Whole-Arm Teleoperation for Natural Human-Robot Interaction
Finger-Surface Contact Mechanics in Diverse Moisture Conditions
Computational Modeling of Finger-Surface Contact
Perceptual Integration of Contact Force Components During Tactile Stimulation
Dynamic Models and Wearable Tactile Devices for the Fingertips
Novel Designs and Rendering Algorithms for Fingertip Haptic Devices
Dimensional Reduction from 3D to 1D for Realistic Vibration Rendering
Prendo: Analyzing Human Grasping Strategies for Visually Occluded Objects
Learning Upper-Limb Exercises from Demonstrations
Minimally Invasive Surgical Training with Multimodal Feedback and Automatic Skill Evaluation
Efficient Large-Area Tactile Sensing for Robot Skin
Haptic Feedback and Autonomous Reflexes for Upper-limb Prostheses
Gait Retraining
Modeling Hand Deformations During Contact
Intraoperative AR Assistance for Robot-Assisted Minimally Invasive Surgery
Immersive VR for Phantom Limb Pain
Visual and Haptic Perception of Real Surfaces
Haptipedia
Gait Propulsion Trainer
TouchTable: A Musical Interface with Haptic Feedback for DJs
Exercise Games with Baxter
Intuitive Social-Physical Robots for Exercise
How Should Robots Hug?
Hierarchical Structure for Learning from Demonstration
Fabrication of HuggieBot 2.0: A More Huggable Robot
Learning Haptic Adjectives from Tactile Data
Feeling With Your Eyes: Visual-Haptic Surface Interaction
S-BAN
General Tactile Sensor Model
Insight: a Haptic Sensor Powered by Vision and Machine Learning
Intraoperative AR Assistance for Robot-Assisted Minimally Invasive Surgery

Following recent advances in optics, digital image acquisition, and computer vision, augmented reality (AR) applications are being actively researched and deployed in several areas of healthcare. In robot-assisted minimally invasive surgery (RMIS), AR has the potential to reduce the surgeon's cognitive load, and thereby increase focus and efficiency, by delivering computational, diagnostic, and visualization tools directly in the surgeon console.
In current clinical practice, AR is successfully applied in neurological surgery for navigation and guidance: after preoperative data are matched with the intraoperative scene via a registration process, the surgeon can superimpose the preoperative data onto the patient's anatomy. In RMIS, the soft tissues and deformable organs of the abdomen make accurate superimposition and tracking extremely challenging. We therefore aim to achieve accurate registration and tracking by performing a robust anatomical 3D reconstruction of the intraoperative scene. We believe the resulting image-based guidance can help surgeons during critical steps of minimally invasive procedures.
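As an illustration of the registration step described above, the sketch below estimates the rigid transform that aligns a set of preoperative 3D points with their intraoperative counterparts using the standard Kabsch/SVD method. This is a minimal, hypothetical example (known correspondences, no deformation or outliers), not the method used in our system.

```python
import numpy as np

def rigid_register(source, target):
    """Estimate the rigid transform (R, t) mapping source points onto
    target points via the Kabsch/SVD method (known correspondences)."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Toy example: recover a known rotation and translation.
rng = np.random.default_rng(0)
pre_op = rng.normal(size=(50, 3))               # "preoperative" landmarks
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.3])
intra_op = pre_op @ R_true.T + t_true           # "intraoperative" scene
R, t = rigid_register(pre_op, intra_op)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

In practice, correspondences between preoperative models and the endoscopic scene are unknown and the anatomy deforms, which is why robust reconstruction and non-rigid tracking are needed; the rigid closed-form solution above is only the simplest building block.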
We have also explored novel uses and interaction methods for AR in RMIS. In particular, we developed four voice-controlled functions to view 2D preoperative images, view a live video of the operating room, measure 3D distances, and warn users about instruments that have moved outside the visual field. A user study with eight experienced RMIS surgeons performing dry-lab lymphadenectomy showed that the functions improved the procedure; surgeons particularly appreciated the possibility of accessing patient images on demand, measuring distances intraoperatively, and interacting with the functions using voice commands. Our low-cost platform can be easily integrated into any surgical robot equipped with a stereo camera and a stereo viewer.
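The intraoperative 3D distance measurement mentioned above relies on the robot's stereo camera. The sketch below shows the underlying principle for a rectified stereo pair: triangulate each matched pixel pair into a 3D point from its disparity, then take the Euclidean distance between the two points. The camera intrinsics (focal length `f`, baseline `b`, principal point `cx`, `cy`) are illustrative values, not those of any real endoscope.

```python
import numpy as np

def triangulate(xl, xr, f, b, cx, cy):
    """Back-project a matched pixel pair from a rectified stereo camera
    into a 3D point in the left-camera frame (metres)."""
    disparity = xl[0] - xr[0]          # horizontal pixel shift
    Z = f * b / disparity              # depth from disparity
    X = (xl[0] - cx) * Z / f
    Y = (xl[1] - cy) * Z / f
    return np.array([X, Y, Z])

# Hypothetical intrinsics: 800 px focal length, 5 mm baseline.
f, b, cx, cy = 800.0, 0.005, 320.0, 240.0

# Two anatomical landmarks clicked in the left and right views.
p1 = triangulate((340.0, 250.0), (300.0, 250.0), f, b, cx, cy)
p2 = triangulate((420.0, 260.0), (390.0, 260.0), f, b, cx, cy)
print(np.linalg.norm(p1 - p2))         # 3D distance in metres
```

A deployed system would obtain the matched pixels from the surgeon's on-screen selections and the intrinsics from stereo calibration; the geometry, however, is exactly this.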
Members
Publications