Quantifying the Quality of Haptic Interfaces
Shape-Changing Haptic Interfaces
Generating Clear Vibrotactile Cues with Magnets Embedded in a Soft Finger Sheath
Salient Full-Fingertip Haptic Feedback Enabled by Wearable Electrohydraulic Actuation
Cutaneous Electrohydraulic (CUTE) Wearable Devices for Pleasant Broad-Bandwidth Haptic Cues
Modeling Finger-Touchscreen Contact during Electrovibration
Perception of Ultrasonic Friction Pulses
Vibrotactile Playback for Teaching Sensorimotor Skills in Medical Procedures
CAPT Motor: A Two-Phase Ironless Motor Structure
4D Intraoperative Surgical Perception: Anatomical Shape Reconstruction from Multiple Viewpoints
Visual-Inertial Force Estimation in Robotic Surgery
Enhancing Robotic Surgical Training
AiroTouch: Naturalistic Vibrotactile Feedback for Large-Scale Telerobotic Assembly
Optimization-Based Whole-Arm Teleoperation for Natural Human-Robot Interaction
Finger-Surface Contact Mechanics in Diverse Moisture Conditions
Computational Modeling of Finger-Surface Contact
Perceptual Integration of Contact Force Components During Tactile Stimulation
Dynamic Models and Wearable Tactile Devices for the Fingertips
Novel Designs and Rendering Algorithms for Fingertip Haptic Devices
Dimensional Reduction from 3D to 1D for Realistic Vibration Rendering
Prendo: Analyzing Human Grasping Strategies for Visually Occluded Objects
Learning Upper-Limb Exercises from Demonstrations
Minimally Invasive Surgical Training with Multimodal Feedback and Automatic Skill Evaluation
Efficient Large-Area Tactile Sensing for Robot Skin
Haptic Feedback and Autonomous Reflexes for Upper-Limb Prostheses
Gait Retraining
Modeling Hand Deformations During Contact
Intraoperative AR Assistance for Robot-Assisted Minimally Invasive Surgery
Immersive VR for Phantom Limb Pain
Visual and Haptic Perception of Real Surfaces
Haptipedia
Gait Propulsion Trainer
TouchTable: A Musical Interface with Haptic Feedback for DJs
Exercise Games with Baxter
Intuitive Social-Physical Robots for Exercise
How Should Robots Hug?
Hierarchical Structure for Learning from Demonstration
Fabrication of HuggieBot 2.0: A More Huggable Robot
Learning Haptic Adjectives from Tactile Data
Feeling With Your Eyes: Visual-Haptic Surface Interaction
S-BAN
General Tactile Sensor Model
Insight: a Haptic Sensor Powered by Vision and Machine Learning
Reconstructing Sign-Language Movements from Images and Bioimpedance Measurements

Sign language is the primary means of communication for more than 70 million Deaf people worldwide. While video dictionaries are widely used for learning signs, they offer only fixed camera viewpoints and integrate poorly into AR/VR applications. Converting these videos into expressive 3D avatars could enhance learning and accessibility, but existing reconstruction methods struggle with the inherent challenges of sign language movements, such as rapid motion, self-occlusions, and complex finger articulation.
We propose a novel approach that combines complementary sensing modalities to achieve robust sign language capture. Our research objective is to test the hypothesis that fusing computer vision with electrical bioimpedance sensing between the wrists enables accurate reconstruction of signing, even in challenging real-world conditions.
Our first work, SGNify, introduces linguistic constraints derived from universal rules of sign languages to improve 3D reconstruction from monocular video []. We evaluated the system quantitatively against motion-capture ground truth and perceptually through studies with fluent signers. Both evaluations showed that SGNify produces significantly more accurate reconstructions than existing methods. However, the resulting avatars still suffer from depth ambiguities, especially during self-contact events.
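
To make the constrained-fitting idea concrete, below is a minimal Python sketch in the spirit of this approach: 3D joints are fit to detected 2D keypoints while a penalty term enforces one linguistic rule (two-handed symmetric signs use matching handshapes). The orthographic camera, joint counts, weights, and function names are illustrative assumptions, not SGNify's actual objective.

    import numpy as np

    # Hypothetical sketch of constrained fitting: match 3D joints to 2D
    # keypoint detections while penalizing violations of a linguistic rule.
    # All names, weights, and the camera model are illustrative.

    def reprojection_loss(joints_3d, keypoints_2d):
        """Orthographic projection: compare the x-y coordinates of the
        3D joints against the detected 2D keypoints."""
        return np.sum((joints_3d[:, :2] - keypoints_2d) ** 2)

    def symmetry_prior(left_hand_pose, right_hand_pose):
        """For signs classified as two-handed symmetric, both hands
        should take the same shape; penalize differing joint angles."""
        return np.sum((left_hand_pose - right_hand_pose) ** 2)

    def objective(joints_3d, keypoints_2d, left_hand_pose, right_hand_pose,
                  w_sym=10.0):
        """Total fitting cost: image evidence plus weighted linguistic prior."""
        return (reprojection_loss(joints_3d, keypoints_2d)
                + w_sym * symmetry_prior(left_hand_pose, right_hand_pose))

    # Toy example: 21 joints per hand (42 total) with noisy 2D detections.
    rng = np.random.default_rng(0)
    joints_3d = rng.normal(size=(42, 3))
    keypoints_2d = joints_3d[:, :2] + 0.01 * rng.normal(size=(42, 2))
    left, right = rng.normal(size=15), rng.normal(size=15)
    print(objective(joints_3d, keypoints_2d, left, right))

In a full pipeline, an optimizer would minimize this objective over the body-model parameters; the key point is that the linguistic rule enters as an extra penalty alongside the image evidence.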
Based on our finding that self-contact remains ambiguous in video alone, we then investigated whether electrical bioimpedance measured between the wrists could be used to detect self-contact []. We conducted a systematic data collection with 30 participants, each performing 33 poses that combined different body parts (hands touching each other, the face, or the chest) with different contact sizes (from a single fingertip to the full hand), plus a no-contact baseline pose. We measured their bioimpedance while sweeping frequencies from 100 Hz to 5.1 MHz. Our results demonstrated that the bioimpedance magnitude at high frequencies reliably detects skin-to-skin contact across individuals with diverse physical characteristics. Ongoing work is evaluating the extent to which directly sensed bioimpedance improves SGNify's performance.
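
As a concrete illustration of the detection principle, the sketch below thresholds the impedance magnitude at the highest swept frequency to classify contact versus no contact. The threshold, frequency grid, and example magnitudes are hypothetical placeholders, not values from our study.

    import numpy as np

    # Illustrative sketch: skin-to-skin contact closes a low-impedance
    # path between the wrists, so the measured magnitude |Z| drops
    # sharply at high frequencies. All numbers are placeholders.

    def detect_contact(magnitude_ohm, threshold_ohm=150.0):
        """Classify contact from |Z| at one high frequency: a magnitude
        below the threshold indicates a skin-to-skin contact path."""
        return magnitude_ohm < threshold_ohm

    # Example sweep from 100 Hz to 5.1 MHz (four sample points).
    freqs_hz = np.array([1e2, 1e4, 1e6, 5.1e6])
    sweep_no_contact = np.array([5200.0, 1800.0, 420.0, 310.0])  # |Z| in ohms
    sweep_contact = np.array([4900.0, 1500.0, 120.0, 90.0])

    hi = np.argmax(freqs_hz)  # index of the highest swept frequency
    print(detect_contact(sweep_no_contact[hi]))  # False: no contact
    print(detect_contact(sweep_contact[hi]))     # True: contact

A deployed classifier would calibrate the threshold per frequency (or learn it from data across participants), but the same low-versus-high magnitude contrast drives the decision.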
Members
Publications