Quantifying the Quality of Haptic Interfaces
Shape-Changing Haptic Interfaces
Generating Clear Vibrotactile Cues with Magnets Embedded in a Soft Finger Sheath
Salient Full-Fingertip Haptic Feedback Enabled by Wearable Electrohydraulic Actuation
Cutaneous Electrohydraulic (CUTE) Wearable Devices for Pleasant Broad-Bandwidth Haptic Cues
Modeling Finger-Touchscreen Contact during Electrovibration
Perception of Ultrasonic Friction Pulses
Vibrotactile Playback for Teaching Sensorimotor Skills in Medical Procedures
CAPT Motor: A Two-Phase Ironless Motor Structure
4D Intraoperative Surgical Perception: Anatomical Shape Reconstruction from Multiple Viewpoints
Visual-Inertial Force Estimation in Robotic Surgery
Enhancing Robotic Surgical Training
AiroTouch: Naturalistic Vibrotactile Feedback for Large-Scale Telerobotic Assembly
Optimization-Based Whole-Arm Teleoperation for Natural Human-Robot Interaction
Finger-Surface Contact Mechanics in Diverse Moisture Conditions
Computational Modeling of Finger-Surface Contact
Perceptual Integration of Contact Force Components During Tactile Stimulation
Dynamic Models and Wearable Tactile Devices for the Fingertips
Novel Designs and Rendering Algorithms for Fingertip Haptic Devices
Dimensional Reduction from 3D to 1D for Realistic Vibration Rendering
Prendo: Analyzing Human Grasping Strategies for Visually Occluded Objects
Learning Upper-Limb Exercises from Demonstrations
Minimally Invasive Surgical Training with Multimodal Feedback and Automatic Skill Evaluation
Efficient Large-Area Tactile Sensing for Robot Skin
Haptic Feedback and Autonomous Reflexes for Upper-Limb Prostheses
Gait Retraining
Modeling Hand Deformations During Contact
Intraoperative AR Assistance for Robot-Assisted Minimally Invasive Surgery
Immersive VR for Phantom Limb Pain
Visual and Haptic Perception of Real Surfaces
Haptipedia
Gait Propulsion Trainer
TouchTable: A Musical Interface with Haptic Feedback for DJs
Exercise Games with Baxter
Intuitive Social-Physical Robots for Exercise
How Should Robots Hug?
Hierarchical Structure for Learning from Demonstration
Fabrication of HuggieBot 2.0: A More Huggable Robot
Learning Haptic Adjectives from Tactile Data
Feeling With Your Eyes: Visual-Haptic Surface Interaction
S-BAN
General Tactile Sensor Model
Insight: A Haptic Sensor Powered by Vision and Machine Learning
Prendo: Analyzing Human Grasping Strategies for Visually Occluded Objects

Humans display exemplary skill in manipulating objects and can adapt to highly diverse situations. For example, a human handing over an object modulates their grasp and movements to accommodate their partner's capabilities, which greatly increases the likelihood of a successful transfer.
State-of-the-art robot behavior lacks this level of understanding, resulting in interactions that force the human partner to shoulder the burden of adaptation, sometimes even in very awkward and unfavorable postures.
This project investigates how visual occlusion of the object being passed affects the quantitative performance and subjective perception of a human receiver.
We performed an experiment in virtual reality (VR) in which each of three test objects (a hammer, a screwdriver, and a pair of scissors) was individually presented to the participants in a wide variety of poses. We developed an open-source grasp generator to devise forty physically realistic scenes with diverse occlusion levels for each object.
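As an illustration of how such scenes can be scored, the following minimal Python sketch estimates an occlusion level by casting straight lines from sampled object-surface points to the receiver's viewpoint and counting how many are blocked by the grasping hand. Everything here is an assumption for illustration: the point-cloud object, the spherical hand proxy, and the function names are invented and are not taken from the actual Prendo grasp generator.

```python
# Hypothetical sketch of scoring visual occlusion for a generated grasp scene.
# The object is approximated as surface points and the hand as a sphere;
# neither reflects the real Prendo implementation.
import numpy as np

rng = np.random.default_rng(0)

def sample_object_points(center, radius, n=2000):
    """Sample points on a sphere that stands in for the object's surface."""
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return center + radius * v

def occlusion_level(object_pts, viewer, hand_center, hand_radius):
    """Fraction of object points whose straight line of sight to the
    viewer passes through a spherical proxy of the grasping hand."""
    d = viewer - object_pts                         # rays: object point -> viewer
    ray_len = np.linalg.norm(d, axis=1, keepdims=True)
    d_unit = d / ray_len
    to_hand = hand_center - object_pts
    # Parameter of each ray's closest approach to the hand center,
    # clamped to the segment between the object point and the viewer.
    t = np.clip(np.sum(to_hand * d_unit, axis=1, keepdims=True), 0.0, ray_len)
    closest = object_pts + t * d_unit
    blocked = np.linalg.norm(closest - hand_center, axis=1) < hand_radius
    return blocked.mean()

# Example scene: the hand wraps the top of the object, the viewer stands in front.
obj_pts = sample_object_points(center=np.zeros(3), radius=0.05)
viewer = np.array([0.0, -0.6, 0.3])
hand = np.array([0.0, 0.0, 0.05])
print(f"occlusion level: {occlusion_level(obj_pts, viewer, hand, 0.04):.2f}")
```

A real generator would test ray-mesh intersections against the full hand and object meshes; the spherical proxy merely keeps the sketch self-contained.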
The participants were tasked with taking each test object from the hand of the virtual robot as if they were going to use it. After each trial, they rated the holdability and direct usability of the object given the grasp they had just performed. We analyzed the participants' hand and head motion, the time to grasp the object, the chosen grasp location, and the subjective ratings. The results show that visual occlusion significantly alters the grasping strategy of a human receiver and decreases the perceived holdability and direct usability of the object.
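For readers curious how such per-trial measures translate into statistics, the sketch below runs one plausible analysis on synthetic data. The column names, rating scale, and effect sizes are all invented for illustration; this is not the study's actual analysis pipeline.

```python
# Hedged sketch of the kind of per-trial analysis described above,
# using fabricated stand-in data rather than the real experiment logs.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand-in for the logged trials: one row per grasp attempt.
n = 240
occlusion = rng.uniform(0, 1, n)                  # fraction of object hidden
trials = pd.DataFrame({
    "participant": rng.integers(0, 20, n),
    "object": rng.choice(["hammer", "screwdriver", "scissors"], n),
    "occlusion": occlusion,
    "time_to_grasp": 1.2 + 1.5 * occlusion + rng.normal(0, 0.3, n),
    "holdability": np.clip(6 - 3 * occlusion + rng.normal(0, 1, n), 1, 7),
})

# Does higher occlusion slow the reach and lower the perceived holdability?
for outcome in ["time_to_grasp", "holdability"]:
    rho, p = stats.spearmanr(trials["occlusion"], trials[outcome])
    print(f"{outcome}: Spearman rho = {rho:+.2f}, p = {p:.1e}")

# Per-object summary of the subjective ratings.
print(trials.groupby("object")["holdability"].agg(["mean", "std"]).round(2))
```

Rank correlations are used here only because they make no linearity assumptions about the invented rating scale; the real study may well have used a different statistical design.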
Our findings lay the groundwork for enriching robot grasping with the knowledge needed to choose the most appropriate grasp for a given task, accounting for visual occlusion and its effects on the human receiver. This new facet of robot intelligence could benefit many human-robot interaction (HRI) scenarios that involve collaborative robots, such as Industry 4.0 and healthcare.