Hands are the central means by which humans interact with their surroundings. Understanding human hands helps analyze human behavior and facilitates other visual analysis tasks such as action and gesture recognition. Recently, there has been a surge of interest in understanding first-person visual data, in which hands are the dominant interaction entities. There has also been an explosion of interest in developing computer vision methods for augmented and virtual reality (AR/VR). To deliver an authentic AR/VR experience, we need to enable humans to interact with the virtual world and allow virtual avatars to communicate and interact with each other. Since hands are the dominant interaction entities in such cases, a thorough understanding of human hands is essential for developing computer vision methods for AR/VR. In this talk, we will go through some of our recently published works on the visual reasoning of human hands. We will review our work on detecting hands in unconstrained images, recognizing hands' physical contact states, and associating hands with their corresponding people in in-the-wild images. We will conclude the talk by identifying some future directions.
Supreeth Narasimhaswamy (Stony Brook University)
PhD Student
Supreeth is a Ph.D. candidate in the Department of Computer Science at Stony Brook University, U.S.A., working under the supervision of Professor Minh Hoai Nguyen. His current research focuses on understanding human hands in visual data. For example, consider a scene with people interacting with their surroundings. How can we detect hands? How can we track hands across videos? Do hands contact objects or other humans? What are the contact regions? How do hands grasp and interact with objects? How can we use hand interactions to better model people's activities in the scene? He is interested in using this information, together with AR/VR, to assist people in performing complex tasks. He has done research internships at Microsoft Mixed Reality in Redmond and Snap Research in New York, where he worked on developing computer vision methods that leverage information about human hands to provide better AR/VR experiences.