
InteractVLM: 3D Interaction Reasoning from 2D Foundational Models


We introduce InteractVLM, a novel method to estimate 3D contact points on human bodies and objects from single in-the-wild images, enabling accurate human-object joint reconstruction in 3D. This is challenging due to occlusions, depth ambiguities, and widely varying object shapes. Existing methods rely on 3D contact annotations collected via expensive motion-capture systems or tedious manual labeling, limiting scalability and generalization. To overcome this, InteractVLM harnesses the broad visual knowledge of large Vision-Language Models (VLMs), fine-tuned with limited 3D contact data. However, directly applying these models is non-trivial, as they reason only in 2D, while human-object contact is inherently 3D. Thus, we introduce a novel Render-Localize-Lift module that: (1) embeds 3D body and object surfaces in 2D space via multi-view rendering, (2) trains a novel multi-view localization model (MV-Loc) to infer contacts in 2D, and (3) lifts these to 3D. Additionally, we propose a new task called Semantic Human Contact estimation, where human contact predictions are conditioned explicitly on object semantics, enabling richer interaction modeling. InteractVLM outperforms existing work on contact estimation and also facilitates 3D reconstruction from an in-the-wild image.
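
To make the Render-Localize-Lift idea concrete, below is a minimal NumPy sketch of the three steps described above: project the 3D surface into several views, query a 2D contact predictor in each view, and aggregate the per-pixel scores back onto the vertices. The orthographic camera model, the azimuth set, and the dummy 2D predictor (standing in for the VLM-based MV-Loc model) are all illustrative assumptions, not the authors' implementation.

import numpy as np

def look_at_rotation(azimuth_deg):
    """Rotation that orbits the camera around the vertical (y) axis."""
    a = np.deg2rad(azimuth_deg)
    return np.array([[ np.cos(a), 0.0, np.sin(a)],
                     [ 0.0,       1.0, 0.0      ],
                     [-np.sin(a), 0.0, np.cos(a)]])

def project_vertices(verts, azimuth_deg, img_size=256, scale=100.0, depth=3.0):
    """Simple orthographic-style projection of 3D vertices into one view (assumed camera model)."""
    R = look_at_rotation(azimuth_deg)
    cam = verts @ R.T + np.array([0.0, 0.0, depth])   # camera-frame coordinates
    uv = cam[:, :2] * scale + img_size / 2.0          # pixel coordinates
    return np.clip(uv, 0, img_size - 1).astype(int)

def dummy_2d_contact_model(img_size=256):
    """Placeholder for the 2D localization step (MV-Loc in the paper); returns random probabilities."""
    return np.random.rand(img_size, img_size)

def lift_contacts(verts, azimuths=(0, 90, 180, 270), img_size=256, thresh=0.5):
    """Render-Localize-Lift sketch: sample 2D contact maps at projected vertex
    locations in each view and average the scores back onto the 3D surface."""
    scores = np.zeros(len(verts))
    for az in azimuths:
        uv = project_vertices(verts, az, img_size)
        heatmap = dummy_2d_contact_model(img_size)    # would be the fine-tuned VLM in practice
        scores += heatmap[uv[:, 1], uv[:, 0]]         # read scores at projected pixels
    scores /= len(azimuths)
    return scores > thresh                            # per-vertex contact labels

if __name__ == "__main__":
    body_verts = np.random.randn(6890, 3) * 0.5       # placeholder body-mesh vertices
    contact = lift_contacts(body_verts)
    print(f"{contact.sum()} of {len(contact)} vertices flagged as in contact")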

Author(s): Dwivedi, S. K. and Antić, D. and Tripathi, S. and Taheri, O. and Schmid, C. and Black, M. J. and Tzionas, D.
Book Title: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Year: 2025
Month: June
Bibtex Type: Conference Paper (inproceedings)
Event Place: Nashville, Tennessee
State: Accepted

BibTeX

@inproceedings{dwivedi_2025_ivlm,
  title = {{InteractVLM}: {3D} Interaction Reasoning from {2D} Foundational Models},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  abstract = {We introduce InteractVLM, a novel method to estimate 3D contact points on human bodies and objects from single in-the-wild images, enabling accurate human-object joint reconstruction in 3D. This is challenging due to occlusions, depth ambiguities, and widely varying object shapes. Existing methods rely on 3D contact annotations collected via expensive motion-capture systems or tedious manual labeling, limiting scalability and generalization. To overcome this, InteractVLM harnesses the broad visual knowledge of large Vision-Language Models (VLMs), fine-tuned with limited 3D contact data. However, directly applying these models is non-trivial, as they reason only in 2D, while human-object contact is inherently 3D. Thus, we introduce a novel Render-Localize-Lift module that: (1) embeds 3D body and object surfaces in 2D space via multi-view rendering, (2) trains a novel multi-view localization model (MV-Loc) to infer contacts in 2D, and (3) lifts these to 3D. Additionally, we propose a new task called Semantic Human Contact estimation, where human contact predictions are conditioned explicitly on object semantics, enabling richer interaction modeling. InteractVLM outperforms existing work on contact estimation and also facilitates 3D reconstruction from an in-the-wild image.},
  month = jun,
  year = {2025},
  slug = {dwivedi_2025_ivlm},
  author = {Dwivedi, S. K. and Antić, D. and Tripathi, S. and Taheri, O. and Schmid, C. and Black, M. J. and Tzionas, D.},
  month_numeric = {6}
}