Institute Talks
Next-Generation Biohybrids: Engineering Miniature Machines Inspired by Plant Systems
- 10 December 2024 • 11:00—12:00
- Dr. Isabella Fiorello
- Hybrid - Webex plus in-person attendance in Copper (2R04)
Among living organisms, plants are an ideal source of inspiration for robotics and engineering due to their remarkable evolutionary adaptations to almost every habitat. When miniaturized, plant-inspired machines can navigate confined and complex unstructured surfaces. We introduce a new class of plant-inspired, microfabricated hybrid machines designed for multifunctional tasks such as in situ monitoring and targeted cargo delivery. These machines combine bioinspired design with biohybrid approaches, incorporating the morphological and biomechanical features of both terrestrial and aquatic plants. Advanced techniques such as microcomputed tomography, two-photon lithography, and bioprinting enable the production of scalable and sustainable prototypes. Tested in real-world environments (such as soil, leaf tissues, and aquatic habitats), these machines have demonstrated their potential in applications like climbing robots, precision agriculture, reforestation, and underwater sensing. These technologies showcase the promise of plant-inspired biohybrid machines in environmental protection, conservation, and advanced engineering, with significant implications for fields such as material science, soft robotics, and precision agriculture.
Organizers: Katherine Kuchenbecker, Christoph Keplinger
How to predict the inside from the outside? Segment, register, model and infer!
- 28 November 2024 • 10:00—11:00
- Sergi Pujades
- MPI IS Tuebingen, 3rd floor, Aquarium
Observing and modeling the human body has attracted scientific effort since the earliest times in history. In recent decades, though, several imaging modalities, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and X-ray, have provided the means to “see” inside the body. Most interestingly, there is growing evidence that the shape of the surface of the human body is highly correlated with its internal properties, for example, body composition, the size of the bones, and the amount of muscle and adipose tissue (fat). In this talk I will go over the methodology used to establish the link between the shape of the body surface and the internal anatomical structures, based on the classical problems of segmentation, registration, statistical modeling, and inference.
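As a concrete (and deliberately simplified) illustration of the inference step, surface-to-interior prediction can be posed as regression in a low-dimensional shape space: PCA on registered surface vertices, then least squares to internal landmarks. All names below are hypothetical, and the actual pipeline presented in the talk is more elaborate.

```python
# Minimal sketch of a surface-to-interior inference step, assuming paired
# training data: registered surface meshes and internal landmark positions.
# Segmentation and registration (also covered in the talk) are omitted.
import numpy as np

class SurfaceToInteriorModel:
    def __init__(self, n_components=10):
        self.n_components = n_components

    def fit(self, surfaces, interiors):
        # surfaces: (N, V*3) flattened registered surface vertices
        # interiors: (N, K*3) flattened internal landmark positions
        self.mean_ = surfaces.mean(axis=0)
        X = surfaces - self.mean_
        # PCA via SVD: rows of Vt are the principal shape directions
        U, S, Vt = np.linalg.svd(X, full_matrices=False)
        self.basis_ = Vt[:self.n_components]           # (C, V*3)
        coeffs = X @ self.basis_.T                     # (N, C)
        # Linear regression from shape coefficients to internal landmarks
        A = np.hstack([coeffs, np.ones((len(coeffs), 1))])
        self.W_, *_ = np.linalg.lstsq(A, interiors, rcond=None)
        return self

    def predict(self, surface):
        coeffs = (surface - self.mean_) @ self.basis_.T
        return np.append(coeffs, 1.0) @ self.W_
```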
Organizers: Marilyn Keller
Data-Driven Needle Puncture Detection for Urgent Medical Care Delivery in Space
- 23 October 2024 • 17:30—18:00
- Rachael L'Orsa
- Zoom
Needle decompression (ND) is a surgical procedure that treats one of the most preventable causes of trauma-related death: dangerous accumulations of air between the chest wall and the lungs. However, needle-tip overshoot of the target space can result in the inadvertent puncture of critical structures like the heart. This type of complication is fatal without urgent surgical care, which is not available in resource-poor environments like space. Since ND is done blind, operators rely on tool sensations to identify when the needle has reached its target. Needle instrumentation could enable puncture notifications to help operators limit tool-tip overshoot, but such a solution requires reliable puncture detection from manual (i.e., variable-velocity) needle insertion data streams. Data-driven puncture-detection (DDPD) algorithms are appropriate for this application, but their performance has historically been unacceptably low for use in safety-critical applications. We contribute towards the development of an intelligent device for manual ND assistance by proposing two novel DDPD algorithms. Three data sets are collected that provide needle forces, torques, and displacements during insertions into ex vivo porcine tissue analogs for the human chest, and factors affecting DDPD algorithm performance are analyzed in these data. Puncture event features are examined for each sensor, and the suitability of accelerometer measurements and diffuse reflectance is evaluated for ND. Finally, DDPD ensembles are proposed that yield a 5.1-fold improvement in precision as compared to the traditional force-only DDPD approach. These results lay a foundation for improving the urgent delivery of percutaneous procedures in space and other resource-poor settings.
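To illustrate the ensemble idea only (not the specific DDPD algorithms contributed in this work): several weak per-channel detectors can vote, so that a spurious event on a single sensor does not by itself trigger a puncture notification. All detectors and thresholds below are hypothetical.

```python
# Illustrative sketch of combining simple per-channel puncture detectors
# by majority vote. These are placeholder heuristics, not the DDPD
# algorithms from the talk.
import numpy as np

def force_drop_detector(force, threshold=0.5):
    # Puncture often appears as a sudden drop in axial insertion force.
    return np.diff(force, prepend=force[0]) < -threshold

def derivative_detector(signal, threshold=2.0):
    # Flag samples where the signal's rate of change spikes.
    return np.abs(np.gradient(signal)) > threshold

def ensemble_puncture_detect(force, torque, displacement, min_votes=2):
    votes = (
        force_drop_detector(force).astype(int)
        + derivative_detector(torque).astype(int)
        + derivative_detector(displacement).astype(int)
    )
    return votes >= min_votes  # boolean mask of suspected puncture events
```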
Organizers: Katherine Kuchenbecker, Rachael L'Orsa
The Atomic Human: Understanding ourselves in the age of AI
- 17 October 2024 • 16:00—18:00
- Neil Lawrence
- Lecture Hall 2D5, Heisenbergstraße 1, Stuttgart
The Max Planck Institute for Intelligent Systems is delighted to invite you to its 2024 Max Planck Lecture in Stuttgart.
Organizers: Michael Black, Barbara Kettemann, Valeria Rojas
- Guy Tevet
- MPI-IS Tuebingen, N3.022
Character motion synthesis stands as a central challenge in computer animation and graphics. The successful adaptation of diffusion models to the field boosted synthesis quality and provided intuitive controls such as text and music. One of the earliest and most popular methods to do so is the Motion Diffusion Model (MDM) [ICLR 2023]. In this talk, I will review how MDM incorporates domain know-how into the diffusion model and enables intuitive editing capabilities. Then, I will present two recent works, each suggesting a refreshing take on motion diffusion and extending its abilities to new animation tasks. Multi-view Ancestral Sampling (MAS) [CVPR 2024] is an inference-time algorithm that samples 3D animations from 2D keypoint diffusion models. We demonstrated it by generating 3D animations for characters and scenarios that are challenging to record with elaborate motion capture systems yet ubiquitous in in-the-wild videos, such as horse racing and professional rhythmic gymnastics. Monkey See, Monkey Do (MoMo) [SIGGRAPH Asia 2024] explores the attention space of the motion diffusion model. A careful analysis reveals the roles of the attention’s keys and queries throughout the generation process. With these findings in hand, we design a training-free method that generates motion following the distinct motifs of one motion while guided by an outline dictated by another. To conclude the talk, I will give my modest take on the challenges in the field and our lab’s current work attempting to tackle some of them.
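As background, here is a minimal sketch of ancestral sampling in the x0-prediction style that MDM popularized: the network predicts the clean motion directly, which is then re-noised to the previous timestep. The model, noise schedule, and conditioning below are simplified placeholders, not the released MDM code.

```python
# Hypothetical sketch of ancestral sampling from a motion diffusion model.
import torch

def sample_motion(model, text_emb, n_frames=60, n_joints=24, steps=1000):
    betas = torch.linspace(1e-4, 0.02, steps)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)
    x = torch.randn(1, n_frames, n_joints * 3)   # start from pure noise
    for t in reversed(range(steps)):
        x0_hat = model(x, t, text_emb)           # network predicts clean motion
        if t > 0:
            # Re-noise the prediction to the previous timestep (ancestral step)
            noise = torch.randn_like(x)
            x = (alpha_bar[t - 1].sqrt() * x0_hat
                 + (1 - alpha_bar[t - 1]).sqrt() * noise)
        else:
            x = x0_hat
    return x  # (1, n_frames, n_joints*3) joint positions over time
```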
Organizers: Omid Taheri
- Egor Zakharov
- Max-Planck-Ring 4, N3, Aquarium
Digital humans, or realistic avatars, are a centerpiece of future telepresence and special-effects systems, and human head modeling is one of their main components. These applications, however, are highly demanding in terms of avatar creation speed, realism, and controllability. This talk will focus on approaches that create controllable and detailed 3D head avatars using data from consumer-grade devices, such as smartphones, in an uncalibrated and unconstrained capture setting. We will discuss leveraging in-the-wild internet videos and synthetic data sources to achieve a high diversity of facial expressions and appearance personalization, including detailed hair modeling. We will also showcase how the resulting human-centric assets can be integrated into virtual environments for real-time telepresence and entertainment applications, illustrating the future of digital communication and gaming.
Organizers: Vanessa Sklyarova
Collaborative Control for Geometry-Conditioned PBR Image Generation
- 26 September 2024 • 14:00—15:00
- Simon Donne
- Virtual, Live stream at Max-Planck-Ring 4, N3, Aquarium
Current diffusion models only generate RGB images. If we want to make progress towards graphics-ready 3D content generation, we need a PBR foundation model, but there is not enough PBR data available to train such a model from scratch. We introduce Collaborative Control, which tightly links a new PBR diffusion model to a pre-trained RGB model. We show that this dual architecture does not suffer from catastrophic forgetting: it outputs high-quality PBR images and generalizes well beyond the PBR training dataset. Furthermore, the frozen base model remains compatible with techniques such as IP-Adapter.
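For intuition, here is a heavily simplified, hypothetical sketch of the dual-branch idea: a frozen RGB denoiser runs alongside a trainable PBR denoiser and passes it intermediate features. The actual linking mechanism in Collaborative Control differs in detail; all module names and signatures below are placeholders.

```python
# Conceptual sketch only: a frozen RGB branch feeds features to a
# trainable PBR branch so the latter can exploit the RGB prior.
import torch.nn as nn

class DualDiffusion(nn.Module):
    def __init__(self, rgb_unet, pbr_unet, feat_dim):
        super().__init__()
        self.rgb_unet = rgb_unet.eval()            # frozen base model
        for p in self.rgb_unet.parameters():
            p.requires_grad_(False)
        self.pbr_unet = pbr_unet                   # trainable PBR branch
        self.link = nn.Linear(feat_dim, feat_dim)  # learned cross-branch link

    def forward(self, x_rgb, x_pbr, t, cond):
        # Run the frozen branch and grab its intermediate features
        # (placeholder signature: returns output plus features).
        rgb_out, rgb_feats = self.rgb_unet(x_rgb, t, cond)
        # Inject a transformed copy of those features into the PBR branch.
        pbr_out = self.pbr_unet(x_pbr, t, cond, extra=self.link(rgb_feats))
        return rgb_out, pbr_out
```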
Organizers: Soubhik Sanyal
Geometry Image Diffusion: Fast and Data-Efficient Text-to-3D with Image-Based Surface Representation
- 26 September 2024 • 14:00—15:00
- Slava Elizarov
- Virtual, Live stream at Max-Planck-Ring 4, N3, Aquarium
In this talk, I will present Geometry Image Diffusion (GIMDiffusion), a novel method designed to generate 3D objects from text prompts efficiently. GIMDiffusion uses geometry images, a 2D representation of 3D shapes, which allows the use of existing image-based architectures instead of complex 3D-aware models. This approach reduces computational costs and simplifies the model design. By incorporating Collaborative Control, the method exploits rich priors of pretrained Text-to-Image models like Stable Diffusion, enabling strong generalization even with limited 3D training data. GIMDiffusion produces 3D objects with semantically meaningful, separable parts and internal structures, which enhances the ease of manipulation and editing.
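To make the representation concrete: a geometry image stores a surface as a regular 2D grid of XYZ positions, so a mesh can be recovered with ordinary image indexing. A minimal sketch under a regular-grid assumption (function name is hypothetical):

```python
# Rebuild a triangle mesh from an (H, W, 3) geometry image: one vertex
# per pixel, with each grid quad split into two triangles.
import numpy as np

def geometry_image_to_mesh(gim):
    H, W, _ = gim.shape
    vertices = gim.reshape(-1, 3)                 # one vertex per pixel
    faces = []
    for y in range(H - 1):
        for x in range(W - 1):
            i = y * W + x                         # top-left corner of the quad
            faces.append((i, i + 1, i + W))
            faces.append((i + 1, i + W + 1, i + W))
    return vertices, np.array(faces)
```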
Organizers: Soubhik Sanyal
From Experimentation to Innovation: Integration of Soft Robotics and Sensing in E-Textiles
- 26 September 2024 • 13:00—14:00
- Adriana Cabrera
- Copper (2R04)
This talk explores the prototyping of e-textiles and the integration of Soft Robotics systems, grounded in experimentation within digital fabrication spaces and Open Innovation environments like Fab Labs. By leveraging CNC fabrication methods and soft material manipulation, this approach reduces barriers between high and low tech, making experimentation more accessible. It also enables the integration of pneumatic actuators, sensors, and data collection systems into e-textiles and wearable technologies. The presentation will highlight how these developments open up new possibilities for creating smart textiles with soft robotic capabilities. Finally, it aims to inspire discussions on the application of haptics and actuators, such as HASEL, in wearables and e-textiles, fostering co-creation of future solutions that blend these innovative technologies with design.
Organizers: Paul Abel, Christoph Keplinger
- Panagiotis Filntisis and George Retsinas
- Hybrid
Recent advances in 3D face reconstruction from in-the-wild images and videos have excelled at capturing the overall facial shape associated with a person's identity. However, they often struggle to accurately represent the perceptual realism of facial expressions, especially subtle, extreme, or rarely observed ones. In this talk, we will present two contributions focused on improving 3D facial expression reconstruction. The first part introduces SPECTRE—"Visual Speech-Aware Perceptual 3D Facial Expression Reconstruction from Videos"—which offers a method for precise 3D reconstruction of mouth movements linked to speech articulation. This is achieved using a novel "lipread" loss function that enhances perceptual realism. The second part covers SMIRK—"3D Facial Expressions through Analysis-by-Neural-Synthesis"—where we explore how neural rendering techniques can overcome the limitations of differentiable rendering. This approach provides better gradients for 3D reconstruction and allows us to augment training data with diverse expressions for improved generalization. Together, these methods set new standards in accurately reconstructing facial expressions.
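To sketch what a "lipread" loss can look like (a simplified reading of the SPECTRE idea, with placeholder networks and crop functions): render the reconstructed face, crop the mouth region in both the rendering and the real frame, and compare features from a pre-trained lip-reading network.

```python
# Hypothetical sketch of a perceptual lipread loss; lipread_net and
# crop_mouth are placeholders, not the actual SPECTRE components.
import torch.nn.functional as F

def lipread_loss(lipread_net, rendered_frames, real_frames, crop_mouth):
    # crop_mouth: (B, T, 3, H, W) frames -> (B, T, 3, h, w) mouth crops
    feats_rendered = lipread_net(crop_mouth(rendered_frames))
    feats_real = lipread_net(crop_mouth(real_frames))
    # Penalize feature differences so rendered lips stay "readable".
    return F.mse_loss(feats_rendered, feats_real)
```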
Organizers: Victoria Fernandez Abrevaya