Institute Talks

The Atomic Human: Understanding ourselves in the age of AI

Max Planck Lecture
  • 17 October 2024 • 16:00—18:00
  • Neil Lawrence
  • Lecture Hall 2D5, Heisenbergstraße 1, Stuttgart

The Max Planck Institute for Intelligent Systems is delighted to invite you to its 2024 Max Planck Lecture in Stuttgart.

Organizers: Michael Black, Barbara Kettemann, Valeria Rojas

Diffusion Models for Human Motion Synthesis

Talk
  • 14 October 2024 • 15:30—16:30
  • Guy Tevet
  • MPI-IS Tübingen, N3.022

Character motion synthesis stands as a central challenge in computer animation and graphics. The successful adaptation of diffusion models to the field boosted synthesis quality and provided intuitive controls such as text and music. One of the earliest and most popular such methods is the Motion Diffusion Model (MDM) [ICLR 2023]. In this talk, I will review how MDM incorporates domain know-how into the diffusion model and enables intuitive editing capabilities. Then, I will present two recent works, each suggesting a refreshing take on motion diffusion and extending its abilities to new animation tasks. Multi-view Ancestral Sampling (MAS) [CVPR 2024] is an inference-time algorithm that samples 3D animations from 2D keypoint diffusion models. We demonstrated it by generating 3D animations for characters and scenarios that are challenging to record with elaborate motion capture systems yet ubiquitous in in-the-wild videos, such as horse racing and professional rhythmic gymnastics. Monkey See, Monkey Do (MoMo) [SIGGRAPH Asia 2024] explores the attention space of the motion diffusion model. A careful analysis reveals the roles of the attention keys and queries throughout the generation process. With these findings in hand, we design a training-free method that generates motion following the distinct motifs of one motion while guided by an outline dictated by another. To conclude, I will give my modest take on the challenges in the field and our lab's current work attempting to tackle some of them.
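As background for the diffusion-based synthesis described above, the ancestral sampling loop such models run at inference can be sketched as follows. This is a schematic NumPy toy with made-up shapes (frames × joints × coordinates) and a stand-in denoiser, not MDM's actual implementation; it assumes the x0-prediction formulation, in which the network estimates the clean motion at every step.

```python
import numpy as np

def ancestral_sample(denoiser, T=50, shape=(60, 22, 3), seed=0):
    """Toy DDPM-style ancestral sampling for motion (frames x joints x xyz).

    `denoiser` predicts the clean motion x0 from the noisy sample x_t;
    this loop is schematic, not MDM's code.
    """
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 2e-2, T)      # linear noise schedule
    alphas = 1.0 - betas
    abar = np.cumprod(alphas)               # cumulative product \bar{alpha}_t
    x = rng.standard_normal(shape)          # start from pure Gaussian noise
    for t in range(T - 1, -1, -1):
        x0_hat = denoiser(x, t)             # network's clean-motion estimate
        abar_prev = abar[t - 1] if t > 0 else 1.0
        # posterior mean of q(x_{t-1} | x_t, x0_hat)
        coef0 = np.sqrt(abar_prev) * betas[t] / (1.0 - abar[t])
        coeft = np.sqrt(alphas[t]) * (1.0 - abar_prev) / (1.0 - abar[t])
        mean = coef0 * x0_hat + coeft * x
        if t > 0:
            var = betas[t] * (1.0 - abar_prev) / (1.0 - abar[t])
            x = mean + np.sqrt(var) * rng.standard_normal(shape)
        else:
            x = mean
    return x

# stand-in "denoiser" that shrinks toward zero motion; a real model
# would be a trained transformer conditioned on e.g. text or music
motion = ancestral_sample(lambda x, t: 0.1 * x)
```

Editing capabilities such as in-betweening fall out of the same loop by overwriting known frames of `x` after each step.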

Organizers: Omid Taheri


Reconstruction and Animation of Realistic Head Avatars

Talk
  • 10 October 2024 • 14:00—15:00
  • Egor Zakharov
  • Max-Planck-Ring 4, N3, Aquarium

Digital humans, or realistic avatars, are a centerpiece of future telepresence and special-effects systems, and human head modeling is one of their main components. These applications, however, place high demands on avatar creation speed, realism, and controllability. This talk will focus on approaches that create controllable and detailed 3D head avatars from data captured with consumer-grade devices, such as smartphones, in uncalibrated and unconstrained settings. We will discuss leveraging in-the-wild internet videos and synthetic data sources to achieve high diversity of facial expressions and appearance personalization, including detailed hair modeling. We will also showcase how the resulting human-centric assets can be integrated into virtual environments for real-time telepresence and entertainment applications, illustrating the future of digital communication and gaming.

Organizers: Vanessa Sklyarova


  • Simon Donne
  • Virtual, Live stream at Max-Planck-Ring 4, N3, Aquarium

Current diffusion models generate only RGB images. If we want to make progress toward graphics-ready 3D content generation, we need a PBR foundation model, but there is not enough PBR data available to train such a model from scratch. We introduce Collaborative Control, which tightly links a new PBR diffusion model to a pre-trained RGB model. We show that this dual architecture avoids catastrophic forgetting, outputs high-quality PBR images, and generalizes well beyond the PBR training dataset. Furthermore, the frozen base model remains compatible with techniques such as IP-Adapter.
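The dual-architecture idea above, a frozen pre-trained RGB backbone whose features feed a new trainable PBR branch, can be illustrated with a toy sketch. Everything here (layer sizes, the linear "cross-link", the three output maps) is a hypothetical stand-in, not the Collaborative Control architecture itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_rgb_features(x):
    # stand-in for the pre-trained RGB diffusion backbone; its weights
    # are fixed, so the base model cannot catastrophically forget
    W = np.full((8, 8), 0.1)
    return np.tanh(x @ W)

class PBRBranch:
    """New trainable branch that reads the frozen model's features.

    Only this branch is trained on the (small) PBR dataset, while the
    rich priors of the base model are consumed read-only.
    """
    def __init__(self, d=8, n_maps=3):
        self.link = rng.standard_normal((d, d)) * 0.1  # learned cross-link
        self.head = rng.standard_normal((d, n_maps)) * 0.1

    def __call__(self, x):
        feats = frozen_rgb_features(x)          # read-only base features
        h = np.maximum(x @ self.link + feats, 0.0)
        return h @ self.head                    # e.g. albedo/roughness/metallic

pbr = PBRBranch()
out = pbr(rng.standard_normal((4, 8)))          # 4 tokens -> 3 PBR channels
```

Because the base weights never change, add-ons trained against the original RGB model (such as IP-Adapter) keep working unchanged.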

Organizers: Soubhik Sanyal


  • Slava Elizarov
  • Virtual, Live stream at Max-Planck-Ring 4, N3, Aquarium

In this talk, I will present Geometry Image Diffusion (GIMDiffusion), a novel method designed to generate 3D objects from text prompts efficiently. GIMDiffusion uses geometry images, a 2D representation of 3D shapes, which allows the use of existing image-based architectures instead of complex 3D-aware models. This approach reduces computational costs and simplifies the model design. By incorporating Collaborative Control, the method exploits rich priors of pretrained Text-to-Image models like Stable Diffusion, enabling strong generalization even with limited 3D training data. GIMDiffusion produces 3D objects with semantically meaningful, separable parts and internal structures, which enhances the ease of manipulation and editing.
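The geometry-image representation mentioned above can be made concrete: each pixel of a regular 2D grid stores an xyz surface point, and mesh connectivity is implicit in the grid, which is why ordinary image architectures can process the shape. A minimal sketch for a sphere follows (illustrative only, not GIMDiffusion's actual parameterization):

```python
import numpy as np

def sphere_geometry_image(n=16):
    """Encode a unit sphere as an n x n x 3 'geometry image':
    each pixel stores one xyz point on the surface."""
    u = np.linspace(0.0, 2 * np.pi, n)          # azimuth
    v = np.linspace(1e-3, np.pi - 1e-3, n)      # polar angle, poles avoided
    uu, vv = np.meshgrid(u, v)
    return np.stack([np.sin(vv) * np.cos(uu),
                     np.sin(vv) * np.sin(uu),
                     np.cos(vv)], axis=-1)

def grid_faces(n):
    """Connectivity is implicit in the pixel grid: two triangles per
    pixel quad, so decoding back to a mesh is trivial."""
    faces = []
    for i in range(n - 1):
        for j in range(n - 1):
            a, b = i * n + j, i * n + j + 1
            c, d = (i + 1) * n + j, (i + 1) * n + j + 1
            faces += [(a, b, c), (b, d, c)]
    return np.array(faces)

gim = sphere_geometry_image()    # a 16x16 "image" that is actually a surface
faces = grid_faces(16)
```

A generative model that outputs such images therefore outputs textured-mesh-ready geometry with standard 2D architectures, which is the efficiency argument made in the abstract.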

Organizers: Soubhik Sanyal


  • Adriana Cabrera
  • Copper (2R04)

This talk explores the prototyping of e-textiles and the integration of Soft Robotics systems, grounded in experimentation within digital fabrication spaces and Open Innovation environments like Fab Labs. By leveraging CNC fabrication methods and soft material manipulation, this approach reduces barriers between high and low tech, making experimentation more accessible. It also enables the integration of pneumatic actuators, sensors, and data collection systems into e-textiles and wearable technologies. The presentation will highlight how these developments open up new possibilities for creating smart textiles with soft robotic capabilities. Finally, it aims to inspire discussions on the application of haptics and actuators, such as HASEL, in wearables and e-textiles, fostering co-creation of future solutions that blend these innovative technologies with design.

Organizers: Paul Abel, Christoph Keplinger


Advancements in 3D Facial Expression Reconstruction

Talk
  • 23 September 2024 • 12:00—13:00
  • Panagiotis Filntisis and George Retsinas
  • Hybrid

Recent advances in 3D face reconstruction from in-the-wild images and videos have excelled at capturing the overall facial shape associated with a person's identity. However, they often struggle to accurately represent the perceptual realism of facial expressions, especially subtle, extreme, or rarely observed ones. In this talk, we will present two contributions focused on improving 3D facial expression reconstruction. The first part introduces SPECTRE—"Visual Speech-Aware Perceptual 3D Facial Expression Reconstruction from Videos"—which offers a method for precise 3D reconstruction of mouth movements linked to speech articulation. This is achieved using a novel "lipread" loss function that enhances perceptual realism. The second part covers SMIRK—"3D Facial Expressions through Analysis-by-Neural-Synthesis"—where we explore how neural rendering techniques can overcome the limitations of differentiable rendering. This approach provides better gradients for 3D reconstruction and allows us to augment training data with diverse expressions for improved generalization. Together, these methods set new standards in accurately reconstructing facial expressions.
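The "lipread" loss idea above, judging mouth motion by the features of a frozen lip-reading network rather than by pixel error, can be sketched schematically. The feature extractor below is a made-up stand-in for a real pre-trained lip reader, and the frame dimensions are arbitrary.

```python
import numpy as np

def lipreader_features(frames):
    # stand-in for a frozen pre-trained lip-reading network's embedding;
    # a real one maps mouth-region crops to features trained for
    # visual speech recognition
    W = np.linspace(-1.0, 1.0, frames.shape[-1] * 4).reshape(frames.shape[-1], 4)
    return np.tanh(frames @ W).mean(axis=0)      # pool over time

def lipread_loss(rendered, real):
    """Perceptual 'lipread' loss: compare frozen lip-reader features of
    the rendered face sequence against the ground-truth video, so mouth
    motion is judged by its speech content, not raw pixels. Schematic only."""
    f_r = lipreader_features(rendered)
    f_g = lipreader_features(real)
    return float(np.sum((f_r - f_g) ** 2))

real = np.random.default_rng(1).standard_normal((30, 8))  # 30 toy frames
loss_same = lipread_loss(real, real)   # identical sequences -> zero loss
```

Because the lip reader is frozen, its gradients steer the 3D reconstruction toward articulations that a speech-perception model recognizes, which is the perceptual-realism claim in the abstract.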

Organizers: Victoria Fernandez Abrevaya


Generalizable Object-aware Human Motion Synthesis

Talk
  • 12 September 2024 • 14:00—15:00
  • Wanyue Zhang
  • Max-Planck-Ring 4, N3, Aquarium

Data-driven virtual 3D character animation has recently witnessed remarkable progress. The realism of virtual characters is a core contributing factor to the quality of computer animations and user experience in immersive applications like games, movies, and VR/AR. However, existing automatic approaches for 3D virtual character motion synthesis supporting scene interactions do not generalize well to new objects outside training distributions, even when trained on extensive motion capture datasets with diverse objects and annotated interactions. In this talk, I will present ROAM, an alternative framework that generalizes to unseen objects of the same category without relying on a large dataset of human-object animations. In addition, I will share some preliminary findings from an ongoing project on hand motion interaction with articulated objects.

Organizers: Nikos Athanasiou


  • Lorena Velásquez
  • Hybrid - Webex plus in-person attendance in Oxygen (5N18)

Individuals with limb loss often choose prosthetic devices to complete activities of daily living (ADLs), as these devices can provide enhanced dexterity and customizable utility. Despite these benefits, high abandonment rates persist due to uncomfortable, cumbersome, and unreliable designs. Moreover, even when motor function is restored, dexterous sensorimotor control remains severely impaired in the absence of haptic feedback. This presentation details the design and evaluation of tendon-actuated mock prostheses with integrated state-based haptic feedback and their anthropomorphic tendon-actuated end effectors.

Organizers: Katherine Kuchenbecker, Uli Bartels


Modelling the Musculoskeletal System

Talk
  • 04 September 2024 • 10:30—11:30
  • Thor Besier
  • Max-Planck-Ring 4, N3

Thor Besier leads the musculoskeletal modelling group at the Auckland Bioengineering Institute and will provide an overview of the institute and some of the current research projects of his team, including the Musculoskeletal Atlas Project, Harmonising clinical gait analysis data, Digital Twins for shoulder arthroplasty, and Reproducibility of Knee Models (NIH funded KneeHUB project).

Organizers: Marilyn Keller


Technologies of Thin Films for High-power Laser Systems

Talk
  • 22 August 2024 • 10:00—11:30
  • Prof. Zhanshan Wang
  • Copper (2R04)

High-power laser systems significantly influence solutions to major scientific problems and high-tech industries. Thin films are among the core components of advanced high-power laser systems. As output power grows and application scenarios diversify, high-power laser systems must satisfy increasingly stringent requirements on the damage threshold, optical loss, and optical-field control capabilities of their thin-film components. To improve the laser damage threshold, we revealed the physical mechanism of "localized strong points", which induce laser damage in thin films, and established a "field control design method" that manipulates the distribution of standing-wave fields by adjusting the film structure. We further proposed a new method for obtaining quantitative damage laws of localized strong points using artificial defects, which lays the foundation for a "full-process quantification" approach to controlling defect formation in thin films. Regarding optical loss in thin films, we clarified the relationship between optical factors, interface relevance, and film interface scattering. We then proposed four engineering strategies: 1) multi-objective optimization techniques for synergistically controlling optical factors and spectral efficiency, 2) oblique growth for changing interface PSD relevance, 3) ion-activated oxygen technology, nano-composite material technology, and high-temperature annealing technology for reducing film absorption loss, and 4) defect-flattening technology for mitigating absorption and scattering losses. Regarding optical field control, we discussed in detail the pros and cons of traditional optical thin films and metasurfaces for controlling the amplitude and phase of electromagnetic waves.
A quasi-3D multilayer metasurface structure enhances non-local energy-flow control through efficient coupling of transmitted waves and Bloch waves, achieving, for the first time, an efficiency exceeding 99% in anomalous reflection at optical frequencies. We elucidated how the degrees of freedom of the metasurface structure control the phase difference and phase dispersion of Bragg modes within the structure, and achieved an efficiency exceeding 99% in broadband depolarized perfect Littrow diffraction through topological optimization of the metasurface shape. Additionally, we developed a new additive manufacturing method based on atomic layer deposition and etching that avoids the microstructure shape changes and localized hotspots caused by etching, effectively improving the efficiency and damage threshold of multilayer-film metasurface structures.
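The standing-wave field control mentioned in the first part of the abstract rests on standard thin-film optics: the characteristic (transfer) matrix of a layer stack determines its reflectance and internal field distribution. A minimal sketch for a quarter-wave high-reflectivity stack at normal incidence follows (textbook method, not the speaker's design code; the indices and wavelength are illustrative):

```python
import numpy as np

def stack_reflectance(n_layers, d_layers, n_in=1.0, n_sub=1.52, wl=1064e-9):
    """Normal-incidence reflectance of a thin-film stack via the standard
    characteristic-matrix method. Illustrates how layer choice shapes the
    standing-wave field; schematic only."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2 * np.pi * n * d / wl               # phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])                # stack admittance terms
    r = (n_in * B - C) / (n_in * B + C)              # amplitude reflectance
    return abs(r) ** 2

# quarter-wave HR mirror: alternating high/low-index layers at 1064 nm
wl = 1064e-9
nH, nL = 2.3, 1.45                                   # e.g. oxide pair (illustrative)
pairs = 10
ns = [nH, nL] * pairs
ds = [wl / (4 * nH), wl / (4 * nL)] * pairs
R = stack_reflectance(ns, ds, wl=wl)                 # approaches 1 as pairs grow
```

Perturbing the layer thicknesses in `ds` shifts where the standing-wave maxima fall inside the stack, which is the lever the "field control design method" exploits to keep high field intensity away from damage-prone interfaces.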

Organizers: Christoph Keplinger