TL;DR: EgoControl generates first-person-view (FPV) videos conditioned on the full-body motion of the FPV agent.
Given just a few context frames, you can explore different future scenarios by controlling the agent's full body.


Abstract

Egocentric video generation with fine-grained control through body motion is a key requirement for embodied AI agents that can simulate, predict, and plan actions. In this work, we propose EgoControl, a pose-controllable video diffusion model trained on egocentric data. We train a video prediction model to condition future frame generation on explicit 3D body pose sequences. To achieve precise motion control, we introduce a novel pose representation that captures both global camera dynamics and articulated body movements, and integrate it through a dedicated control mechanism within the diffusion process. Given a short sequence of observed frames and a sequence of target poses, EgoControl generates temporally coherent and visually realistic future frames that align with the provided pose control. Experimental results demonstrate that EgoControl produces high-quality, pose-consistent egocentric videos, paving the way toward controllable embodied video simulation and understanding.
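
To make the conditioning setup concrete, below is a minimal, hypothetical PyTorch sketch of pose-conditioned future-frame denoising: a target 3D pose sequence (joints plus global camera pose) is embedded and injected per frame into a toy diffusion denoiser, alongside a summary of the observed context frames. All module names, shapes, and the additive injection (`PoseEncoder`, `ToyDenoiser`, etc.) are illustrative assumptions, not EgoControl's actual architecture or API; the real control mechanism is described in the paper.

```python
# Hypothetical sketch of pose-conditioned egocentric video generation.
# All names and shapes are illustrative assumptions; the actual EgoControl
# design is described in the paper (arXiv:2511.18173).
import torch
import torch.nn as nn


class PoseEncoder(nn.Module):
    """Embeds a 3D full-body pose sequence plus a global 6-DoF camera pose."""

    def __init__(self, num_joints: int = 24, dim: int = 256):
        super().__init__()
        # Each pose frame: num_joints 3D joint positions + 6-DoF camera pose.
        self.proj = nn.Linear(num_joints * 3 + 6, dim)

    def forward(self, poses: torch.Tensor) -> torch.Tensor:
        # poses: (batch, frames, num_joints * 3 + 6) -> (batch, frames, dim)
        return self.proj(poses)


class ToyDenoiser(nn.Module):
    """Stand-in video diffusion denoiser with per-frame pose control."""

    def __init__(self, latent_dim: int = 64, cond_dim: int = 256):
        super().__init__()
        self.cond_proj = nn.Linear(cond_dim, latent_dim)
        self.net = nn.Sequential(
            nn.Linear(latent_dim, latent_dim), nn.SiLU(),
            nn.Linear(latent_dim, latent_dim),
        )

    def forward(self, noisy_latents, pose_emb, ctx_emb):
        # Additive injection of pose control and context summary: a simple
        # stand-in for a dedicated control mechanism (e.g. cross-attention).
        cond = self.cond_proj(pose_emb) + ctx_emb
        return self.net(noisy_latents + cond)


if __name__ == "__main__":
    batch, ctx_frames, future_frames, num_joints = 1, 4, 12, 24
    pose_encoder = PoseEncoder(num_joints)
    denoiser = ToyDenoiser()

    # Target 3D poses for the future frames to be generated.
    target_poses = torch.randn(batch, future_frames, num_joints * 3 + 6)
    pose_emb = pose_encoder(target_poses)

    # Observed context frames, assumed pre-encoded into latents and
    # mean-pooled into a single conditioning vector.
    ctx_latents = torch.randn(batch, ctx_frames, 64)
    ctx_emb = ctx_latents.mean(dim=1, keepdim=True)  # (batch, 1, 64)

    # One illustrative denoising step on random future-frame latents.
    noisy_latents = torch.randn(batch, future_frames, 64)
    denoised = denoiser(noisy_latents, pose_emb, ctx_emb)
    print(denoised.shape)  # torch.Size([1, 12, 64])
```

Because the pose sequence is supplied per future frame, swapping in a different `target_poses` tensor while keeping the same context frames is what lets the model roll out alternative futures from a single observation.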

[Figure: performance comparison chart]

Citation

@article{egocontrol2025,
  title={EgoControl: Controllable Egocentric Video Generation via 3D Full-Body Poses},
  author={Enrico Pallotta and Sina Mokhtarzadeh Azar and Lars Doorenbos and Serdar Ozsoy and Umar Iqbal and Juergen Gall},
  journal={arXiv preprint arXiv:2511.18173},
  year={2025}
}