Paper accepted at ICCV on Semantics-Aware Co-Speech Gesture Generation!
Our paper, “SemGes: Semantics-aware Co-Speech Gesture Generation using Semantic Coherence and Relevance Learning,” has been accepted for presentation at the International Conference on Computer Vision (ICCV) 2025! This work introduces a novel approach for generating co-speech gestures that are semantically coherent and relevant to the spoken content, improving the naturalness and expressiveness of multimodal communication.
A demo of the paper is available here.