Paper
13 May 2024
Human motion generation with StyleGAN
Kazuki Yamamoto, Makoto Murakami
Proceedings Volume 13158, Seventh International Conference on Computer Graphics and Virtuality (ICCGV 2024); 131580A (2024) https://doi.org/10.1117/12.3029447
Event: Seventh International Conference on Computer Graphics and Virtuality (ICCGV24), 2024, Hangzhou, China
Abstract
Humanoid characters in games and videos are expected to move as naturally as humans do in the real world. Recent research has applied deep learning to generating natural motion, and there is a growing need to both generate and control diverse natural motions. This study therefore proposes a model for generating human motion using StyleGAN, a deep generative model originally developed for image generation. The performance of the proposed model was assessed experimentally on motion capture datasets; the results confirmed that the model can generate diverse and natural motions through random generation and interpolation of intermediate latent variables. Additionally, style-mixing experiments indicated that the low-level layers control the class of movement, while the middle-level layers control the positions of the arms and legs, posture, and body orientation.
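The paper itself includes no code; the following is a minimal sketch, in plain NumPy, of the two latent-space operations the abstract refers to: interpolation of intermediate latent variables and style mixing across generator layers. All names, dimensions, the stand-in mapping network, and the crossover layer index are illustrative assumptions, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    Z_DIM, W_DIM, N_LAYERS = 512, 512, 8  # assumed sizes, not from the paper

    # Stand-in for StyleGAN's learned mapping network f: Z -> W
    # (here a fixed random linear map followed by a nonlinearity).
    A = rng.standard_normal((Z_DIM, W_DIM)) / np.sqrt(Z_DIM)

    def mapping(z):
        return np.tanh(z @ A)

    # Intermediate latent variable interpolation: blend two w vectors
    # linearly; each blend would be fed to every synthesis layer.
    z1, z2 = rng.standard_normal(Z_DIM), rng.standard_normal(Z_DIM)
    w1, w2 = mapping(z1), mapping(z2)
    w_interp = [(1 - a) * w1 + a * w2 for a in np.linspace(0.0, 1.0, 5)]

    # Style mixing: take w1 for the low-level layers (which the paper
    # finds control the class of movement) and w2 for the middle-level
    # layers (limb positions, posture, body orientation).
    crossover = 2  # assumed switch point
    w_per_layer = np.stack([w1 if i < crossover else w2
                            for i in range(N_LAYERS)])
    print(len(w_interp), w_per_layer.shape)  # 5 (8, 512)

In the full model, each row of w_per_layer would modulate one synthesis layer producing motion frames, so the choice of crossover depth determines which attributes are inherited from which source motion.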
© 2024 Published by SPIE. Downloading of the abstract is permitted for personal use only.
Kazuki Yamamoto and Makoto Murakami "Human motion generation with StyleGAN", Proc. SPIE 13158, Seventh International Conference on Computer Graphics and Virtuality (ICCGV 2024), 131580A (13 May 2024); https://doi.org/10.1117/12.3029447
KEYWORDS
Motion models, Interpolation, Machine learning, Data modeling, Convolution, Feature extraction, Statistical analysis