VSSFlow: Unifying Video-conditioned Sound and Speech Generation via Joint Learning
Authors: Xin Cheng†, Yuyue Wang†, Xihua Wang†, Yihan Wu†, Kaisi Guan†, Yijing Chen†, Peng Zhang, Kieran Liu, Meng Cao, Ruihua Song†
Video-conditioned sound and speech generation, encompassing the video-to-sound (V2S) and visual text-to-speech (VisualTTS) tasks, has conventionally been treated as two separate problems, with limited exploration of unifying them within a single framework. Recent attempts to unify V2S and VisualTTS struggle to handle heterogeneous condition types (e.g., video versus transcript conditions) and require complex multi-stage training, so unifying the two tasks remains an open problem. To bridge this gap, we present VSSFlow, which seamlessly integrates both V2S and VisualTTS into a unified flow-matching framework. VSSFlow uses a novel condition aggregation mechanism to handle the distinct input signals: we find that cross-attention and self-attention layers exhibit different inductive biases when introducing conditions, so VSSFlow exploits these biases by routing the more ambiguous video condition through cross-attention and the more deterministic speech transcript through self-attention. Furthermore, contrary to the prevailing belief that joint training on the two tasks requires complex training strategies and may degrade performance, we find that VSSFlow benefits from end-to-end joint learning of sound and speech generation without any extra training-stage design. Detailed analysis attributes this benefit to a general audio prior learned and shared across tasks, which accelerates convergence, enhances conditional generation, and stabilizes the classifier-free guidance process. Extensive experiments demonstrate that VSSFlow surpasses state-of-the-art domain-specific baselines on both V2S and VisualTTS benchmarks, underscoring the critical potential of unified generative models.
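As a rough illustration of the condition aggregation idea described above, the sketch below (in PyTorch, assuming a latent flow-matching setup) concatenates transcript tokens with the noisy audio latents so that self-attention carries the more deterministic text condition, while a separate cross-attention layer attends to the more ambiguous video features. The class and variable names (VSSFlowBlock, video_feats, transcript_emb) are hypothetical and not taken from the paper, and time conditioning is omitted for brevity.

```python
# Minimal sketch, assuming PyTorch. Illustrative only: VSSFlowBlock, video_feats and
# transcript_emb are hypothetical names; time embedding is omitted for brevity.
import torch
import torch.nn as nn


class VSSFlowBlock(nn.Module):
    """One transformer block: self-attention over [audio latents ; transcript tokens],
    then cross-attention to video features, then an MLP."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor, video_feats: torch.Tensor) -> torch.Tensor:
        # Self-attention over the concatenated [audio latents ; transcript] sequence,
        # so the transcript condition is injected via self-attention.
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h, need_weights=False)[0]
        # Cross-attention to video features: the ambiguous video condition enters here.
        h = self.norm2(x)
        x = x + self.cross_attn(h, video_feats, video_feats, need_weights=False)[0]
        return x + self.mlp(self.norm3(x))


def flow_matching_loss(block, x1, transcript_emb, video_feats):
    """Rectified-flow style objective: regress the velocity (x1 - x0) at a random time t."""
    x0 = torch.randn_like(x1)              # noise endpoint of the probability path
    t = torch.rand(x1.size(0), 1, 1)       # per-example time in [0, 1]
    xt = (1 - t) * x0 + t * x1             # point on the straight-line path
    target_v = x1 - x0                     # constant velocity along that path
    seq = torch.cat([xt, transcript_emb], dim=1)        # transcript joins via self-attention
    pred_v = block(seq, video_feats)[:, : x1.size(1)]   # keep only audio-latent positions
    return torch.mean((pred_v - target_v) ** 2)


if __name__ == "__main__":
    dim = 256
    block = VSSFlowBlock(dim)
    x1 = torch.randn(2, 64, dim)           # "clean" audio latents (e.g., from an audio VAE)
    transcript = torch.randn(2, 20, dim)   # transcript token embeddings (absent for pure V2S)
    video = torch.randn(2, 32, dim)        # per-frame video features
    loss = flow_matching_loss(block, x1, transcript, video)
    loss.backward()
    print(f"toy flow-matching loss: {loss.item():.4f}")
```

In a full model one would stack several such blocks, add time and positional conditioning, and apply classifier-free guidance at sampling time; this sketch only shows how the two condition types can enter through different attention paths.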
STIV: Scalable Text and Image Conditioned Video Generation
August 1, 2025 · Research areas: Computer Vision, Methods and Algorithms
The field of video generation has made remarkable advancements, yet there remains a pressing need for a clear, systematic recipe that can guide the development of robust and scalable models. In this work, we present a comprehensive study that systematically explores the interplay of model architectures, training recipes, and data curation strategies, culminating in a simple and scalable text-image-conditioned video generation method, named STIV…
Visatronic: A Multimodal Decoder-Only Model for Speech Synthesis
July 14, 2025 · Research areas: Methods and Algorithms, Speech and Natural Language Processing
The rapid progress of foundation models and large language models (LLMs) has fueled significant improvement in the capabilities of machine learning systems that benefit from multimodal input data. However, existing multimodal models are predominantly built on top of pre-trained LLMs, which can limit accurate modeling of temporal dependencies across other modalities and thus limit the model’s ability to jointly process and leverage multimodal…