CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers
Large-scale pretrained transformers have set milestones in text generation (GPT-3) and text-to-image generation (DALL-E and CogView). Applying them to video generation still faces many challenges: the potentially huge computation cost makes training from scratch unaffordable, and the scarcity and weak text relevance of text-video datasets hinder the model from understanding complex movement semantics. In this work, we present CogVideo, a 9B-parameter transformer trained by inheriting a pretrained text-to-image model, CogView2. We also propose a multi-frame-rate hierarchical training strategy to better align text and video clips. As (probably) the first open-source large-scale pretrained text-to-video model, CogVideo outperforms all publicly available models by a large margin in machine and human evaluations.
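The core idea behind multi-frame-rate training is to sample a fixed number of frames from each clip at a chosen playback rate, so the model sees the same clip at different temporal resolutions. A minimal sketch of such a frame sampler, under the assumption that clips are stored at a fixed recording frame rate (the helper `sample_frames` is hypothetical, not from the paper's code):

```python
def sample_frames(total_frames: int, target_fps: int,
                  clip_fps: int, num_frames: int) -> list[int]:
    """Pick `num_frames` frame indices from a clip recorded at `clip_fps`,
    spaced to simulate playback at `target_fps` frames per second.

    Hypothetical illustration of multi-frame-rate sampling; the actual
    CogVideo pipeline may differ.
    """
    # Stride between sampled frames: how many recorded frames
    # correspond to one frame at the target playback rate.
    stride = max(1, round(clip_fps / target_fps))
    indices = [i * stride for i in range(num_frames)]
    if indices[-1] >= total_frames:
        raise ValueError("clip too short for this frame rate")
    return indices


# A 4-second clip at 30 fps (120 frames), resampled to 5 fps:
print(sample_frames(120, 5, 30, 5))  # → [0, 6, 12, 18, 24]
```

Training on the lowest frame rate that still spans the clip forces the text to describe longer-range motion, which is one way the hierarchical strategy can improve text-video alignment.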
2022: Wenyi Hong, Ming Ding, Wendi Zheng, Xinghan Liu, Jie Tang
https://arxiv.org/pdf/2205.15868