MoFlow: Motion-Guided Flows for Recurrent Rendered Frame Prediction

Zhizhen Wu1, Zhilong Yuan1, Chenyu Zuo1, Yazhen Yuan2, Yifan Peng3, Guiyang Pu4, Rui Wang1, Yuchi Huo1,5*
1State Key Lab of CAD&CG, Zhejiang University, 2Tencent, 3The University of Hong Kong,
4China Mobile (Hangzhou) Information Technology Co., Ltd, 5Zhejiang Lab
*Indicates Corresponding Author

Abstract

Rendering realistic images in real time on high-frame-rate display devices poses considerable challenges, even with advanced graphics cards. This stimulates demand for frame prediction technologies to boost frame rates. The key to these algorithms is exploiting spatiotemporal coherence by warping rendered pixels with motion representations. However, existing motion estimation methods can suffer from low precision, high overhead, and incomplete support for visual effects. In this article, we present a rendered frame prediction framework with a novel motion representation, dubbed motion-guided flow (MoFlow), aiming to overcome the intrinsic limitations of optical flow and motion vectors and to precisely capture the dynamics of intricate geometries, lighting, and translucent objects. Notably, we construct MoFlows using a recurrent feature streaming network, which specializes in learning latent motion features from multiple frames. The results of extensive experiments demonstrate that, compared to state-of-the-art methods, our method achieves superior visual quality and temporal stability with lower latency. The recurrent mechanism allows our method to predict single or multiple consecutive frames, increasing the frame rate by over 2×. The proposed approach represents a flexible pipeline that meets the demands of various graphics applications, devices, and scenarios.

BibTeX

If you find our paper helpful, please consider citing it:
@article{wu25motion,
  author = {Wu, Zhizhen and Yuan, Zhilong and Zuo, Chenyu and Yuan, Yazhen and Peng, Yifan and Pu, Guiyang and Wang, Rui and Huo, Yuchi},
  title = {MoFlow: Motion-Guided Flows for Recurrent Rendered Frame Prediction},
  year = {2025},
  issue_date = {April 2025},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  volume = {44},
  number = {2},
  issn = {0730-0301},
  url = {https://doi.org/10.1145/3730400},
  doi = {10.1145/3730400},
  journal = {ACM Trans. Graph.},
  month = apr,
  articleno = {22},
  numpages = {18},
  keywords = {Real-time rendering, frame extrapolation, spatial-temporal}
}