Adaptive Recurrent Frame Prediction with Learnable Motion Vectors

Zhizhen Wu1, Chenyu Zuo1, Yuchi Huo1,2*, Yazhen Yuan3, Yifan Peng4, Guiyang Pu5, Rui Wang1, Hujun Bao1,2
1State Key Lab of CAD&CG, Zhejiang University, 2Zhejiang Lab, 3Tencent,
4The University of Hong Kong, 5China Mobile (Hangzhou) Information Technology Co., Ltd.
*Indicates Corresponding Author

Abstract

Dedicated ray tracing graphics cards have revolutionized the production of stunning visual effects in real-time rendering. However, meeting the demand for high frame rates at high resolutions remains a challenge. Pixel warping is a crucial technique for increasing frame rate and resolution by exploiting spatio-temporal coherence. To this end, existing super-resolution and frame prediction methods rely heavily on motion vectors from the rendering engine pipeline to track object movements. This work builds upon state-of-the-art heuristic approaches by exploring a novel adaptive recurrent frame prediction framework that integrates learnable motion vectors. Our framework supports the prediction of transparency, particles, and texture animations, with improved motion vectors that capture shading, reflections, and occlusions in addition to geometric motion. Furthermore, we introduce a feature streaming neural network, dubbed FSNet, that allows for the adaptive prediction of one or multiple sequential frames. Extensive experiments against state-of-the-art methods demonstrate that FSNet operates at lower latency with significant visual enhancements and can increase frame rates by at least a factor of two. This approach offers a flexible pipeline for improving the rendering frame rates of various graphics applications and devices.
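
To make the pixel warping idea in the abstract concrete, below is a minimal PyTorch sketch of backward-warping a rendered frame with per-pixel motion vectors, the basic operation that frame extrapolation builds on. This is an illustrative assumption rather than the paper's FSNet: the function name warp_frame, the tensor layouts, and the motion-vector sign convention are all hypothetical choices made for this sketch.

# Minimal sketch of pixel warping with per-pixel motion vectors (assumed
# layout and conventions, not the authors' FSNet).
import torch
import torch.nn.functional as F

def warp_frame(frame, motion_vectors):
    """Backward-warp `frame` (N, C, H, W) using `motion_vectors` (N, 2, H, W).
    Channels 0/1 hold x/y displacements in pixels, assumed to point from each
    target pixel to its source location in the previous frame."""
    n, _, h, w = frame.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=frame.dtype, device=frame.device),
        torch.arange(w, dtype=frame.dtype, device=frame.device),
        indexing="ij",
    )
    # Follow each pixel's motion vector to its source position.
    src_x = xs.unsqueeze(0) + motion_vectors[:, 0]
    src_y = ys.unsqueeze(0) + motion_vectors[:, 1]
    # Normalize to [-1, 1] as required by grid_sample, then resample.
    grid = torch.stack(
        (2.0 * src_x / (w - 1) - 1.0, 2.0 * src_y / (h - 1) - 1.0), dim=-1
    )
    return F.grid_sample(frame, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

# Usage: extrapolate a future frame by warping the latest rendered frame.
prev = torch.rand(1, 3, 270, 480)   # last rendered color frame
mv = torch.zeros(1, 2, 270, 480)    # per-pixel motion (e.g. from the G-buffer or a network)
pred = warp_frame(prev, mv)         # predicted next frame

In the paper's setting, the motion vectors would be refined by the learnable module so they also capture shading, reflections, and occlusions; the sketch above only illustrates the warping step itself.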

BibTeX

Thank you for your interest in our paper. If you find it helpful or use our work in your research, please cite:

@inproceedings{wu2023adaptive,
  author    = {Wu, Zhizhen and Zuo, Chenyu and Huo, Yuchi and Yuan, Yazhen and Peng, Yifan and Pu, Guiyang and Wang, Rui and Bao, Hujun},
  title     = {Adaptive Recurrent Frame Prediction with Learnable Motion Vectors},
  year      = {2023},
  isbn      = {9798400703157},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  url       = {https://doi.org/10.1145/3610548.3618211},
  doi       = {10.1145/3610548.3618211},
  booktitle = {SIGGRAPH Asia 2023 Conference Papers},
  articleno = {10},
  numpages  = {11},
  keywords  = {Frame Extrapolation, Real-time Rendering, Spatial-temporal},
  location  = {Sydney, NSW, Australia},
  series    = {SA '23}
}