Multimorphing (Paper @ CGI 2023)
Multidimensional Image Morphing
Fast Image-based Rendering of Open 3D and VR Environments
CGI'2023
(Journal Track)
Simon Seibt1, Bastian Kuth2, Bartosz von Rymon Lipinski1, Thomas Chang1 and Marc Erich Latoschik3
1 Nuremberg Institute of Technology Georg Simon Ohm
2 Department of Computer Science, Coburg University of Applied Sciences and Arts
3 Human-Computer Interaction Group, University of Wuerzburg
Multi-morphing rendering results for the following image datasets:
"Farmland" (real scene), "Nature Walk" (real scene), "Living Room" (real scene), "Cave" (synthetic scene) and "Bistro" (synthetic scene).
Abstract
The demand for interactive photorealistic 3D environments has increased in recent years across various fields such as architecture, engineering and entertainment. Nevertheless, achieving a balance between quality and performance for high-performance 3D applications and Virtual Reality (VR) remains a challenge. This paper addresses this issue by revisiting and extending view interpolation for image-based rendering, enabling the exploration of spacious open environments in 3D and VR. To this end, we introduce multi-morphing, a novel rendering method based on a spatial data structure of 2D image patches, called the image graph. With this approach, novel views can be rendered with up to six degrees of freedom using only a sparse set of input views. The rendering process requires neither a 3D reconstruction of the scene geometry nor per-pixel depth information: all data relevant for the output is extracted from local morphing cells of the image graph. Parallax image regions are detected during preprocessing, which reduces rendering artifacts by extrapolating image patches from adjacent cells in real time. Additionally, a GPU-based solution to resolve exposure inconsistencies within a dataset is presented, enabling seamless brightness transitions when moving between areas with varying light intensities. Experiments on multiple real-world and synthetic scenes demonstrate that the presented method achieves high "VR-compatible" frame rates, even on mid-range and legacy hardware.
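To make the core idea concrete, below is a minimal sketch of blending a novel view inside one morphing cell. It assumes a triangular cell spanned by three captured camera positions and uses barycentric interpolation weights; the function names (`barycentric_weights`, `blend_views`), the per-view exposure gains, and the pre-warped input images are illustrative assumptions, not the authors' implementation. The actual patch-based 2D morphing, parallax extrapolation and GPU pipeline from the paper are omitted here.

```python
# Hypothetical sketch: weighted, exposure-compensated blending of views
# inside one triangular "morphing cell". Not the authors' API.
import numpy as np

def barycentric_weights(p, a, b, c):
    """Barycentric coordinates of 2D point p in triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - v - w, v, w])

def blend_views(views, gains, weights):
    """Weighted blend of pre-warped view images with per-view exposure gains.

    views:   list of HxWx3 float images, assumed already patch-warped
             toward the novel viewpoint (the warping step is omitted).
    gains:   scalar gains standing in for the paper's exposure compensation.
    weights: interpolation weights of the novel viewpoint within its cell.
    """
    out = np.zeros_like(views[0])
    for img, g, w in zip(views, gains, weights):
        out += w * (g * img)
    return np.clip(out, 0.0, 1.0)

# Novel viewpoint inside a cell spanned by three captured camera positions.
cams = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
p = np.array([0.3, 0.3])
w = barycentric_weights(p, *cams)

views = [np.random.rand(4, 4, 3) for _ in cams]  # stand-ins for warped views
gains = [1.0, 0.9, 1.1]                          # stand-in exposure gains
novel = blend_views(views, gains, w)
```

In the paper's setting, the blended inputs would themselves be the result of patch-based morphing within the image graph's cell, and the gains correspond to the GPU-based exposure adjustment described in the abstract.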
Paper
Pipelines
Visual Results
Demo Video
Visual Comparisons
Spixelwarp [1]
Soft3D [2]
Instant NGP [3]
Nerfacto [4]
Proposed
BibTeX
Coming soon!
References
[1] Chaurasia G, Duchene S, Sorkine-Hornung O, Drettakis G. Depth synthesis and local warps for plausible image-based navigation. ACM Transactions on Graphics. 2013 June; 32:1–12.
[2] Penner E, Zhang L. Soft 3D reconstruction for view synthesis. ACM Transactions on Graphics. 2017 November; 36:1–11.
[3] Müller T, Evans A, Schied C, Keller A. Instant neural graphics primitives with a multiresolution hash encoding. ACM Transactions on Graphics. 2022 July; 41:1–15.
[4] Tancik M, Weber E, Ng E, Li R, Yi B, Wang T, Kristoffersen A, Austin J, Salahi K, Ahuja A, McAllister D, Kerr J, Kanazawa A. Nerfstudio: A modular framework for neural radiance field development. ACM SIGGRAPH 2023 Conference Proceedings. 2023; 1–12.