
M.S. Thesis Defense: Max Nguyen
Wednesday, April 16, 2025
11 AM
211 Schorr Center
Zoom: https://unl.zoom.us/j/98230219317
"OpNet: Pixel Synthesis Neural Network for Interactive Volume Visualization"
Volume visualization systems that respond to a user's data-dependent operations, such as changing the color and opacity transfer functions, can significantly improve the efficiency of uncovering critical intrinsic patterns within volumetric data. However, existing volume visualization systems often require re-executing the entire visualization pipeline whenever the transfer functions are altered, resulting in substantial computational overhead and hindering real-time interactivity. In this work, we propose a pixel synthesis neural network, OpNet, that directly predicts pixel results in constant time by jointly considering the data, viewing parameters, and transfer functions. Our approach decouples the data and color/opacity mapping from the compositing process of ray casting by learning features from the data and the transfer functions separately. This design enables efficient rendering under modified transfer functions by inferring only a subset of the network, thereby significantly reducing input latency. Furthermore, OpNet extracts a latent representation from rays to accurately model pixel-level similarities, and leverages superpixel rendering through ray grouping to further optimize rendering performance. Experimental results demonstrate that OpNet achieves lower rendering latency with high rendering quality compared to traditional GPU-accelerated ray casting and state-of-the-art generative image synthesis methods, offering a promising solution for real-time, interactive volume visualization.
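
To give a sense of the decoupling idea described in the abstract, the following is a minimal PyTorch-style sketch, not the actual OpNet architecture: the module names (RayEncoder, TFEncoder, PixelHead), layer sizes, and input dimensions are hypothetical. The point it illustrates is that ray features independent of the transfer function can be cached, so only the transfer-function branch and the small fusion head need to be re-evaluated when the user edits the transfer function.

    # Hypothetical illustration of separating ray features from transfer-function
    # features; all names and dimensions are assumptions, not the real OpNet.
    import torch
    import torch.nn as nn

    class RayEncoder(nn.Module):
        """Encodes per-ray data samples and viewing parameters into a latent
        vector. Independent of the transfer function, so it can be cached
        across transfer-function edits."""
        def __init__(self, in_dim: int, latent_dim: int = 128):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        def forward(self, ray_samples: torch.Tensor) -> torch.Tensor:
            return self.net(ray_samples)

    class TFEncoder(nn.Module):
        """Encodes the sampled color/opacity transfer function into a compact feature."""
        def __init__(self, tf_dim: int, latent_dim: int = 64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(tf_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        def forward(self, tf: torch.Tensor) -> torch.Tensor:
            return self.net(tf)

    class PixelHead(nn.Module):
        """Fuses ray and transfer-function features and predicts an RGBA pixel."""
        def __init__(self, latent_dim: int = 128 + 64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, 4), nn.Sigmoid())
        def forward(self, ray_feat: torch.Tensor, tf_feat: torch.Tensor) -> torch.Tensor:
            return self.net(torch.cat([ray_feat, tf_feat], dim=-1))

    # When only the transfer function changes, ray_feat is reused, so just the
    # TF encoder and the pixel head are re-run for each edit.
    ray_enc, tf_enc, head = RayEncoder(in_dim=96), TFEncoder(tf_dim=256), PixelHead()
    ray_feat = ray_enc(torch.randn(1024, 96))                      # cached once per view
    pixels = head(ray_feat, tf_enc(torch.randn(1, 256)).expand(1024, -1))

In this sketch the expensive ray encoding is computed once per viewpoint, while each transfer-function edit only triggers the lightweight TF branch and fusion head, which is the kind of partial-network inference the abstract attributes to OpNet.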
Committee:
Dr. Hongfeng Yu, Advisor
Dr. Lisong Xu
Dr. Huijing Du