On Combining Ray Tracing and Rasterization in Rendering Global Illumination by Using a Generative Adversarial Neural Network
DOI: https://doi.org/10.24160/1993-6982-2025-3-144-152

Keywords: generative-adversarial neural network, computer graphics, global illumination, 3D rendering

Abstract
The article studies methods that use an image obtained by ray tracing at a reduced resolution (25% of the target resolution) to generate, with a generative adversarial network, a high-resolution image of a 3D scene with global illumination in less time than full-resolution ray tracing would require. Three variants of one method for rendering 3D scenes with a generative adversarial network that produces realistic screen-space illumination are proposed: a model that upscales the input data to the required resolution by bicubic interpolation; a model that differs from the first by adding Gaussian noise to the convolutional layers of the discriminator; and a model that passes the reduced-resolution ray-traced image directly to a hidden convolutional layer instead of the generator's input layer, thereby eliminating the need for interpolation. Experiments with software implementations of the neural networks showed, both visually and by the structural similarity index (SSIM) and the peak signal-to-noise ratio (PSNR), that adding the full render to the input data improves image quality on both metrics; that adding noise to the discriminator layers also improves network performance on both metrics, though not significantly; and that passing the full render to hidden convolutional layers of the generator instead of the input layer degrades quality on both metrics but avoids undesirable artifacts at the edges of objects. The time the neural network takes to process images differs insignificantly (within 2%) across the three models.
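The comparison of the three models rests on SSIM [13] and PSNR, which the authors compute with scikit-image [24]. As a self-contained illustration of what these two metrics measure, here is a minimal NumPy sketch; note that the SSIM below uses a single global window, a simplification of Wang et al.'s index, whereas scikit-image averages the index over sliding local windows, so the values here only approximate the reported figures:

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(x, y, data_range=255.0):
    """Structural similarity computed over one global window (simplified;
    the standard index averages this formula over local sliding windows)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from Wang et al.
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For identical images PSNR is infinite and SSIM equals 1; as the generated image drifts from the ray-traced reference, both values fall, which is the sense in which the article reports that one model variant "improves quality with respect to both indicators."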
References
1. Pharr M., Jakob W., Humphreys G. Physically Based Rendering: from Theory to Implementation. Cambridge: MIT Press, 2023.
2. Eversberg L., Lambrecht J. Generating Images with Physics-based Rendering for an Industrial Object Detection Task: Realism Versus Domain Randomization. Sensors. 2021;21(23):7901.
3. Thomas M.M., Forbes A.G. Deep Illumination: Approximating Dynamic Global Illumination with Generative Adversarial Network [Electronic resource] https://arxiv.org/pdf/1710.09834 (accessed 30.09.2024).
4. Harris-Dewey J., Klein R. Generative Adversarial Networks for Non-raytraced Global Illumination on Older GPU Hardware [Electronic resource] https://arxiv.org/abs/2110.12039 (accessed 30.09.2024).
5. Xiao L. et al. Neural Supersampling for Real-time Rendering. ACM Trans. Graphics. 2020;39:1—12.
6. Yan Xinkai et al. Neural Rendering and Its Hardware Acceleration: a Review [Electronic resource] https://arxiv.org/pdf/2402.00028 (accessed 30.09.2024).
7. Salmi A. et al. Generative Adversarial Shaders for Real-time Realism Enhancement [Electronic resource] https://arxiv.org/abs/2306.04629 (accessed 30.09.2024).
8. Fayçal A. et al. Approximating Global Illumination with Ambient Occlusion and Environment Light Via Generative Adversarial Networks. Pattern Recognit. Lett. 2022;166:209—217.
9. Fayçal A., Babahenini M.Ch. Forest Fog Rendering Using Generative Adversarial Networks. The Visual Computer. 2022;39:943—952.
10. Choi Myungjin et al. Deep Representation of a Normal Map for Screen-space Fluid Rendering. Appl. Sci. 2021;11(19):9065.
11. Hu Jinkai et al. Efficient Real-time Dynamic Diffuse Global Illumination Using Signed Distance Fields. The Visual Computer. 2021;37:2539—2551.
12. Xin H., Zheng S., Xu K., Yan L. Lightweight Bilateral Convolutional Neural Networks for Interactive Single-bounce Diffuse Indirect Illumination. IEEE Trans. Visualization and Computer Graphics. 2022;28(4):1824—1834.
13. Wang Z., Bovik A.C., Sheikh H.R., Simoncelli E.P. Image Quality Assessment: from Error Visibility to Structural Similarity. IEEE Trans. Image Proc. 2004;13(4):600—612.
14. Langr J., Bok V. GANs in Action: Deep Learning with Generative Adversarial Networks. N.Y.: Manning Publ., 2019.
15. Goodfellow I.J. et al. Generative Adversarial Networks [Electronic resource] https://arxiv.org/abs/1406.2661 (accessed 30.09.2024).
16. Rubinov K., Frolov A., Mamontov A. Educational Resources for Remote Comparative Study of Generative Adversarial Networks. Proc. VII Intern. Conf. Information Technologies in Eng. Education (Inforino). Moscow, 2024:1—5.
17. Ronneberger O., Fischer P., Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation [Electronic resource] https://arxiv.org/abs/1505.04597 (accessed 30.09.2024).
18. PyTorch Documentation — PyTorch 2.2 Documentation [Electronic resource] https://pytorch.org/docs/stable/index.html (accessed 30.09.2024).
19. Sønderby C.K. et al. Amortised MAP Inference for Image Super-resolution [Electronic resource] https://arxiv.org/abs/1610.04490 (accessed 30.09.2024).
20. Home of the Blender Project — Free and Open 3D Creation Software [Electronic resource] https://www.blender.org/ (accessed 30.09.2024).
21. Astuti I.A. et al. Comparison of Time, Size and Quality of 3D Object Rendering Using Render Engine Eevee and Cycles in Blender. Proc. V Intern. Conf. Computer and Informatics Eng. Jakarta, 2022:54—59.
22. Yoshimura A., Ikeda S., Harada T. Geometry and Texture Streaming Architecture in Radeon™ ProRender [Electronic resource] https://gpuopen.com/download/publications/GPUOpen2022_RadeonProRenderStreaming.pdf (accessed 30.09.2024).
23. Profeta R. Introduction to Google Colaboratory for Research [Electronic resource] https://rutube.ru/video/70f0af2f1b3efe297feb23e8f924669b/ (accessed 30.09.2024).
24. Scikit-image: Image Processing in Python — Scikit-image [Electronic resource] https://scikit-image.org/ (accessed 30.09.2024).
---
For citation: Rubinov K.A., Frolov A.B. On Combining Ray Tracing and Rasterization in Rendering Global Illumination by Using a Generative Adversarial Neural Network. Bulletin of MPEI. 2025;3:144—152. (in Russian). DOI: 10.24160/1993-6982-2025-3-144-152
---
Conflict of interest: the authors declare no conflict of interest.

