Visual Coherence

The core of mixed reality technology is the seamless integration of virtual content into the physical environment, widely regarded as a key technology for human-machine visual fusion. From the user's perspective, the realism of mixed reality depends mainly on how well virtual and real imagery fuse at the level of human perception.

To deliver an immersive experience, a mixed reality system must present coherent visual stimuli. Improving visual coherence and eliminating visual discontinuities in mixed reality environments is therefore a central challenge in current graphics display research.

../../../assets/images/about/P3-1.png



Lighting Visual Coherence

Traditional visual coherence research has focused mainly on geometric occlusion and physical collision, paying little attention to the lighting of virtual objects. Most mixed reality systems lack plausible shadow interaction between virtual and real objects and ignore the influence of environmental lighting on virtual content. Geometric consistency alone is not enough to produce convincing realism.

Without accurate acquisition of real-scene lighting information, virtual objects in mixed reality environments often exhibit lighting effects (diffuse reflection, specular reflection, shadows, and so on) that are inconsistent with their surroundings. This visual inconsistency significantly undermines the user's sense of immersion.

../../../assets/images/about/P3-2.gif





../../../assets/images/about/P3-3.png

Mixed Display Image Rendering

To maintain lighting visual coherence, mixed reality rendering must support both the rendering of virtual light onto real objects (re-lighting) and the recognition of physical light sources (light estimation), so that physical light can in turn be rendered onto virtual objects. As shown in the figure on the left, rendering the graphics in separate layers and compositing them for display on a VST/OST head-mounted display is currently a feasible implementation scheme. Given the limited computing power of wearable devices, this approach requires distributed computing and lightweight models. On this basis, the author proposes a real-time rendering and model-training optimization method based on prior knowledge.
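The layered scheme above can be sketched in a few lines. This is a minimal illustration, not the actual system: it assumes a crude light estimator (mean luminance of the camera frame; real pipelines use HDR probes or learned estimators) and simple alpha compositing of a separately rendered virtual layer over the video-see-through camera frame.

```python
import numpy as np

def estimate_ambient_light(camera_frame: np.ndarray) -> float:
    """Crude light estimation: mean luminance of the camera frame in [0, 1].

    Placeholder for a real estimator; uses Rec. 709 luma weights.
    """
    mean_rgb = camera_frame[..., :3].mean(axis=(0, 1))
    return float(np.dot(mean_rgb, [0.2126, 0.7152, 0.0722]))

def composite_vst(real_frame: np.ndarray,
                  virtual_rgba: np.ndarray,
                  ambient: float) -> np.ndarray:
    """Blend a pre-rendered virtual RGBA layer over the camera frame (VST path).

    The virtual layer may be rendered off-device; its brightness is scaled by
    the estimated ambient light before alpha compositing, so virtual objects
    darken or brighten with the physical scene.
    """
    alpha = virtual_rgba[..., 3:4]
    virtual_rgb = np.clip(virtual_rgba[..., :3] * ambient, 0.0, 1.0)
    return virtual_rgb * alpha + real_frame * (1.0 - alpha)
```

For an OST display the real frame would not be composited digitally; only the scaled virtual layer would be sent to the additive display.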






Perceptual Prior Research

Numerous studies have shown that accumulated visual experience significantly shapes actual visual perception. This prior knowledge forms stable memory images in the brain, which in turn bias the final percept (the mental image). This experience-driven visual deviation (optical illusion) offers a new way to optimize real-time rendering and machine learning: from the standpoint of human perception, computational load can be reduced substantially while the perceived visual quality is preserved.
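One way such perceptual knowledge can cut computation is to skip re-rendering when a change falls below the just-noticeable difference. The sketch below gates updates with Weber's law (a change is invisible when ΔL/L is below some fraction k); the threshold value here is purely illustrative, not a measured constant from this work.

```python
def needs_update(prev_luminance: float,
                 new_luminance: float,
                 weber_fraction: float = 0.08) -> bool:
    """Return True only when the luminance change should be perceptible.

    Weber's law predicts a change is invisible when |dL| / L < k, so frames
    whose lighting changed less than that can reuse the previous render.
    The default fraction is an illustrative placeholder.
    """
    if prev_luminance <= 0.0:
        return new_luminance > 0.0
    return abs(new_luminance - prev_luminance) / prev_luminance >= weber_fraction
```

A renderer could call this per light source or per region, falling back to cached shading whenever it returns False.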

../../../assets/images/about/P3-6.png


../../../assets/images/about/P3-4.png

Prior Knowledge in Brightness and Color Perception

../../../assets/images/about/P3-5.png

Prior Knowledge in Lighting Direction Perception






Real-Time Rendering Optimization

On the rendering side, by analyzing the visual characteristics of real light sources in depth, we proposed a post-processing-based method for rendering self-luminous objects. The method avoids expensive indirect-illumination calculations while achieving more realistic visual effects. In parallel, we built a controllable lighting environment on top of an intelligent lighting system (Yeelight) and, through systematic parameter-matching tests, established an accurate mapping between physical light parameters and rendering parameters, providing reliable technical support for lighting consistency in mixed reality environments.

../../../assets/images/about/P3-7.png


../../../assets/images/about/P3-8.gif

Dark Room and Controllable Light Source