VEViD: A physics-based vision improvement algorithm

The physical interpretation of the VEViD algorithm shows its effect in the spatial domain (top row) and in the spectral domain (bottom row). In the spatial domain, the real part of the image is almost unchanged while an imaginary part is generated after diffraction. This observation supports the mathematical approximation in the latter part of the paper. Credit: eLight (2022). DOI: 10.1186/s43593-022-00034-y

In a new paper published in eLight, a team of scientists led by Professor Bahram Jalali and graduate student Callen MacPhee of UCLA has developed a new algorithm for performing computational imaging tasks. The research paper “VEViD: Vision Enhancement via Virtual diffraction and coherent Detection” uses a physics-based algorithm to correct poor lighting and low contrast in images captured in low-light conditions.

In such conditions, digital images often suffer from undesirable visual qualities such as low contrast, feature loss, and poor signal-to-noise ratio. Low-light image enhancement aims to improve these qualities for two purposes: increasing the visual quality for human perception and increasing the accuracy of computer vision algorithms. For the former, real-time processing is a boon for comfortable viewing; for the latter, it is a requirement in emerging applications such as autonomous vehicles and security, where image processing must be completed with low latency.

The paper shows that physical diffraction and coherent detection can be used as a toolbox for digital image and video transformation. This approach leads to a new and surprisingly powerful algorithm for low-light and color enhancement.

Unlike traditional algorithms, which are mostly hand-crafted empirical rules, the VEViD algorithm emulates physical processes. In contrast to approaches based on deep learning, this technique is unique in having its roots in deterministic physics. The algorithm is interpretable and does not require labeled data for training. The authors note that although the mapping to physical processes is not exact, it may in the future be possible to build a physical device that implements the algorithm in the analog domain.
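The paper's exact formulation is not reproduced in this article. As a rough, illustrative NumPy sketch of the diffraction-then-coherent-detection idea described above, the snippet below applies a spectral phase to an image in the Fourier domain (the "virtual diffraction") and then reads out the phase of the resulting complex field (the "coherent detection"). The Gaussian shape of the phase kernel and the `phase_strength` and `gain` parameters are assumptions chosen for illustration, not values from the paper.

```python
import numpy as np

def vevid_like_enhance(img, phase_strength=0.3, gain=1.4):
    """Illustrative low-light enhancement inspired by VEViD's
    diffraction-then-coherent-detection idea (not the paper's exact math).

    img: 2D float array in [0, 1], e.g. the brightness channel of an image.
    """
    rows, cols = img.shape
    ky = np.fft.fftfreq(rows)[:, None]
    kx = np.fft.fftfreq(cols)[None, :]
    # "Virtual diffraction": a low-pass spectral phase kernel applied in the
    # Fourier domain (Gaussian shape is an assumption for this sketch).
    spectral_phase = phase_strength * np.exp(-(kx**2 + ky**2) / 0.01)
    field = np.fft.ifft2(np.fft.fft2(img) * np.exp(-1j * spectral_phase))
    # "Coherent detection": the phase of the complex field. The imaginary
    # part generated by diffraction carries the enhancement; arctan
    # compresses dynamic range, lifting dark regions.
    out = np.arctan2(gain * np.imag(field), img + np.real(field))
    # Normalize back to [0, 1] for display.
    out -= out.min()
    if out.max() > 0:
        out /= out.max()
    return out
```

This mirrors the figure's observation: after diffraction the real part of the field stays close to the input, while a nonzero imaginary part appears, and the phase readout turns that imaginary part into a brightness boost.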

The paper demonstrates the strong performance of VEViD in many imaging applications such as security cameras, night driving, and space exploration. The ability of VEViD to enhance color has also been demonstrated.

The algorithm’s exceptional computational speed is demonstrated by processing 4K video at more than 200 frames per second. Comparison with leading deep learning algorithms shows comparable or better image quality but one to two orders of magnitude faster processing.

Deep neural networks have proven to be powerful tools for object detection and tracking, and are key to many emerging technologies built on autonomous machines. The authors demonstrate the usefulness of VEViD as a preprocessing tool that increases the accuracy of object detection by a popular convolutional neural network (YOLO).

Preprocessing images with VEViD allows neural networks trained on daylight images to recognize objects in nighttime environments without retraining, making these networks more robust while saving large amounts of time and energy.
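As a sketch of where such a preprocessing stage sits in a detection pipeline, the snippet below uses a simple arctan tone lift as a stand-in for VEViD. The function name, the `strength` parameter, and the tone curve are all illustrative assumptions, not the paper's method or any library's API; the point is only that the enhanced frame is handed unchanged to a daylight-trained detector.

```python
import numpy as np

def preprocess_low_light(frame, strength=4.0):
    """Stand-in for a VEViD-style enhancement stage (illustrative only).

    frame: uint8 image array. Returns a uint8 array of the same shape in
    which dark pixels are lifted, ready to feed an unmodified detector.
    """
    v = frame.astype(np.float64) / 255.0
    # arctan compresses dynamic range: dark values are boosted the most,
    # bright values are left nearly unchanged.
    lifted = np.arctan(strength * v) / np.arctan(strength)
    return np.clip(lifted * 255.0, 0, 255).astype(np.uint8)

# A nighttime frame would then go straight to the detector, e.g.:
#   detections = detector(preprocess_low_light(night_frame))
# with no retraining of the daylight-trained weights.
```

Because the detector itself is untouched, any off-the-shelf model can sit behind such a stage; only the input distribution is shifted back toward the well-lit images the model was trained on.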

More information:
Bahram Jalali et al., VEViD: Vision Enhancement via Virtual diffraction and coherent Detection, eLight (2022). DOI: 10.1186/s43593-022-00034-y

Provided by the Chinese Academy of Sciences

Citation: VEViD: A physics-based vision improvement algorithm (2022, November 8) retrieved 10 November 2022 from https://techxplore.com/news/2022-11-vevid-vision-algorithm-based-physics.html

This document is subject to copyright. Notwithstanding any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for informational purposes only.
