Thursday, March 31, 2022 12:51:05 PM
source
https://www.streetinsider.com/Globe+Newswire/WIMI+Holographic+Academy%3A+Computer+Image+Scene+Construction+Based+on+Virtual+Reality+Technology/19849411.html
March 30, 2022 11:42 PM EDT
WIMI Holographic Academy, working in partnership with the Holographic Science Innovation Center, has written a new technical article describing their exploration of holographic AI vision technology.
Through the study of stereo vision principles, a panorama-stitching algorithm based on vertical-edge processing is proposed. Sample matching is guided by the dominant vertical edges in order to avoid local optima, and a uniform circular-trajectory image-sequence method is proposed to make image acquisition more convenient: a camera rotates at constant speed about a fixed center in the horizontal plane, taking photographs that collect the light information of every point in that plane and yield a stereoscopic effect.
With the development of computer graphics, realistic virtual environments can be built with traditional geometric methods. Typically, 3D geometric models are created to describe features such as lighting and surface textures in a scene. Rendering the view from a given viewpoint by computing light intensities demands high computational power, and the results struggle to reproduce the complex natural textures found in photographs. More recently, image-based rendering techniques have been proposed; they offer the rendering quality of real scenes, and their cost depends only on the image resolution. Scientists from WIMI Holographic Academy, the research institute of the Nasdaq-listed enterprise WIMI Hologram (NASDAQ: WIMI), discuss the significance of image-based virtual scene construction, the geometry of stereo photography, and a method of building virtual scenes from a three-dimensional plenoptic function, with the aim of reducing the local optima that traditional methods fall into when matching corresponding points.
1. The photographic geometry principle of stereopsis
Virtual reality technology has been a hot spot in science and technology at home and abroad in recent years, drawing on computer graphics, dynamics and artificial intelligence. With the development of these related disciplines, virtual reality has advanced rapidly and is applied in aerospace, medical rehabilitation, construction and manufacturing. With the help of computers, virtual reality creates illusions that people perceive through audio-visual means. It is both immersive and interactive, presenting a real-time simulation to the user through multiple sensory channels as an advanced user interface. Humans receive roughly 80% of their information through visual perception, so building virtual scenes is an important part of virtual reality, and it has been widely used in many fields.
1.1 Imaging geometry principles
Virtual reality technology requires that users be strongly immersed in the virtual environment, and building virtual scenes is how users are given realistic visual effects. Imaging systems typically convert a 3D scene into a 2D grayscale image. Perspective projection is the most commonly used imaging model: rays from the scene pass through a single projection center, and the line through that center perpendicular to the image plane is the optical axis. Orthographic projection is a limiting case of perspective projection in which rays parallel to the optical axis project the scene onto the image plane. Obtaining the distance between each scene point and the camera is an important task of a visual system; each pixel value in a depth map represents the distance between a scene point and the camera. Passive ranging sensors do not emit energy themselves but recover depth from the light already present in the scene; depth can be estimated indirectly from the shading and motion features of grayscale images. Radar and triangulation ranging systems are common active ranging sensors. Active vision studies the combination of vision and behavior, obtaining stable perception through active control of the camera position and aperture parameters.
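The two imaging models described above can be illustrated with a minimal sketch (the function names and the single-focal-length camera at the origin are assumptions for illustration, not taken from the article):

```python
import numpy as np

def perspective_project(points_3d, f):
    """Project 3D scene points onto the image plane of a pinhole camera
    with focal length f. The camera sits at the origin with the optical
    axis along Z; every ray passes through the projection center."""
    points_3d = np.asarray(points_3d, dtype=float)
    X, Y, Z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    # Perspective division: image coordinates scale with f / depth.
    return np.stack([f * X / Z, f * Y / Z], axis=1)

def orthographic_project(points_3d):
    """Orthographic projection: rays parallel to the optical axis,
    so depth is simply discarded."""
    return np.asarray(points_3d, dtype=float)[:, :2]
```

A point at (1, 2, 4) seen through a camera with f = 2 lands at (0.5, 1.0) under perspective projection, but at (1.0, 2.0) under orthographic projection, which ignores depth.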
1.2 Principles of stereoscopic imaging
Finding conjugate pairs in an actual stereo image pair is the most difficult step in stereo vision, and many constraints have been introduced to reduce mismatches between corresponding points. Traditional feature-point search selects feature points in one image; the corresponding feature point must lie on the matching epipolar line in the other image. If the distance between the target and the camera is known to lie within a certain interval, the search range can be limited to a small segment of the epipolar line, which greatly reduces the search space and the number of mismatched points. In practice a matching point may not fall exactly on the corresponding epipolar line in the image plane. Stereo vision systems usually consist of two or more video cameras. The light intensity of corresponding points can vary considerably between views, so the images need to be normalized before matching. Each feature point of one image can correspond to only one unique point of the other image. The projections of points on an object's surface are continuous in the image, though this continuity constraint does not hold at object boundaries.
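The depth-interval constraint above can be sketched for a rectified stereo pair, where the epipolar line of a point is simply the same image row, and the known depth interval becomes a disparity interval [d_min, d_max]. The function name, window size and SSD cost are illustrative assumptions, not the article's exact algorithm:

```python
import numpy as np

def match_along_epipolar(left, right, row, col, d_min, d_max, w=3):
    """For a feature at (row, col) of the rectified left image, search
    the corresponding epipolar line (same row) of the right image, but
    only over disparities in [d_min, d_max] implied by the known depth
    interval. Returns the disparity minimizing the SSD over a
    (2w+1) x (2w+1) window."""
    patch = left[row - w:row + w + 1, col - w:col + w + 1].astype(float)
    best_d, best_ssd = d_min, np.inf
    for d in range(d_min, d_max + 1):
        c = col - d  # positive disparity shifts the match leftward
        cand = right[row - w:row + w + 1, c - w:c + w + 1].astype(float)
        ssd = np.sum((patch - cand) ** 2)
        if ssd < best_ssd:
            best_ssd, best_d = ssd, d
    return best_d
```

Restricting the loop to [d_min, d_max] instead of the full row is exactly the search-space reduction the constraint provides.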
1.3 Region correlation
Edge features usually correspond to object boundaries. The depth value along an occluding boundary can be any value between the depths of the surfaces that meet at that edge. The edges of the contour observed in the two image planes do not correspond to the same physical edge of the object, and can only be detected along the occluding contour. The fundamental problem of depth recovery is to identify feature points distributed throughout the image such that the matching points in the two images can be identified as easily as possible. Interest operators look for regions of large variation in the image; points where the interest measure attains a local maximum can be chosen. After the features in the two images have been identified, many different methods can be used for matching, but only points that satisfy the epipolar constraint can be matching points.
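A minimal sketch of such an interest operator, using local gray-level variance as the interest measure (the choice of variance, the window sizes and the threshold are assumptions for illustration, not the article's exact operator):

```python
import numpy as np

def interest_points(img, w=2, threshold=50.0):
    """Mark pixels where the gray-level variance over a (2w+1)^2 window
    is a strict local maximum above a threshold. Such high-variation
    points are the ones most easily matched between two images."""
    img = img.astype(float)
    h, ww = img.shape
    var = np.zeros_like(img)
    for r in range(w, h - w):
        for c in range(w, ww - w):
            var[r, c] = img[r - w:r + w + 1, c - w:c + w + 1].var()
    pts = []
    for r in range(w + 1, h - w - 1):
        for c in range(w + 1, ww - w - 1):
            nb = var[r - 1:r + 2, c - 1:c + 2]  # 3x3 neighborhood
            # Keep only strict local maxima above the threshold.
            if var[r, c] >= threshold and (nb == var[r, c]).sum() == 1 \
                    and var[r, c] == nb.max():
                pts.append((r, c))
    return pts
```

On a flat image with one small textured region, the detected points concentrate where the variance peaks, i.e. inside that region.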
2. Virtual scene construction methods based on the plenoptic function
IBR (image-based rendering) technology has changed the traditional understanding of computer graphics. IBR techniques for constructing virtual environments are based on plenoptic-function theory. The panorama method of constructing virtual scenes hinges on matching features between image samples. The virtual scene construction method based on a uniform circular-trajectory image sequence uses a 3D plenoptic function: it produces a horizontal stereoscopic effect, its input is easy to acquire, its rendering process is independent of internal camera parameters such as focal length, and it runs efficiently.
2.1 The plenoptic function
The plenoptic function describes all the light information visible from any point in space, and thus the environment map of a given scene. A continuous plenoptic function is reconstructed from a set of directional discrete samples, and new views are drawn by resampling that function. In principle the scene can be reconstructed by collecting the incident light at any viewpoint; an image is precisely the light passing through a specific point in space at a specific time. The full five-dimensional plenoptic function is difficult to capture, but its dimensionality can be reduced further. Under the Nyquist sampling criterion, a continuous plenoptic function can be reconstructed from spatially sampled light information, and the function is then sampled according to the viewpoint parameters to obtain the image observed from that viewpoint.
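The dimensionality reduction sketched above can be written out explicitly. The following uses the conventional plenoptic-function notation; the symbols are the standard ones, not taken verbatim from the article:

```latex
% 7D plenoptic function: radiance seen from viewpoint (V_x, V_y, V_z)
% in direction (\theta, \phi), at wavelength \lambda and time t:
P_7(V_x, V_y, V_z, \theta, \phi, \lambda, t)
% Fixing time and integrating over wavelength leaves the 5D form,
% which is what is difficult to capture directly:
P_5(V_x, V_y, V_z, \theta, \phi)
% Constraining the viewpoint to a circle in the horizontal plane
% (the uniform circular trajectory) leaves a 3D function of the
% rotation angle \alpha and the viewing direction:
P_3(\alpha, \theta, \phi)
```

Fixing the viewpoint entirely reduces the function to the 2D panorama discussed next, which is why a panorama alone cannot provide parallax.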
2.2 Virtual scene construction method of the plenoptic function
A 2D panorama has a fixed viewpoint: the plenoptic function reduced to two dimensions collects only the light arriving at one fixed point in space. In the reconstructed virtual scene the viewpoint cannot be moved, and without parallax there is no stereoscopic sense. Panoramic views can be obtained easily with a panoramic camera, or photographs taken in different directions with an ordinary camera can be stitched into a panorama; stitching the panorama is the main step in constructing such a virtual scene. A panorama reflecting a particular scene is obtained by seamlessly stitching images with overlapping boundaries collected by the camera, much as short dashes are joined into a continuous line.
Current image stitching approaches generally combine matching-based stitching with seam-finding algorithms, with matching performed by a local search. In the sampling process for a stitched panorama, the optical center is fixed and the camera is rotated in a fixed plane, taking photographs at regular angular steps, so adjacent image samples can be treated as related by a translation in the same image space. The vertical edges of the images therefore become the key matching feature. Image samples are usually rich in detail, and because of random noise two digital captures are never exactly identical, so edges in detailed texture tend to introduce errors under simple image-difference processing. Instead, smoothed image samples are sharpened along their vertical edges, and a convolutional gradient operator is applied to highlight the sharpened edge features without widening the edges and thereby introducing errors.
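The vertical-edge matching idea can be sketched as follows: extract vertical edges with a horizontal-gradient (Sobel-like) convolution, then estimate the translation between adjacent samples by minimizing the SSD between their edge maps. The function names and the specific kernel are illustrative assumptions, not the article's exact algorithm:

```python
import numpy as np

def vertical_edges(img):
    """Highlight vertical edges with a horizontal-gradient convolution.
    Since adjacent samples on the circular trajectory differ roughly by
    a horizontal translation, vertical edges are the key match feature."""
    img = img.astype(float)
    k = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    h, w = img.shape
    out = np.zeros_like(img)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r, c] = abs(np.sum(img[r - 1:r + 2, c - 1:c + 2] * k))
    return out

def stitch_offset(a, b, max_shift):
    """Estimate the horizontal translation between two adjacent image
    samples by minimizing the mean squared difference between their
    vertical-edge maps over candidate shifts 1..max_shift."""
    ea, eb = vertical_edges(a), vertical_edges(b)
    best, best_err = 1, np.inf
    for s in range(1, max_shift + 1):
        err = np.mean((ea[:, s:] - eb[:, :-s]) ** 2)
        if err < best_err:
            best_err, best = err, s
    return best
```

Matching on edge maps rather than raw pixels makes the search less sensitive to the random noise differences between captures.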
2.3 Virtual scene construction method of the uniform circular-trajectory image sequence
The 2D plenoptic function and the resulting 2D panorama provide no stereoscopic vision, so an appropriate form of the plenoptic function must be found to create a virtual scene. A person moving horizontally still perceives strong stereopsis from the parallax in that direction. The virtual scene construction method based on a uniform circular-trajectory image sequence places the camera near a rotation center O and uses the captured image sequence as input. By processing the acquired sequence, the images observed from different viewpoints within a certain range of the horizontal plane can be produced, and acquiring the image sequence is relatively simple.
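A sketch of how a novel view can be assembled from such a sequence: each output column corresponds to one ray direction, which is looked up in the captured frame whose capture angle is nearest, and within that frame in the column whose ray direction is closest. The interface, the nearest-sample lookup and all parameter names are illustrative assumptions; the article does not specify a concrete algorithm:

```python
import numpy as np

def synthesize_view(seq, view_angle, fov, out_width):
    """Novel-view synthesis sketch from a circular-trajectory sequence.
    seq[k] was captured at rotation angle 2*pi*k/n; a ray that does not
    fall on a sampling point is replaced by the adjacent sampled ray."""
    n = len(seq)
    h, w = seq[0].shape[:2]
    out = np.empty((h, out_width) + seq[0].shape[2:], dtype=seq[0].dtype)
    for i in range(out_width):
        # Absolute direction of the ray through output column i.
        ray = view_angle + fov * (i / (out_width - 1) - 0.5)
        # Nearest captured frame on the circular trajectory.
        k = int(round(ray / (2 * np.pi / n))) % n
        frame_angle = 2 * np.pi * k / n
        # Column of frame k whose ray direction is closest to `ray`.
        c = int(round((ray - frame_angle + fov / 2) / fov * (w - 1)))
        out[:, i] = seq[k][:, min(max(c, 0), w - 1)]
    return out
```

Because each output column is copied from a sampled ray, rendering cost depends only on the output resolution, not on scene complexity.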
2.4 Depth of field calculation
The virtual scene construction should allow the user to move within a certain range. Conventional depth calculation first uses two calibrated pinhole cameras to capture images of the same object from different locations; to compute the 3D position of each point in the first image, its match must be found in the second image. A great deal of work has been done on depth computation. The multi-baseline stereo algorithm suppresses the effects of noise by using multiple images and outperforms conventional correlation algorithms. Each feature point of one image can correspond to only one unique feature point of another image, but since most feature points are not very distinctive, matching ambiguities usually arise: of two candidates, one is the true correspondence and the other is false. Multi-baseline stereo is an effective way to eliminate this ambiguity of corresponding points. In practice there is no need to compute the depth of every image in the input sequence; the depth information of all scene points can be obtained by computing the depth of a single panorama.
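A minimal sketch of the multi-baseline idea: for each candidate depth z, each baseline b_i predicts a disparity d_i = f·b_i / z, and the SSDs from all baselines are summed into an SSSD; false matches that minimize one pair's SSD rarely minimize the sum. The function and parameter names are assumptions for illustration:

```python
import numpy as np

def sssd_depth(ref, others, baselines, f, row, col, w, zs):
    """Multi-baseline stereo: for a pixel in the reference image, test
    each candidate depth z in zs by summing, over all cameras, the SSD
    between the reference window and the window displaced by that
    camera's disparity d = f * b / z. Returns the depth minimizing the
    SSSD; summing over baselines resolves single-pair ambiguities."""
    patch = ref[row - w:row + w + 1, col - w:col + w + 1].astype(float)
    best_z, best = None, np.inf
    for z in zs:
        total = 0.0
        for img, b in zip(others, baselines):
            d = int(round(f * b / z))  # disparity for this baseline
            cand = img[row - w:row + w + 1,
                       col - d - w:col - d + w + 1].astype(float)
            total += np.sum((patch - cand) ** 2)
        if total < best:
            best, best_z = total, z
    return best_z
```

Note that each baseline votes in depth rather than disparity, which is what lets evidence from different baselines be combined in a single cost.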
3. Virtual scene construction experiment
Using the sharpened-image method to highlight the matching features of the samples, automatic stitching of panoramic images can be completed in most cases; it effectively prevents the subsequent search from falling into local optima, and the stitching results are essentially seamless. The minimum-similarity-distance matching is insensitive to changes in image brightness and has a certain noise resistance. The experiments were carried out in a computer simulation environment: a synthetic indoor scene containing geometric objects such as tables and cylinders placed in a room. The method of constructing a virtual scene from a uniform circular-trajectory image sequence was used to produce a stereoscopic effect and to obtain depth information of the virtual scene. Video was shot at uniform speed around a horizontal circle to obtain the desired image sequence. When the viewpoint parameters change, the view from the new viewpoint is redrawn according to the principle of the uniform circular-trajectory image sequence: for a viewpoint within the predetermined range, its correspondence to the image sequence is computed, and where a required ray does not fall on a sampling point it is replaced by adjacent sampled rays. Because the simulated environment is an indoor scene, a constant-depth approximation can be used to reduce vertical deformation: assuming the scene depth is constant, the vertical distortion decreases, while the image width shrinks by varying degrees. As the viewpoint moves, relative depths change, and the information in the vertical direction is no longer exact. The depth of scene points is not in fact constant, and the observed deformation reflects the difference between the assumed and the true depth.
The method is computationally cheap, but the new views drawn from the uniform circular-trajectory image sequence are less accurate at the edges of the field of view. To estimate depth for the image sequence constituting the virtual scene, the multi-baseline stereo algorithm was used: first, the same column of each frame was stitched into panoramic images. Each panorama is 300×240; the 320×240 input frames are cropped at column L=160 to form the stitched panorama. The window size used to compute the SSSD function was set to 9×9. Searching over the disparity variation in the stitched panorama columns, starting from the longest baseline L=260, the SSD was computed for each baseline and the corresponding depth values calculated; the goal of the search is to find the depth value that minimizes the SSSD. The search runs from disparity d=0 along the pixel direction in the panorama, in the same direction as the reference point in the reference image. Adjusting the vertical scaling works best when it is driven by the estimated depth of field. If views of the scene must be rendered in real time, image quality has to be traded for speed.
4. Conclusion
Overall, this image-based virtual scene construction technique rests on plenoptic-function theory: according to the Nyquist criterion, a continuous plenoptic function is reconstructed in space, and when the viewpoint parameters change the function is resampled according to those parameters. The article first discusses the photographic geometry of stereo vision for constructing virtual scenes, then studies the plenoptic function of panoramic images and proposes the vertical-edge-processing stitching algorithm, and finally studies the estimation of depth-of-field information.
Founded in August 2020, WIMI Holographic Academy is dedicated to holographic AI vision exploration and conducts research on basic science and innovative technologies, driven by human vision. The Holographic Science Innovation Center, in partnership with WIMI Holographic Academy, is committed to exploring the unknown technology of holographic AI vision, attracting, gathering and integrating relevant global resources and superior forces, promoting comprehensive innovation with scientific and technological innovation as the core, and carrying out basic science and innovative technology research.
Contacts
Holographic Science Innovation Center
Email: pr@holo-science.com