Re: None

Thursday, December 09, 2021 2:59:07 AM

Post# of 784
WiMi Holographic Academy:
Key applications of human-computer interaction and remote device control in virtual reality environments.


Source: https://inf.news/en/tech/6af13fd0c4d0109cf2a1475cb4928956.html

2021-12-07 15:25 HKT

With the development of virtual reality and 5G technology, new technical elements have been injected into human-computer interaction and remote control technology, and systems based on traditional technology can no longer meet people's needs. Scientists at the WiMi Holographic Academy, the research institute of the Nasdaq-listed company WiMi Hologram Cloud (US: WIMI), believe that what is needed is a system that provides realistic visualization, supports natural interaction with virtual scenes, and allows equipment to be controlled remotely in a virtual reality environment.

The following presents the WiMi Holographic Academy's perspective on the key technologies of human-computer interaction and remote equipment control in a virtual reality environment, intended as forward-looking guidance for the field.

Background introduction
With the introduction of concepts such as "intelligent manufacturing", "Industry 4.0", and the "industrial Internet", related research has become popular. In manufacturing, the digital and intelligent transformation and upgrading of factories is in full swing. Many digital factories are equipped with advanced human-computer interaction methods and functions for remote status monitoring, operation, and maintenance of equipment. Through advanced human-computer interaction and remote equipment control systems, engineers and technicians can monitor, debug, and maintain equipment from a comfortable office or even from thousands of miles away, greatly improving work efficiency.
This is especially valuable under severe working conditions such as high temperature, toxic environments, or strong ionizing radiation, where engineers can remotely operate on-site equipment through advanced human-computer interaction and remote control technology and keep staff out of harm's way. Beyond industrial applications, in surgery doctors can use these technologies to perform remote operations, which not only lets patients thousands of miles away receive high-quality medical care but also greatly reduces the physical burden on medical staff. In space exploration, with the help of human-computer interaction and remote control technology, scientists do not need to risk their lives traveling to outer space in person; they only need to operate robots remotely from a ground control center thousands of miles away.

Virtual reality technology provides a new way to view and manipulate three-dimensional data and injects new technical elements into human-computer interaction and remote equipment control systems. It allows users to enter a scene built by a computer graphics system and interact with the virtual objects in it. With promotion by HTC, Facebook, and other technology companies, virtual reality hardware and software development platforms have entered the market. Among them, the HTC VIVE head-mounted display system offers a large field of view, high immersion, a high refresh rate, and good interactivity. VR is widely seen as the next major platform after the Internet and smartphones: technology manufacturers around the world are rushing into the VR market, and VR-related research and applications are in full swing.

The continuous upgrading of Internet technology and the steady improvement of network speed and communication reliability have provided the communication conditions for remote control systems. A teleoperation system traditionally needs a dedicated communication link, but as Internet performance improves, building a teleoperation system over the Internet becomes feasible. In particular, with the full deployment of 5G, communication speeds will reach 1 Gbit/s and latency will drop below 20 milliseconds. With 5G support, people can remotely drive their cars, operate home appliances, use large-scale software deployed on remote servers, and operate large experimental equipment from a remote terminal. In industrial production, most equipment can be connected to the Internet to transmit large amounts of real-time data, so the experience of engineers remotely controlling field equipment will improve significantly.

Research Status of Virtual Reality Technology in China and Abroad
The Da Vinci surgical robot system, jointly developed by IBM, MIT, and Heartport, uses a variety of human-computer interaction and remote control technologies. The system collects the surgeon's manipulation and tactile information, processes it, and transmits it to the manipulator to perform surgical actions. The manipulator's imaging system collects and magnifies the visual signal and transmits it to a virtual reality imaging system that presents the surgical scene to the operator in three dimensions. The system combines several interaction methods, including the system interface, VR stereo imaging, human motion sensing, and tactile sensing, to transfer information between human and machine. Real-time transmission and high-speed processing minimize delays, while a precise servo feedback control system and advanced control algorithms achieve precise control of the robotic arm.

Toyota's latest T-HR3 robot system uses remote control to mirror human movements. The operating system collects the operator's motion information and transmits it to the robot over a wireless data link so that the robot performs the same actions. The robot's vision system returns the collected visual information to the operator and presents it in three dimensions through a VR imaging system.

The Global Hawk UAV system developed by Northrop Grumman transfers human-machine information through a ground control station: commands are transmitted to the UAV over a high-speed data link for execution, and data is transmitted back to the ground control station, which presents the drone's parameters to the operator on a display panel. The virtual reality system for pilot training developed by Beijing University of Aeronautics and Astronautics immerses pilots in a virtual cockpit environment. The virtual cockpit renders scenes in real time according to the pilots' operations, giving them a human-computer interaction environment similar to a real flight environment and greatly reducing training costs.

From the research above, several technologies common to human-computer interaction and remote control in virtual reality environments can be extracted: the collection of human information, the collection of machine information, the three-dimensional presentation of information, and remote control. Machine vision is an important means of collecting information; for both humans and machines, vision is a primary channel for acquiring it. The earliest machine vision research focused on pattern recognition and analysis of two-dimensional images. With the development of computers in the 1970s, a theoretical system of machine vision emerged, and Marr established a complete theory of vision. Marr's theory divides visual processing into two-dimensional data collection, extraction of key elements, and three-dimensional reconstruction: from the points, lines, curvature, and other elements of the image and the relationships between them, a series of processing steps recovers the three-dimensional information of the scene. On the basis of Marr's theory, a series of improved models featuring feature selection, information feedback, and purposeful feature recognition have been developed.
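As a rough illustration of the first two stages of Marr's pipeline (two-dimensional data collection and the extraction of key elements such as edge points and line segments), the following Python sketch uses OpenCV. The image file name and all threshold values are assumed placeholders, and the final three-dimensional reconstruction stage is omitted.

# Sketch of the first two Marr-style stages: acquire a 2D image, then extract
# low-level elements (edge points and straight line segments).
# Assumes opencv-python and numpy; "scene.png" and the thresholds are placeholders.
import cv2
import numpy as np

image = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)   # 2D data collection
if image is None:
    raise FileNotFoundError("scene.png not found")

edges = cv2.Canny(image, 100, 200)                       # key elements: edge points
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=30, maxLineGap=5)  # key elements: line segments

print(f"edge pixels: {int((edges > 0).sum())}")
print(f"line segments: {0 if lines is None else len(lines)}")
# A full Marr pipeline would continue with 3D reconstruction from these elements.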

A machine vision system consists of a lighting system, a photoelectric conversion system, an imaging system, and an image processing module. The target is converted into a digital signal through photoelectric conversion, and the system processes that signal into grayscale and color values for each pixel, from which useful knowledge is extracted. In terms of light sources, different lighting schemes are used for different feature extraction requirements. Common lighting methods include back lighting, forward lighting, structured light, and strobe lighting. In back lighting, the light passes around the illuminated object toward the camera. In forward lighting, the light source directly illuminates the object, and the camera images the reflected light. Structured light sources have specific shape characteristics, such as line-shaped or point-array patterns; these patterns deform when projected onto an object, and the system infers the object's depth from the deformation. A strobe is a discontinuous light source that emits only a brief flash while the camera is capturing. Advances in CCD (charge-coupled device) sensors and image capture cards have driven the adoption of machine vision.
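To make the structured-light idea concrete, the following minimal Python sketch models the projector and camera as a triangulation pair, so that the depth of a surface point follows from the pixel shift of the projected pattern. The focal length, baseline, and shifts below are assumed values for illustration only, not calibration data from any real system.

# Depth from structured light, modeled as projector-camera triangulation.
# All constants (focal length, baseline, observed shifts) are assumed values.
focal_px = 800.0      # camera focal length in pixels (assumed calibration)
baseline_m = 0.10     # projector-camera baseline in metres (assumed)

def depth_from_shift(shift_px: float) -> float:
    """Depth in metres from the pixel shift of a projected feature relative
    to its reference position; a larger shift means a closer surface."""
    if shift_px <= 0:
        raise ValueError("shift must be positive")
    return focal_px * baseline_m / shift_px

# Observed shifts of three points along a projected line (assumed measurements).
for shift in (20.0, 40.0, 80.0):
    print(f"shift {shift:5.1f} px -> depth {depth_from_shift(shift):.3f} m")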

As photoelectric conversion devices improve, components are becoming smaller and signal transmission stronger. In a vision system centered on a PC (personal computer), a graphics card is used to process the captured images, digitizing them into pixel positions and gray values. Extracting the features of complex signals must be done in several steps. First, the target of interest is distinguished from the background. When the difference between target and background is small and hard to distinguish, the target's features usually need to be amplified and enhanced, after which they can be separated from the background. Methods for separating the target from the background include false-target deletion, adaptive thresholding, step-by-step segmentation, multi-information fusion, and others.
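Of the separation methods listed above, the adaptive threshold method is easy to show in a few lines. The Python sketch below uses OpenCV to enhance a low-contrast image and separate a target from an uneven background; the file name, block size, and offset are assumed placeholders rather than values from the article.

# Target/background separation with the adaptive threshold method.
# Assumes opencv-python; "part.png" and the threshold parameters are placeholders.
import cv2

gray = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)
if gray is None:
    raise FileNotFoundError("part.png not found")

# Enhance weak contrast before thresholding (feature amplification/enhancement).
gray = cv2.equalizeHist(gray)

# Each pixel is compared against a local Gaussian-weighted mean (block size 11,
# offset 2), so uneven illumination does not swamp the target.
mask = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                             cv2.THRESH_BINARY, 11, 2)
cv2.imwrite("target_mask.png", mask)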

Three-dimensional motion capture is currently a hot research topic. It can measure, track, and record the motion trajectories of objects in three-dimensional space; beyond collecting human motion information, it has wide applications in many research fields. The technology can be traced back to the 1970s, when the psychologist Johansson first studied the visual perception of human motion. Since the 1980s, researchers including Calvert, Carol, Robertson, Walters, and Tardif have carried out in-depth research on 3D motion capture, steadily maturing the technology.

With deeper research into virtual reality and the maturation of related technologies, interactive control of virtual models in a virtual environment has become a reality. Virtual reality spans many technical fields and is the result of their joint development, involving model building and 3D display, human-computer interaction, wearable sensors, machine learning, and more. It can provide users with sensory experiences including vision, touch, hearing, and even smell, giving participants a strong sense of presence. A virtual reality system must collect and track the user's motion state in real time so as to render the corresponding images and present them to the user's eyes. The technology integrates the latest developments in 3D display, simulation, machine vision, machine learning, and parallel processing; it is a high-tech simulation system built on computer technology.
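One small but central piece of that loop, turning the tracked head pose into the view used to render the next frame, can be sketched in a few lines of Python with numpy. The pose values and the yaw/pitch convention below are assumptions for illustration, not taken from any particular VR runtime.

# From a tracked head pose (position + yaw/pitch) to a 4x4 view matrix.
# Pure numpy; the sample pose at the bottom is assumed data.
import numpy as np

def view_matrix(position, yaw, pitch):
    """World-to-head (view) matrix from head position in metres and
    yaw/pitch in radians (yaw about +Y, then pitch about +X)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    r_yaw = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    r_pitch = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    rotation = r_yaw @ r_pitch                 # head-to-world rotation
    view = np.eye(4)
    view[:3, :3] = rotation.T                  # invert: world-to-head
    view[:3, 3] = -rotation.T @ np.asarray(position)
    return view

# Example tracked pose: a user 1.7 m tall looking 30 degrees to the left.
print(view_matrix([0.0, 1.7, 0.0], yaw=np.radians(30), pitch=0.0))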

The concept of virtual reality was first proposed in the middle of the 20th century. NASA conducted research on key technologies such as virtual reality displays and sensors and introduced technologies such as LCD (liquid crystal display) screens, enabling the field to develop. In 1968, a head-mounted display developed at Harvard University, working with a head motion capture system, provided users with preliminary stereo vision; this was an important breakthrough, and most subsequent virtual reality systems follow this architecture. In 2015, the HTC VIVE virtual reality headset reached the market, marking the move of virtual reality research from the laboratory to the marketplace; research has since shifted from theory to specific application problems.

In summary, machine vision, motion capture, VR, and remote control technologies continue to advance. Systems such as the Da Vinci surgical robot and Toyota's T-HR3 have achieved immersive human-computer interaction and remote machine operation in virtual reality environments by integrating these technologies. Immersive human-computer interaction in virtual reality will come to replace today's mouse, keyboard, and screen. With further research and breakthroughs in key technologies, the science-fiction scenario of remotely controlled surrogate bodies depicted in the film Avatar will become reality.

A framework for human-computer interaction and remote equipment control in a virtual reality environment
To address the problems that such a system must solve, including data collection, model construction and display, human-computer interaction, and security control, a framework model for human-computer interaction and remote equipment control in a virtual reality environment is shown in Figure 1.
Logically, the framework model comprises five layers:
- hardware execution layer,
- industrial network layer,
- system layer,
- logic control layer, and
- user operation layer.

These layers interact through defined logical and business relationships, and each contains several security control mechanisms and methods, together forming a complete logical architecture for human-computer interaction and remote device control in a virtual reality environment.

(1) Hardware execution layer
The hardware execution layer is the hardware of the system; it mainly executes the production instructions issued from the UI module of the virtual reality system. In an intelligent manufacturing workshop, industrial robots, CNC machine tools, machining centers, 3D printers, intelligent AGV (automated guided vehicle) carts, and workers form intelligent human-machine clusters. The operating status of these devices is collected by data acquisition equipment and, after processing, supplied to the virtual model's loading and display module.

(2) Industrial network layer
The industrial network layer is the real-time industrial communication network that interconnects the field control units at the bottom of the virtual reality system with intelligent production equipment. It commonly relies on fieldbus and industrial Ethernet, and includes both wired and wireless transmission. Wired transmission is generally based on fieldbus or industrial Ethernet, using standards such as Profibus, Profinet, and TCP/IP to provide high-speed, high-bandwidth, high-reliability transmission channels. Wireless methods such as industrial wireless sensor networks, used for data transmission and sensor connection, require no on-site wiring and are convenient and fast, but offer lower bandwidth and transmission reliability.
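As a minimal illustration of the wired path through this layer, the Python sketch below sends one length-prefixed, JSON-encoded status frame over plain TCP/IP using only the standard library. The host, port, and status fields are assumed placeholders, and a production system would use an industrial protocol such as Profinet, Modbus/TCP, or OPC UA rather than raw JSON.

# One device-status frame over plain TCP/IP (stand-in for the wired network layer).
# Assumes a receiver is listening at the placeholder address below.
import json
import socket

STATUS = {"device_id": "robot-01", "spindle_rpm": 1200, "temperature_c": 41.5}

def send_status(host: str = "192.0.2.10", port: int = 5020) -> None:
    frame = json.dumps(STATUS).encode("utf-8")
    with socket.create_connection((host, port), timeout=2.0) as sock:
        # Length-prefixed frame so the receiver knows where the message ends.
        sock.sendall(len(frame).to_bytes(4, "big") + frame)

if __name__ == "__main__":
    send_status()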

(3) System layer
The system layer is the system software above the operating system and hardware; it is the core middle layer of the framework model. Through the system layer, safe and reliable remote monitoring and control of smart equipment can be achieved in a virtual reality environment. Its main functions include data collection, equipment control, communication management, model driving, 3D display, and system security management. In this model, typical system security management functions include digital signatures, symmetric encryption, asymmetric encryption, and related security mechanisms.
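One of those security functions, attaching an authentication tag to each control command so the receiving side can verify its integrity and origin, can be sketched with the Python standard library alone. The shared key and the command fields are assumed placeholders; a real deployment would also encrypt the payload and manage keys properly.

# Message authentication for control commands at the system layer (HMAC-SHA256).
import hashlib
import hmac
import json

SHARED_KEY = b"replace-with-a-provisioned-secret"   # assumed placeholder key

def sign_command(command: dict) -> bytes:
    payload = json.dumps(command, sort_keys=True).encode("utf-8")
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return json.dumps({"payload": command, "hmac": tag}).encode("utf-8")

def verify_command(message: bytes) -> dict:
    wrapper = json.loads(message)
    payload = json.dumps(wrapper["payload"], sort_keys=True).encode("utf-8")
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, wrapper["hmac"]):
        raise ValueError("command rejected: bad authentication tag")
    return wrapper["payload"]

signed = sign_command({"device": "cnc-03", "op": "pause"})
print(verify_command(signed))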

(4) Logic control layer
The logic control layer encodes the system's business logic. It handles operation instructions passing from the user operation layer to the system layer and processes the related data. This layer mainly covers business-process logic control, data processing, and algorithm optimization, with modules for instruction execution, data processing, and business interaction.

(5) User operation layer
The user operation layer is the interface through which users interact with the software system and issue instructions to the hardware execution layer. It consists of the various application system interfaces, multiple types of user terminal equipment (such as data gloves, cameras, and VR glasses), and multiple industrial API interfaces.
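To show how an operator instruction might travel through these five layers, the Python sketch below chains them together as simple classes, with each layer delegating to the one beneath it. All class and method names are illustrative assumptions rather than an API defined by the article.

# Schematic pass of one instruction: user operation -> logic control -> system
# -> industrial network -> hardware execution. Names are illustrative only.
class HardwareExecutionLayer:
    def execute(self, instruction: dict) -> str:
        return f"device {instruction['device']} executed {instruction['op']}"

class IndustrialNetworkLayer:
    def __init__(self, hardware: HardwareExecutionLayer):
        self.hardware = hardware
    def transmit(self, instruction: dict) -> str:
        # Stand-in for fieldbus / industrial-Ethernet transport.
        return self.hardware.execute(instruction)

class SystemLayer:
    def __init__(self, network: IndustrialNetworkLayer):
        self.network = network
    def dispatch(self, instruction: dict) -> str:
        # Security and communication management would live here.
        return self.network.transmit(dict(instruction, authorized=True))

class LogicControlLayer:
    def __init__(self, system: SystemLayer):
        self.system = system
    def handle(self, instruction: dict) -> str:
        # Business-process checks and data processing before dispatch.
        if instruction.get("op") not in {"start", "stop", "pause"}:
            raise ValueError("unsupported operation")
        return self.system.dispatch(instruction)

class UserOperationLayer:
    def __init__(self, logic: LogicControlLayer):
        self.logic = logic
    def issue(self, device: str, op: str) -> str:
        # In the real system this would come from VR glasses, data gloves, etc.
        return self.logic.handle({"device": device, "op": op})

stack = UserOperationLayer(LogicControlLayer(SystemLayer(
    IndustrialNetworkLayer(HardwareExecutionLayer()))))
print(stack.issue("robot-01", "start"))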

To realize remote control of industrial equipment in a virtual reality environment, a complete system must be established. Advances in machine vision, virtual reality, remote control, and other technologies provide good support for building such a system, but these technologies must also be coordinated, adapted, and optimized for the target system environment. The system construction steps are shown in Figure 3.

As shown in Figure 3, step 1 constructs the system's data acquisition subsystem, which feeds data to the rest of the system. The status of various field devices is obtained through data acquisition, and the virtual model is driven by these data. Visual markers are then used as a data collection method for indirectly acquiring equipment movement information.

Step 1 also covers the related technologies of visual marker design, recognition, and positioning.
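A minimal version of the recognition and positioning part of step 1 can be sketched in Python with OpenCV: find bright markers attached to the equipment and report their image coordinates. The file name, threshold, and size limits are assumed placeholders, and a production system would more likely use coded markers (for example ArUco tags) with camera calibration to recover 3D positions.

# Locate bright markers in a camera frame and report their pixel coordinates.
# Assumes opencv-python; "workcell_frame.png" and the limits are placeholders.
import cv2

frame = cv2.imread("workcell_frame.png", cv2.IMREAD_GRAYSCALE)
if frame is None:
    raise FileNotFoundError("workcell_frame.png not found")

_, mask = cv2.threshold(frame, 200, 255, cv2.THRESH_BINARY)   # keep bright blobs
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for contour in contours:
    area = cv2.contourArea(contour)
    if not 50 < area < 5000:          # reject noise and large bright regions
        continue
    m = cv2.moments(contour)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    print(f"marker at ({cx:.1f}, {cy:.1f}) px, area {area:.0f}")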

Step 2 completes the real-time construction and 3D display of the scene: the field device data obtained in step 1 is presented as a 3D visualization. This step mainly studies real-time scene construction, fast loading, dynamic movement, and virtual-real synchronization, and finally examines the 3D display scheme and the interaction scheme in the virtual reality environment.
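To illustrate the virtual-real synchronization idea, the Python sketch below drives the pose of a simple virtual two-link arm from joint angles as they might arrive from the data acquisition subsystem; a 3D display would then render the result. The link lengths and the sample angle stream are assumed values.

# Joint angles streamed from the field device drive a virtual planar 2-link arm.
import numpy as np

L1, L2 = 0.4, 0.3   # link lengths in metres (assumed)

def arm_points(theta1: float, theta2: float):
    """Base, elbow and tool positions of a planar two-link arm."""
    elbow = np.array([L1 * np.cos(theta1), L1 * np.sin(theta1)])
    tool = elbow + np.array([L2 * np.cos(theta1 + theta2),
                             L2 * np.sin(theta1 + theta2)])
    return np.zeros(2), elbow, tool

# Pretend these angle pairs (radians) arrived from the data acquisition system.
for t1, t2 in [(0.0, 0.0), (0.3, 0.5), (0.6, 1.0)]:
    base, elbow, tool = arm_points(t1, t2)
    print(f"angles ({t1:.1f}, {t2:.1f}) -> tool at ({tool[0]:.3f}, {tool[1]:.3f}) m")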

Step 3 builds a low-latency, secure remote control system and establishes the software and hardware connection between the system and the controlled devices. In the virtual reality system, the operation instructions generated by the UI are sent to the terminal devices through the remote control system and executed. This step studies communication delay, communication security, and the construction of a PLC-based terminal control system.
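One way to keep an eye on the communication delay this step worries about is to stamp each command with a sequence number and send time and measure the round trip when the terminal acknowledges it. The Python sketch below does this over UDP with the standard library; a local echo thread stands in for the PLC-side terminal, and the address, port, and command fields are assumed placeholders.

# Round-trip-time measurement for a remote control command over UDP.
import json
import socket
import threading
import time

ADDR = ("127.0.0.1", 15020)

def fake_terminal():
    """Echo one datagram back, simulating an acknowledging PLC-side terminal."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as srv:
        srv.bind(ADDR)
        data, peer = srv.recvfrom(4096)
        srv.sendto(data, peer)

threading.Thread(target=fake_terminal, daemon=True).start()
time.sleep(0.1)   # give the fake terminal time to start listening

command = {"seq": 1, "device": "agv-02", "op": "stop", "sent": time.monotonic()}
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as cli:
    cli.settimeout(1.0)
    cli.sendto(json.dumps(command).encode("utf-8"), ADDR)
    ack, _ = cli.recvfrom(4096)

rtt_ms = (time.monotonic() - json.loads(ack)["sent"]) * 1000
print(f"command seq {command['seq']} acknowledged, round trip {rtt_ms:.2f} ms")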

Step 4 integrates and tests the solutions and system modules developed in steps 1 to 3. Data cache files are used to connect the system modules. Finally, the system and its modules are tested to complete the construction of the system.
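The data-cache-file handoff mentioned here can be sketched with the Python standard library: one module writes the latest device state to a cache file and another reads it, with the write going through a temporary file so readers never see a half-written cache. The file name and fields are assumed placeholders.

# Cache-file handoff between system modules (writer and reader).
import json
import os
import tempfile

CACHE_PATH = "device_state_cache.json"

def write_cache(state: dict) -> None:
    fd, tmp_path = tempfile.mkstemp(dir=".", suffix=".tmp")
    with os.fdopen(fd, "w", encoding="utf-8") as tmp:
        json.dump(state, tmp)
    os.replace(tmp_path, CACHE_PATH)     # atomic swap on the same filesystem

def read_cache() -> dict:
    with open(CACHE_PATH, "r", encoding="utf-8") as fh:
        return json.load(fh)

write_cache({"device": "robot-01", "joints_deg": [12.0, 45.5, -30.0]})
print(read_cache())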

Research Outlook
Many technological developments have created good conditions for building a human-computer interaction and remote equipment control system in a virtual reality environment, and people hope to apply new technologies such as virtual reality and 5G to digital manufacturing. However, integrating such a complex system, which spans software, electronics, and mechanics, requires the various technologies to be optimized and improved. Research on key technologies such as human-machine data acquisition, real-time scene modeling and 3D display, and low-latency, secure remote control, together with their integration, will yield a system that further improves the interaction experience and remote control efficiency. The key technical results are broadly applicable, offer good reference value for raising the automation and intelligence level of smart factories, and address substantial market demand.

Established in August 2020, the WiMi Holographic Academy of Sciences is dedicated to holographic AI vision, exploring unknown science and technology with human vision as the driving force and carrying out basic science and innovative technology research. Its Holographic Science Innovation Center attracts, gathers, and integrates relevant global resources and strengths, promoting comprehensive innovation with scientific and technological innovation at its core. The WiMi Hologram Academy of Sciences plans to expand scientific research on the future world in the following areas:

- 1. Holographic computing science:
brain-computer holographic computing, quantum holographic computing, photoelectric holographic computing, neutrino holographic computing, biological holographic computing, magnetic levitation holographic computing

- 2. Holographic communication science:
brain-computer holographic communication, quantum holographic communication, dark matter holographic communication, vacuum holographic communication, photoelectric holographic communication, magnetic levitation holographic communication

- 3. Micro-integration science:
brain-computer micro-integration, neutrino micro-integration, biological micro-integration, optoelectronic micro-integration, quantum micro-integration, magnetic levitation micro-integration

- 4. Holographic cloud science:
brain-computer holographic cloud, quantum holographic cloud, photoelectric holographic cloud

The following are some of the scientists of the WiMi Hologram Academy of Sciences:
Zhang Ting, a postdoctoral fellow at Northwestern University with a doctorate from the University of Hong Kong and a category C overseas high-level talent under the Peacock Program, is mainly engaged in the research, development, and application of key VR/MR technologies and the optimization of complex service systems, and holds five holography-related patents. Zhang won first prize for Hubei Province in the national "Challenge Cup" entrepreneurship plan competition and first prize of Huazhong University of Science and Technology.

Yao Wei, Ph.D. in Computer Science and Technology Engineering, Hunan University. Main research direction: memristive neural networks and their dynamic behavior, with applications in image processing and secure communication. Work includes a long-term-memory memristor circuit based on VDCCTA and the neural network built from it, participation in the design of a memristor-based neural network system model, the design of memristor-based bionic neuron and synaptic-connection microelectronic circuits, and analysis of the dynamic behavior of memristor-based neural network systems.

Chen Nengjun, Ph.D. in Economics from Renmin University of China, postdoctoral fellow in Applied Economics at Shanghai Jiao Tong University, Deputy Secretary-General of the Guangdong Financial Innovation Research Association, and Director of the Guangdong International Service Trade Association. He is mainly engaged in research on cultural technology and the industrial economy and has made notable achievements in copyright-industry research in recent years. He has led provincial and ministerial research projects including "Digital Creative Industry in the 5G Era: Global Value Chain Reconstruction and China's Path", "Shenzhen Accelerating the Development of the Artificial Intelligence Industry", "Research on China's Copyright Trade Development Strategy from the Perspective of a Trading Power", and "Cultural Technology Integration Research: Based on the Dual Perspective of Copyright Transactions and Financial Support", and has published many papers in core journals such as Business Research, China Circulation Economy, and China Cultural Industry Review.

Pan Jianfei, Ph.D. from The Hong Kong Polytechnic University, is a talent in Guangdong Province's university "Thousand-Hundred-Ten Project", a Shenzhen overseas high-level talent, a Shenzhen high-level talent, and a distinguished scholar of Shenzhen University. His research fields are mainly automation combined with VR applications, advanced digital manufacturing, digital-manufacturing holographic twin factories, and robotics. He has led a number of National Natural Science Foundation projects, Guangdong Province Science and Technology Plan projects, and Guangdong Province Natural Science Foundation projects.

Du Yufan, Ph.D. in Optical Engineering from Beijing Jiaotong University, has obtained more than 20 patents related to display products and published 3 journal articles. He created the world's highest-resolution 8K x 4K VR product and proposed using light-field display technology to address the VR convergence conflict; he launched the first monocular AR glasses with a 100% localization rate, jointly proposed the concept of a non-contact interactive operating system based on future spatial information (System On Display), and carried out industrial cooperation on virtual reality digitalization within the operator system.

Wu Chaozhi, Ph.D. in Opto-Mechanical Engineering and Application, Shenzhen University. His research focuses on precision and micro electrochemical machining. He has published many journal and conference papers, obtained three related patents, and participated in the National Key Research and Development Program and key projects of the National Natural Science Foundation of China.

Ding Ru, Ph.D. in Technical Economics and Management from the Institute of Quantitative Economics, Chinese Academy of Social Sciences, works in big data and the digital economy, innovation and development research, and scientific research project management. His main research areas are science and technology services, industrial economics, and technological innovation and entrepreneurship. He has served as deputy secretary-general of the Shandong Technology Market Association and is skilled at integrating innovation resources, expanding innovative business, planning innovation-oriented industries, and participating in related innovation research and industrial resource matching for virtual reality applications.

The above is a small selection of the scientists at the WiMi Hologram Academy of Sciences. PhDs at home and abroad who are interested in joining the Academy can send their resumes to mark@wimiar.com. Any individual or organization that aspires to do holographic scientific research can also submit scientific papers to this mailbox to apply for research funding.

The WiMi Holographic Academy of Sciences aims to advance cutting-edge research in computer science, holography, quantum computing, and related fields for real industry scenarios and the future world; to establish an industry-research cooperation platform; to promote the application of major technological innovations; and to create an ecosystem in which industry and research centers are deeply integrated. The Academy adheres to the mission of "Let there be technology where there are people", focuses on holographic scientific research for the future world, and contributes to the advancement of global science and technology.

WiMi Hologram Cloud was established in 2015 and is listed on Nasdaq under the ticker WIMI.
WiMi focuses on holographic cloud services, mainly in vehicle AR holographic HUDs, 3D holographic pulse LiDAR, head-mounted light-field holographic equipment, holographic semiconductors, holographic cloud software, holographic car navigation, and metaverse holographic AR/VR equipment and cloud software. Its coverage spans holographic vehicle AR technology, 3D holographic pulse LiDAR technology, holographic vision semiconductor technology, holographic software development, holographic AR virtual advertising technology, holographic AR virtual entertainment technology, holographic AR SDK payment, interactive holographic virtual communication, metaverse holographic AR technology, and metaverse virtual cloud services, making it a comprehensive holographic cloud technical solution provider.