CORE TECHNOLOGY
The eye is thought to have first appeared during the Cambrian period, triggering an explosion of biological species. After more than 500 million years of evolution, it has become the most important sensory organ for most organisms and is often called the most precise and intelligent creation of nature.
The eye can be regarded as the part of the brain that extends outside the body; it could even be said that the visual system involves almost the entire brain. The performance of the eyes is therefore closely tied to an organism's level of intelligence. The combination of binocular vision and the brain allows humans to observe the world clearly and accurately, from the macroscopic to the microscopic, along with its subtle changes, securing their dominant position in nature.
Compared with monocular vision, binocular vision retains all the capabilities of a single camera while extending perception from two dimensions to three. It supports a wider range of visual algorithms and, through accurate environmental perception, helps a system make intelligent decisions.
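As a concrete illustration of how a calibrated binocular pair turns two 2D images into 3D depth, the sketch below applies the standard triangulation relation Z = f·B/d to a rectified stereo pair. The image paths, focal length, baseline, and the classical OpenCV block matcher are illustrative assumptions, not a description of the team's actual pipeline.

```python
import numpy as np
import cv2  # OpenCV; StereoBM is one classical block-matching approach


def depth_from_disparity(disparity, focal_px, baseline_m):
    """Convert a disparity map (pixels) to metric depth via Z = f * B / d."""
    depth = np.full(disparity.shape, np.inf, dtype=np.float32)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth


# Rectified grayscale stereo pair (file names are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Classical block matching; numDisparities must be a multiple of 16.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
# StereoBM returns fixed-point disparities with 4 fractional bits.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Assumed calibration: 700 px focal length, 12 cm baseline.
depth_m = depth_from_disparity(disparity, focal_px=700.0, baseline_m=0.12)
```

Dense depth maps of this kind are the raw material for the 3D reconstruction, navigation, and environmental-understanding functions described below.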
The team has deep technical expertise in deep-learning-based visual AI information processing, especially binocular stereo vision. It has delivered strong results in 3D reconstruction, positioning and navigation, environmental understanding, and object tracking and detection, with applications in intelligent security, industrial inspection, virtual reality, and other fields.
Starting from research on humanoid biomimetic binocular vision systems, the team has years of R&D experience in multi-sensor fusion information processing. It holds multiple independently developed innovations in multi-camera fusion, multi-IMU (inertial) fusion, and multimodal fusion of visual, auditory, inertial, radar, and GPS data. Building on this technology, the team studies audio-visual fusion information processing and autonomous positioning and navigation (SLAM), which are widely used in the modules and systems it develops.
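To make the idea of loosely coupled multi-sensor fusion concrete, the sketch below combines a high-rate inertial prediction with a lower-rate position fix (as a visual or GPS module might provide) using a textbook one-dimensional Kalman filter. The class name, noise parameters, and sensor rates are assumptions for illustration only and do not represent the team's multimodal fusion or SLAM implementations.

```python
import numpy as np


class SimpleInertialVisionFuser:
    """Minimal 1D Kalman filter: IMU acceleration drives the prediction step,
    a position fix (e.g., from vision or GPS) corrects it."""

    def __init__(self, accel_var=0.5, meas_var=0.05):
        self.x = np.zeros(2)        # state: [position, velocity]
        self.P = np.eye(2)          # state covariance
        self.accel_var = accel_var  # process noise from IMU acceleration
        self.meas_var = meas_var    # noise of the position measurement

    def predict(self, accel, dt):
        F = np.array([[1.0, dt], [0.0, 1.0]])
        B = np.array([0.5 * dt**2, dt])
        self.x = F @ self.x + B * accel
        Q = np.outer(B, B) * self.accel_var
        self.P = F @ self.P @ F.T + Q

    def update(self, pos_meas):
        H = np.array([[1.0, 0.0]])
        S = H @ self.P @ H.T + self.meas_var   # innovation covariance
        K = (self.P @ H.T) / S                 # Kalman gain, shape (2, 1)
        innovation = pos_meas - (H @ self.x)[0]
        self.x = self.x + K[:, 0] * innovation
        self.P = (np.eye(2) - K @ H) @ self.P


# Example: 200 Hz inertial prediction corrected by a 20 Hz position fix.
fuser = SimpleInertialVisionFuser()
for step in range(200):
    fuser.predict(accel=0.1, dt=0.005)
    if step % 10 == 0:
        t = step * 0.005
        fuser.update(pos_meas=0.5 * 0.1 * t**2)  # simulated position fix
print(fuser.x)  # fused position and velocity estimate
```

The same predict/correct structure generalizes to the multi-camera, multi-IMU, and radar/GPS configurations mentioned above, where each modality contributes measurements at its own rate.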