A Brief Introduction To Robotics And 3D Vision
The announcement of "Made in China 2025" and the "New Generation Artificial Intelligence Development Plan" marked China's formal entry into the era of intelligent manufacturing.
As an important branch of robotics and a key support for China's industrial development, industrial robots are now used ever more widely in intelligent manufacturing. Amid the opportunity presented by new infrastructure, industrial robots are gradually taking on a dual role in industrial automation: linking physical processes and realizing production. Drawing on big data and artificial intelligence, they are being deployed in industrial automation applications and will ultimately rise to the level of data collection and information systems; this is the future development direction of industrial robots.
China is the world's largest industrial robot market, but the number of industrial robots per 10,000 workers in China is still lower than in developed countries such as the United States, South Korea, and Japan, so there remains considerable room for growth. At the same time, domestic robot makers face four major opportunities:
1. Efficient government promotion is establishing connections between practitioners and enterprises, quickly forming economies of scale;
2. New application industries are generating new scenario requirements;
3. Under current economic conditions, domestic substitution for foreign brands is gradually becoming a trend;
4. China's manufacturing industry and industrial ecology are moving toward the mid-to-high end.
The latest data show that the main market growth is in Germany and Japan, each up more than 30%; this is inseparable from the highly developed sensor industries and robot core-component manufacturers in those two countries.
At present, intelligent human-robot collaborative robots have attracted widespread attention. Although the installed base is still very small, accounting for only 3.24% of installations, annual installations of collaborative robots grew 23% in 2018: roughly 11,100 units were installed in 2017, rising to nearly 14,000 of the more than 422,000 industrial robots installed in 2018. The emergence of bionic, anthropomorphic dual-arm collaborative robots can further increase robot degrees of freedom, with great development potential in both the medical and industrial fields.
Robots developed from the "remote manipulator" of the 1950s, a programmable jointed device, and have since shown a trend from automation toward intelligence. Among them, dual-arm collaborative robots have important applications in space robotics: Japan planned a flying telerobotic attendant with two coordinated arms, mounted on a self-propelled platform, to repair satellites and perform other coordinated operations.
Robots developed by the United States to explore Mars and other planets usually have their arms mounted on tracked or wheeled mobile vehicles for digging soil samples. Because command signals suffer long transmission delays, remote feedback control is impractical, so such robots are increasingly required to make anthropomorphic, autonomous decisions.
Dual-arm collaborative robots also have great application potential in minimally invasive robotic surgery. However, because current manipulators lack sufficient degrees of freedom to meet the precision requirements of robotic surgery, adoption remains limited. Increasing the degrees of freedom of the operating hand can therefore improve efficiency, increase safety, and overcome these difficulties. In the initial stage, teleoperation technology can be used to control the surgical hand: sensor gloves worn by the surgeon provide tactile and force feedback, and finger positions are measured by the sensors and transmitted to a miniature multi-fingered hand. With further development of intelligence, control, and sensing technology and their application to surgical operators, surgical workstations usable on the battlefield and in space may appear in the near future.
Three-Dimensional Vision Has Become A Key Factor For Field Applications And Future Development
Optical measurement can be divided into active and passive methods according to whether light is actively projected. Typical active methods include time-of-flight, structured light, laser triangulation, and coherent optical methods such as moiré fringe and speckle interferometry, all of which have representative commercial instruments. Three-dimensional vision sensors such as multi-line laser vision sensors, binocular structured-light sensors, lidars, and time-of-flight imagers have been entering various fields to serve as intelligent sensing devices. Passive optical measurement requires no projected light field; it measures three-dimensional scenes mainly on the principle of stereo vision, represented by methods such as SFM (structure from motion) and binocular or multi-view vision. The required equipment is relatively simple, and measurement accuracy depends on the texture of the scene itself, generally at the millimeter level or coarser, so passive methods are widely used in the consumer market.
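The stereo-vision principle mentioned above reduces, for a rectified binocular pair, to triangulation from disparity: depth Z = f·b/d, where f is the focal length in pixels, b the camera baseline, and d the disparity between matched points. A minimal sketch (the numeric values are illustrative assumptions, not figures from the text):

```python
def disparity_to_depth(focal_px, baseline_m, disparity_px):
    """Triangulate depth for a rectified stereo pair: Z = f * b / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of both cameras")
    return focal_px * baseline_m / disparity_px

# Hypothetical setup: 700 px focal length, 0.12 m baseline, 21 px disparity.
depth = disparity_to_depth(700.0, 0.12, 21.0)
print(round(depth, 3))  # 4.0 metres
```

The inverse relation between depth and disparity is also why passive stereo accuracy degrades with distance and with poorly textured scenes, where reliable disparities cannot be matched.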
Structured-light vision measurement systems applied to industrial robots mostly serve three-dimensional measurement and vision-guided grasping, for example in teleoperated surgical robots, space repair robots, intelligent welding robots, and part-sorting and assembly robots. Compared with lidar, TOF, and other 3D measurement technologies, structured-light systems offer stronger anti-interference ability and higher accuracy, and are better suited to complex environments such as industrial sites. In intelligent welding in particular, future vision systems will not only master data analysis and recognition for specific industrial scenes through deep learning, but also obtain accurate workpiece and factory-environment information through three-dimensional vision sensors, opening many possibilities for intelligent robot operation.
With the gradual adoption of robotics in China's industry, flexible robotic vision measurement and positioning technology plays an important role in the intelligent, customized development of industrial robots. Robotic flexible vision measurement offers significant advantages: high precision, high efficiency, and non-contact, non-destructive measurement. Three-dimensional vision measurement systems are already used at home and abroad in space and deep-sea robots, surgical robots, welding robots, and others for dimensional measurement of key parts, online measurement, automatic non-destructive testing, weld-seam tracking, and surface inspection and positioning of large-area parts. It is foreseeable that 3D robot vision will play an increasingly important role in high-tech fields such as product design, smart factories, autonomous driving, and robot operation in extreme environments.
Demand for dynamic measurement has gradually emerged in many industrial-robot automated-measurement and vision-guidance projects. For a robotic structured-light measurement system to complete quality control and positioning measurement of components, the corresponding visual dynamic measurement must satisfy the following requirements:
Measurement Range Dynamics
Objects to be measured span very different ranges. For large components such as aircraft skins and ship hulls, the quality-control range varies between 20 m and 250 m, combining the wide coverage of engineering surveying and mapping with precision-measurement requirements. By contrast, most robotic vision guidance takes place within the workbench working area, a range of about 5 m. The system must therefore maintain local accuracy while allowing the measurement range to vary.
Measurement Data Dynamics
Reported structured-light methods typically use a fixed binocular stereo vision structure with a line-laser sensor, and in some special application scenarios even multi-camera, multi-projector measurement networks. Such fixed structures are limited when measuring objects with occluded surfaces: only the region common to all fields of view can be measured, producing blind zones. Combining a robot with structured-light measurement effectively avoids this problem, since data acquired from different positions and angles can be registered and stitched together, filling in the blind-zone data.
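When the robot's pose at each capture is known (e.g. from forward kinematics plus hand-eye calibration, which the text assumes is available), the registration described above amounts to transforming every scan into a common base frame and concatenating them. A minimal sketch under that assumption; the poses and points below are made-up examples:

```python
import numpy as np

def register_scans(scans, poses):
    """Merge point clouds captured from different robot poses into the base frame.

    scans: list of (N_i, 3) arrays of points in the sensor frame.
    poses: list of 4x4 homogeneous transforms mapping sensor frame -> base frame.
    """
    merged = []
    for pts, T in zip(scans, poses):
        R, t = T[:3, :3], T[:3, 3]
        merged.append(pts @ R.T + t)  # rotate then translate each scan
    return np.vstack(merged)

# Two single-point "scans" of the same physical point, seen from sensor
# poses offset by 1 m along x; both land at the same base-frame coordinate.
T0 = np.eye(4)
T1 = np.eye(4)
T1[0, 3] = 1.0
cloud = register_scans([np.array([[0.5, 0.0, 0.0]]),
                        np.array([[-0.5, 0.0, 0.0]])], [T0, T1])
print(cloud)  # both rows are [0.5, 0.0, 0.0]
```

In practice the kinematic poses only provide a coarse initial alignment, and a fine registration step (e.g. iterative closest point) refines the stitching where scan overlap permits.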
Measurement Target Dynamics
Most key-part industrial scenes still use assembly lines and workshops as the main work units, so the placement and processing of target objects are dynamic. Some scenes require the robot to hold the measuring head for in-situ measurement, which can significantly improve the efficiency of measuring key dimensions on critical components.
These dynamic-measurement problems are prominent in actual work and mean that a single structured-light vision measurement system cannot meet the requirements of an adjustable measurement range, supplementary measurement data, and variable measurement targets. Combining a structured-light vision sensor with a dual-arm collaborative robot to perform dynamic measurement of the target object therefore lays a system-integration foundation for solving intelligence problems in the robotics industry. At the same time, the sensor's non-contact, rapid measurement and the image-processing analysis of the measured object can support research in online monitoring, target recognition, and robot navigation.