There are not many books about ROS, and there are even fewer learning communities in China. This open class will show you how to design a mobile robot using ROS.
Guest speaker Li Jinbang: founder and CEO of EAI Technology, with a master's degree from Beijing Institute of Technology. He has many years of experience developing low-level Linux technology in the technology departments of NetEase, Snowball and Tencent. In 2015 he co-founded EAI Technology, where he is responsible for the SLAM algorithm and the related positioning and navigation software products. EAI Technology focuses on robot mobility, providing consumer-grade high-performance lidar, SLAM algorithms and robot mobile platforms.
Three parts of a mobile robot
So-called intelligent movement means that the robot can plan its route on its own, avoid obstacles as the surrounding environment changes, and reach its target.
Robots imitate human behavior. Think about which organs have to cooperate when a person walks: the eyes observe the surroundings, the brain analyzes how to reach the target, and the legs do the walking, over and over until the target address is reached. For a robot to move intelligently, it likewise needs its eyes, brain and legs to cooperate closely.
Legs
"Leg" is the foundation of robot movement. The "legs" of robots are not limited to human or animal legs, but also include parts that can make robots move, such as wheels and tracks, which are collectively called "legs".
The advantage of humanoid leg is that it can not only move in complex road conditions (such as climbing stairs), but also imitate human movements (such as dancing) more vividly. Disadvantages are: complex structure and control unit, high cost and slow action.
Therefore, most mobile robots are wheeled robots, which have the advantages of simple wheel design, low cost and fast moving speed. There are also many kinds of wheeled vehicles: two-wheeled balance vehicles, three-wheeled, four-wheeled and multi-wheeled vehicles and so on. At present, the most economical and practical ones are two driving wheels and one universal wheel.
Eyes
A robot's "eyes" are actually sensors. Their job is to observe the surrounding environment. Sensors suitable as robot eyes include lidar, vision sensors (depth cameras, monocular and binocular cameras) and auxiliary sensors (ultrasonic and infrared rangefinders).
"brain"
The robot's "brain" receives the data sent by the "eyes", computes the route in real time, and directs the movement of the "legs".

In essence it converts what is seen into a data language, and deals with a series of questions such as how to describe the data and how to implement the processing logic. The ROS system provides us with a good development framework for this.
Introduction to ROS
ROS is a Linux-based robot operating system. Its predecessor was a project set up by the Stanford Artificial Intelligence Laboratory to support the Stanford intelligent robot, and it mainly provides standard operating-system services such as hardware abstraction, low-level device control, implementations of common functionality, message passing between processes, and package management.

ROS is based on a graph architecture, so processes at different nodes can receive, publish and aggregate all kinds of information (sensing, control, state, planning and so on). At present ROS mainly supports the Ubuntu operating system.

Some people ask whether ROS can be installed in a virtual machine. Generally it can, but we recommend installing a dual-boot system and dedicating the Ubuntu side to running ROS.

In fact, ROS can be divided into two layers. The lower layer is the operating-system layer described above; the upper layer consists of the many software packages that implement different functions, such as localization and mapping, motion planning, perception and simulation. The low-level part of ROS uses the BSD license, so it is entirely open-source software, free for both research and commercial use, while the higher-level user-contributed packages come under many different licenses.
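To make the node/topic model concrete, here is a minimal sketch of a ROS node written with rospy; the node name demo_talker and the topic name chatter are made up for this example.

```python
#!/usr/bin/env python
# Minimal ROS node: publishes a string on a topic once per second.
# Any other node that subscribes to "chatter" will receive the messages.
import rospy
from std_msgs.msg import String

def main():
    rospy.init_node('demo_talker')  # register this process with the ROS graph
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rate = rospy.Rate(1)            # 1 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data='hello from ROS'))
        rate.sleep()

if __name__ == '__main__':
    main()
```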
Using ROS to realize the movement of a robot
In two-dimensional space, the arbitrary motion of a wheeled robot can be realized with a linear velocity plus an angular velocity.

Linear velocity: describes how fast the robot moves forward or backward.

Angular velocity: describes how fast the robot rotates.

Controlling the robot's motion therefore mainly means converting the commanded linear and angular velocity into speeds for the left and right wheels, via the wheel diameter and the spacing between the wheels.
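As an illustration, here is a minimal sketch of that conversion for a differential-drive robot; the wheel radius and wheel separation below are placeholder values, not figures from the talk.

```python
# Differential-drive kinematics: convert a body command (v, w) into
# left/right wheel speeds.
WHEEL_RADIUS = 0.05      # wheel radius in meters (example value)
WHEEL_SEPARATION = 0.30  # distance between the drive wheels in meters (example value)

def body_to_wheel_speeds(v, w):
    """v: linear velocity (m/s), w: angular velocity (rad/s).
    Returns (left, right) wheel angular speeds in rad/s."""
    v_left = v - w * WHEEL_SEPARATION / 2.0   # linear speed of the left wheel
    v_right = v + w * WHEEL_SEPARATION / 2.0  # linear speed of the right wheel
    return v_left / WHEEL_RADIUS, v_right / WHEEL_RADIUS
```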
There are two key problems here: the choice of encoder and PID speed control.

Encoder selection: in general the encoder sits on the same shaft as the wheel. For speeds below 0.7 m/s, an encoder of 600 to 1200 lines is sufficient. Note, however, that it is best to use a two-phase encoder, whose A and B outputs are 90 degrees apart, so that the direction of rotation can be determined; this makes the later odometry calculation more accurate.

The speed of the left and right wheels is controlled by taking feedback from the wheel encoders and letting a PID loop adjust the motor PWM in real time. The encoder counts are also used to compute the robot's odometry in real time, i.e. the change in its position as it moves.
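The talk does not show the controller itself, so the sketch below is a generic PID loop of the kind commonly used here; the gains and the PWM output range are assumptions to be tuned on real hardware.

```python
# Minimal PID speed loop: the encoder supplies the measured wheel speed,
# and the controller output adjusts the motor PWM duty cycle.
class PID:
    def __init__(self, kp, ki, kd, out_min=-255, out_max=255):
        self.kp, self.ki, self.kd = kp, ki, kd          # gains (to be tuned)
        self.out_min, self.out_max = out_min, out_max   # PWM limits (assumed 8-bit)
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, measured, dt):
        """target/measured: wheel speed; dt: loop period in seconds."""
        error = target - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(self.out_min, min(self.out_max, out))  # clamp to the PWM range
```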
The position change is computed from the encoders, so if a wheel slips, the computed change can differ from the actual one. To address this, first consider which error is more serious: being told to drive 5 meters but covering only 4.9 meters, or being told to turn 180 degrees but turning only 179 degrees.
In practice, an inaccurate angle has the larger impact on the robot. In general the linear accuracy of the robot can be kept within centimeters, and the angular accuracy within 1%~2%. Because the angle is such an important parameter, many people use a gyroscope to correct it.
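The talk only says a gyroscope is used to correct the angle; one common way to combine the gyro with the encoder-derived heading is a complementary filter, sketched below with an assumed weighting constant.

```python
# Complementary filter: trust the gyro for short-term changes and the
# wheel-encoder heading for the long-term average. ALPHA is a tuning
# constant (example value), and angle wrapping is ignored for brevity.
ALPHA = 0.98

def fuse_yaw(prev_yaw, gyro_rate, encoder_yaw, dt):
    """prev_yaw: last fused heading (rad); gyro_rate: gyro z rate (rad/s);
    encoder_yaw: heading from wheel odometry (rad); dt: time step (s)."""
    return ALPHA * (prev_yaw + gyro_rate * dt) + (1.0 - ALPHA) * encoder_yaw
```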
So when people ask how accurate the robot is: the accuracy is fairly high nowadays, but problems such as wheel slip are unavoidable, and 100% accuracy cannot be achieved.

Navigating by a self-built map to a given distance and angle is acceptable today. Improving the accuracy further may need the assistance of other equipment, such as a lidar, which can be used for a second round of detection and correction.
The storage format of lidar data first defines a valid range: readings outside that range are invalid. It also records the sampling points, so the lidar can tell you the angle at which each sample was taken.

Finally there is an intensity value, which tells you the reliability of each reading, since the lidar takes the peak of the return signal and has a certain accuracy. The slide above actually shows the shape of a wall scanned with a lidar.

Scanning a static shape with the lidar is not meaningful by itself; the real significance of radar mapping is to build a map of the room.
How to draw a map?
The first step is to collect eye data:
For lidar, ROS defines a dedicated data structure in the sensor_msgs package to store the laser message, called LaserScan.

It specifies the valid range of the laser, the sampling angles of the scan points, and the measured value at each angle. By scanning 360 degrees in real time, the lidar can measure the distance and shape of obstacles and how they change over time.
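For example, a minimal rospy subscriber can read LaserScan messages and pick out the nearest valid obstacle. /scan is the conventional topic name; your lidar driver may use a different one.

```python
#!/usr/bin/env python
# Subscribe to the lidar topic and report the nearest valid obstacle.
import rospy
from sensor_msgs.msg import LaserScan

def on_scan(msg):
    # Keep only readings inside the sensor's valid measuring range.
    valid = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
    if valid:
        rospy.loginfo('nearest obstacle: %.2f m (%d of %d samples valid)',
                      min(valid), len(valid), len(msg.ranges))

if __name__ == '__main__':
    rospy.init_node('scan_listener')
    rospy.Subscriber('/scan', LaserScan, on_scan)
    rospy.spin()
```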
The second step is to turn the data seen by the eyes into a map:
The gmapping package of ROS converts the lidar /scan data into grid-map data, where black represents obstacles, white represents free areas that can be traversed, and gray represents unknown areas. As the robot moves, the lidar observes the same position from many different directions; if the obstacle count for a cell exceeds the set threshold, the cell is marked as an obstacle, otherwise it is marked as clear. The result is a raster map that shows obstacles, free areas and unknown areas in different gray levels, ready for the subsequent localization and navigation.
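As a small sketch of reading such a map back in code: gmapping publishes it as a nav_msgs/OccupancyGrid on /map, where -1 marks unknown cells and 0..100 is the occupancy probability; the cutoff of 50 below is an assumed threshold.

```python
#!/usr/bin/env python
# Inspect the raster (occupancy grid) map published on /map.
import rospy
from nav_msgs.msg import OccupancyGrid

def on_map(msg):
    cells = msg.data  # row-major int8 values: -1 unknown, 0..100 occupancy
    occupied = sum(1 for c in cells if c >= 50)   # assumed obstacle threshold
    free = sum(1 for c in cells if 0 <= c < 50)
    unknown = sum(1 for c in cells if c < 0)
    rospy.loginfo('%dx%d map at %.2f m/cell: %d occupied, %d free, %d unknown',
                  msg.info.width, msg.info.height, msg.info.resolution,
                  occupied, free, unknown)

if __name__ == '__main__':
    rospy.init_node('map_inspector')
    rospy.Subscriber('/map', OccupancyGrid, on_map)
    rospy.spin()
```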
Sometimes the wall is straight but the robot cannot drive straight, for example because a wheel slips and the robot veers; the map drawn then comes out skewed as well. Adding a gyroscope can avoid this. Also, because of the physics of lidar, black or mirrored surfaces can make the ranging inaccurate.

The current solutions are either not to rely on the lidar in such places, or to use ultrasonic sensors alongside the lidar as an auxiliary.

The map in ROS is multi-layered, so several lidars at different heights can be stacked to draw a map together. Once the map is drawn, you can do localization and navigation.
How to locate and navigate?
Localization: it is actually probabilistic, not 100% exact. The shapes of the surrounding obstacles scanned by the lidar are matched against the shapes in the map to judge the probability of each candidate robot position.

The success of robot localization has a lot to do with the features of the map. If an area has distinctive features, the robot can easily judge its position. If localization is difficult, someone may need to specify the initial position, or LEDs may be added as position markers, or other positioning equipment can be brought in to help.
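For the case where someone has to specify the initial position, a localization node such as amcl accepts a pose estimate on the /initialpose topic; the coordinates below are made-up examples.

```python
#!/usr/bin/env python
# Give the localization node an initial pose estimate via /initialpose.
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

if __name__ == '__main__':
    rospy.init_node('set_initial_pose')
    pub = rospy.Publisher('/initialpose', PoseWithCovarianceStamped,
                          queue_size=1, latch=True)
    msg = PoseWithCovarianceStamped()
    msg.header.frame_id = 'map'
    msg.header.stamp = rospy.Time.now()
    msg.pose.pose.position.x = 1.0     # example x in meters
    msg.pose.pose.position.y = 2.0     # example y in meters
    msg.pose.pose.orientation.w = 1.0  # facing along the map's +x axis
    rospy.sleep(1.0)  # give subscribers time to connect before publishing
    pub.publish(msg)
```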
At present, visual positioning techniques that use color or light are becoming more and more common.
Navigation: global path planning+local adjustment (dynamic obstacle avoidance)
Navigation is essentially planning on a global scale: a route is planned on the existing map, and local route planning is carried out while the robot runs, but overall the global path dominates.

There is still a lot of work in navigation. For example, the path planning of a sweeping robot differs from that of a service robot: a sweeper may need full-coverage planning that reaches into corners, while a service robot mainly plans along a specified path or the shortest path. This is the part of ROS with the largest workload.

Path planning varies greatly between application scenarios, but ROS provides a basic path-planning development package, and on that basis we do our own path planning.
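As a sketch of building on that basic package, the standard move_base node exposes navigation as an action server; the goal coordinates here are invented for the example.

```python
#!/usr/bin/env python
# Send one navigation goal to move_base, which runs the global planner
# plus the local adjustment (dynamic obstacle avoidance) described above.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

if __name__ == '__main__':
    rospy.init_node('send_goal')
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 3.0   # example target in meters
    goal.target_pose.pose.orientation.w = 1.0
    client.send_goal(goal)
    client.wait_for_result()
    rospy.loginfo('finished with state %d', client.get_state())
```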
Robot description and coordinate system transformation
When navigating, which areas are passable depends on the robot's shape and related information. ROS uses URDF (Unified Robot Description Format) to describe the size and layout of the robot hardware, such as the positions of the wheels, the size of the chassis and the mounting position of the lidar, all of which affect the coordinate-system transforms.

The coordinate systems follow one premise: each frame can have only one parent frame; the frames are then linked or associated with one another from there.

The mounting position of the lidar directly affects the data output on /scan. The relative pose of the lidar and the robot therefore needs a coordinate transform, so that the lidar data can be converted into data from the robot's point of view.
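A minimal sketch of publishing that lidar-to-robot transform with tf2 follows; the mounting offsets are assumed values, and the frame name laser is a common convention rather than anything fixed by ROS.

```python
#!/usr/bin/env python
# Publish the fixed mounting offset of the lidar relative to the robot
# body as a static transform.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

if __name__ == '__main__':
    rospy.init_node('laser_tf_publisher')
    broadcaster = tf2_ros.StaticTransformBroadcaster()
    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = 'base_link'    # parent: the robot body frame
    t.child_frame_id = 'laser'         # child: the lidar frame
    t.transform.translation.x = 0.10   # lidar 10 cm ahead of center (assumed)
    t.transform.translation.z = 0.20   # and 20 cm above the chassis (assumed)
    t.transform.rotation.w = 1.0       # mounted without any rotation
    broadcaster.sendTransform(t)
    rospy.spin()
```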
The coordinate systems of ROS ultimately boil down to three standard frames, which simplify many common robot problems:

1) a frame that is globally accurate but locally discontinuous ("map")

2) a frame that is globally inaccurate but locally smooth ("odom")

3) the robot's own frame ("base_link")
Many sensors (such as lidar, depth cameras and gyroscope/accelerometer units) can estimate the coordinate relationship between base_link and odom, but because "each frame can have only one parent frame", only one node (for example robot_pose_ekf, which fuses multiple sensors) may publish the coordinate relationship between base_link and odom.

base_link is the robot's own coordinate frame. Because different components are mounted at different positions on the robot, each of them must be related to the base_link frame, since all sensors have to "look" through the robot's perspective.

A friend once asked me why the map became chaotic as soon as the car moved while the lidar was building it: it was because the coordinate frame of the car chassis and the coordinate frame of the lidar had not been calibrated accurately.
The relationship between map and odom
The car's motion needs local continuity, for example the forward motion accumulating step by step, and that is the role of the odom frame; the map frame plays the global but discontinuous role, and the robot is tied to the map through the lidar.

To learn ROS, coordinate transforms are very important. Another point about them: each frame has only one parent frame, so if two frames both need to relate to a third, they are chained, with A related to B and B related to C, rather than B and C both hanging directly off A.
The parent-child relationship of the three coordinate frames is as follows:
map -> odom -> base_link
In fact, both map and odom want to be associated with base_link, but to obey the rule that each frame can have only one parent, the map->base_link and odom->base_link relations are used to compute the map->odom transform, which is then published.

odom->base_link: the odometry node computes and publishes this transform.

map->base_link: the localization node computes this transform, but does not publish it. Instead it takes the odom->base_link transform it receives, computes map->odom from it, and publishes that.
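To make that arithmetic explicit, here is a plain 2D sketch (not ROS API code) of how map->odom falls out of the other two transforms; poses are (x, y, theta) tuples and the numbers are made up.

```python
# map->odom = map->base_link composed with the inverse of odom->base_link.
import math

def compose(a, b):
    """Chain two 2D rigid transforms given as (x, y, theta)."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

def invert(t):
    """Inverse of a 2D rigid transform."""
    x, y, th = t
    c, s = math.cos(th), math.sin(th)
    return (-(x * c + y * s), -(-x * s + y * c), -th)

# Localization thinks the robot is at (5, 3, 90deg) in the map, while
# odometry has drifted to (4.8, 3.1, 88deg) in its own frame.
map_base = (5.0, 3.0, math.radians(90))
odom_base = (4.8, 3.1, math.radians(88))
map_odom = compose(map_base, invert(odom_base))
print(map_odom)  # the correction published as the map->odom transform
```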
With only an odometer and no lidar, the robot can still run, but it can only do simple obstacle avoidance against a preset map.
Highlights from the Q&A
Q: Has the real-time performance of ROS improved?
A: The real-time improvement depends on the design of ROS 2.0, whose progress is in fact published online. Realistically, though, it is still far from practical use, at least until the second half of this year, but we can study its code. It greatly improves real-time performance, memory management and thread management.
Q: vslam needs a lot of memory and CPU. What hardware configuration did Mr. Li use in actual projects? How big a map can it build?

A: That's true. At present we still assist it with lidar and other sensors. This has little to do with the size of the map; it depends mainly on the complexity of the obstacles in the terrain.