In our last blog post we shared with you the excitement of the MathWorks Robotics Competition, in which teams were challenged to design a control system to race a LEGO MINDSTORMS NXT robot around a set of target points in the fastest time. The event was great fun and an interesting project to work on, as it posed a number of technical challenges. The two main challenges we needed to overcome were:
- Measuring the robot's position by 2D localisation
- Communicating with the robot to achieve closed loop control
2D Localisation using Image Processing
A LEGO MINDSTORMS NXT robot is capable of very basic localisation using its wheel encoders. This dead reckoning approach is suitable for simple, low-speed navigation where the wheels always maintain good traction. However, asking teams to complete the course as quickly as possible inevitably introduces a lot of wheel slip and makes dead reckoning unreliable. To overcome this shortcoming, a visual feedback system was introduced.
By placing a green ball on top of the robot, a camera can be used to track its movement around the arena. This process has a couple of steps.
Green detection by thresholding
The simplest way to detect a distinctly coloured ball in an image is with a technique known as thresholding. The process is:
- Combine the Red, Green and Blue (RGB) pixel values into a single intensity value. For example, to detect a green ball we weight the channels so that green pixels are prioritised whilst other colours are ignored (negative intensity values are treated as zero); one common form is sketched after this list
- Compare the intensity values of each pixel with a threshold value. Pixels above this value are considered to be the correct colour (binary 1) and all others are ignored (binary 0).
- Tune the threshold value manually to ensure only the desired object is detected
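One common intensity equation of this kind weights each pixel's green value against its red and blue values. The MATLAB sketch below is a minimal illustration assuming that form; the channel weights and the threshold are placeholder values, not the competition's tuned ones.

```matlab
% Minimal thresholding sketch. The green-weighting equation and the
% threshold value are assumptions for illustration, not the tuned
% values used in the competition.
rgb = im2double(imread('arena_frame.png'));   % hypothetical camera frame

% Combine RGB into a single intensity value that prioritises green
intensity = rgb(:,:,2) - 0.5*rgb(:,:,1) - 0.5*rgb(:,:,3);
intensity(intensity < 0) = 0;                 % negative values treated as zero

% Pixels above the manually tuned threshold are taken as the ball
threshold = 0.3;
bw = intensity > threshold;                   % logical mask of green pixels
```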
Now that we have tuned our threshold to identify only the pixels of our green ball, we need to find its centroid position. This is done using the Blob Analysis block from the Computer Vision System Toolbox.
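In MATLAB code, the System object counterpart of that block can be used in the same way. A minimal sketch, operating on the mask bw from the previous example:

```matlab
% Centroid extraction with vision.BlobAnalysis, the System object
% counterpart of the Blob Analysis block. MinimumBlobArea is an
% assumed value, chosen to reject small specks of noise.
blobAnalyser = vision.BlobAnalysis( ...
    'AreaOutputPort',        false, ...
    'BoundingBoxOutputPort', false, ...
    'MinimumBlobArea',       50);

centroid = blobAnalyser(bw);   % [x y] pixel coordinates of the ball
```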
Once we know where the ball is in the image, we need to relate this to its position in the real world. This is done by first calibrating the camera using red balls placed in known positions (the corners of the arena). The four red balls are identified in the image using the thresholding technique above, with a different intensity equation that isolates red pixels. Their pixel positions can then be mapped to their real positions by estimating the projective transformation matrix. This is done using the Estimate Geometric Transformation block from the Computer Vision System Toolbox.
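In MATLAB code, the same estimation can be sketched with fitgeotrans, which fits a projective transform to four point correspondences; all the coordinate values below are made up for illustration.

```matlab
% Fit a projective transform from the four red-ball centroids (pixels)
% to the known arena corners. Coordinate values are hypothetical.
pixelCorners = [ 92  61; 548  58; 561 412;  80 418];   % detected in the image
arenaCorners = [  0   0; 2.0   0; 2.0 1.5;   0 1.5];   % known positions (m)

tform = fitgeotrans(pixelCorners, arenaCorners, 'projective');
```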
Now that we know the transformation matrix, which maps pixel coordinates to real-world positions, we can measure the centroid of the green ball and calculate the robot's position by applying the transform.
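Continuing the sketch above, applying the fitted transform to the ball's centroid gives the robot's position in arena coordinates:

```matlab
% Map the green ball's pixel centroid into real-world coordinates
[xArena, yArena] = transformPointsForward(tform, ...
    double(centroid(1)), double(centroid(2)));
```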
Achieving closed loop control
The image processing system described above was run on a Raspberry Pi given to each team. Before the robot control system can be implemented, the Raspberry Pi must be able to communicate the current position to the robot. This was achieved by connecting a Bluetooth dongle to the Raspberry Pi and using the LEGO MINDSTORMS Bluetooth Mailbox capability.
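The Mailbox mechanism is driven by the NXT's MessageWrite direct command (opcode 0x09), documented in the LEGO MINDSTORMS NXT Bluetooth Developer Kit: a two-byte little-endian length header followed by the command telegram. Below is a rough sketch of sending a position string, assuming MATLAB's Bluetooth object from the Instrument Control Toolbox; the device name 'NXT' and the "x,y" message format are assumptions.

```matlab
% Send the robot's position to NXT mailbox 0 via the MessageWrite
% direct command. Device name 'NXT' and the "x,y" string format are
% assumptions for illustration.
msg      = uint8([sprintf('%.2f,%.2f', xArena, yArena) 0]);  % null-terminated
telegram = uint8([128 9 0 numel(msg)]);  % 0x80 = direct command, no reply;
                                         % 0x09 = MessageWrite; inbox 0
len      = numel(telegram) + numel(msg);
packet   = [uint8([mod(len, 256) floor(len/256)]) telegram msg];  % LSB first

bt = Bluetooth('NXT', 1);                % RFCOMM channel 1
fopen(bt);
fwrite(bt, packet, 'uint8');
fclose(bt);
```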
Teams design a control system in Simulink, which they deploy to the LEGO MINDSTORMS NXT using the Simulink Support Package. This controller must drive the robot through a series of target positions as quickly as possible. To give feedback to the teams, a laptop connected to the Raspberry Pi runs a custom MATLAB App which receives the robot's position information and displays its progress.
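As a flavour of what such a controller might look like, here is a minimal MATLAB sketch of proportional steering towards each target point in turn. The gains, speeds, tolerance, and the getRobotPose and setMotorPower helpers are all hypothetical; a real entry would be a Simulink model deployed to the brick.

```matlab
% Minimal waypoint-following sketch. getRobotPose and setMotorPower are
% hypothetical helpers; gains and speeds are illustrative assumptions.
targets   = [0.5 0.5; 1.5 0.5; 1.5 1.0; 0.5 1.0];  % target points (m)
Kp        = 200;    % steering gain
baseSpeed = 70;     % forward motor power
tol       = 0.05;   % distance at which a target counts as reached (m)

idx = 1;
while idx <= size(targets, 1)
    [x, y, heading] = getRobotPose();           % position via the mailbox
    err = atan2(targets(idx,2) - y, targets(idx,1) - x) - heading;
    err = atan2(sin(err), cos(err));            % wrap to [-pi, pi]

    % Steer by differencing the wheel powers in proportion to the error
    setMotorPower(baseSpeed - Kp*err, baseSpeed + Kp*err);

    if hypot(targets(idx,1) - x, targets(idx,2) - y) < tol
        idx = idx + 1;                          % move on to the next target
    end
end
```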