By Thomas Humbert, David Moeller, Greg DeVos, Josh Kaster
Cedarville University’s Roboboat team
In Part 1, Sergio described the RoboBoat Competition and introduced our entry, the Cedarville University RoboBoat team. In this article, we will go into depth on how we programmed our autonomous aquatic vehicle’s PC and five Raspberry Pis using MATLAB and Simulink.
Last year we implemented high-level control, image processing, and computer vision on the PC. To increase the frame rate of our image processing, we investigated a couple of different options.
Our first thought was to use multiprocessing, since it allows many systems to run simultaneously and takes full advantage of today’s multicore processors. We found that although MATLAB has timer objects that let users implement multithreading on a single core, multiple CPU cores could not be utilized, as MATLAB does not support multiprocessing beyond parallel loop structures.
The alternative solution, which did achieve the desired parallelism, was to offload the image processing from the laptop and leave an event-driven GUI on the main boat/shore laptops. We turned to a Raspberry Pi network because it requires little power and Simulink models can easily be downloaded to the boards. Communicating with the Pis would require sending and receiving our data via UDP (User Datagram Protocol).
Figure 1: Infrastructure diagram of the 2014 Cedarville RoboBoat
We wrote the main MATLAB GUI program on the boat laptop and structured the code to support parallelized operations and event-driven communication with devices. Part of this parallelization was achieved by offloading the image and acoustic processing algorithms to a Raspberry Pi network, which increased the code’s modularity, performance, and ease of debugging.
After researching how MATLAB connects to devices, we found that communication and timer objects support asynchronous events. These event-based functions are usually called callbacks. Device communication events (such as a UDP or serial buffer reaching a certain number of bytes or receiving a terminator character) can also be assigned callback functions. Along with that, we found that MATLAB has asynchronous send functions, where data is written to the output buffer and forgotten about. Finally, MATLAB can use timers to trigger events, and we decided that a fixed-period timer event would be ideal for executing our PID algorithms. This event structure is summarized in the following figure:
Figure 2: Block diagram of the 2013-2014 Cedarville RoboBoat’s main code on the laptop
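To make this event structure concrete, here is a minimal sketch using Instrument Control Toolbox udp objects and a fixed-rate timer; the IP address, port, and computeThrusterCommand helper are hypothetical placeholders rather than our exact code.

    % Minimal sketch (hypothetical address, port, and helper names).
    % A fixed-period timer drives the PID loop, and thruster commands
    % are written asynchronously so the GUI thread never blocks on I/O.
    function startControlLoop()
        u = udp('192.168.2.10', 9000);               % hypothetical thruster Pi
        fopen(u);
        t = timer('ExecutionMode', 'fixedRate', ...  % fire at a fixed period
                  'Period', 0.05, ...                % e.g., a 20 Hz control loop
                  'TimerFcn', @(obj, evt) pidStep(u));
        start(t);
    end

    function pidStep(u)
        cmd = computeThrusterCommand();              % hypothetical PID helper
        fwrite(u, cmd, 'async');                     % fire-and-forget send
    end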
Since many of the subsystems take a long time to complete, we needed a way to improve performance. As a cheap alternative to replacing our system with expensive hardware, we created a way to offload the majority of the computations from the main laptop onto a network of Raspberry Pis, increasing the stability and consistency of the boat’s actions. With most of the computation off the boat laptop, it could focus on high-level decision-making based on information fed to it from the Raspberry Pis.
The Raspberry Pi network currently consists of five Raspberry Pis and a central laptop, all connected via UDP through an Ethernet switch that links the communication lines together. Each Pi sends pertinent data back over its own UDP port for the boat laptop to decipher and use to adjust the thrusters and camera servo. A system like this lets the Pis work independently while the boat laptop continues to work through each event, pulling information from the Pis whenever it is time to read from them again. The Raspberry Pi network is physically shown in the following figures:
Figure 3: (Left) Boat Suitcase with all internal components. (Right) Boat Suitcase with labeled internal components.
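To illustrate the laptop side of this exchange, the sketch below (with a hypothetical address, port, and handler name) attaches a datagram-received callback to one Pi’s UDP object, so data is pulled in by events rather than by polling.

    % Sketch of the laptop-side receive path (hypothetical values).
    % Each Pi reports on its own UDP port; the DatagramReceivedFcn
    % callback fires whenever a complete datagram arrives.
    visionPi = udp('192.168.2.11', 9001, 'LocalPort', 9001);
    visionPi.DatagramReceivedFcn = @onVisionData;
    fopen(visionPi);

    % in onVisionData.m
    function onVisionData(u, ~)
        if u.BytesAvailable > 0
            msg = fread(u, u.BytesAvailable);  % raw bytes from the Pi
            % decode centroids/colors here and update the mission state
        end
    end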
Automated Docking Challenge
The “Automated Docking Challenge” consists of detecting three different shapes, a circle, a triangle, and a cross, and driving to the appropriate one. After experimenting with different properties of shapes, we determined that the “Corner Detection” block in Simulink did a good job of differentiating between the shapes. This worked very well when the shapes were the only objects in the frame, but as soon as noise was introduced into the image the shapes would be lost. We then turned our attention to finding the shapes in the frame. To do this, we take the intensity of the current frame from the camera and run it through the “Edge Detection” block. Since the shapes are white on black backgrounds, they create enclosed loops while most of the noise is removed. We then use the bounding box of each enclosed loop to pull that region out of the original image. This provides a higher level of detail, and running the shape analysis on only the needed pixels saves computational power.
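A rough MATLAB-code equivalent of that pipeline might look like the sketch below; the edge-detection method and variable names are our assumptions here, not the exact Simulink block settings.

    gray  = rgb2gray(frame);                    % intensity of the current frame
    edges = edge(gray, 'sobel');                % edge detection (method assumed)
    blobs = imfill(edges, 'holes');             % enclosed loops become solid blobs
    stats = regionprops(blobs, 'BoundingBox');  % one box per candidate shape
    for k = 1:numel(stats)
        roi = imcrop(frame, stats(k).BoundingBox);  % crop the candidate region
        % run the corner-based shape analysis on roi only
    end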
After we have determined where a possible shape is, we run our shape analysis. This starts with the corner detection mentioned above. If the candidate has the required number of corners, multiple checks are run to ensure the accuracy of the system. For a circle, we find the radius from the area and compare it to the major and minor axes. For the triangle, we use the locations of the corners to find the angles between them and confirm that it is a triangle. For the cross, we calculate what the major axis and the perimeter should be and compare them to the output of the “Blob Analysis” block, which helps us know whether the blob is a solid shape or just noise. If the blob passes these tests, its data is transmitted back to the laptop. This algorithm is summarized in the following diagram (Figure 4).
Figure 4: The Raspberry Pi Pattern Detection algorithm for the Automated Docking Challenge.
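In code form, the circle and cross checks might be sketched as follows; the tolerances are illustrative assumptions rather than our tuned values.

    s = regionprops(blobs, 'Area', 'MajorAxisLength', ...
                    'MinorAxisLength', 'Perimeter');
    for k = 1:numel(s)
        rFromArea = sqrt(s(k).Area / pi);       % radius implied by the blob area
        % circle: both ellipse axes should match the area-implied diameter
        isCircle = abs(s(k).MajorAxisLength - 2*rFromArea) < 3 && ...
                   abs(s(k).MinorAxisLength - 2*rFromArea) < 3;
        % cross: its perimeter is much longer than a solid disc of equal area
        isCross  = s(k).Perimeter > 1.5 * (2*pi*rFromArea);
    end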
Figure 5: (Top Left) Possible Blobs. (Top Right) Raw Camera Feed with symbol-matched marker. (Center) Region of Interest for Each Symbol. (Bottom) Binary Image of Pattern with Corner Markers.
After major performance and optimization improvements, we are able to run the entire system on a Raspberry Pi; the current system achieves 8.2 frames per second, which is sufficient for this challenge. After the shapes are detected on the Pi, the three shape centroids are sent back to the laptop, where they are used to determine the boat’s movement.
Color detection for the buoys was also a strong focus, with many algorithms tested. Many of the challenges, such as the obstacle field and the pinger challenge, involve correctly identifying a buoy and its color. After looking into several different methods, including simple color thresholding and a color quantizer that puts every pixel into one of 27 bins, we realized that none of these methods would provide the accuracy we needed.
Figure 6: (Left) Image of the feed after running edge detection, morphology, and “imfill.” (Right) Resulting image with overlay of where the program sees the buoy.
We then took a different approach by looking for circles (Figure 6). MATLAB offers a function called “imfindcircles” that returns candidate circle locations together with a metric of how confident it is in each one. The video frames sent to this function are first refined with edge detection and some morphological operations to enhance the edges. This greatly improved our ability to detect the buoys. We then examine the color inside each circle to determine the buoy’s color. To quantify whether an improvement actually helped, we wrote a program in which the user clicks on the buoys in a recorded video to mark where the real buoys are. We then collect data from the algorithm and compare it to the ground-truth buoy data; the program reports the detection accuracy for each color as well as the false positives. This has made it much easier to tell whether a change helped and whether its performance cost was worth it. We were not able to get this system up and running for the competition, but in its current state of development it has great potential for next year’s team.
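A condensed sketch of this detector is shown below; the radius range, sensitivity, and confidence threshold are assumptions for illustration.

    gray = rgb2gray(frame);
    [centers, radii, metric] = imfindcircles(gray, [8 40], ...  % radii in pixels
                                             'Sensitivity', 0.9);
    keep    = metric > 0.3;                     % confidence cutoff (assumed)
    centers = centers(keep, :);
    radii   = radii(keep);
    [X, Y]  = meshgrid(1:size(gray, 2), 1:size(gray, 1));
    for k = 1:size(centers, 1)
        mask = hypot(X - centers(k,1), Y - centers(k,2)) < radii(k);
        R = frame(:,:,1); G = frame(:,:,2); B = frame(:,:,3);
        avgColor = [mean(R(mask)) mean(G(mask)) mean(B(mask))];  % classify buoy
    end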
Acoustic Beacon Challenge
The competition for this summer will have several buoys with audio pingers underneath them (Figure 7). One of them will be emitting a chosen signal between 25 kHz and 40 kHz. We are required to find which buoy is emitting the frequency and then report its coordinates.
Figure 7: Field diagram for Acoustic Beacon Positioning Challenge
We are completing this challenge by first using the color detection to find one of the colored buoys and then systematically moving from one buoy to the next until all five colors have been checked. Once at a buoy, we sit and listen for the frequency.
Using an underwater microphone (hydrophone), we first designed a filter with MATLAB’s Filter Design and Analysis Tool, opened by typing >> fdatool at the MATLAB Command Window. We then placed the filter in a Simulink model’s “Discrete Filter” block and deployed the model to a Raspberry Pi. As a result, the input from the hydrophone is passed to the Raspberry Pi, which filters out all signals except the desired frequency (Figure 8). After the signal runs through the filter, the filtered samples are summed to produce a magnitude that tells us how close and how strong the signal is: the closer we are to the pinger, the higher the magnitude.
Figure 8: FDATOOL Customized Filter Design for Acoustic Beacon Positioning Challenge.
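For reference, a command-line equivalent of such a design might look like the following sketch; the filter order, bandwidth, and the hydrophoneSamples variable are assumptions, and the passband is centered on the aliased target frequency discussed below.

    Fs = 44100;                                 % Pi audio sampling rate (Hz)
    fTarget = 4103;                             % aliased pinger tone (see below)
    b = fir1(256, [fTarget-200, fTarget+200] / (Fs/2));  % FIR bandpass design
    y = filter(b, 1, hydrophoneSamples);        % isolate the target band
    magnitude = sum(abs(y));                    % proximity metric to the pinger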
In order to compensate for the Raspberry Pi’s 44.1 kHz sampling-rate limitation, we use the aliasing (“roll-down”) of the ultrasonic frequencies to identify the signal. For example, we can listen for a 25 kHz signal and hear its alias at 4.103 kHz. If an HDMI monitor is hooked up to the audio pinger Pi, it shows the frequency being sought and a scope displaying the magnitude of the desired frequency. This model has been successfully tested within the audible frequency range (Figure 9). In the overall GUI, a slider shows the magnitude of the filtered signal.
Figure 9: Scope after running sample audio taken while navigating by the emitting buoy.
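As a generic sketch of that folding calculation (the effective capture rate depends on the audio chain, so this is illustrative only), a helper like the following returns where a sampled tone appears:

    % Returns where a tone at f Hz appears after sampling at Fs Hz,
    % folded into the range [0, Fs/2].
    function fAlias = aliasOf(f, Fs)
        fAlias = abs(f - Fs * round(f / Fs));
    end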