This is the team poster we used when we presented our work. We worked on everything as a team, meaning we all collaborated on these problems and shared our solutions so that everyone on the team understood each part.
Our goal for this project was to program an AUV to autonomously chase another AUV using computer vision and control. We split this challenge into two parts: letting the computer effectively control the AUV's movements, and having the computer decide where the AUV should go based on what it saw through its camera. Controlling the AUV is split into kinematics and PID, while decision-making involves using computer vision to detect AprilTags, as well as the lanes on the bottom of the pool.
The AUV was capable of holonomic movement, meaning it could strafe and turn simultaneously. It did this through an "X" configuration of the thrusters, as you can see in the diagram. This gave it great maneuverability but made it challenging to program. We used kinematic equations to calculate the power sent to each motor based on a desired input force and turn. Here, you can see the kinematic formulas we derived.
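As a rough illustration of the idea (not our exact derived formulas), here is a minimal Python sketch of mixing a desired force and turn into four thruster powers for an X configuration; the thruster names, signs, and normalization step are assumptions that depend on how the thrusters are actually mounted.

```python
# A minimal sketch of X-configuration thruster mixing, assuming four horizontal
# thrusters mounted at 45 degrees. The sign conventions here are illustrative
# and depend on how each thruster is actually oriented on the frame.

def mix_thrusters(surge, strafe, yaw):
    """Map a desired forward force, sideways force, and turning moment
    (each roughly in the range -1..1) to power levels for the four thrusters."""
    front_left  = surge + strafe + yaw
    front_right = surge - strafe - yaw
    back_left   = surge - strafe + yaw
    back_right  = surge + strafe - yaw

    # Scale everything down if any thruster would exceed full power,
    # so the commanded direction is preserved.
    powers = [front_left, front_right, back_left, back_right]
    biggest = max(1.0, max(abs(p) for p in powers))
    return [p / biggest for p in powers]

# Example: strafe right while turning slightly left.
print(mix_thrusters(surge=0.0, strafe=0.8, yaw=-0.2))
```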
The skills to derive these formulas came in handy when calculating the kinematics behind another project: my differential swerve drive.
The AUV also used PID to control its movements. A PID controller compares the current position with the desired position and outputs a correction based on the error, its integral, and its derivative. This project is where I really came to understand what goes into PID tuning.
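In sketch form, a PID update looks something like this; the gains and the anti-windup clamp below are illustrative, not the values we actually tuned on the AUV.

```python
# A minimal PID sketch. The gains and the integral clamp are placeholder values.

class PID:
    def __init__(self, kp, ki, kd, integral_limit=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None
        self.integral_limit = integral_limit

    def update(self, error, dt):
        # Integral term, clamped so it cannot wind up forever.
        self.integral += error * dt
        self.integral = max(-self.integral_limit, min(self.integral_limit, self.integral))

        # Derivative term (zero on the first call, when there is no history yet).
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error

        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: correct a heading error of 0.3 rad at a 50 Hz control rate.
heading_pid = PID(kp=2.0, ki=0.1, kd=0.5)
yaw_command = heading_pid.update(error=0.3, dt=0.02)
```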
Lane detection was one of the harder challenges, as it is a novel problem; we had to write a computer vision algorithm from scratch.
The graphic above shows how the algorithm works. It takes in an input image, converts it to grayscale, and detects the edges in the image. After those edges are detected, the magic happens. The algorithm first gives each detected line a fitness score to judge whether it might be part of a lane and whether it is novel; lines that are unlikely to be real are filtered out. Then every possible lane made up of two lines is formed, and each candidate lane is given its own fitness score based on how likely it is to be a real lane. The lane with the highest fitness score is then chosen.
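A rough sketch of that pipeline is below, using OpenCV for the grayscale, edge-detection, and line-extraction steps. The fitness scores here (favouring long lines, and near-parallel pairs with a plausible spacing) are stand-ins for the ones we actually used, and the thresholds are placeholder numbers.

```python
import itertools
import math

import cv2
import numpy as np

def detect_lane(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    raw = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                          minLineLength=80, maxLineGap=20)
    if raw is None:
        return None
    lines = [tuple(l[0]) for l in raw]

    def line_score(line):
        # Stand-in line fitness: longer lines are more likely to be real lane edges.
        x1, y1, x2, y2 = line
        return math.hypot(x2 - x1, y2 - y1)

    # Filter out lines that are unlikely to be part of a lane.
    lines = [l for l in lines if line_score(l) > 100]

    def lane_score(a, b):
        # Stand-in lane fitness: reward pairs that are nearly parallel and
        # separated by a plausible lane width.
        ang_a = math.atan2(a[3] - a[1], a[2] - a[0])
        ang_b = math.atan2(b[3] - b[1], b[2] - b[0])
        parallel = math.cos(2 * (ang_a - ang_b))   # 1 when parallel, -1 when perpendicular
        mid_a = ((a[0] + a[2]) / 2, (a[1] + a[3]) / 2)
        mid_b = ((b[0] + b[2]) / 2, (b[1] + b[3]) / 2)
        spacing = math.hypot(mid_a[0] - mid_b[0], mid_a[1] - mid_b[1])
        return parallel - abs(spacing - 60) / 60   # 60 px: assumed lane width

    # Consider every pair of surviving lines as a candidate lane and keep the best.
    return max(itertools.combinations(lines, 2),
               key=lambda pair: lane_score(*pair), default=None)
```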
This image processing also had to run in real time, which meant a lot of optimization went into it.
Once the AUV has a lane it likes, it calculates the lane's center relative to the AUV in both angle and displacement. These values are then fed into a PID controller to steer the AUV.
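The poster doesn't spell this calculation out, but a minimal sketch might look like the following, assuming the chosen lane is a pair of line segments with consistently ordered endpoints.

```python
# A sketch of turning the chosen lane into PID errors. It assumes the lane is a
# pair of line segments (x1, y1, x2, y2) with consistently ordered endpoints.

import math

def lane_errors(lane, frame_width):
    (ax1, ay1, ax2, ay2), (bx1, by1, bx2, by2) = lane

    # Center line of the lane: midpoints of the two edges' endpoints.
    cx1, cy1 = (ax1 + bx1) / 2, (ay1 + by1) / 2
    cx2, cy2 = (ax2 + bx2) / 2, (ay2 + by2) / 2

    # Angle of the lane relative to the image's vertical axis (the AUV's heading);
    # image y grows downward, hence the flipped difference.
    angle_error = math.atan2(cx2 - cx1, cy1 - cy2)

    # Horizontal displacement of the lane's center from the image center, in pixels.
    displacement_error = (cx1 + cx2) / 2 - frame_width / 2

    return angle_error, displacement_error
```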
This is a video of our first (and only) test of the lane detection. We were not able to get enough pool time to test further, so this version is very poorly tuned. However, you can see it working, which means that the underlying code and systems are sound.
This image shows our AprilTag detection and PID working. The robot can see the AprilTag and tell where it is in relation to it. The orange arrows show what the robot thinks it should do to center itself on the AprilTag. The motion is PID controlled, so the velocity of the AprilTag is also taken into account.
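As a sketch of how a centering error can be pulled out of an AprilTag detection, here is a minimal example assuming the pupil-apriltags Python bindings; the detector we actually ran on the AUV may differ.

```python
import cv2
from pupil_apriltags import Detector

detector = Detector(families="tag36h11")

def tag_centering_error(frame):
    """Pixel offset of the first detected tag from the image center, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detections = detector.detect(gray)
    if not detections:
        return None
    tag = detections[0]
    height, width = gray.shape
    error_x = tag.center[0] - width / 2   # positive: tag is to the right of center
    error_y = tag.center[1] - height / 2  # positive: tag is below center
    return error_x, error_y

# These errors would then feed the strafe/heave (or yaw) PIDs, whose derivative
# terms are what make the tag's velocity matter.
```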
Here, you can see the AUV running the AprilTag detection code and following the AprilTag. It is also able to judge the distance to the AprilTag and get close to it.
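The poster doesn't say how the distance is judged; one common approach, sketched here with placeholder numbers, is a pinhole-camera estimate from the tag's apparent size (the detector's pose estimate is another option).

```python
# A hedged sketch of one way to estimate range to an AprilTag: a pinhole-camera
# estimate from the tag's apparent side length. The focal length and physical
# tag size below are placeholders, not our calibration values.

import numpy as np

def estimate_distance(corners, tag_size_m=0.15, focal_length_px=600.0):
    # Mean side length of the detected tag in pixels (corners: 4x2 array).
    sides = [np.linalg.norm(corners[i] - corners[(i + 1) % 4]) for i in range(4)]
    side_px = float(np.mean(sides))
    # Pinhole model: apparent size shrinks inversely with distance.
    return focal_length_px * tag_size_m / side_px
```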