Part of UNSW Australia coursework for MTRN4110: Robot Design. Our team consisted of four members, and we maintained our code in a GitHub repository.
The first task was to assemble the components on a platform attached to the provided hexapod. The components included an RGB-D (depth) camera, an IMU, an on-board computer (a LattePanda running Windows 10), an Arduino, a battery and a power management board. The on-board computer communicated with a remote laptop running a Matlab client using TCP over Wi-Fi. We developed C++ code to read sensor data, send it to the client and receive commands back over TCP. This allowed us to control the robot remotely and perform the heavier processing on a more powerful machine.
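The exact wire format we used is not documented here, but the core idea can be sketched as length-prefixed framing of sensor packets before writing them to the TCP socket. The struct layout and names below are illustrative, not our actual protocol:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical IMU packet; the real message set carried more fields.
struct ImuSample {
    float roll, pitch, yaw;   // orientation in radians
};

// Serialize a sample as: 4-byte payload length, then the raw payload.
// (Assumes the same endianness and struct layout on both ends.)
std::vector<uint8_t> frame(const ImuSample& s) {
    std::vector<uint8_t> buf(4 + sizeof(ImuSample));
    uint32_t len = sizeof(ImuSample);
    std::memcpy(buf.data(), &len, 4);            // length prefix
    std::memcpy(buf.data() + 4, &s, sizeof s);   // payload
    return buf;
}

// Recover a sample from one framed message.
ImuSample unframe(const std::vector<uint8_t>& buf) {
    uint32_t len = 0;
    std::memcpy(&len, buf.data(), 4);
    assert(len == sizeof(ImuSample));
    ImuSample s;
    std::memcpy(&s, buf.data() + 4, sizeof s);
    return s;
}
```

The framed buffer would then be written with `send()` on the robot side; the length prefix lets the Matlab client know how many bytes to read for each message.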
The second task was to implement, on the client, localization of the robot using IMU data and triangulation (from depth-camera ranges to the known obstacles). This involved several mathematical tools: detecting the obstacles in the depth data, and, for example, fitting a plane to the floor to account for the robot's current pitch angle. At this point we could move the robot using movement commands sent from the client GUI.
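The floor-plane fit can be sketched as an ordinary least-squares fit of z = a·x + b·y + c to floor points, solved via the 3x3 normal equations; the slope then gives a pitch estimate. This is only a sketch under assumed camera axes (forward = x), not our exact pipeline:

```cpp
#include <array>
#include <cmath>
#include <vector>

struct Point3 { double x, y, z; };

// Least-squares fit of z = a*x + b*y + c to a set of floor points,
// solved with Cramer's rule on the 3x3 normal equations. Returns {a, b, c}.
std::array<double, 3> fitPlane(const std::vector<Point3>& pts) {
    double sxx = 0, sxy = 0, sx = 0, syy = 0, sy = 0, n = 0;
    double sxz = 0, syz = 0, sz = 0;
    for (const auto& p : pts) {
        sxx += p.x * p.x; sxy += p.x * p.y; sx += p.x;
        syy += p.y * p.y; sy += p.y; n += 1;
        sxz += p.x * p.z; syz += p.y * p.z; sz += p.z;
    }
    double A[3][3] = {{sxx, sxy, sx}, {sxy, syy, sy}, {sx, sy, n}};
    double b[3] = {sxz, syz, sz};
    auto det3 = [](double m[3][3]) {
        return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
             - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
             + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
    };
    double d = det3(A);
    std::array<double, 3> out{};
    for (int i = 0; i < 3; ++i) {
        double M[3][3];   // A with column i replaced by b (Cramer's rule)
        for (int r = 0; r < 3; ++r)
            for (int c = 0; c < 3; ++c)
                M[r][c] = (c == i) ? b[r] : A[r][c];
        out[i] = det3(M) / d;
    }
    return out;
}

// Pitch estimate from the floor slope along the assumed forward (x) axis.
double pitchFromSlope(double a) { return std::atan(a); }
```

In practice the floor points would come from segmenting the lower part of the depth image, and a robust fit (e.g. RANSAC) would handle outliers better than plain least squares.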
Finally, with the provided grid on the floor and randomly allocated obstacles, we were to implement a path-planning algorithm (we used A*) and make the robot navigate autonomously to a location selected on the client. We had problems with localization due to poor data quality from the depth camera, so our instructor provided a laser scanner mounted in a corner of the room. We tried using the laser data to improve the localization, but due to time limitations it never made it into the final solution. The robot was therefore able to plan a path and navigate, though not precisely.
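The planner idea can be sketched as A* on a 4-connected occupancy grid with a Manhattan-distance heuristic, which matches the floor-grid setup; this is an illustrative version, not our exact client code:

```cpp
#include <climits>
#include <cstdlib>
#include <functional>
#include <queue>
#include <tuple>
#include <utility>
#include <vector>

// A* on a 4-connected occupancy grid (0 = free, 1 = obstacle).
// Returns the number of steps in the shortest path, or -1 if unreachable.
int aStar(const std::vector<std::vector<int>>& grid,
          std::pair<int, int> start, std::pair<int, int> goal) {
    const int H = grid.size(), W = grid[0].size();
    auto h = [&](int r, int c) {           // admissible Manhattan heuristic
        return std::abs(r - goal.first) + std::abs(c - goal.second);
    };
    using Node = std::tuple<int, int, int, int>;   // f, g, row, col
    std::priority_queue<Node, std::vector<Node>, std::greater<Node>> open;
    std::vector<std::vector<int>> best(H, std::vector<int>(W, INT_MAX));
    open.emplace(h(start.first, start.second), 0, start.first, start.second);
    best[start.first][start.second] = 0;
    const int dr[4] = {1, -1, 0, 0}, dc[4] = {0, 0, 1, -1};
    while (!open.empty()) {
        auto [f, g, r, c] = open.top(); open.pop();
        if (r == goal.first && c == goal.second) return g;
        if (g > best[r][c]) continue;      // stale queue entry, skip
        for (int k = 0; k < 4; ++k) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr < 0 || nr >= H || nc < 0 || nc >= W || grid[nr][nc]) continue;
            if (g + 1 < best[nr][nc]) {
                best[nr][nc] = g + 1;
                open.emplace(g + 1 + h(nr, nc), g + 1, nr, nc);
            }
        }
    }
    return -1;
}
```

The returned cost would then be expanded into a waypoint list that the client turns into movement commands for the hexapod.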