Every year a new race car comes to life, representing all the hard work of a dedicated team. A race car is the perfect platform for complex, interdisciplinary engineering. Each car stands out with unique characteristics and features, all developed by engineering students at NTNU. After a car has raced at the Formula Student competitions, it is used for testing, driver training, and at numerous promotions and events. Even though Revolve NTNU has a history of taking big steps every year, there is a clear transfer of knowledge and experience from one team to the next. The cars represent our legacy, and we will continue our steep learning curve to design the best-engineered car in the field.
Revolve NTNU decided early to take on the new driverless challenge and followed closely as the first FSD competition was held in 2017. The 20 members of the driverless team have been working hard to put Trondheim on the map of autonomous racing, resulting in a solid system based on a combination of machine learning and classical AI. Eld, our 2017 car, was chosen as our very first driverless vehicle. It now has actuators for control, an Emergency Brake System, an Nvidia Drive PX2 AutoChauffeur for processing, and several new sensors for the autonomous system. A Velodyne VLP-16 at the front gives us a point cloud, while two FLIR color cameras in the main hoop make it possible to classify the cones by color and size.
We have developed two separate autonomous systems: an end-to-end machine learning system and a full driving pipeline. For detection, we use a cone detector that combines LiDAR and camera data with the MNIST classifier, and we use YOLOv3 as a redundant system. Furthermore, Eld uses a graphSLAM variant for localization and mapping, and a trackfinder based on triangulation and circles. Model Predictive Contouring Control with a FORCES solver is used for high-speed driving. To facilitate early testing, we used a 1/7-scale Traxxas race car and Gazebo for simulation.
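The text does not spell out how the trackfinder turns detected cones into a drivable path, so here is a minimal, hypothetical sketch of the general idea: pair each left (blue) cone with its nearest right (yellow) cone and take midpoints as centerline waypoints. The cone coordinates and the nearest-pair heuristic are our assumptions for illustration; the actual pipeline uses full triangulation over the cone map.

```python
import math

def centerline(blue_cones, yellow_cones):
    """Pair each left (blue) cone with the nearest right (yellow) cone
    and emit the midpoint of each pair as a centerline waypoint.
    Cone positions are (x, y) tuples in the car's frame."""
    waypoints = []
    for bx, by in blue_cones:
        # Nearest yellow cone to this blue cone.
        yx, yy = min(yellow_cones,
                     key=lambda c: math.hypot(c[0] - bx, c[1] - by))
        waypoints.append(((bx + yx) / 2, (by + yy) / 2))
    # Order waypoints by distance from the car so they form a path.
    waypoints.sort(key=lambda w: math.hypot(w[0], w[1]))
    return waypoints

# Illustrative cone layout (not real competition data).
blue = [(2.0, 1.5), (4.0, 1.6), (6.0, 1.8)]      # left track edge
yellow = [(2.0, -1.5), (4.0, -1.4), (6.0, -1.2)]  # right track edge
print(centerline(blue, yellow))
```

The midpoints fall roughly on the track centerline, which a path planner can then score against the many candidate trajectories it generates.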
ATMOS Driverless, our second-generation autonomous vehicle, reached its potential during the season, meeting both the goal of 5 m/s on Autocross and 10 m/s on Trackdrive. With the help of two LiDARs and a camera, the car is able to navigate by itself. In total, we process more than 39 million pixels from the camera every second, roughly the same as 20 TV screens. The two LiDARs on our car complement each other and accumulate over 1.6 million points per second. To find the right path, the car evaluates around 50,000 candidate paths each second and chooses the best one based on probability. The pipeline from raw data to action runs at 15 Hz, and on a processing unit that is 70% smaller than last year's.
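The throughput figures above can be sanity-checked with a little arithmetic. Assuming one "TV screen" means a Full HD frame (our assumption, not stated in the text), the numbers line up:

```python
# Back-of-envelope check of the throughput figures quoted above.
# Full HD as the size of one "TV screen" is an assumption for illustration.
HD_PIXELS = 1920 * 1080             # one Full HD frame
pixels_per_second = 39_000_000      # camera pixels processed per second
tv_screens = pixels_per_second / HD_PIXELS
print(f"{tv_screens:.1f} TV screens per second")   # ~18.8, i.e. "about 20"

# Per 15 Hz pipeline cycle, the quoted per-second rates amount to:
points_per_cycle = 1_600_000 / 15   # combined LiDAR points
paths_per_cycle = 50_000 / 15       # candidate paths evaluated
print(f"{points_per_cycle:.0f} LiDAR points per cycle")
print(f"{paths_per_cycle:.0f} candidate paths per cycle")
```

So each 15 Hz planning cycle digests on the order of a hundred thousand LiDAR points and a few thousand candidate paths.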
The driverless project in Revolve NTNU primarily needs three things. First, it needs a car that is reliable and works both mechanically and electrically, which includes the competence to fix the car if something happens to it. Second, it needs software that, based on the surroundings, decides where to drive. So far we have more than 65,000 lines of code, roughly the word count of the entire Lord of the Rings trilogy. Lastly, and maybe most importantly, it needs dedicated people who are willing to put in the necessary effort. Greetings from the ATMOSDV Team!