Self-driving vehicles rely on sensory inputs to perceive and navigate their environments. Processing data from sensors such as cameras, radar, and lidar helps these vehicles localize themselves and detect objects.
In this webinar, we show how you can develop advanced image processing applications using QCar, the feature vehicle of Quanser’s Self-Driving Car Research Studio. Using state estimation via AprilTag localization as an example, we demonstrate application development in Simulink.
We also describe the development workflow with QCar, provide an in-depth explanation of the components of the Simulink example, and show how the application’s architecture comes together to achieve actuation, image processing, and state estimation.