Visual-Inertial Odometry Using Synthetic Data
This example uses:
Automated Driving Toolbox
Sensor Fusion and Tracking Toolbox
This example shows how to estimate the pose (position and orientation) of a ground vehicle using an inertial measurement unit (IMU) and a monocular camera. In this example, you:
Create a driving scenario containing the ground truth trajectory of the vehicle.
Use an IMU and visual odometry model to generate measurements.
Fuse these measurements to estimate the pose of the vehicle and then display the results.
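The setup steps above can be sketched as follows. This is a minimal illustration, not the example's actual script: the waypoints, speed, and IMU sample rate are placeholder values, and it assumes the Automated Driving Toolbox (`drivingScenario`, `vehicle`, `smoothTrajectory`) and Sensor Fusion and Tracking Toolbox (`imuSensor`) are available.

```matlab
% Create a driving scenario and add a ground vehicle with a
% ground-truth trajectory (waypoints and speed are illustrative).
scene = drivingScenario;
groundVehicle = vehicle(scene);
smoothTrajectory(groundVehicle, [0 0 0; 50 0 0; 50 50 0], 10); % waypoints, 10 m/s

% IMU measurement model producing accelerometer and gyroscope readings
% from the ground-truth motion (sample rate is a placeholder).
imu = imuSensor('accel-gyro', 'SampleRate', 100);
```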
Visual-inertial odometry estimates pose by fusing the visual odometry pose estimate from the monocular camera and the pose estimate from the IMU. The IMU returns an accurate pose estimate for small time intervals, but suffers from large drift due to integrating the inertial sensor measurements. The monocular camera returns an accurate pose estimate over a larger time interval, but suffers from a scale ambiguity. Given these complementary strengths and weaknesses, the fusion of these sensors using visual-inertial odometry is a suitable choice. This method can be used in scenarios where GPS readings are unavailable, such as in an urban canyon.
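The predict/correct structure of the fusion can be sketched with an error-state Kalman filter, assuming the `insfilterErrorState` object from the Sensor Fusion and Tracking Toolbox; the covariance values and the ground-truth inputs to the IMU model are illustrative placeholders.

```matlab
filt = insfilterErrorState;  % error-state EKF suited to visual-inertial odometry

% --- Inside the simulation loop ---
% IMU-driven prediction, run at the (fast) IMU rate. The ground-truth
% acceleration and angular velocity come from the driving scenario.
[accelMeas, gyroMeas] = imu(groundTruthAccel, groundTruthAngVel);
predict(filt, accelMeas, gyroMeas);

% At the (slower) camera rate, correct with the monocular visual odometry
% pose. fusemvo also estimates the scale factor, resolving the monocular
% scale ambiguity. Covariances Rpos and Rorient are placeholders.
fusemvo(filt, visoPos, Rpos, visoOrient, Rorient);

% Fused pose estimate for display and comparison with ground truth.
[pos, orient] = pose(filt);
```

The prediction absorbs the IMU's high-rate, short-horizon accuracy, while the periodic visual odometry correction bounds the drift from integrating inertial measurements.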