
Video playback

One of the challenges in tuning the UKF on live data (as opposed to simulated sensor data derived from X-Plane flight logs) is the lack of “true” values against which the UKF output can be compared.

The most common approaches appear to be either comparing the filter output against a reference-grade gyro/INS flown alongside the cheap sensors, or recording video from the airframe and visually checking the estimated attitude against it.

Since cameras are cheaper than real gyros, we decided to go with that approach, and recorded a test flight with a GoPro Hero3 Black mounted in the nose.

Artificial horizon overlay on UAV video

The GoPro is capable of recording at 240fps with a resolution of 848×480, which is sufficient for our purposes: although our UKF outputs attitude and position estimates every millisecond, there are no significant dynamics above 120Hz, so by the Nyquist criterion a 240fps recording captures everything we need.

I decided that rather than writing some sort of horizon detection system to output pitch and roll angles from the video, it’d be easier (and more interesting) to write a program that overlays an artificial horizon, plus markers at fixed locations on the ground, onto the video playback. This makes checking accuracy a manual task, but it also makes it possible to gauge yaw and position error as well as pitch and roll error, which would otherwise require much more sophisticated image processing.

After writing the script using pyglet for video playback and OpenGL for the overlays, I found that the significant perspective distortion caused by the GoPro’s wide-angle lens made it very difficult to set up the OpenGL projection matrix to yield reasonable results. Since there isn’t much information on the horizontal and vertical FOVs of the various GoPro modes (probably because the sensor differs between the GoPro Hero3 models, and the area of the sensor used varies with video resolution and framerate), I used OpenCV to calibrate the camera.
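For context, the playback-and-overlay skeleton itself is quite small. The sketch below is only illustrative: the filename is a placeholder, the overlay drawing is elided, and the exact texture API varies a little between pyglet versions.

```python
import pyglet

window = pyglet.window.Window(width=848, height=480)
player = pyglet.media.Player()
player.queue(pyglet.media.load('flight.mp4'))  # hypothetical input file
player.play()

@window.event
def on_draw():
    window.clear()
    if player.texture:
        # Draw the current video frame first, then the overlay on top.
        player.texture.blit(0, 0)
    # ... set up the projection from the camera intrinsics and draw the
    # artificial horizon and ground markers here ...

pyglet.app.run()
```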

OpenCV’s calibration process involves printing out a checkerboard pattern and recording frames with that pattern at a variety of apparent sizes and positions in the frame. I grabbed 10 stills from a video I recorded in the same mode as the flight, and within a couple of seconds OpenCV output the calibration parameters: a 3×3 intrinsic matrix, which is used to set up the OpenGL projection, and a set of lens distortion parameters.

GoPro Hero3 Black calibration frame showing checkerboard pattern
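For reference, the calibration step is only a few lines of OpenCV. This is a minimal sketch, assuming a standard 9×6 checkerboard and a directory of grabbed stills (both are assumptions, not the exact setup used):

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)  # interior corner count of the printed checkerboard (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob('calibration_stills/*.png'):  # hypothetical path
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# camera_matrix is the 3x3 intrinsic matrix used for the OpenGL projection;
# dist_coeffs holds the lens distortion parameters used later by undistort().
rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
```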

Plugging the camera intrinsic matrix into the OpenGL projection using the convert_hz_intrinsic_to_opengl_projection function in strawlab/opengl-hz worked very well, and fixed most of the perspective-induced mismatch in the pitch of the horizons. However, the barrel distortion in the video was unaffected, and the curvature of the (actual) horizon made it difficult to determine the error magnitude. To correct this, I ran OpenCV’s built-in undistort function (which takes the camera calibration parameters determined earlier) over the entire video; the output video file has a somewhat reduced FOV but essentially no distortion, so the horizon is always a straight line.
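A sketch of that undistortion pass, re-encoding the video frame by frame; the filenames and codec here are placeholders, and cv2.undistort is the only essential call:

```python
import cv2

cap = cv2.VideoCapture('flight.mp4')               # hypothetical input file
fps = cap.get(cv2.CAP_PROP_FPS)
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
out = cv2.VideoWriter('flight_undistorted.avi',
                      cv2.VideoWriter_fourcc(*'MJPG'), fps, size)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # camera_matrix and dist_coeffs are the calibration results from above
    out.write(cv2.undistort(frame, camera_matrix, dist_coeffs))

cap.release()
out.release()
```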

This image is an unprocessed frame from the original video, exhibiting significant barrel distortion:

Regular, unprocessed GoPro video frame showing barrel distortion of the horizon

After OpenCV undistortion, the frame looks like this:

GoPro video frame undistorted using OpenCV

With lens and projection issues accounted for, the match between the artificial and actual horizons is typically within 5° in pitch and roll; even small errors are very easy to spot, and the information overlay on the video enables investigation of periods with more significant errors. Since our accuracy objective is ±2° typical pitch and roll error, with ±5° worst-case error, we’ve still got a way to go. However, as our sensors are essentially uncalibrated, and no particular care was taken to align the magnetometer and accelerometer/gyro boards prior to the flight, there’s quite a lot of low-hanging fruit.

Artificial horizon overlay on UAV video

The triangular marker near the centre of the above image is supposed to mark the launch point; in that frame it’s about 10m off, but the error ranges up to about 50m at other points in the video. It’s not entirely clear to me why that is, but it’s something I’m actively investigating, since the entire point of this exercise is being able to locate objects on the ground accurately. Although the OBC permits up to 100m (!) of error in determining target location, it seems to me that < 10m should be entirely achievable, so that’s what we’re going for.

Having watched the video dozens of times while tuning UKF and OpenGL parameters, I’m now convinced of the utility of an automated pitch/roll detection script, which would enable the comparison to be made automatically at much faster than real time. It seems like OpenCV should be able to do horizon detection using a combination of Canny edge detection and a Hough transform, so I’ll look into that before doing further UKF tuning.
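As a starting point, here is a rough sketch of that idea. The thresholds are guesses, and the roll sign and the conversion from a horizon line to a pitch angle would need the camera intrinsics and some care with axis conventions:

```python
import cv2
import numpy as np

def estimate_horizon(frame):
    """Return (roll_deg, horizon_y_at_centre) for the strongest
    near-horizontal line in the frame, or None if nothing is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180.0, 200)
    if lines is None:
        return None
    for rho, theta in lines.reshape(-1, 2):
        # Lines come back in normal form x*cos(theta) + y*sin(theta) = rho;
        # theta near pi/2 means the line itself is near-horizontal.
        if abs(theta - np.pi / 2) < np.radians(30):
            roll_deg = np.degrees(np.pi / 2 - theta)
            y_centre = (rho - (frame.shape[1] / 2.0) * np.cos(theta)) / np.sin(theta)
            return roll_deg, y_centre
    return None
```

Running something like this over the undistorted video and logging the per-frame estimates alongside the UKF output would let the comparison run offline at whatever rate OpenCV can process frames.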
