
Horizon detection with OpenCV

After getting the video playback and artificial horizon code working, it became clear that watching a video over and over while trying to remember whether the current run looked more accurate than the previous run isn’t an especially good way to test UKF tuning parameters.

Since we’re using the camera horizon as the reference for pitch and roll error, it would be nice if we could find the horizon line and calculate pitch and roll automatically, then print out the UKF attitude error relative to those values. Although automatic horizon detection might not be as accurate as manual detection, its errors shouldn’t be correlated with UKF error, so changes in the overall error output during tuning will still reflect genuine UKF accuracy improvements or regressions.

Our first attempt at horizon detection used OpenCV’s Canny transform to find edges in the video—the output being a black and white image with white indicating edge pixels—and the cv2.HoughLinesP transform to obtain the start and end points of detected line segments. The line segments were then converted to ideal pinhole camera coordinates based on the camera’s intrinsic matrix, and pitch and roll were extracted from them using basic trig.
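For reference, a minimal sketch of that first attempt might have looked something like the following; the Hough vote threshold and the minLineLength/maxLineGap values here are illustrative guesses rather than the parameters we actually used, and the longest-segment heuristic is just one plausible way of picking a candidate:

import math

import cv2


def detect_horizon_hough(img):
    # Edge-detect the blue channel (index 0 in BGR), as in the script below.
    edges = cv2.Canny(cv2.split(img)[0], 200, 600, apertureSize=3)
    # Probabilistic Hough transform: returns line segments as
    # (x0, y0, x1, y1) tuples. Threshold and length limits are guesses.
    lines = cv2.HoughLinesP(edges, 1, math.pi / 180.0, 100,
                            minLineLength=100, maxLineGap=10)
    if lines is None:
        return None  # nothing straight enough -- the common failure mode
    # Take the longest segment as the horizon candidate.
    return max(lines[0],
               key=lambda l: (l[2] - l[0]) ** 2 + (l[3] - l[1]) ** 2)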

Video frame input to horizon detector
Canny transform output

However, the Hough transform proved too sensitive to noise on the horizon (e.g. trees, mountains). Only very straight segments were detected, which often resulted in no horizon output at all.

To rectify this we switched to using a least-squares regression to fit a line to the white points (= edges) in the Canny output; this results in some noise in the horizon line when there are other strongly-contrasting areas in the image, but on our footage it’s entirely acceptable. Based on a manual sample of frames, when the “true” horizon is detected (about 90% of the time) the output pitch and roll values are accurate to within 1–2°.

This was surprisingly easy: for each frame, the code below runs the undistortion function and extracts the horizon as a point on the line plus the line’s direction.

import sys
import cv2
import numpy as np


CV_CAP_PROP_FRAME_COUNT = 7
CV_CAP_PROP_FPS = 5
CV_CAP_PROP_FRAME_WIDTH = 3
CV_CAP_PROP_FRAME_HEIGHT = 4


# python correct-video.py ../Flight/20130714/GOPR0036.MP4 /Users/bendyer/Projects/UAV\ Samples/Flight/20130714/GOPR0036-Corrected.avi

if __name__ == '__main__':
    if len(sys.argv) < 3 or not sys.argv[2].startswith('/') or not sys.argv[2].endswith('.avi'):
        print 'Second argument must be the FULL output video path, ending in .avi'
        sys.exit(1)

    video = cv2.VideoCapture(sys.argv[1])

    num_frames = int(video.get(CV_CAP_PROP_FRAME_COUNT))
    fps = video.get(CV_CAP_PROP_FPS)
    width = int(video.get(CV_CAP_PROP_FRAME_WIDTH))
    height = int(video.get(CV_CAP_PROP_FRAME_HEIGHT))

    print "%d frames @ %d fps" % (num_frames, fps)

    intrinsic = np.array(
        [[ 299.39646639,    0.,          419.96165812],
         [   0.,          302.5602385,   230.25411049],
         [   0.,            0.,            1.        ]]
    )
    distortion = np.array(
        [-0.16792771, 0.03121603, 0.00218195, -0.00026904, -0.00263317]
    )

    writer = cv2.VideoWriter(
        filename=sys.argv[2],
        fourcc=ord('X') + (ord('V') << 8) + (ord('I') << 16) + (ord('D') << 24),  # FourCC packs the first character into the lowest byte
        fps=fps,
        frameSize=(width,height),
        isColor=1)
    horizonf = open(sys.argv[2].rpartition(".")[0] + "-horizon.txt", "wb")

    for f in xrange(num_frames):
        success, img = video.read()
        if not success or img is None:
            print "Error on frame %s" % f
            break

        img = cv2.undistort(img, intrinsic, distortion)
        # Edge-detect the blue channel (index 0 in BGR) at quarter
        # resolution to keep things fast. Note interpolation must be passed
        # by keyword: positionally, the third resize argument is dst.
        edges = cv2.Canny(
            cv2.resize(cv2.split(img)[0], (width / 4, height / 4),
                       interpolation=cv2.INTER_NEAREST),
            200, 600, apertureSize=3)
        # np.argwhere returns (row, col) -- i.e. (y, x) -- pairs, so the
        # fitted line components come out swapped relative to image x/y;
        # the viewer compensates for this when it uses the values.
        vx, vy, x0, y0 = cv2.fitLine(np.argwhere(edges == 255), 2, 0, 0.01, 0.01)  # 2 = CV_DIST_L2
        # Scale back to full-frame coordinates (detection ran at 1/4 size).
        horizonf.write("%.6f,%.6f,%.6f,%.6f\n" % (vx * 4, vy * 4, x0 * 4, y0 * 4))

        writer.write(img)

        if f % 2400 == 0:
            print 100.0 * (f / float(num_frames)), "%"

    horizonf.close()
    writer.release()

We weren’t able to get the horizon detection code running in real time within the pyglet-based viewer, so we amended the video undistortion script to run horizon detection on each frame and write out a point on the horizon line plus its direction vector. This file is read by the viewer at the same rate as the video, and the camera correction and pitch/roll conversion are done within the update loop:

def attitude_from_horizon(x0, y0, x1, y1, expected_roll):
    # Determine pitch and roll from a horizon line segment identified by the
    # points (x0, y0) and (x1, y1) in an image.
    # Uses the CAMERA_INTRINSICS values (fx, 0, cx, 0, fy, cy, 0, 0, 1) to
    # convert the point to a normalized (u, v) representation, then uses
    # formulas 27 and 33 from http://eprints.qut.edu.au/12839/1/3067a485.pdf
    # to determine pitch and roll.

    x, y = x0, y0
    mx, my = x1 - x0, y1 - y0
    fx, fy = CAMERA_INTRINSICS[0][0], CAMERA_INTRINSICS[1][1]
    cx, cy = CAMERA_INTRINSICS[0][2], CAMERA_INTRINSICS[1][2]

    # X and Y are flipped relative to the intrinsics matrix because the
    # line was fitted to np.argwhere output, which is in (row, col) --
    # i.e. (y, x) -- order.
    u = (y - cx) / fx
    v = (x - cy) / fy
    f = 1.0

    roll = math.atan2(-mx, my)
    # Ensure that roll is pointing in approx the same direction as
    # expected_roll
    if expected_roll - math.degrees(roll) > 135.0:
        roll += math.pi
    elif expected_roll - math.degrees(roll) < -135.0:
        roll -= math.pi

    pitch = math.atan((u * math.sin(roll) + v * math.cos(roll)) / f)

    return math.degrees(pitch), math.degrees(roll)
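As a usage sketch (the file handling and the zero expected_roll seed are illustrative assumptions rather than the viewer’s actual code, and attitude_from_horizon and CAMERA_INTRINSICS are assumed to be in scope as above), each line of the horizon file converts into a pair of points like so:

def horizon_lines(path):
    # Each line of the horizon file is "vx,vy,x0,y0": a unit direction
    # vector and a point on the fitted line, in full-frame coordinates.
    with open(path) as f:
        for line in f:
            vx, vy, x0, y0 = [float(v) for v in line.strip().split(",")]
            # Make a second point some arbitrary distance along the line.
            yield x0, y0, x0 + vx * 100.0, y0 + vy * 100.0

expected_roll = 0.0  # in practice this is seeded from the UKF attitude
for f, (x0, y0, x1, y1) in enumerate(horizon_lines("GOPR0036-Corrected-horizon.txt")):
    pitch, roll = attitude_from_horizon(x0, y0, x1, y1, expected_roll)
    print "frame %d: pitch %+.1f deg, roll %+.1f deg" % (f, pitch, roll)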

The main reason to do the horizon pitch/roll calculation in the viewer rather than the video undistortion script is the need to pass in an expected roll value from the UKF to ensure attitude_from_horizon returns results that are the right way up. Since we don’t detect which side of the horizon is land and which is sky, the cv2.fitLine output is sometimes 180° from what we expect.

Video frame showing automatic horizon detection line and UKF artificial horizon
Another video frame showing automatic horizon detection line and UKF artificial horizon

The viewer overlays the detected horizon on the video using the same approach as the artificial horizon overlay, to ensure things like the UAV-to-camera transform are shared. This functions mainly as a visual sanity check, since the purpose of automatic horizon detection is of course to enable the viewer to output UKF attitude error statistics.
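Outside the viewer, a quick way to get a similar sanity check is to draw the fitted line straight onto each frame with OpenCV; this sketch is an illustrative stand-in for the viewer’s pyglet overlay, and assumes the (row, col) point ordering noted earlier:

import cv2


def draw_horizon(img, vx, vy, x0, y0, length=400):
    # fitLine ran on np.argwhere output, so the stored components are in
    # (row, col) order; swap back to OpenCV's (x, y) convention to draw.
    p0 = (int(y0 - vy * length), int(x0 - vx * length))
    p1 = (int(y0 + vy * length), int(x0 + vx * length))
    cv2.line(img, p0, p1, (0, 255, 0), 2)  # 2 px green line
    return img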

Plot showing UKF pitch and roll error by video frame

The chart above shows the UKF pitch (blue) and roll (red) error per video frame (each frame spans ~4.17 UKF iterations). The median pitch error is 2.6°, with the 75th percentile at 4.4° and the 95th at 9.3°; the median roll error is 2.9°, with the 75th percentile at 5.4° and the 95th at 12.8°. The large spike up to 60° error at about frame 130,000 is due to a steep climb followed by a dive taking the horizon out of view; the edges of the oval and the treeline were detected as the horizon instead.
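Those summary statistics are straightforward to reproduce with numpy; this sketch assumes the per-frame absolute errors have been dumped to plain-text files (the filenames here are hypothetical):

import numpy as np

# Hypothetical dumps of per-frame absolute pitch/roll error in degrees.
pitch_err = np.abs(np.loadtxt("pitch-error.txt"))
roll_err = np.abs(np.loadtxt("roll-error.txt"))

for name, err in (("pitch", pitch_err), ("roll", roll_err)):
    print "%s: median %.1f, 75th %.1f, 95th %.1f deg" % (
        name, np.median(err), np.percentile(err, 75),
        np.percentile(err, 95))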

Having set up a reasonably accurate test for UKF pitch and roll accuracy, I’m now looking at using AR marker patterns to determine full UAV pose (position and rotation); it seems like this should be relatively straightforward with a library like ArUco, and having a source of truth for AHRS tuning will make it much easier to assess the impact of any changes in sensor setup or algorithm.

For now, the main priority is finalisation of the CPU/DSP boards and main electronics enclosure; having all sensors mounted to a fixed structure will make it possible to calibrate them and determine alignment to within 1°, dramatically improving the UKF results. (The above results are derived from a set of sensors simply taped into the airframe, with misalignments of anywhere up to a few degrees—then of course there’s the ±4g vibration on the accelerometer, and note the 40km/h headwind in the first screenshot above.)

github.com/sfwa twitter.com/sfwa_uav youtube.com/user/sfwavideo