Extended Object Tracking - Groundwork
Background
While I was working with the Object Detection and Tracking team for the Next-Generation Vehicle at Ford, I was really challenged by the problem of short- and medium-range radar in an automotive context. Coming from aerospace, where multiple detections from a single object were exceedingly rare, I could straightforwardly associate measurements with tracks. In the world of automotive radar, this was no longer the case. I considered many approaches to this problem, and none were all that great.
After leaving Ford, I had a different, but related, problem where I needed to track multiple extended objects. I had limited time to implement a solution, and there was insufficient signal processing done beforehand. I had to process a signal with a limited feature set for discerning objects-to-track vs. stationary objects, track those objects reliably, and create collision-avoidance logic using the tracks. During my initial research, I found this paper, which discussed different approaches for tracking extended objects. I originally intended to implement one of the algorithms the authors discussed, but I didn't have sufficient time, and the approach was a tough sell to management because it was quite high-risk.
Now that I am no longer working on that project, I have decided to revisit the algorithm and see what I can make of it.
The Problem
It’s probably best to just demonstrate the problem via a video.
You can see that a grouping of points, varying in number at each time step, moves with a high degree of correlation over time. The points themselves have a spread that lies along the apparent direction of motion. Furthermore, it seems one could encapsulate the points within a convex shape, the size, orientation, and speed of which could be tracked via standard methods, i.e., a Kalman filter. Well, that's what I'm building up to; so far I just have the simulated sensor model up and running, so I'll talk about that.
The Code
The testbed used to make the video referenced above is JUCE, a platform comparable to Qt in terms of its capabilities. Though JUCE is typically used for audio applications, its graphical front-end makes it very appealing for design and analysis of other embedded systems. One such application I have seen it used for is autonomous vehicles, so I thought it would work well here. I was able to obtain a personal license to explore JUCE's capabilities toward the problem at hand.
Typically, sensors update via UDP on a regular clock cycle. To simulate the sensor, I tied the update of the sensor telemetry to a timer thread. The sensor "listens" to the timer thread; upon completion of the timer's run() method, which includes a sleep, the sensor's telemetry is updated via a simple motion model coupled with random sampling about the surface of an extended object. The sleep-update paradigm is a way of enforcing a timing constraint for testing signal-processing algorithms in real time, with a reliable clock rate.
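The sleep-update paradigm described above can be sketched in plain C++ as follows. This is a minimal, hypothetical illustration, not the actual JUCE implementation: the class name SensorTimer and the onTick callback are my own inventions standing in for the timer thread and the listening sensor.

```cpp
#include <atomic>
#include <chrono>
#include <functional>
#include <thread>

// Hypothetical sketch of the sleep-update paradigm: a worker thread sleeps
// for a fixed period, then invokes a callback that updates the sensor's
// telemetry. Names here are illustrative, not JUCE API.
class SensorTimer {
public:
    SensorTimer(std::chrono::milliseconds period, std::function<void()> onTick)
        : period_(period), onTick_(std::move(onTick)) {}

    void start() {
        running_ = true;
        worker_ = std::thread([this] {
            while (running_) {
                std::this_thread::sleep_for(period_);  // enforce the clock rate
                if (running_) onTick_();               // "listener" telemetry update
            }
        });
    }

    void stop() {
        running_ = false;
        if (worker_.joinable()) worker_.join();
    }

    ~SensorTimer() { stop(); }

private:
    std::chrono::milliseconds period_;
    std::function<void()> onTick_;
    std::atomic<bool> running_{false};
    std::thread worker_;
};
```

The key design point is that the sleep lives inside the loop rather than in the callback, so the update cadence stays tied to one reliable clock regardless of how the sensor model itself is structured.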
The measurement model itself comes from the authors' original Matlab implementation and can be summarized as follows:
- The number of detections on each frame follows a Poisson distribution
- The object is assumed to be aligned along its direction of travel
- The detections from the object are normally-distributed about an ellipse
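The three assumptions above can be sketched as a single sampling routine. This is my own hedged reconstruction in C++ using the standard &lt;random&gt; library, not the authors' Matlab code; the function name simulateFrame and all parameter choices are illustrative.

```cpp
#include <cmath>
#include <random>
#include <vector>

// Hypothetical sketch of the measurement model: per frame, draw a
// Poisson-distributed number of detections, each lying on an ellipse
// whose major axis is aligned with the heading, with Gaussian noise.
struct Detection { double x, y; };

std::vector<Detection> simulateFrame(double cx, double cy,  // object centre
                                     double heading,        // radians, direction of travel
                                     double a, double b,    // semi-axes (a along heading)
                                     double lambda,         // mean detection count
                                     double sigma,          // measurement noise std dev
                                     std::mt19937& rng) {
    const double pi = std::acos(-1.0);
    std::poisson_distribution<int> count(lambda);           // detections per frame
    std::uniform_real_distribution<double> angle(0.0, 2.0 * pi);
    std::normal_distribution<double> noise(0.0, sigma);     // scatter about the ellipse

    std::vector<Detection> frame;
    const int n = count(rng);
    for (int i = 0; i < n; ++i) {
        // Point on the ellipse in body coordinates, rotated into world
        // coordinates by the heading, plus Gaussian measurement noise.
        const double t  = angle(rng);
        const double bx = a * std::cos(t);
        const double by = b * std::sin(t);
        frame.push_back({cx + bx * std::cos(heading) - by * std::sin(heading) + noise(rng),
                         cy + bx * std::sin(heading) + by * std::cos(heading) + noise(rng)});
    }
    return frame;
}
```

Driving cx, cy, and heading with a simple motion model each timer tick reproduces the correlated, elongated point clusters visible in the video.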
What’s Coming Next?
I intend to update my repo with an extended Kalman filter model implementing the algorithm discussed in the previously mentioned paper. I don't have an ETA on this at the moment, so stay tuned. Cheers!