Indy Robot Racing

In fall 2004, the SDA Lab got involved in the Indy Robot Racing project to enter a vehicle in the 2005 DARPA Grand Challenge race. In a nutshell, the Grand Challenge asks an unmanned, fully autonomous vehicle (no remote control) to drive itself over 150-200 miles of desert-like terrain. The project appealed to us because of its requirements in image processing (for computer vision), path finding, obstacle-avoidance algorithms, and geographic information systems, including retrieving data from remote databases. It also seemed like a terrific project for teaching schoolchildren and the lay public about visualization, information technology, and simple robotics.

The 2004 Grand Challenge

The 2004 Grand Challenge race went across the Mojave Desert from Barstow, CA to Primm, NV (right across the border on I-15).


Data for the course

The course was defined in a data file listing, for each point along the route (a sketch of a reader for such a file follows this list):
  • the GPS "waypoint" position in latitude and longitude (GPS = Global Positioning System)
  • the maximum distance the vehicle is allowed to stray laterally from the path at that waypoint
  • the maximum speed of the vehicle at that waypoint
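
As a minimal Python sketch, here is one way such a file might be read. The column layout assumed here (waypoint number, latitude, longitude, lateral boundary offset, speed limit) and the comma-separated formatting are assumptions for illustration, not the official course-file specification.

    import csv
    from dataclasses import dataclass

    @dataclass
    class Waypoint:
        lat: float        # latitude in decimal degrees
        lon: float        # longitude in decimal degrees
        boundary: float   # maximum allowed lateral offset from the route
        max_speed: float  # speed limit at this waypoint

    def read_course(path):
        """Read a comma-separated course file into a list of Waypoints (assumed layout)."""
        waypoints = []
        with open(path, newline="") as f:
            for row in csv.reader(f):
                if not row or row[0].startswith("#"):
                    continue  # skip blank lines and comments
                _, lat, lon, boundary, speed = row[:5]
                waypoints.append(Waypoint(float(lat), float(lon),
                                          float(boundary), float(speed)))
        return waypoints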

    In these 2-D plots (click for larger versions), we simply plot the waypoints' longitude (X) and latitude (Y), as small "x"s (left) and connected as a polygonal line (right). The start of the race is at the lower left and the end at the upper right. Note that when plotting the entire course, the waypoints are too dense to fully appreciate their separation. We will remedy this by zooming into particular regions, then by creating a movie of a simulated drive along the path. But first, we want to create a 3-D visualization of the path, using glyphs to represent both the maximum speeds and the boundary limits.
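
A rough matplotlib sketch of those two 2-D plots, using the hypothetical read_course helper above (the file name is a placeholder):

    import matplotlib.pyplot as plt

    course = read_course("course.rddf")        # placeholder file name
    lons = [w.lon for w in course]
    lats = [w.lat for w in course]

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
    ax1.plot(lons, lats, "x", markersize=2)    # waypoints as small "x"s
    ax2.plot(lons, lats, "-", linewidth=1)     # waypoints connected as a polygonal line
    for ax in (ax1, ax2):
        ax.set_xlabel("longitude")
        ax.set_ylabel("latitude")
        ax.set_aspect("equal")
    plt.show()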


    First, we plot some simple (orange) glyphs: line segments drawn perpendicular to the path, whose width shows the boundary the vehicle is allowed to travel within. (Click on image for an enlarged version.)
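
One way to compute such perpendicular boundary glyphs, sketched below. This assumes the waypoint coordinates have already been projected so that positions and boundary offsets share the same units; the variable names are placeholders.

    import numpy as np
    import matplotlib.pyplot as plt

    def boundary_glyphs(x, y, half_width):
        """Endpoints of a segment normal to the path at each waypoint."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        dx, dy = np.gradient(x), np.gradient(y)      # local path direction
        length = np.hypot(dx, dy)
        nx, ny = -dy / length, dx / length           # unit normal to the path
        return (x - nx * half_width, y - ny * half_width,
                x + nx * half_width, y + ny * half_width)

    # x, y, widths = ...  projected waypoint coordinates and allowed offsets
    # x0, y0, x1, y1 = boundary_glyphs(x, y, widths)
    # plt.plot(x, y, "k-")                           # the path itself
    # plt.plot([x0, x1], [y0, y1], color="orange")   # boundary glyphs
    # plt.gca().set_aspect("equal"); plt.show()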


    Next, we add some (cyan) cylindrical glyphs to represent the maximum speed allowed at each waypoint. Note that the glyphs are scaled (in both height and radius) according to the maximum speed. (The images are rendered in a perspective view, which is why the top-down view shows part of the sides of the cylinders.)
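
A generic way to build such scaled cylinders with matplotlib's 3-D toolkit. This is only a stand-in sketch; the original images were presumably produced with a different visualization package, and the scale factor and input names are placeholders.

    import numpy as np
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3-D projection)

    def add_speed_cylinder(ax, cx, cy, speed, scale=1.0, color="c"):
        """Draw one cylinder at (cx, cy) whose radius and height scale with speed."""
        radius = scale * speed
        height = scale * speed
        theta = np.linspace(0.0, 2.0 * np.pi, 24)
        z = np.linspace(0.0, height, 2)
        theta, z = np.meshgrid(theta, z)
        xs = cx + radius * np.cos(theta)
        ys = cy + radius * np.sin(theta)
        ax.plot_surface(xs, ys, z, color=color)

    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    # for w, (cx, cy) in zip(course, projected_xy):   # hypothetical inputs
    #     add_speed_cylinder(ax, cx, cy, w.max_speed, scale=0.5)
    # plt.show()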


    We also experimented with fitting splines through the waypoints, but this quickly becomes the larger question of choosing which points to interpolate, given the waypoints and the allowed widths. I still like the idea of using splines to help determine speed and acceleration.
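
For reference, fitting a parametric spline through the waypoints takes only a few lines with SciPy; choosing the smoothing factor (or which points to interpolate at all) against the allowed widths is the open question mentioned above. The function below is a sketch, not the method actually used.

    import numpy as np
    from scipy.interpolate import splprep, splev

    def spline_path(x, y, smoothing=0.0, samples=1000):
        """Fit a parametric cubic spline through the waypoints and resample it."""
        tck, _ = splprep([np.asarray(x, float), np.asarray(y, float)], s=smoothing)
        u = np.linspace(0.0, 1.0, samples)
        sx, sy = splev(u, tck)
        return sx, sy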

    Finally, we overlay satellite images from TerraServer.
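
Conceptually, the overlay amounts to drawing the path on top of a georeferenced image. The sketch below assumes an image tile has already been downloaded and that its corner coordinates are known; the file name and extent values are placeholders, and fetching the tiles from TerraServer itself is not shown.

    import matplotlib.pyplot as plt
    import matplotlib.image as mpimg

    img = mpimg.imread("tile.png")                           # placeholder tile
    west, east, south, north = -116.8, -116.7, 35.3, 35.4    # placeholder extent
    plt.imshow(img, extent=(west, east, south, north), origin="upper")
    # plt.plot(lons, lats, "y-", linewidth=1)                # course drawn on top
    plt.xlabel("longitude")
    plt.ylabel("latitude")
    plt.show()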