Today at the International Conference on Robotics and Automation (ICRA), a paper by my colleagues at the University of Surrey and me, “Translating images into maps”, won the conference’s overall outstanding-paper award.

Our paper addresses the problem of constructing a top-down “bird’s-eye” view of a scene on the basis of standard sideways-on photographs. This is an important problem for autonomous vehicles, which need to build maps of their immediate environments to decide where it is safe to drive.

Our approach exploits the well-known fact that every column of pixels in a digital image corresponds to a single ray extending across a 2-D map of the field of view, with the ray's origin at the camera's location on the map. Each pixel in the column, in turn, corresponds to a point along that ray.

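To make the geometry concrete, here is a minimal sketch assuming a simple pinhole camera: each image column back-projects to one ray on the map plane, anchored at the camera. The function and the intrinsics values are hypothetical, for illustration only.

```python
# Minimal pinhole-camera sketch (illustrative, not the paper's code):
# each image column u maps to one ray direction on the ground-plane map,
# with the ray anchored at the camera's position on the map.
import numpy as np

def column_to_ray_angle(u: int, fx: float, cx: float) -> float:
    """Angle (radians, relative to the optical axis) of the map ray for column u.

    fx and cx are the camera's horizontal focal length and principal point,
    both in pixels (hypothetical values below).
    """
    # Column u back-projects to horizontal offset (u - cx) / fx at unit depth.
    return float(np.arctan2(u - cx, fx))

# Example: a 1024-pixel-wide image with fx = 800 and cx = 512.
for u in (0, 512, 1023):
    print(u, f"{column_to_ray_angle(u, 800.0, 512.0):+.3f} rad")
```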

Our insight was that, because of the one-to-one correspondence between pixels and points along a ray, the problem of translating images into maps has the same structure as sequence-to-sequence problems in natural-language processing (NLP), such as machine translation, which converts a sequence of words in one language into a sequence of words in another.

We exploited this idea, using the established machinery for sequence-to-sequence processing, in particular Transformer-based models, to convert images to maps by translating each column of pixels directly into one ray on the map. In experiments we report in the paper, we compared this approach with a range of existing methods on three different datasets and substantially outperformed all of them on every dataset.
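
To make the column-to-ray translation concrete, the PyTorch sketch below encodes one column of pixel features and decodes it into features for the depth bins along its ray. The module layout, dimensions, and use of learned ray queries are assumptions for exposition, not the architecture described in the paper.

```python
# Illustrative PyTorch sketch: translate one image column (a sequence of
# pixel features) into one map ray (a sequence of depth-bin features).
# All dimensions and module choices here are assumptions, not the paper's.
import torch
import torch.nn as nn

class ColumnToRay(nn.Module):
    def __init__(self, feat_dim=128, ray_len=48, heads=4):
        super().__init__()
        # Self-attention over the vertical sequence of pixels in one column.
        enc_layer = nn.TransformerEncoderLayer(feat_dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        # One learned query per depth bin along the ray.
        self.ray_queries = nn.Parameter(torch.randn(ray_len, feat_dim))
        # Cross-attention: each depth bin attends over the encoded column.
        self.cross_attn = nn.MultiheadAttention(feat_dim, heads, batch_first=True)

    def forward(self, column_feats):             # (batch, column_len, feat_dim)
        memory = self.encoder(column_feats)      # encoded column, same shape
        queries = self.ray_queries.unsqueeze(0).expand(column_feats.size(0), -1, -1)
        ray_feats, _ = self.cross_attn(queries, memory, memory)
        return ray_feats                         # (batch, ray_len, feat_dim)

# One 64-pixel-tall column of 128-D features -> 48 depth bins along its ray.
print(ColumnToRay()(torch.randn(2, 64, 128)).shape)  # torch.Size([2, 48, 128])
```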

Focused attention

The key to Transformers’ success is their use of attention mechanisms, which determine which elements of the input matter most to which elements of the output. So, for instance, if the input is a sentence in Hindi, and the output is a sentence in Spanish, the attention mechanism determines which words of the input are most relevant when determining each word of the output.
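
For readers unfamiliar with the mechanism, here is the standard scaled dot-product attention computation in a few lines. It is generic Transformer machinery, shown only to illustrate how attention weights relate inputs to outputs; it is not code from our model.

```python
# Generic scaled dot-product attention (standard Transformer building block).
import torch
import torch.nn.functional as F

def attention(queries, keys, values):
    # queries: (n_out, d); keys: (n_in, d); values: (n_in, d_v)
    scores = queries @ keys.T / keys.shape[-1] ** 0.5   # relevance of each input
    weights = F.softmax(scores, dim=-1)                 # each row sums to 1
    return weights @ values, weights                    # weighted mix of inputs

q, k, v = torch.randn(5, 16), torch.randn(7, 16), torch.randn(7, 16)
out, w = attention(q, k, v)
print(out.shape, w.shape)  # torch.Size([5, 16]) torch.Size([5, 7])
```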

In general, however, Transformers require much more data for computer vision applications than for NLP applications. That’s because, in a large 2-D image — unlike a short, 1-D sequence of words — there are so many candidates for attention: any given pixel might contain information that alters how other pixels should be interpreted.

By constraining our use of Transformers to individual columns of pixels and individual rays, we avoid this combinatorial explosion and can efficiently train on existing, smaller datasets.
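
A back-of-the-envelope count shows the scale of the saving. The image, map, and depth-bin sizes below are assumed for illustration; they are not the settings used in our experiments.

```python
# Rough attention-cost comparison with assumed sizes (not the paper's settings):
# full-image attention scales with (H*W)^2 query-key pairs, while restricting
# cross-attention to one column and its ray scales with W * (H * D) pairs.
H, W = 256, 1024   # image height and width in pixels (assumed)
D = 48             # depth bins per map ray (assumed)

full_image_pairs = (H * W) ** 2   # every pixel attends to every other pixel
column_ray_pairs = W * (H * D)    # each column's pixels vs. its ray's depth bins

print(f"full-image pairs:    {full_image_pairs:.2e}")
print(f"column-to-ray pairs: {column_ray_pairs:.2e}")
print(f"reduction factor:    {full_image_pairs / column_ray_pairs:,.0f}x")
```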

Semantic content

The analogy between our task and the sequence-to-sequence NLP tasks is quite precise. Many languages share a common structure, which means that words often — though not always — occur in similar places in source texts and their translations. In the same way, pixels further down an image column often — though not always — correspond to points closer to the camera along the associated ray. In both cases, Transformers can exploit this structure.

A significant hurdle in the computer vision case, however, is that single pixels contain little information. In a street scene, for example, a single black pixel could correspond to the asphalt, a tire, or the shoe of a pedestrian. To help resolve such ambiguities, we generate features that capture local context by preprocessing input images with a convolutional neural network (CNN).

CNNs step through an image one block of pixels at a time, looking in each block for distinctive patterns, such as color gradations with particular orientations. Low-level patterns found by the bottom layers of the CNN are aggregated by higher layers, until they acquire semantic content — the curve of a dark tire, the parallel edges of a shiny signpost.

The inputs fed to our Transformer network, then, are not raw color values but pixel embeddings produced by a CNN. Those embeddings factor in information from pixels in other columns and include cues that can help determine depth along a ray — that a given pixel probably belongs to a car tire rather than a shoe, for instance.
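
The sketch below shows one plausible way to wire this up: an off-the-shelf ResNet-18 backbone produces a feature map, and each of its columns becomes an input sequence for the Transformer. The backbone choice, the layer cut, and the tensor shapes are illustrative assumptions, not our exact pipeline.

```python
# Illustrative sketch: a pretrained CNN produces a feature map whose columns
# (rather than raw pixel columns) are fed to the column-to-ray Transformer.
# ResNet-18 and the layer cut are assumptions for exposition only.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

# Drop the average-pooling and classification layers to keep a spatial feature map.
backbone = nn.Sequential(
    *list(resnet18(weights=ResNet18_Weights.IMAGENET1K_V1).children())[:-2])

image = torch.randn(1, 3, 256, 1024)   # one RGB street-scene image (assumed size)
feats = backbone(image)                # (1, 512, 8, 32) feature map at stride 32
b, c, h, w = feats.shape

# Each of the w feature-map columns is a length-h sequence of c-dimensional
# "pixel embeddings" that already encode context from neighbouring pixels.
# (A linear projection would map them to the Transformer's feature width.)
columns = feats.permute(0, 3, 2, 1).reshape(b * w, h, c)
print(columns.shape)   # torch.Size([32, 8, 512])
```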

We use a CNN pretrained on a standard image classification task, so it has already learned to recognize image features useful for computer vision tasks. But then we train our entire integrated model, CNN and Transformer, end to end, so that the CNN produces embeddings useful for image mapping.
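
A minimal training-step sketch follows, assuming a per-cell binary cross-entropy loss, an Adam optimizer, and a hypothetical map_head that classifies each depth bin; none of these specifics is taken from the paper. The point is simply that the backward pass reaches both modules.

```python
# End-to-end training sketch (assumed loss, optimizer, and heads, not the
# paper's recipe): gradients flow through the Transformer *and* the pretrained
# CNN, so the backbone's embeddings adapt to the mapping task.
import torch
import torch.nn as nn

def train_step(backbone, col2ray, map_head, image, target_map, optimizer):
    """One gradient step on an (image, ground-truth bird's-eye-view map) pair."""
    feats = backbone(image)                              # (b, c, h, w) pixel embeddings
    b, c, h, w = feats.shape                             # c assumed to match col2ray's width
    columns = feats.permute(0, 3, 2, 1).reshape(b * w, h, c)
    rays = col2ray(columns)                              # (b*w, depth_bins, c)
    logits = map_head(rays)                              # (b*w, depth_bins, n_classes)
    pred_map = logits.reshape(b, w, *logits.shape[1:])   # rays reassembled into a map
    loss = nn.functional.binary_cross_entropy_with_logits(pred_map, target_map)
    optimizer.zero_grad()
    loss.backward()                                      # updates CNN *and* Transformer
    optimizer.step()
    return loss.item()

# A single optimizer over all parameters keeps training end to end:
# optimizer = torch.optim.Adam(
#     [*backbone.parameters(), *col2ray.parameters(), *map_head.parameters()], lr=1e-4)
```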

In our experiments, we considered scenarios in which we were constructing maps from single images and from sequences of images (i.e., video). Our video-based maps were more accurate than those produced by a benchmark video-based model, and in general, they were more accurate than the maps our method produced from still images. But the margin of improvement was small, about 3% on average across all 14 classes.

Maps constructed from still images (leftmost column) by our still-image method (“our spatial”), our video method (“our spatiotemporal”), and three benchmarks (VPN, PON, and STA-S). The first column of maps is the ground-truth bird’s-eye-view map.

An intriguing topic for future research is whether we can better leverage perspectival information in the video stream to extract greater improvements in map accuracy relative to still images. We have also improved on this work by using novel graph-based methods to integrate 3-D object detection into our mapping algorithms. We describe these results in a paper we’re presenting at this year’s CVPR, “‘The pedestrian next to the lamppost’: Adaptive object graphs for better instantaneous mapping”.




