Landmine or Coke Can: Deep Learning with GPR Data

One of the projects that has taken up a lot of my time the past few months is a UAV (drone) based Ground Penetrating Radar (GPR) system. There are a number of applications for this, but the one we have been focussing on initially is landmine and UXO clearance. The elements that make up such a system are quite broad, ranging from sensor design, UAV integration, positioning and terrain following to data analysis. As with many drone projects, most of the attention tends to go to the hardware and the flying. While that is certainly important, and I have been working on those elements too, the whole system is only as good as the quality and interpretability of the data you get back. That is key. With this post I’ll aim to give a brief summary of the work I have been leading on this front.

Sanity Check

But let’s get this out of the way first: GPR technology dates back to the 1920s and is a well understood, widely used, mature technology (Did you know Apollo 17 did a GPR survey of the moon in 1972?). Likewise, the analysis of GPR data, particularly for landmines and UXO, is also a well studied subject with a long history. This is not an easy problem, and GPR hasn’t seen the same level of investment as, for example, seismic technology has had from the Oil & Gas industry.

Secondly let me be very clear that a system that can fully automatically, without human intervention, and with high confidence detect landmines is very much a fantasy for the foreseeable future. And perhaps longer, as the stakes are high and the physics of GPR is not on your side. There are also caveats to be made as to the relative costs of detection vs removal and pre-site surveying. As well as the difference between UXOs, IEDs, AP mines, AT mines, and the difference between military and humanitarian demining. The devil is in the details, and again, this is a hard problem.

So, feet firmly on the ground, but that does not mean it’s not worth exploring. You just have to pick your battles. In my case I’m particularly interested in something that can be used and deployed, however simple or niche. Machine learning for the sake of it, or simply for writing up a paper, is not that interesting to me. But you have to start somewhere.

A Pipeline

With that out of the way: given that I have one foot in the machine learning world and am aware of how fast things are moving, I could not help but look for crossovers when faced with this project. Encouraged by randomly meeting Lance from ARA and reading his paper, I decided it was worth having my own poke at this. Initially supported by Ken Williams through the ASI, and then together with Skycap intern Max Jacobs and UCL MSc student Patrick Carson, we started chipping away. Full credit to Ken, Patrick and Max for their hard work.

Currently a human expert will collect and analyse GPR data by carefully studying, and manually processing, 2D radargrams. As the expert knows what they are looking for and the datasets are usually fairly small, that works quite well. But it definitely requires a certain level of skill and is hard to scale up.

Hence, from a machine learning point of view what we ideally would like to build (keeping in mind the caveats above) is the following:
[Figure: the machine learning aim]

And that’s fine if your data looks like this (clean, homogeneous dry sand, big metal object):

[Figure: an ideal radargram]

But in reality your data looks more like this (collected at a military test site):

[Figure: a real-world radargram]

And it gets worse. Varying soil composition and characteristics, temperature, vegetation, weather, cell towers, passing cars, etc. all add to the interference and create a messy picture of the subsurface. A short rain shower can destroy in minutes what took you days of GPU power to train. The fact that we are using an airborne antenna (vs a ground-coupled one) doesn’t help either.

So we need to start simple. Fundamentally this is a supervised machine learning problem (though there are some ideas around unsupervised and active learning I would love to explore). This means you need labels. Unfortunately, flying or carrying a sensor around a muddy field to collect data, and then getting it labeled by a human expert, is tedious and costly.

Luckily, doing some background research led me to gprMax, a project from the University of Edinburgh that implements an FDTD-based GPR simulator. Define your sensor and soil characteristics in an input file and you can produce realistic-looking radargrams. Simply insert your own landmines and voilà, you have as much labeled ground truth as you care to simulate. While your simulated output is not going to be as good as real data, it’s a great testing ground for ideas and something to build on. Once you have something working you can start drip-feeding in real data and see where it breaks.
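
To give a flavour of the batch-generation step, here is a minimal sketch. The `write_gprmax_input` helper, the file names, and the label CSV layout are hypothetical stand-ins (the actual input-file commands and the way the solver is launched depend on the gprMax version and sensor model); the point is simply that every simulated radargram comes with a label for free.

```python
import csv
import random
from pathlib import Path

OUT = Path("sim_data")
OUT.mkdir(exist_ok=True)

def write_gprmax_input(path, mine_present, mine_x, mine_depth):
    """Hypothetical helper: writes a gprMax input file describing the
    antenna, domain and soil model, plus (optionally) a buried target.
    The actual #commands depend on the gprMax version and your sensor."""
    lines = ["# ... sensor, domain and soil definition goes here ..."]
    if mine_present:
        lines.append(f"# ... buried target at x={mine_x:.2f} m, depth={mine_depth:.2f} m ...")
    path.write_text("\n".join(lines) + "\n")

with open(OUT / "labels.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["scene", "mine", "x", "depth"])
    for i in range(1000):
        mine = random.random() < 0.5        # balanced mine / no-mine classes
        x = random.uniform(0.2, 0.8)        # lateral target position (m)
        depth = random.uniform(0.05, 0.30)  # burial depth (m)
        infile = OUT / f"scene_{i:04d}.in"
        write_gprmax_input(infile, mine, x, depth)
        writer.writerow([infile.name, int(mine), f"{x:.3f}", f"{depth:.3f}"])
        # The input file is then passed to the gprMax solver (the exact
        # invocation differs between versions) and the resulting
        # radargram is saved under the same scene name.
```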

With gprMax this meant we could now set up the following pipeline:

[Figure: the simulation and training pipeline]

Classification

To generate the data we settled on a particular landmine type and used the method from [Ratto2014] to generate somewhat realistic-looking radargrams. The patches in the soil model below represent different soil characteristics.

[Figure: the simulated soil model]

Using this method we then created 3 datasets of roughly 1000 images each, with varying levels of difficulty (more complex soils), and approached it as a simple mine/no-mine binary classification problem.
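
For the bookkeeping side, a minimal sketch of how such a dataset can be loaded for the binary task. The file names and labels CSV are assumptions carried over from the generation sketch above, and the radargrams are treated as plain grayscale images:

```python
import csv
import numpy as np
from pathlib import Path
from PIL import Image
from sklearn.model_selection import train_test_split

DATA = Path("sim_data")

X, y = [], []
with open(DATA / "labels.csv") as f:
    for row in csv.DictReader(f):
        # Assumed naming convention: one rendered radargram PNG per scene.
        img = Image.open(DATA / row["scene"].replace(".in", ".png")).convert("L")
        X.append(np.asarray(img.resize((64, 64)), dtype=np.float32) / 255.0)
        y.append(int(row["mine"]))

X = np.stack(X)   # (n_samples, 64, 64) radargram patches
y = np.array(y)

# Hold out an independent test set for the accuracy numbers quoted later.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
```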

The easy dataset contained no noise:

[Figure: sample radargram from the easy dataset]

The medium dataset had a base level of noise…

[Figure: sample radargram from the medium dataset]

…and the hard dataset contained the most noise. We started off with simple linear and non-linear SVMs as a baseline, working our way up and eventually spending most of the time on a fairly small convolutional net using Caffe.
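
The SVM baselines are only a few lines with scikit-learn; a minimal sketch, assuming the `X_train`/`X_test`/`y_train`/`y_test` arrays from the loading sketch above (the convolutional net was defined separately in Caffe and is not shown here):

```python
from sklearn.svm import LinearSVC, SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Flatten each radargram patch into a feature vector for the SVMs.
Xtr = X_train.reshape(len(X_train), -1)
Xte = X_test.reshape(len(X_test), -1)

linear_svm = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000))
rbf_svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))

for name, model in [("linear SVM", linear_svm), ("RBF SVM", rbf_svm)]:
    model.fit(Xtr, y_train)
    print(name, "test accuracy:", model.score(Xte, y_test))
```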

Somewhat surprisingly, the results were very good, with accuracies on an independent test set of around 80-98% across the three datasets, and the convolution filters seemed to be picking up useful features.

[Figure: learned convolution filters]

We then did some localisation tests using a simple sliding window, and that too seemed promising, even if the absolute accuracies were still too low.

[Figure: sliding-window localisation results]
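
The sliding-window part is as naive as it sounds; a minimal sketch, assuming a trained binary classifier that returns a mine probability for a patch, and a full radargram as a 2D array (window size, stride and threshold are purely illustrative):

```python
import numpy as np

def sliding_window_detections(radargram, score_fn, win=64, stride=16, threshold=0.9):
    """Slide a fixed-size window over a 2D radargram and keep the
    windows whose mine score exceeds the threshold.
    `score_fn` maps a (win, win) patch to a probability of 'mine'."""
    detections = []
    rows, cols = radargram.shape
    for r in range(0, rows - win + 1, stride):
        for c in range(0, cols - win + 1, stride):
            patch = radargram[r:r + win, c:c + win]
            p = score_fn(patch)
            if p >= threshold:
                detections.append((r, c, p))  # top-left corner and score
    return detections
```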

I got more and more suspicious though, and on closer examination of the hard dataset it became clear that a subtle bug in how the images were scaled, together with the presence of the band at the top of every image (caused by the strong direct signal between the Tx and Rx), was masking the difficulty of the problem. Removing the band made this clear. See below one image with the band and two without (one of which contains a mine, which one?).

[Figure: one radargram with the direct-coupling band and two without]

Re-running the models on a small dataset with the bands removed gave an accuracy of around 70%. Still not bad, and it could be improved further with more work. However, at this point we were also reaching the limits of gprMax. We were getting quite a few simulation failures (which had to be automatically detected and removed), and while the images with the band removed were more difficult, they were not necessarily more realistic.
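
For reference, stripping the direct-coupling band (or suppressing it with a mean-trace subtraction, a standard GPR background-removal step) is cheap. A minimal sketch, assuming the radargram is a 2D numpy array with time/depth along the rows and traces along the columns:

```python
import numpy as np

def remove_direct_wave(radargram, n_samples=30):
    """Crudest option: crop the first rows, where the strong direct
    Tx-Rx coupling dominates. n_samples is sensor dependent."""
    return radargram[n_samples:, :]

def background_removal(radargram):
    """Subtract the mean trace from every trace, suppressing anything
    (like the direct wave) that is constant along the scan direction."""
    return radargram - radargram.mean(axis=1, keepdims=True)
```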

Fortunately, around this time a beta of the new, much improved version of gprMax was made available. The new version had native support for generating heterogeneous soils and rough surfaces using a nice mixture model. Spot the difference in the input file definitions:

[Figure: old vs new gprMax input file definitions]

The resulting radargrams also feel more realistic:

[Figure: a radargram simulated with the new gprMax]

And this is where we are currently: rewriting the pipeline to work with the new gprMax version, generating a new set of datasets, and training some new models. If those seem sensible I really look forward to running them on some data we collected with real ordnance at a military test site and seeing where things break.

Future

Overall it has been a really interesting project so far on many levels. This process has taught us a great deal about GPR and its applications and limits. There is however still a long road ahead. So far most of the work has been about beating the pipeline into shape and we have only scratched the surface on the machine learning front:

  • False positives vs false negatives: so far we have treated them equally, which is obviously not the right thing to do in this application (see the sketch after this list).
  • We have focussed on the 2D radargrams, or vertical depth profiles. What we need to transition to is horizontal profiles (“bird’s eye view”) and full 3D volumes. That adds a whole new level of complexity.
  • Improved object localisation and handling of multiple objects (recurrent nets?).
  • Getting the domain expert into the loop, looking into reinforcement learning and active learning.
  • Include more types of ordnance and the effects from weather, vegetation, and terrain.
  • Fuse data with different sensor modalities.
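
On the first point, weighting a missed mine more heavily than a false alarm is a one-line change in most classifiers. A minimal sketch with scikit-learn, where the 10:1 weighting is purely illustrative and would in practice come from the operational cost of each type of error:

```python
from sklearn.svm import SVC

# Penalise missing a mine (class 1) ten times more than raising a false alarm.
# This drops straight into the baseline pipeline shown earlier.
weighted_svm = SVC(kernel="rbf", gamma="scale", class_weight={0: 1.0, 1: 10.0})
```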

Overall I’m particularly interested in a Bayesian and generative approach to the problem and would love to get some experience with the recent developments in probabilistic programming.

Note that you don’t necessarily require highly accurate individual detections in order for a system to be useful. Simply being able to say with high confidence that there is nothing in a particular area is already very valuable.

That takes me back to the most important aspect for me: build something that works and can be used and deployed, however simple, complex, or niche it needs to be. It has to provide a tangible and sustainable benefit; otherwise, why bother?

But one step at a time.

–Dirk
