Advancing Intelligence, Surveillance, and Reconnaissance Applications With Computer Vision

Using Deep Learning to classify airplanes, analyze movements and predict the aircraft’s orientation from satellite imagery.

By Gowdhaman Sadhasivam

There has been explosive growth in the use of geospatial data, particularly imagery, in military intelligence, surveillance, and reconnaissance applications. Satellite imagery can provide a bird's-eye view of an airport or area of interest in a single image. However, that broad coverage comes at the cost of object size: in satellite images, aircraft appear very small and highly similar to one another and to other objects, which makes the imagery challenging to analyze. So how do you unlock the full potential of your imagery, identify aircraft categories, and extract meaningful insights? That's where computer vision and object detection come into the picture.

Objects in satellite images

At Orbital Insight, we have a novel way to solve the aircraft detection problem. In addition to detecting the aircraft, we can also predict the aircraft's orientation in real-world coordinates. We can tell whether the plane is on the runway, ready to fly, or parked for storage.

An example image that shows aircraft angle orientation on a compass rose scale
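Predicting orientation in real-world coordinates ultimately means mapping an image-space heading to a compass bearing. Below is a minimal sketch of that conversion for north-up imagery; the function name and coordinate convention are illustrative assumptions, not our production code:

```python
import math

def heading_to_compass(dx: float, dy: float) -> float:
    """Convert an aircraft nose vector in a north-up image
    (x grows right/east, y grows down/south) to a compass
    bearing in degrees, where 0 = north and 90 = east."""
    # North corresponds to -dy because image y increases downward.
    bearing = math.degrees(math.atan2(dx, -dy))
    return bearing % 360.0
```

For example, an aircraft whose nose points straight up in the image (dx=0, dy=-1) maps to a bearing of 0 degrees, i.e., due north.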

To solve the problem, we carefully curated areas of interest (AOIs) that covered airports, air bases, and known classified regions, thanks to our Solution Engineers, Content team, and Computer Vision team members who helped identify these AOIs.

We encountered real challenges after we identified the AOIs. We curated 1k+ AOIs spread globally across multiple continents and countries, but we could not use all of them during data collection; for some, we could not obtain satellite imagery at all. After dropping the unusable AOIs, we were left with around 800 AOIs, which yielded 100k+ scenes. Each scene provided an average of roughly 100 tiles, i.e., approximately 10 million tiles! We could not use every tile, since producing labels is expensive.

We developed a multi-level campaign approach to filter out tiles that were not useful. We randomly selected ~7k scenes and kept only tiles containing at least one target object. This significantly reduced the data volume, from ~700k tiles in the selected scenes down to ~5k tiles. We also deliberately included scenes with challenging imaging conditions such as snow, rain, ice, haze, low sun angles, and shadows.
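The scene-then-tile filtering step above can be sketched as follows. The data structure and function names are illustrative assumptions, not our actual pipeline:

```python
import random

def sample_useful_tiles(scenes, n_scenes=7000, seed=0):
    """Randomly sample scenes, then keep only tiles that contain
    at least one target object (per preliminary labels).

    `scenes` maps scene_id -> list of (tile_id, num_target_objects).
    """
    rng = random.Random(seed)
    scene_ids = list(scenes)
    picked = rng.sample(scene_ids, min(n_scenes, len(scene_ids)))
    return [
        tile_id
        for sid in picked
        for tile_id, n_objects in scenes[sid]
        if n_objects > 0  # drop empty tiles before labeling
    ]
```

Filtering at the scene level first keeps the sampling cheap; the per-tile object check then discards the large majority of tiles that contain no aircraft.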

We designed the campaign to give us more freedom to solve the problem in different ways. This was one of the hardest lessons we learned in the early days: we never compromised on designing the campaign to collect as many details as possible, but at the same time we had to be mindful of our available resources and project timeline. For example, drawing a tight polygon around an object takes more time than drawing a bounding box. There is always a tradeoff between time and available resources.

The campaign was tightly controlled by our Content team to ensure objects were correctly identified and classified. However, there were scenarios where we could not achieve the desired quality, particularly due to image resolution, object visibility, and weather conditions. For example, the C-5 Galaxy belongs to the Other Large Military category but can look like a Bomber in a satellite image. Similarly, aircraft in the Small Bomber category can look like Small Aircraft.

In this image, Small Bomber objects (blue) are predicted as Small Aircraft (orange), Other Large Military (red), and Fighter (purple) because of how similar the objects look in satellite imagery. In the real world, however, these aircraft serve very different purposes.

This was a multiclass detection problem, and we expected class imbalance in the dataset. We started with a simple baseline model: CenterNet with an Hourglass backbone network, using default hyperparameters. We used the model's predictions to identify noisy labels, and we iterated on this quality-control process until we were satisfied with the ground-truth labels.
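Using a model to surface noisy labels can be as simple as flagging examples where a high-confidence prediction disagrees with the human label, then sending those back for re-review. A minimal sketch, with an illustrative record schema (not Orbital Insight's actual one):

```python
def flag_noisy_labels(examples, conf_threshold=0.9):
    """Flag detections where a high-confidence model prediction
    disagrees with the human label -- candidates for manual
    re-review, not automatic corrections.

    `examples` is a list of dicts with 'label' (human class),
    'pred' (model class), and 'score' (model confidence).
    """
    return [
        ex for ex in examples
        if ex["pred"] != ex["label"] and ex["score"] >= conf_threshold
    ]
```

The confidence threshold keeps reviewers focused on likely labeling mistakes rather than ordinary model errors; repeating the loop after each relabeling pass steadily cleans the ground truth.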

After rigorous validation of the dataset quality, we built a baseline model that was evaluated on the validation split and benchmarked on the test split. We fine-tuned hyperparameters such as learning rate, minimum box overlap, and objectness loss weight, and experimented with common augmentation techniques. Data augmentations such as random crop, random padding, bounding-box jittering, and random scale did not help much. However, horizontal flips combined with random square crops by scaling worked better than the other augmentations.
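The two augmentations that did help can be sketched in terms of how they transform box coordinates and pick a crop window. This is a simplified illustration of the general techniques, not our training code:

```python
import random

def hflip_boxes(boxes, img_w):
    """Horizontally flip [x_min, y_min, x_max, y_max] boxes
    around the vertical center of an image of width img_w."""
    return [[img_w - x2, y1, img_w - x1, y2]
            for x1, y1, x2, y2 in boxes]

def random_square_crop(img_w, img_h, scale_range=(0.5, 1.0), rng=None):
    """Pick a random square crop window whose side is a random
    fraction of the shorter image side (a common 'random square
    crop by scaling' scheme). Returns (x0, y0, side)."""
    rng = rng or random.Random()
    side = int(min(img_w, img_h) * rng.uniform(*scale_range))
    x0 = rng.randint(0, img_w - side)
    y0 = rng.randint(0, img_h - side)
    return x0, y0, side
```

A horizontal flip preserves an aircraft's apparent shape (unlike vertical flips, which invert shadows in sun-lit imagery), and square crops at varying scales expose the detector to aircraft at varying apparent sizes.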

Weather conditions posed a significant challenge to improving model performance on the test set: under different weather conditions, the same object could look like a different object category.

For example, aircraft in harsh sunlight, in shadow, under heavy haze, or in snow often went undetected. This produced more false negatives and lowered the overall model F1 score.
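The standard F1 definition makes the effect of those missed detections concrete: false negatives depress recall, which drags down F1 even when precision is unchanged. A small sketch:

```python
def f1_score(tp, fp, fn):
    """F1 from true-positive, false-positive, and false-negative
    counts: the harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

With precision held fixed, converting 30 detections into weather-induced misses (tp 80 -> 50, fn 10 -> 40) drops recall from 0.89 to 0.56 and pulls F1 down with it.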

In addition, some aircraft types are rare in the real world, so few training examples existed for them. Intraclass variance also exists within each aircraft category: the Bomber category, for example, includes Small Bombers, Large Bombers, and Strategic Bombers. We treated all types of bombers as a single category in the current work.

Sample Predictions on the test dataset

Performance of the model in predicting Large Commercial Aircraft (pink) on images with clouds and haze

Performance of the model in predicting Small Aircraft (orange)

Overall, the benchmarked F1 score of the final model was significantly higher than that of the baseline model. With a mindful selection of AOIs, creative campaign design, SOTA post-processing techniques, continuous model iteration, and rigorous quality-control work, we solved the mystery of understanding aircraft movements on planet Earth!

Interested in learning more about our object detection capabilities? Check out our best-in-class algorithms for automated detection of multiclass ships, aircraft, and vehicles: