In this article, Phil Schneider, Director of Research Development at ACV Auctions, takes us behind the scenes of the first mobile undercarriage vehicle imaging tool.

What is ACV Auctions and what does it do?

ACV Auctions is a company based in Buffalo, New York, that sells used vehicles on an online platform.

Every time you go outside and see a car driving down the road, you're looking at a used car. And each one of those used vehicles will be bought and sold about three times during its lifetime.

ACV facilitates the transaction of used vehicles from dealership to dealership. We have 1,000 inspectors across the United States who go to dealerships every day to inspect vehicles and create what’s called a condition report. We then list the vehicles on our online platform to be sold to other dealerships.

We started this company back in 2015, and right now we have over 2,000 employees across all US states. We work with tens of thousands of dealerships and inspect hundreds of thousands of cars every single year.


Now, why is that important?

Traditionally, used car sales have been done at physical auctions, so when we came up with the idea for this company, we were trying to digitize the vehicle. Creating this concept took a number of different forms. The challenge was to go from physically touching, smelling, and sitting inside a car to gauge its value, to doing all of that digitally.

How can you detect if there's damage? How do you put a make, model, and year on the vehicle? How do you tell its condition? Were there any repairs associated with that vehicle?

To answer these questions, we can use photos. We can also look at the history of the vehicle or the CarFax, but ultimately all of that information gets compiled into one report that's sent out to our customers, who can then make an informed decision about the vehicle's value.

Based on market trends, we then try to put a valuation on the car. We get very good tracking metrics, and we’ve built a strong database around the whole concept of a digital vehicle.

We've also created technology to help with this process because digitizing a car on an online auction is a relatively new concept for the industry, and when we broke into the market space we realized there were some gaps to fill.

Introducing the Virtual Lift

The Virtual Lift is an undercarriage imaging system, meaning that you no longer have to get on your hands and knees or put the car on a lift to see what's underneath it. It gave us a proprietary vantage point that wasn't seen in the industry before.

To put it plainly, the Virtual Lift is a piece of metal with a mirror on it. It's about 35 inches long and three inches tall, and it can sit under any vehicle. And as the vehicle rolls over it, there are two things happening.

One, there's a mirror on the Virtual Lift that's reflecting the underside of the vehicle which gives you that undercarriage reflection.

Two is the camera. Every Virtual Lift comes with a camera that slides into a phone holder and takes pictures of the mirror. So as the vehicle is rolling over the Virtual Lift, that mirror is reflecting the underside of the vehicle and the camera’s taking a series of photos, which are then reconstructed into a single image.

It takes about 2,000 pictures over a roughly 30-second rollover and gives us various resolutions. But the unique part of this is that we now have a new vantage point: a never-before-seen view of the vehicle, and that brings its own data.
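The exact reconstruction pipeline isn't something I'll go into here, but the basic idea is easy to sketch. The snippet below is a minimal illustration rather than our production code: it assumes the rollover frames are saved as numbered JPEGs and simply stacks a thin band from the center of each frame's mirror reflection into one long composite. The directory name, file naming, and strip height are assumptions for the sketch.

```python
import glob

import cv2
import numpy as np

def stitch_undercarriage(frame_dir: str, strip_height: int = 8) -> np.ndarray:
    """Stack the center band of each rollover frame into one long composite.

    Assumes frames are zero-padded, sequentially named JPEGs (frame_0001.jpg, ...)
    and that the mirror reflection sits in the middle of every frame.
    """
    strips = []
    for path in sorted(glob.glob(f"{frame_dir}/frame_*.jpg")):
        frame = cv2.imread(path)
        if frame is None:
            continue
        mid = frame.shape[0] // 2
        # Keep only a thin horizontal band from the center of the mirror reflection.
        strips.append(frame[mid - strip_height // 2 : mid + strip_height // 2, :])
    # Stack the bands top to bottom to approximate the full undercarriage view.
    return np.vstack(strips)

composite = stitch_undercarriage("rollover_frames")
cv2.imwrite("undercarriage_composite.jpg", composite)
```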

A picture is worth a thousand words, but for us, a Virtual Lift photo is worth a million data points. This development was something that the industry really lacked in terms of seeing what's underneath a vehicle. Does it have rust or frame damage? Does it have aftermarket modifications? Did someone trick out this vehicle and add a lot of value to it? Or did they run over a curb and dent every piece of the underside?

With Virtual Lift, we’re able to get that perspective and put a valuation on the vehicle based on it.

What’s really interesting is what we can do with the data, because centered around the Virtual Lift are millions of pictures of vehicle undercarriages. These vehicles are all different shapes, sizes, dimensions, and conditions. All in all, there are tens of thousands of different vehicles out there for which we have amassed undercarriage photos.

So what do we do with these?


Applying Virtual Lift to undercarriage inspections

Buying a car online is a foreign concept to the average buyer. Most buyers want to go and sit behind the wheel, take the car for a test drive, and understand the ins and outs of it before they purchase it. My argument is that you don't need to do that these days.

You can put in a request online for the specific type of car you want and you’ll get an email with 30 different available cars that fit your criteria. But how do you know what the right car is? And how do you know if that car truly is represented well?

We've sold many cars in the past that have had missing catalytic converters. There’s a huge problem with catalytic converter thefts because they have precious metals inside of them that are easily taken and resold on the aftermarket. Each catalytic converter can cost $2,000-$3,000.

If you get your catalytic converter stolen, you've just lost value on your car. Fortunately, the Virtual Lift can now tell you if there's a catalytic converter there or not.

We can also detect if you have a rear differential leak or a mechanical failure happening underneath.

But the application I really want to focus on is rust detection. Rust can take so many different forms, and it’s a fun problem from a computer vision standpoint. And I think when you put all the applications together, you start to get an accurate representation of the undercarriage of a vehicle.

Understanding the different types of rust

I’ve learned more about rust than I've ever wanted to in my lifetime. I'm not particularly interested in cars, but what I am interested in is the science and technology behind them. And so when I looked at these problems, I started to pick out different areas where we could apply novel computer vision techniques and machine learning algorithms to solve some industry-based problems.

But what is rust? If you have rust on a vehicle, you basically have oxidation and corrosion of that metal. But not all rust is bad and not all rust is created equal.

Penetrating rust is rust that makes a hole in the frame of the vehicle and affects its integrity. The other type of rust is surface rust. This is cosmetic and doesn’t affect the structural integrity of the vehicle. When you compare these two different types of rust, one will result in a much different valuation or sale price than the other.

So how do we train a machine learning model or a computer to understand what type of rust it is? It comes down to classification, as well as identifying the location of that rust.

Using image enhancement for model improvement

When I first started looking at the Virtual Lift images, I couldn't tell you the difference between a rusted hole and an intentional hole. Some of the rust could be dimmed out in photos because of poor lighting. It could be a gray day out or it could be raining or snowing, so not every photo is going to be high quality.

Therefore, we implemented a number of image enhancement techniques to try and get a better identification of rust. The three images below show some very trivial image post-processing techniques we can leverage to make the rust stand out a bit more for the model to detect.

Image enhancement for model improvement
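To make that concrete, here's a minimal sketch of the kind of trivial post-processing involved, using OpenCV: contrast-limited histogram equalization (CLAHE) on the lightness channel plus a mild gamma lift. The specific parameters and file names are illustrative choices, not our production settings.

```python
import cv2
import numpy as np

def enhance_undercarriage(image: np.ndarray, gamma: float = 0.8) -> np.ndarray:
    """Boost local contrast and lift shadows so rust patches stand out more."""
    # Equalize only the lightness channel so colors aren't distorted.
    l, a, b = cv2.split(cv2.cvtColor(image, cv2.COLOR_BGR2LAB))
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

    # Gamma < 1 brightens dark regions, which is where rust tends to hide.
    table = np.array([(i / 255.0) ** gamma * 255 for i in range(256)], dtype=np.uint8)
    return cv2.LUT(enhanced, table)

enhanced = enhance_undercarriage(cv2.imread("undercarriage_composite.jpg"))
cv2.imwrite("undercarriage_enhanced.jpg", enhanced)
```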

But how do we identify how much rust is on a vehicle and what type of rust it is?

With this image post-processing technique, you take a picture of the undercarriage of the vehicle using the Virtual Lift, and then apply a series of filters.

In the pictures below, you'll see that after a little bit of filtering, you’ll be able to pick up more rust. That's going to be a critical pathway to eventually pushing our machine learning model's accuracy a little higher.

Not only are you enhancing the image, but you’re making it easier for someone inspecting the vehicle to see where that rust is along with those major pain points.

Before and after


Before and after image 2
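To give a rough sense of how an enhanced image turns into a "how much rust" number, here's a simple color-threshold sketch. The HSV range is an illustrative guess at reddish-brown oxidation, and the file name carries over from the earlier sketch; neither reflects our actual thresholds.

```python
import cv2
import numpy as np

def rust_coverage(image: np.ndarray) -> float:
    """Return the fraction of pixels whose color falls in a rough rust range."""
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    # Reddish-brown hues with moderate saturation -- an illustrative range only.
    mask = cv2.inRange(hsv, np.array([0, 60, 40]), np.array([25, 255, 200]))
    # Open the mask so isolated speckle pixels don't inflate the estimate.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return float(np.count_nonzero(mask)) / mask.size

image = cv2.imread("undercarriage_enhanced.jpg")
print(f"Estimated rust coverage: {rust_coverage(image):.1%}")
```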

Results gained from image enhancement

We created a rust computer vision detection model to automate the process. Our inspectors in the field are cranking through tens of thousands of cars every single month, so there's no way they can do manual inspections. As such, we developed an automated solution that utilizes basic computer vision techniques to identify the rust for them and help them figure out if the rust is severe, and how it affects the value.

After applying some of the post-processing techniques, we found that we were three times more likely to catch rust on the vehicle. In the industry, arbitration is when a car is misrepresented and has to be returned to us. We found that cars are more likely to get arbitrated because of that rust.

We curated a dataset of over a million Virtual Lift images covering all different types of rust, manually labeled by our inspectors as they do the data collection. We were then able to use it to train our computer vision models and increase our true positive rates.
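The production models themselves aren't something I can paste here, but the general recipe of training a classifier on labeled patches is standard. Below is a minimal sketch with PyTorch and torchvision, assuming the inspector labels have been exported into an ImageFolder layout with none, surface_rust, and penetrating_rust classes; the directory names and hyperparameters are my own placeholders, not ACV's.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed layout: rust_patches/train/{none,surface_rust,penetrating_rust}/*.jpg
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("rust_patches/train", transform=transform)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

# Fine-tune a pretrained backbone on the rust classes from the labeled patches.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in train_dl:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```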

Leveraging the power of AI in rust detection

The image of the rocker below is color-coded based on both the quantity and severity of rust. Green indicates severe rust, blue indicates surface rust, and red indicates something in the middle: a high-density area of rust. This lets us create a heat map to determine where that rust is located and better identify where we should inspect that vehicle.

Pictures of rust in undercarriage
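As an illustration of the color-coding idea, the sketch below paints per-tile rust predictions onto the image as a translucent overlay. The predict_region heuristic stands in for the real per-region model, and the tile size, thresholds, colors, and file names are placeholders of mine.

```python
import cv2
import numpy as np

# Illustrative colors per predicted class (BGR), mirroring the scheme above:
# green for severe rust, blue for surface rust, red for a dense rust area.
CLASS_COLORS = {"severe": (0, 255, 0), "surface": (255, 0, 0), "dense": (0, 0, 255)}

def predict_region(tile: np.ndarray):
    """Placeholder for the real per-region model; here, a crude color heuristic."""
    hsv = cv2.cvtColor(tile, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 60, 40]), np.array([25, 255, 200]))
    rust_frac = np.count_nonzero(mask) / mask.size
    if rust_frac > 0.30:
        return "severe"
    if rust_frac > 0.15:
        return "dense"
    if rust_frac > 0.05:
        return "surface"
    return None

def rust_heatmap(image: np.ndarray, tile_size: int = 64) -> np.ndarray:
    """Color-code tiles of the image by predicted rust class and blend them back."""
    overlay = image.copy()
    for y in range(0, image.shape[0], tile_size):
        for x in range(0, image.shape[1], tile_size):
            label = predict_region(image[y:y + tile_size, x:x + tile_size])
            if label is not None:
                cv2.rectangle(overlay, (x, y), (x + tile_size, y + tile_size),
                              CLASS_COLORS[label], thickness=-1)
    # Blend the colored tiles onto the original image as a translucent heat map.
    return cv2.addWeighted(overlay, 0.35, image, 0.65, 0)

heatmap = rust_heatmap(cv2.imread("undercarriage_enhanced.jpg"))
cv2.imwrite("rust_heatmap.jpg", heatmap)
```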

User error is a major factor here, just like with any computer vision model. There are a number of papers for industrial machinery applications that say they can capture the perfect image, throw a rust detection model on it, and pull out where that part’s going to fail because they can tell the rust is going to degrade its mechanical integrity.

But that's only if you have a good photo. If you've done any type of computer vision or machine learning, you’ll know that data is massively important, as well as the quality of that data.

We needed to create different models to handle different types of issues seen in the field. When we deployed the Virtual Lift, we gave it a basic rust detection model, but we quickly found there were a number of false positives due to bad imaging. And so we created a number of computer vision models to classify what's going wrong with the picture.

Is there glare? Is the reflective coat or the reflective surface of the Virtual Lift scratched? And so how do we give that feedback to the user? How do we make a better dataset?

Seven models were put into production, and all of a sudden the data became a lot clearer.
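I won't reproduce those seven models here, but a toy example of the kind of image-quality gate involved is a glare check: if too many pixels are blown out, the frame never reaches the rust model and the user is asked to retake it. The threshold and file path below are purely illustrative.

```python
import cv2
import numpy as np

def glare_fraction(image: np.ndarray) -> float:
    """Fraction of pixels that are essentially blown out (near-white)."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return float(np.count_nonzero(gray > 245)) / gray.size

def usable_for_rust_detection(image: np.ndarray, max_glare: float = 0.05) -> bool:
    """Gate frames before inference; glare-heavy frames drive false positives."""
    return glare_fraction(image) <= max_glare

frame = cv2.imread("rollover_frames/frame_0001.jpg")
if not usable_for_rust_detection(frame):
    print("Retake needed: too much glare for a reliable rust prediction.")
```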

Looking at the whole picture with image data fusion and CR data

From a high-level standpoint, we could really understand the amount of rust on a vehicle. But rust only gets us so far. You can't condition a vehicle, nor can you tell if a machine is going to break down, from one single component. You have to look at the whole picture.

A car could have a clean undercarriage with no rust, scratches, or dents, but it could have another problem. For example, the Chevy Silverado is notorious for having one specific spot of rust: the front right rocker. This affects the value of the vehicle and affects how we can condition it to resell on our platform.

So we try to find trends within the whole data. We create a condition report, a holistic view of what that car looks like. When you look at the holistic view, you may see that a 10-year-old car has more rust than, say, a five-year-old car. And that make of car may share commonalities, historical trends, or patterns with other types of vehicles. Looking at this data alone, it looks very fragmented.

If you were just going to look at the written report of how an inspector would condition a vehicle, you’d identify some groupings if you knew all the cars were Chevy Silverados. But there's nothing tying them together based on imagery or any intelligence. As a result, you don't see any great trends or patterns.

But if you can fuse the data from that inspector with our computer vision model, you start to see a much closer grouping of vehicles on our platform.

You get much better trends and you start to see a better pattern when we take in what our rust detection model found and what our condition report said. Maybe that condition report said there was no rust on it, but it also said it was a 2015 Chevy Silverado, so it automatically got grouped a bit closer. Then when you combine this with our sensor data, it pushes it over the edge even more.

We were able to get much higher accuracy when looking at the metadata or the condition report of the vehicle and pairing it with our computer vision.
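To make the fusion idea concrete, here's a toy sketch that joins condition report metadata with the rust model's score into a single feature table and clusters it. The column names, the made-up rows, and the choice of k-means are mine, purely to show how the two sources combine.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Toy rows: condition report (CR) metadata joined with the CV model's rust score.
vehicles = pd.DataFrame({
    "vin":           ["A1", "B2", "C3", "D4"],
    "model_year":    [2015, 2015, 2019, 2012],
    "cr_rust_noted": [0, 0, 1, 1],               # inspector's written report
    "cv_rust_score": [0.72, 0.65, 0.10, 0.88],   # rust detection model output
})

# Fuse the CR metadata with the image model's output into one feature matrix.
features = StandardScaler().fit_transform(
    vehicles[["model_year", "cr_rust_noted", "cv_rust_score"]]
)
vehicles["group"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(vehicles)
```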


So, what's so fancy about a model that can detect rust?

The answer is ‘nothing’.

We used many basic rust detection models taken from whitepapers and tweaked them for the automotive application. But at the end of the day, we took a mirror and a phone and added a very simplistic algorithm to it.

Cars have such complex mechanical structures. There are multiple failure points, scratches, dents, aftermarket modifications, rust, and missing components. All of these are manually inspected in an industry that’s very technologically antiquated.

At ACV, we're changing that. We’re adding that level of intelligence to better shape how we inspect vehicles. And we're just scratching the surface. I'd like to say we're the first ones here, but actually, we're just the first ones starting to open the curtains on this. There's a tonne of research left to do in a tonne of different applications that a lot of people can pick up when they have the data.