Identifying screws, a practical case study for visual search


In the middle of a home improvement project, you realize that you’ve lost a very important screw. You have a matching screw in your hand and simply need another one, but a trip to the local hardware store is a hassle: many of the screws look alike, and the staff there isn’t always helpful. Traditional approaches to matching these kinds of parts are slow and frustrating, usually involving a visual comparison of your item against a huge catalog of choices. Wouldn’t it be nice if you could just point your smartphone at the screw you need and have the matching product show up online?

Grid Dynamics partnered with one of our customers, a large retailer that sells hardware directly to consumers and contractors. Seeing an opportunity to improve this part of the buying process, the customer asked us to find a way to use computer vision to make this laborious visual matching process quicker and easier.

In this article, we walk through how we developed a visual search application using deep learning techniques, and discuss both the approaches that worked and those that didn’t.

Previous convolutional neural network examples

Our earlier blog post, “Building a reverse image search engine with convolutional neural networks”, described a typical visual search solution similar to this one. The neural network was trained to map input images into “feature vectors”, which allowed us to search for similar images in the so-called “feature space”. This approach produced great results for visual recommendations, but it did not guarantee an exact match. Adjusting hyperparameters such as thresholds and distance metrics improved the results, yet the perfect match remained elusive. For this application, we therefore took a classification approach: we identified all of the key visual features of the image before searching the catalog for similar products.
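To make the feature-space search concrete, here is a minimal sketch (not code from the original project) of a nearest-neighbor lookup over precomputed embedding vectors, using cosine similarity in NumPy:

```python
import numpy as np

def nearest_neighbors(query_vec, catalog_vecs, k=5):
    """Return indices of the k catalog vectors closest to the query.

    Uses cosine similarity over L2-normalized embeddings; catalog_vecs
    is an (N, D) matrix of feature vectors produced by the ConvNet.
    """
    q = query_vec / np.linalg.norm(query_vec)
    c = catalog_vecs / np.linalg.norm(catalog_vecs, axis=1, keepdims=True)
    sims = c @ q                      # cosine similarity to each catalog item
    return np.argsort(-sims)[:k]      # indices of the k most similar items
```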

Identifying features of a screw

Important features for screws


Screws are visually very similar to one another, and therefore hard to distinguish. We needed to understand which characteristics make each screw type unique, and define the minimum feature set that distinguishes it. After investigating screw taxonomy, we chose seven key visual attributes that fully identify a screw:

Screw features

The difficulty with identifying screws is that there are about 1,500 unique screw types, and a particular screw can often be distinguished only by a single feature. To ensure high-quality results, an input image passes through multiple processing stages before the catalog is searched: we had to localize and segment the object, determine its dimensions, and extract its visual attributes.

High-level processing workflow

Localization – Finding the object in the photo

First, we must locate the object of interest, in this case the screw, in the photo. Because the user supplies the photo, we have no control over the background or the object’s location. Ideally, we want to select the minimum area of the photo that contains the object of interest and ignore the rest of the image. To solve this problem we tried two approaches: object detection and semantic segmentation.

The object detection approach to localization

Object detection models localize objects of a particular class and find the minimum rectangular area around each one. Such models work in multiple steps: they propose candidate regions, extract features by running a convolutional neural network (ConvNet) on each region, and finally classify those regions while tightening the bounding box around the object. Let’s review some of the pros and cons of object detection models (a minimal example with a pre-trained detector follows the list below):

Pros:

  • Detects multiple objects in an image
  • Detects different classes in an image
  • There are a lot of pre-trained models for different frameworks

Cons:

  • The bounding box around an object still contains background imagery and shadows, which hurts the accuracy of the downstream dimensions algorithm
  • Keeping the bounding boxes tight while avoiding false negatives is non-trivial for real images
  • Maintaining high recall while avoiding false positives sometimes failed on real-world photos
  • Object detection models are computationally heavy, and different architectures make vastly different precision, recall, and performance tradeoffs. Since this is a real-time application, model performance matters
  • Training requires a diverse labeled dataset covering the different types of screws, with bounding boxes
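For illustration, an object detection pipeline can be run with an off-the-shelf pre-trained detector. The sketch below uses torchvision’s Faster R-CNN, which is an assumption for illustration only, not the detector evaluated in this project:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Off-the-shelf detector pre-trained on COCO; illustrative only, not the
# model evaluated in the project.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = to_tensor(Image.open("screw_photo.jpg").convert("RGB"))  # hypothetical path
with torch.no_grad():
    pred = model([image])[0]       # dict with 'boxes', 'labels', 'scores'

keep = pred["scores"] > 0.8        # confidence threshold
boxes = pred["boxes"][keep]        # each box is (x1, y1, x2, y2) in pixels
```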

The semantic segmentation approach to localization

Given these limitations, we abandoned object detection nets and instead considered the semantic segmentation approach (see U-Net [6]), which worked much better for our application. A U-Net model classifies each pixel as belonging to the object of interest or not, and it supports multiple object classes at once by adding output channels. The main advantage of the U-Net architecture is that it relies heavily on data augmentation, which allows it to train on a small dataset and still generalize well. The U-Net model produced a very accurate mask, avoiding shadows and various background distortions. The approach has one drawback, however: it cannot differentiate between instances of the same object class in an image. This was a limitation we needed to overcome, so we added custom post-processing of the masks to separate instances of the same class (sketched below).

U-Net output
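The post-processing we added can be as simple as connected-component analysis on the binary mask. The sketch below, a minimal version assuming a single-class uint8 mask, separates instances with OpenCV:

```python
import cv2
import numpy as np

def split_instances(mask, min_area=500):
    """Separate instances of the same class in a binary U-Net mask.

    mask: uint8 array where 255 marks 'screw' pixels. Connected-component
    analysis is one simple way to post-process a semantic mask; the
    min_area threshold filters out speckle noise.
    """
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    instances = []
    for i in range(1, num):                      # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            instances.append((labels == i).astype(np.uint8) * 255)
    return instances
```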

Choosing a classification technology

Since 2014, ConvNets have shown great results in image classification across thousands of classes, handling all kinds of viewpoints and distortions. Our task was much simpler: we did not have thousands of classes, and we had already segmented the object so that background and shadows did not interfere. We used catalog images and their attributes to train relatively simple ConvNet models, one to predict each of the key attributes.

Training deep models to classify screw features

We chose the MobileNetV2 Keras model for all of the classification features except color, using transfer learning from models pre-trained with ImageNet weights and fine-tuning the last few layers of the ConvNet for each extracted feature. The average accuracy across all models was about 94%. To learn more about convolutional neural networks, see references [1, 2, 3, 4].

Keras pre-trained models (https://keras.io/applications/)
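A minimal sketch of this transfer-learning setup in Keras is shown below. The classification head, the dropout rate, the class count, and the number of unfrozen layers are illustrative assumptions, not the exact configuration used in the project:

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.optimizers import Adam

NUM_CLASSES = 9  # hypothetical number of classes for one attribute model

# Start from ImageNet weights and replace the classification head.
base = MobileNetV2(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the convolutional features first

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=Adam(1e-3), loss="categorical_crossentropy",
              metrics=["accuracy"])
# ... train the head on catalog images, then unfreeze the final layers ...

base.trainable = True
for layer in base.layers[:-20]:  # fine-tune only the last few layers
    layer.trainable = False
model.compile(optimizer=Adam(1e-5), loss="categorical_crossentropy",
              metrics=["accuracy"])
```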

Classifying the screw’s finish

A screw’s finish can be classified by its color. Before applying deep learning classification, we tried a few simple algorithmic approaches as a baseline.

We ignored shape-related features and focused only on color, experimenting with different algorithms to establish a baseline. First, we defined a set of ground truth colors, extracted a color histogram from the image, and searched for the closest ground truth color by calculating the intersection of histograms. Another approach was to map object colors into hue-based and LAB color spaces and compute the distance between each ground truth point and the object color. The problem is that two colors that look identical to a human, differing only slightly in tone, may have radically different values in either representation, which makes these distance-based approaches unreliable.
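A minimal version of the histogram-intersection baseline, assuming an OpenCV BGR image and an optional object mask, looks like this:

```python
import cv2
import numpy as np

def hue_histogram(bgr_image, mask=None, bins=32):
    """Normalized hue histogram of the (masked) object region."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], mask, [bins], [0, 180])  # hue channel
    return cv2.normalize(hist, hist, alpha=1.0, norm_type=cv2.NORM_L1).flatten()

def closest_color(obj_hist, ground_truth_hists):
    """Pick the ground-truth color whose histogram intersects the most."""
    scores = {name: cv2.compareHist(obj_hist, h, cv2.HISTCMP_INTERSECT)
              for name, h in ground_truth_hists.items()}
    return max(scores, key=scores.get)
```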

Turning to deep learning, we used a CNN with only a few convolutional layers to avoid learning shape-related features, focusing the network’s attention on color while remaining robust to lighting conditions. We drew our inspiration from “Vehicle Color Recognition using Convolutional Neural Network”, an in-depth research article on the subject. We implemented a similar architecture with some simplifications, and achieved 93% accuracy across 9 color classes. We also used the imgaug library for data augmentation; it integrates seamlessly and gave us a very good boost in accuracy.
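Below is a sketch of such a shallow color network together with an imgaug augmentation pipeline. The layer sizes, input resolution, and choice of augmenters are illustrative assumptions rather than the exact architecture from the paper or our production model:

```python
import imgaug.augmenters as iaa
from tensorflow.keras import layers, models

# Shallow ConvNet: few convolutional layers so the network keys on color
# statistics rather than shape. All sizes here are illustrative.
color_model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(9, activation="softmax"),  # 9 finish/color classes
])
color_model.compile(optimizer="adam", loss="categorical_crossentropy",
                    metrics=["accuracy"])

# imgaug pipeline: vary lighting so the model learns to ignore it.
augmenter = iaa.Sequential([
    iaa.Fliplr(0.5),                  # horizontal flips
    iaa.Multiply((0.7, 1.3)),         # brightness jitter
    iaa.GaussianBlur(sigma=(0, 1.0)), # mild blur
])
# augmented_batch = augmenter(images=batch)  # batch: uint8 NHWC array
```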

Extracting metrical feature values and dimensions

Aside from the classification of visual features, there are metrical features such as length, major diameter, and thread pitch. For each of these, we used a different image processing technique. Before extracting the metrical features, we had to do some preprocessing, which included these steps:

  1. Segment the image to get the best possible mask, minimizing background interference, and extract the object contour from it
  2. Orient the screw horizontally by fitting a line or an ellipse to the contour points; the correct rotation angle was the one that produced the maximum contour area
  3. Find the scale factor for the dimensions. A 2D image carries no information about the distance between the camera and the object, so we used a reference object of known size (in this case, a quarter coin)
  4. Partition the image into a head and a body by finding the convex hull of the contour. Using the convexity defects (the points of maximum distance between the contour and its convex hull), we found two points on opposite sides of the body where it meets the head, then split the image at the maximum x-coordinate of these two points (a sketch of this step follows the figure below).
Output of the image processing algorithms
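Step 4 can be sketched with OpenCV’s convex hull and convexity-defect functions. This is a simplified illustration that assumes a clean, horizontally oriented, single-screw mask:

```python
import cv2
import numpy as np

def split_head_and_body(mask):
    """Split a horizontally oriented screw mask into two parts at the
    head/body junction, using the two deepest convexity defects."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(contour, returnPoints=False)
    defects = cv2.convexityDefects(contour, hull)
    # Each defect row is (start_idx, end_idx, farthest_idx, depth).
    deepest = sorted(defects[:, 0], key=lambda d: d[3], reverse=True)[:2]
    xs = [int(contour[d[2]][0][0]) for d in deepest]  # x of each far point
    split_x = max(xs)
    # Which side is the head depends on the screw's orientation.
    return mask[:, :split_x], mask[:, split_x:]
```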

Once these steps were complete, we could calculate the major diameter, total length and screw length.

The thread pitch was more difficult to extract. The best approach was to take a Fourier transform of the thread profile and derive the pitch from the resulting spectrum. To improve spectrum quality, we applied a Hann window, and averaged the per-row discrete Fourier transforms by column. The final step was to find the distance between the central peak and the closest non-central peak in the spectrum.
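A minimal NumPy sketch of this spectral pitch estimation, assuming a grayscale crop of the threaded body, is shown below; the returned pitch is in pixels and still needs to be multiplied by the scale factor from the preprocessing step:

```python
import numpy as np

def thread_pitch_pixels(body_gray):
    """Estimate the thread pitch, in pixels, from a grayscale crop of the
    screw body: window each row, average per-row FFT magnitudes by column,
    then locate the strongest non-central peak."""
    rows, cols = body_gray.shape
    window = np.hanning(cols)                         # Hann window per row
    spectra = np.abs(np.fft.rfft(body_gray * window, axis=1))
    spectrum = spectra.mean(axis=0)                   # average by column
    spectrum[:3] = 0                                  # suppress the central (DC) peak
    peak = int(np.argmax(spectrum))                   # dominant thread frequency
    return cols / peak                                # pixels per thread cycle
```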

After all of these steps were complete, we had predicted values for all of the screw’s features. The last step was to find the target product based on these predictions.


Model ensemble: the final prediction

The final step was to find the products that matched the predicted features. In an ideal case, with every feature predicted correctly, we could find the unique matching product simply by filtering the catalog. Feature predictions are not perfect, however, and strict filtering can easily return incorrect results or no results at all. A simple attribute-scoring fallback is sketched after the sample catalog data below.

| Model number | Head type  | Material    | Length, width, pitch  | Thread coverage         | Tip type |
|--------------|------------|-------------|-----------------------|-------------------------|----------|
| #146258      | oval       | zinc        | #10 x 3/4 in          | fully_threaded          | cone     |
| #496728      | flat       | zinc        | #8 – 32 x 2 in        | fully_threaded          | die      |
| #529714      | pan        | brass       | #6 – 32 x 1/2 in      | fully_threaded          | die      |
| #128032      | one way    | zinc plated | #12 x 1 – 1/2 in      | fully_threaded          | cone     |
| #103972      | pan        | zinc plated | 1/4 in – 20 x 3/4 in  | fully_threaded          | die      |
| #104071      | pan        | zinc plated | #10 – 24 x 3/4 in     | fully_threaded          | die      |
| #916468      | pan        | stainless   | #12 x 5/8 in          | fully_threaded          | cone     |
| #194109      | type p     | zinc plated | 1/4 in – 20 x 1/2 in  | fully_threaded          | die      |
| #129586      | one way    | zinc plated | #8 x 1 in             | fully_threaded          | cone     |
| #194383      | type p     | zinc plated | #10 – 24 x 1/2 in     | fully_threaded          | die      |
| #386687      | round      | zinc        | #10 x 1 – 1/2 in      | threaded_on_one_end     | die      |
| #841215      | oval       | zinc        | #6 x 1 in             | fully_threaded          | cone     |
| #464339      | screw eyes | stainless   | #212                  | threaded_on_one_end     | cone     |
| #302180      | dowel      | zinc        | 5/32 in x 1 – 1/4 in  | threaded_on_both_ends   | cone     |
| #40921       | flat       | zinc        | #8 x 1 in             | threaded_on_one_end     | cone     |

Sample of the screws with attributes
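Since strict filtering fails when even one attribute is mispredicted, one reasonable fallback (an illustrative assumption, not necessarily the production logic) is to rank products by how many predicted attributes they match:

```python
def best_matches(predicted, catalog, top_k=3):
    """Rank catalog products by how many predicted attributes they match.

    predicted: dict of attribute -> predicted value
    catalog:   list of dicts, one per product, with the same attribute keys
    Strict filtering returns nothing when a single attribute is wrong;
    scoring degrades gracefully instead.
    """
    def score(product):
        return sum(product.get(attr) == value for attr, value in predicted.items())
    return sorted(catalog, key=score, reverse=True)[:top_k]
```

Attribute-level confidence scores from the individual classifiers could also be used as weights in such a score.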

Conclusion

A great application is one that assists users with a process that is unwieldy for humans, and screw identification certainly falls into this category. A user can easily get frustrated identifying an item like a screw, yet finding the right one is essential. The visual search techniques described here, built on deep learning, can be used for many applications where there is a large variety of similar-looking items. We hope this example of a visual search application illustrates the potential of an efficient and fun way to assist users in their e-commerce experience.

Grid Dynamics specializes in visual search and deep learning solutions. We can make your product catalog manageable for your online customers. Contact us at (650) 523-5000 to speak to a Grid Dynamics Representative.

References

  1. A Beginner’s Guide to Understanding Convolutional Neural Networks
  2. Understanding Convolution in Deep Learning
  3. CS231n: Convolutional Neural Networks for Visual Recognition
  4. Fine-tuning in Keras
  5. Keras Applications
  6. U-Net: Convolutional Networks for Biomedical Image Segmentation
