Fruit Classification using Colorized Depth Images
Abstract
Fruit classification is a computer vision task that aims to correctly identify the class of a fruit in a given image. Nearly all fruit classification studies have used RGB color images as inputs, a few have used costly hyperspectral images, and a few classical machine learning-based studies have used colorized depth images. Depth images offer clear benefits such as invariance to lighting, lower storage requirements, better foreground-background separation, and more pronounced curvature details and object edge discontinuities. However, the use of depth images in CNN-based fruit classification remains unexplored. The purpose of this study is to investigate the use of colorized depth images in fruit classification with four CNN models, namely AlexNet, GoogleNet, ResNet101, and VGG16, and to compare their performance and computational efficiency, as well as the impact of transfer learning. Depth images of apple, orange, mango, banana, and rambutan (Nephelium lappaceum) were manually collected using a depth sensor with sub-millimeter accuracy and subjected to jet, uniform, and inverse colorization to produce three datasets. Results show that depth images can be used to train CNN models for fruit classification, with ResNet101 achieving the best accuracy of 96% on the inverse dataset and reaching 100% accuracy after transfer learning. GoogleNet showed the largest improvement from transfer learning, 12.27% on the uniform dataset, and also exhibited the lowest training and inference times. These results demonstrate the potential of depth images for fruit classification and similar computer vision tasks.
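The abstract describes converting single-channel depth maps into three-channel color images via jet, uniform, and inverse colorization before CNN training. The following is a minimal sketch of such a step, assuming an OpenCV jet colormap over normalized depth, histogram equalization for the "uniform" variant, and a reciprocal-depth mapping for the "inverse" variant; the paper's exact colorization definitions are not given in the abstract, so these mappings and the file names are illustrative assumptions.

import cv2
import numpy as np

def colorize_depth(depth, mode="jet"):
    # Map a single-channel depth image to a 3-channel color image.
    d = depth.astype(np.float32)
    valid = d > 0                                                # treat zero as missing depth
    if mode == "inverse":
        # reciprocal depth: nearer pixels receive larger values (assumed interpretation)
        d = np.where(valid, 1.0 / np.maximum(d, 1e-6), 0.0)
    d_min, d_max = d[valid].min(), d[valid].max()
    scaled = np.zeros(d.shape, dtype=np.uint8)
    scaled[valid] = np.clip(
        255.0 * (d[valid] - d_min) / (d_max - d_min + 1e-6), 0, 255
    ).astype(np.uint8)
    if mode == "uniform":
        # spread colors uniformly over the observed depth range (assumed interpretation)
        scaled = cv2.equalizeHist(scaled)
    return cv2.applyColorMap(scaled, cv2.COLORMAP_JET)

depth = cv2.imread("fruit_depth.png", cv2.IMREAD_UNCHANGED)     # hypothetical 16-bit depth capture
colorized = colorize_depth(depth, mode="jet")
cv2.imwrite("fruit_depth_jet.png", colorized)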
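The transfer learning step reported in the abstract could, in outline, resemble the sketch below, which fine-tunes an ImageNet-pretrained ResNet101 from torchvision on five fruit classes. The optimizer, learning rate, and stand-in random data are assumptions for illustration only, not the paper's reported setup.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

# Load ResNet101 with ImageNet weights and replace the classifier head
# for the five fruit classes (apple, orange, mango, banana, rambutan).
model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 5)

# Stand-in data: colorized depth images would normally be loaded from disk;
# random tensors are used here only so the sketch runs end to end.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 5, (8,))
loader = DataLoader(TensorDataset(images, labels), batch_size=4)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for x, y in loader:
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()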
Author keywords
CNN; depth colorization; depth image; fruit classification; transfer learning