SUMMARY
This paper introduces a method for classifying Japanese Sign Language (JSL) words that combines a gathered image generation technique with a convolutional neural network (CNN). The gathered image generation method builds gathered images from mean images: for each block, the maximum difference value between the mean image and the JSL motion images is calculated, and the gathered image comprises the blocks with these calculated maximum difference values. After the information from all images of a word has been gathered into a single image, a CNN extracts features for classifying the JSL words. A multi-class support vector machine and a multilayer perceptron are then used to classify words related to greetings and enquiries. The mean and standard deviation of the proposed method’s recognition accuracy are experimentally shown to be 94.1% and 1.6%, respectively. These results suggest that the combined gathered image generation and CNN approach obtains sufficient information to classify the 20 sample JSL words.
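The block-wise gathering step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the block size, the grayscale representation, and the rule of copying, for each block position, the frame block whose absolute difference from the mean image is largest are all assumptions made here for concreteness.

```python
import numpy as np

def generate_gathered_image(frames, block_size=8):
    """Illustrative sketch: gather motion information from a sequence of
    frames into a single image by selecting, per block, the frame block
    that differs most from the mean image (assumed selection rule)."""
    frames = np.asarray(frames, dtype=np.float64)  # (num_frames, h, w)
    mean_image = frames.mean(axis=0)
    h, w = mean_image.shape
    gathered = np.zeros_like(mean_image)
    for y in range(0, h, block_size):
        for x in range(0, w, block_size):
            ys, xs = slice(y, y + block_size), slice(x, x + block_size)
            mean_block = mean_image[ys, xs]
            # maximum absolute difference of each frame's block from the mean
            diffs = [np.abs(f[ys, xs] - mean_block).max() for f in frames]
            best = int(np.argmax(diffs))  # frame holding the largest difference
            gathered[ys, xs] = frames[best][ys, xs]
    return gathered
```

The resulting single image per word can then be fed to a CNN feature extractor, with an SVM or MLP as the final classifier, mirroring the pipeline described in the summary.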