| Title | Machine Vision Inspection Systems, Machine Learning-Based Approaches |
|---|---|
| Author | Group of authors |
| Genre | Software |
| ISBN | 9781119786108 |
The main contributions of the study are:
- Propose a novel capsule-based Siamese network architecture to perform one-shot learning;
- Improve the energy function of the Siamese network to capture the complex information output by capsules;
- Evaluate and analyze the performance of the model in identifying previously unseen characters;
- Extend the Omniglot dataset by adding new characters from the Sinhala language.
The chapter is structured as follows. Section 2.2 explores related learning techniques. Section 2.3 describes the design and implementation aspects of the proposed solution for the capsule layers-based Siamese network. Section 2.4 evaluates the methodology through several experiments and analyzes the results. Section 2.5 discusses the contribution of the proposed solution in relation to existing studies and concludes the chapter.
2.2 Background Study
2.2.1 Convolutional Neural Networks
Convolutional neural networks (CNNs) have been widely used in computer vision research and applications [12] due to their ability to process large amounts of data and extract meaningful, powerful representations from it [13–15]. Before the era of CNNs, computer vision tasks largely relied on handcrafted features and mathematical modeling; a large number of applications rely on features such as Gabor wavelets [16–18], fractal dimensions [19–21], and symmetric axis chords [22].
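To make the role of a CNN as a feature extractor concrete, the following is a minimal sketch, assuming TensorFlow/Keras; the layer sizes are illustrative only and are not taken from any of the cited studies (the 105 × 105 input matches the Omniglot image size discussed later).

```python
# Minimal sketch of a small convolutional encoder that maps an input
# image to a fixed-length embedding (illustrative sizes, Keras assumed).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_encoder(input_shape=(105, 105, 1), embedding_dim=64):
    """Stack of convolution + pooling blocks ending in a dense embedding."""
    inputs = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    outputs = layers.Dense(embedding_dim, activation="relu")(x)
    return models.Model(inputs, outputs, name="cnn_encoder")
```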
However, when it comes to handwritten character classification for low-resource languages, this data-hungry nature of deep neural networks becomes a limitation, as little labeled training data is available.
An ideal solution for handwritten character recognition should be based on zero-shot learning, where no previously seen sample is used for classification, or one-shot learning, where only one or a few samples are used for training [23]. Several attempts have been made to modify different deep neural networks to meet the requirements of one-shot learning [24–26].
2.2.2 Related Studies on One-Shot Learning
Initial attempts at one-shot learning in the computer vision domain were based on probabilistic approaches. Fei-Fei et al. [4], in 2003, introduced a model that learns visual concepts and then uses that knowledge to learn new categories. They used a variational Bayesian framework, in which probabilistic models represent the object groups and a probability density function denotes the prior knowledge. Their model supports learning four visual concepts: human faces, aeroplanes, motorcycles, and spotted cats. Initially, abstract knowledge is learned by training on many samples belonging to three of the categories. This knowledge is then used to understand the remaining category with the help of a small number of examples (1 to 5 training examples).
Later, neural networks emerged as a solution to the one-shot learning problem. The two main types of networks used in one-shot learning tasks are memory-augmented neural networks [26, 27] and Siamese neural networks [7, 24, 28]. Memory-augmented neural networks are similar to recurrent neural networks (RNNs), but they have an external memory and try to separate computation from memory [29]. Siamese networks have two similar network branches, whose outputs are compared to reach a decision on the one-shot task; a minimal sketch is given below. Most of the time, Siamese network branches are built on convolutional layers or fully connected layers.
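The following is a minimal sketch, assuming TensorFlow/Keras, of a Siamese network of the kind described above: the same weight-sharing encoder is applied to both inputs, and an element-wise L1 distance followed by a sigmoid unit scores similarity, in the spirit of the convolutional variant of Koch et al. [7]. It is illustrative only and is not the capsule-based model proposed in this chapter; `build_cnn_encoder` refers to the encoder sketch above.

```python
# Minimal sketch of a Siamese network with weight-sharing branches and an
# L1-distance energy function (illustrative, Keras assumed).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_siamese(encoder, input_shape=(105, 105, 1)):
    img_a = layers.Input(shape=input_shape)
    img_b = layers.Input(shape=input_shape)
    emb_a = encoder(img_a)  # the same encoder instance is applied to both
    emb_b = encoder(img_b)  # inputs, so the two branches share weights
    # Element-wise absolute difference of the embeddings acts as the energy.
    l1 = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([emb_a, emb_b])
    score = layers.Dense(1, activation="sigmoid")(l1)  # same-class probability
    return models.Model([img_a, img_b], score, name="siamese")

# Example usage with the encoder sketched earlier:
# siamese = build_siamese(build_cnn_encoder())
```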
2.2.3 Character Recognition as a One-Shot Task
Lake et al. [6], in 2013, introduced the Omniglot dataset and defined on it a one-shot learning problem framed as a handwritten character recognition task. Omniglot is a handwritten character dataset similar to the digit dataset MNIST, which stands for the Modified National Institute of Standards and Technology database [30]. However, in contrast to MNIST, Omniglot has 1,600 characters belonging to 50 alphabets, and each character has only 20 samples, whereas MNIST has only ten classes with thousands of samples per class. In order to accurately categorize characters in Omniglot, Lake et al. proposed a one-shot learning approach named Hierarchical Bayesian Program Learning (HBPL) [6]. Their approach is based on decomposing characters into strokes and determining a structural description for the detected pixels, where the strokes in new characters are identified using knowledge gained from previously seen characters. However, this method cannot be applied to complex images, since it uses stroke data to determine the class. Further, inference under HBPL is difficult because it has a vast parameter space [7]. In the proposed solution with the capsule layers-based Siamese network, we borrow the problem defined by Lake et al. and propose a novel solution that works in a more human-like way.
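As a brief illustration of how such an N-way one-shot trial is typically evaluated on Omniglot-style data, the sketch below assigns a query image to the class of its most similar support image. The helper names are hypothetical, and `similarity` is a placeholder for any pairwise scoring model (for example, the Siamese sketch above).

```python
# Minimal sketch of scoring a single N-way one-shot trial (NumPy assumed).
import numpy as np

def one_shot_trial_accuracy(similarity, support_imgs, query_img, true_idx):
    """support_imgs: list of N images, one per candidate class.
    query_img: an unseen image of the class at position `true_idx`."""
    scores = [similarity(query_img, s) for s in support_imgs]
    return int(np.argmax(scores) == true_idx)  # 1 if the trial is correct
```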
The above-mentioned methods needed some manual feature engineering, whereas in human cognition the required features are learned along with the process of learning new visual concepts. For example, when we observe a car, we spontaneously decompose it into wheels, body, and internal parts, and to differentiate it from a bicycle we use those learned features. A similar process can be replicated in machines using capsule neural networks.
Koch et al. [7], in 2015, proposed a model using Siamese neural networks as a solution to the one-shot learning problem. They used the same dataset and approach as Lake et al. [6], but their model used convolutional units in the network branches to build an understanding of the image. According to Hinton et al. [11], CNNs are misguided in what they are trying to achieve and far from how human visual perception works; hence, they proposed capsules instead of convolutions.
In this chapter, we present a Siamese neural network based on capsule networks to solve the one-shot learning problem. The idea of the capsule was first proposed by Hinton et al. in 2011 and later used for numerous applications [31, 32]. Generally, CNNs aim for viewpoint invariance of the “neuron” activities, so that characters can be recognized irrespective of the viewing angle; this is achieved with a single scalar output that summarizes the activities of replicated feature detectors [9]. In contrast to CNNs, capsule networks use local “capsules” that perform computations on their inputs internally and encapsulate the results into an informative output vector [11]. Sabour et al. [9] proposed an algorithm to train capsule networks based on the concept of routing by agreement between capsules. Dynamic routing helps to achieve equivariance, while CNNs can only achieve invariance through their pooling layers.
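As a point of reference for the vector outputs mentioned above, the following is a minimal NumPy sketch of the "squash" non-linearity from Sabour et al. [9]: it shrinks the length of each capsule's output vector into [0, 1) so that the length can represent the probability that the encoded entity is present, while the orientation keeps the instantiation parameters.

```python
# Minimal sketch of the capsule "squash" non-linearity (NumPy assumed).
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    squared_norm = np.sum(np.square(s), axis=axis, keepdims=True)
    norm = np.sqrt(squared_norm + eps)
    # Scale the vector so its length lies in [0, 1) but its direction is kept.
    return (squared_norm / (1.0 + squared_norm)) * (s / norm)
```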
Table 2.1 summarizes the techniques used in related studies on one-shot learning. Notably, most recent studies have used capsule-based techniques, which may be because capsule networks show better generalization on small datasets.
In this chapter, we design a Siamese network similar to that of Koch et al., but with modifications to accommodate the more complex details captured by capsules. A Siamese network is a twin network that takes two images as input and feeds them through weight-sharing branches. Our contributions in this chapter include exploring the applicability of capsules in Siamese networks, introducing a novel architecture to handle the intricate details of capsule outputs (an illustrative sketch of comparing such outputs is given below), and integrating recent advancements in deep capsule networks [33, 34] into Siamese networks.
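The sketch below, assuming TensorFlow/Keras, shows one possible way to compare twin capsule outputs: each branch emits a (num_capsules, capsule_dim) tensor, a per-capsule L1 distance keeps one value per capsule rather than collapsing everything into a single scalar, and a dense layer learns how to weight those distances. This is purely an illustration of handling vector-valued branch outputs; it is not the energy function proposed in this chapter, which is described in Section 2.3.

```python
# Illustrative sketch of a comparison head over twin capsule outputs
# (hypothetical layer sizes, Keras assumed).
import tensorflow as tf
from tensorflow.keras import layers, models

def capsule_comparison_head(num_capsules=10, capsule_dim=16):
    caps_a = layers.Input(shape=(num_capsules, capsule_dim))
    caps_b = layers.Input(shape=(num_capsules, capsule_dim))
    # Per-capsule L1 distance: one similarity value per capsule.
    diff = layers.Lambda(
        lambda t: tf.reduce_sum(tf.abs(t[0] - t[1]), axis=-1))([caps_a, caps_b])
    score = layers.Dense(1, activation="sigmoid")(diff)  # same-class probability
    return models.Model([caps_a, caps_b], score, name="capsule_energy_head")
```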
Table 2.1 Comparison of related studies.