Appropriateness of Dropout Layers and Allocation of Their 0.5 Rates across Convolutional Neural Networks for CIFAR-10, EEACL26, and NORB Datasets
Applied Computer Systems 2017
Vadim V. Romanuke

The paper considers the DropOut technique for preventing overfitting of convolutional neural networks in image classification. The goal is to find a rule for rationally allocating DropOut layers with a 0.5 rate so as to maximise performance. To this end, two common network architectures are used, having either 4 or 5 convolutional layers. Benchmarking is performed on the CIFAR-10, EEACL26, and NORB datasets. First, series of all admissible versions of DropOut layer allocation are generated. After the performance over these series is evaluated, normalized, and averaged, a compromise rule is found: a few DropOut layers are inserted non-compactly (i.e., not grouped contiguously) before the last convolutional layer. A scheme with two or more DropOut layers is likely to suit networks with many convolutional layers applied to image classification problems with a large number of features; such a scheme should also suit simple datasets prone to overfitting. In fact, the rule “prefers” fewer DropOut layers. An exemplary gain from applying the rule is roughly between 10 % and 50 %.
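As an illustration of the allocation rule, the sketch below builds a network with five convolutional layers for CIFAR-10 in which two DropOut layers of 0.5 rate are inserted non-compactly (separated by other layers) before the last convolutional layer. This is a minimal sketch assuming a PyTorch implementation; the layer widths, kernel sizes, and pooling scheme are illustrative assumptions, not the exact architecture benchmarked in the paper.

```python
# Hypothetical PyTorch sketch of the paper's compromise rule: two DropOut(0.5)
# layers placed non-compactly (not adjacent to each other) and only before the
# last convolutional layer. Widths and kernel sizes are illustrative choices.
import torch
import torch.nn as nn

class FiveConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):  # 10 classes for CIFAR-10
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),    # conv 1
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),   # conv 2
            nn.Dropout(0.5),                              # DropOut layer 1
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),   # conv 3
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),   # conv 4
            nn.Dropout(0.5),                              # DropOut layer 2, kept apart from the first
            nn.MaxPool2d(2),                              # 16x16 -> 8x8
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),  # conv 5: last conv layer, no DropOut after it
        )
        self.classifier = nn.Linear(128 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Quick shape check on a CIFAR-10-sized batch.
if __name__ == "__main__":
    model = FiveConvNet()
    out = model(torch.randn(4, 3, 32, 32))
    print(out.shape)  # torch.Size([4, 10])
```

Keeping the two DropOut layers separated by convolutional and pooling layers, rather than stacking them, is what “non-compact” insertion refers to in this sketch.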


Keywords
CIFAR-10, convolutional neural network, DropOut, EEACL26, image classification, NORB, overfitting
DOI
10.1515/acss-2017-0018

Romanuke, V. Appropriateness of Dropout Layers and Allocation of Their 0.5 Rates across Convolutional Neural Networks for CIFAR-10, EEACL26, and NORB Datasets. Applied Computer Systems, 2017, vol. 22, pp. 54–63. ISSN 2255-8683. e-ISSN 2255-8691. Available from: doi:10.1515/acss-2017-0018

Publication language
English (en)