LightLayers: Parameter Efficient Dense and Convolutional Layers for Image Classification
Authors
Jha, Debesh; Yazidi, Anis; Riegler, Michael Alexander; Johansen, Dag; Johansen, Håvard D.; Halvorsen, Pål

Abstract
Deep Neural Networks (DNNs) have become the de-facto standard in computer vision, as well as in many other pattern recognition tasks. A key drawback of DNNs is that the training phase can be very computationally expensive. Organizations or individuals that cannot afford to purchase state-of-the-art hardware or to tap into cloud-hosted infrastructure may face a long wait before training completes, or may not be able to train a model at all. Investigating novel ways to reduce training time could alleviate this drawback, enabling more rapid development of new algorithms and models. In this paper, we propose LightLayers, a method for reducing the number of trainable parameters in DNNs. LightLayers consists of LightDense and LightConv2D layers that are as efficient as regular Dense and Conv2D layers but use fewer parameters. We resort to matrix factorization to reduce the complexity of DNN models, resulting in lightweight models that require less computational power, without much loss in accuracy. We have tested LightLayers on the MNIST, Fashion MNIST, CIFAR-10, and CIFAR-100 datasets. Promising results are obtained on MNIST, Fashion MNIST, and CIFAR-10, whereas CIFAR-100 shows acceptable performance while using fewer parameters.
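The abstract's core idea, replacing a full weight matrix with a low-rank matrix factorization, can be illustrated with a short sketch. The rank `k`, the layer sizes, and the function names below are illustrative assumptions, not the paper's actual LightDense implementation:

```python
import numpy as np

def dense_params(m, n):
    # Standard Dense layer: full weight matrix W (m x n) plus a bias of size n
    return m * n + n

def light_dense_params(m, n, k):
    # Factorized layer: W is approximated as U @ V with U (m x k) and
    # V (k x n), plus a bias of size n. For small k this is far cheaper.
    return m * k + k * n + n

def light_dense_forward(x, U, V, b):
    # Forward pass of the factorized layer: x @ (U @ V) + b, computed as
    # (x @ U) @ V so the full m x n matrix is never materialized.
    return (x @ U) @ V + b

# Example: a 512 -> 256 layer, factorized with an assumed rank k = 16
m, n, k = 512, 256, 16
full = dense_params(m, n)            # 512*256 + 256 = 131328 parameters
light = light_dense_params(m, n, k)  # 512*16 + 16*256 + 256 = 12544 parameters

rng = np.random.default_rng(0)
U = rng.standard_normal((m, k))
V = rng.standard_normal((k, n))
b = np.zeros(n)
x = rng.standard_normal((8, m))      # a batch of 8 input vectors
y = light_dense_forward(x, U, V, b)  # output shape (8, 256)
```

The same factorization idea extends to convolutional kernels, which is what the paper's LightConv2D layer targets; the trade-off is that a low rank `k` constrains the expressiveness of the layer, which matches the abstract's observation of a small accuracy loss.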
Publisher
Springer Nature

Citation
Jha, Yazidi, Riegler, Johansen, Johansen, Halvorsen: LightLayers: Parameter Efficient Dense and Convolutional Layers for Image Classification. In: Zhang, Xu, Tian (eds.). Parallel and Distributed Computing, Applications and Technologies: 21st International Conference, PDCAT 2020, Shenzhen, China, December 28–30, 2020, Proceedings, 2021. Springer Nature.
Copyright 2021 The Author(s)