CAPTURE: An End-to-End Mobile Implementation for a Computationally Optimized Deep Learning Framework
Convolutional Neural Networks (CNNs) are among the most effective modern artificial intelligence techniques for computer vision tasks such as image classification. However, CNN inference remains computationally expensive, which severely limits deployment on mobile devices with constrained computation resources. To address this issue, we investigate local optimization and propose CAPTURE, a fully offline CNN computing framework for mobile deployment that significantly reduces model size and computation load while maintaining near-original accuracy. Since most CNN-based mobile applications classify only a specific subset of images, this project's objective is to compress pre-trained CNN models into specialized models. Our techniques include distilling critical pathways using mean activation and gradient calculations of convolutional filters, then reconstructing and retraining the model with class-specific critical paths. A specialized model trained on CIFAR-10 is 96% smaller than the original VGG16. Deployment on the NVIDIA Jetson Nano mobile platform shows that the inference time of the specialized VGG model is reduced by up to 83%. Accuracy is consistently maintained at 90.33%, compared to the VGG16 model's accuracy of 93.13%. These metrics demonstrate the efficiency and functionality of CAPTURE: a capable and effective implementation that can readily be put to use in real-world applications such as autonomous drones and security systems.
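To illustrate the filter-distillation idea described above, the following is a minimal sketch of ranking convolutional filters by their mean activation over a class-specific batch and keeping only the strongest ones. The function name `select_critical_filters` and the `keep_ratio` parameter are hypothetical illustrations, not the paper's actual API; the full method additionally incorporates gradient information and retraining.

```python
import numpy as np

def select_critical_filters(activations, keep_ratio=0.25):
    """Rank conv filters by mean absolute activation over a batch of
    class-specific inputs and keep the top fraction.

    Hypothetical helper for illustration only; the paper's criterion
    also uses gradient calculations.

    activations: array of shape (batch, filters, H, W)
    returns: sorted indices of the retained filters
    """
    # One importance score per filter: mean |activation| over batch and space
    scores = np.abs(activations).mean(axis=(0, 2, 3))
    k = max(1, int(len(scores) * keep_ratio))
    keep = np.argsort(scores)[::-1][:k]  # highest-scoring filters
    return np.sort(keep)

# Toy example: 8 filters; filter 3's responses are amplified to mimic a
# filter that is critical for the target class subset.
rng = np.random.default_rng(0)
acts = rng.standard_normal((16, 8, 4, 4))
acts[:, 3] *= 10.0
print(select_critical_filters(acts, keep_ratio=0.25))
```

In a full pipeline, the retained indices would be used to slice the layer's weight tensor, yielding a smaller class-specialized model that is then retrained to recover accuracy.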
Copyright (c) 2019 Rayan Yu, Lekha Punya Punya, Kenneth Wang, Zhuwei Qin, Xiang Chen
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.