Co-Design of Deep Neural Nets and Neural Net Accelerators for Embedded Vision Applications

Authors: Alon Amid, Kiseok Kwon, Amir Gholami, Bichen Wu, Krste Asanovic, Kurt Keutzer


Deep learning is arguably the most rapidly evolving research area of recent years. As a result, it is not surprising that state-of-the-art deep neural net models are often designed without much consideration of the latest hardware targets, and that neural net accelerators are designed without much consideration of the characteristics of the latest deep neural net models. Nevertheless, in this article we show that significant improvements are available when deep neural net models and neural net accelerators are co-designed. In particular, we show that a co-designed neural net model can achieve a 2.6x/8.3x speedup in inference and a 2.25x/7.5x reduction in energy compared to SqueezeNet/AlexNet, respectively, while also improving the accuracy of the model. We further demonstrate that carefully tuning the neural net accelerator architecture to a given deep neural net model can yield a 1.9–6.3x improvement in inference speed.