An Efficient Mechanism for Computational Improvement in Machine Learning Using Approximate Computing

Type: Thesis

Degree Level: Master's

Title: An Efficient Mechanism for Computational Improvement in Machine Learning Using Approximate Computing

Presenter: Leila Mirzaei Mosbat

Supervisor: Dr. Mahdi Abbassi

Advisors:

Examiners / Referees:

Time and Date of Presentation: 1402/07/15 (Solar Hijri) at 10:30

Place of Presentation: Engineering Amphitheater

Abstract: The design and implementation of neural networks for deep learning is currently a focus of both industry and academia. However, the computational overhead, speed, and resource consumption of neural networks are the main bottlenecks for deploying models on edge computing platforms such as mobile devices and the Internet of Things. Methods that improve the energy efficiency and execution speed of neural networks without compromising accuracy or increasing hardware cost are therefore critical for their widespread deployment. One such method is approximate computing, which has emerged as an approach to energy-efficient design and faster execution by trading a limited amount of accuracy for performance. Owing to the iterative nature of the learning process, neural networks are inherently resilient to small errors, which makes approximate computing a promising technique for improving both speed and power consumption.

The layers of a neural network are built on multiply-and-add operations, and the multiplications account for most of the time and energy spent. This thesis examines the time, speed, and resource requirements of neural networks and reduces them through a new computational method called DeepAdd, in which the multiplication of weights by the inputs of the network's layers is replaced with an approximate addition operation. The proposed method is evaluated on several neural networks and compared with existing approaches.

The experimental results indicate that, compared with the original model and an approximate model based on bit shifts, the proposed DeepAdd mechanism improves time and speed in both the training and inference phases of neural networks and also reduces resource consumption. For simple multilayer perceptron networks, DeepAdd improves time and speed over the baseline architecture and the shift-based approximate architecture by 3.38% and 0.44% respectively in the training phase, and by 9.30% and 2.5% respectively in the inference phase. For convolutional neural networks (CNNs), the corresponding improvements are 2.91% and 0.6% in the training phase and 5.51% and 2.45% in the inference phase.
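The abstract does not specify how DeepAdd reduces a multiplication to an addition, so the following is only a minimal Python sketch of the general idea, under one common assumption: that products are computed in the logarithmic domain, where multiplication becomes addition (Mitchell's approximation). The shift-based baseline mentioned above is sketched as rounding weights to the nearest power of two. The function names (mitchell_log2, approx_mul, shift_mul) are illustrative and do not come from the thesis.

    import numpy as np

    def mitchell_log2(x):
        # Mitchell's piecewise-linear approximation of log2 for positive x:
        # with x = 2**k * (1 + m), m in [0, 1), take log2(x) ~= k + m.
        k = np.floor(np.log2(x))      # integer part (position of the leading one)
        m = x / np.exp2(k) - 1.0      # fractional mantissa in [0, 1)
        return k + m

    def mitchell_exp2(y):
        # Matching inverse: 2**y ~= 2**floor(y) * (1 + frac(y)).
        k = np.floor(y)
        return np.exp2(k) * (1.0 + (y - k))

    def approx_mul(x, w):
        # Approximate x * w as a single addition in the log domain.
        # Signs are handled separately; zero operands yield zero.
        sign = np.sign(x) * np.sign(w)
        ax, aw = np.abs(x), np.abs(w)
        logs = (mitchell_log2(np.maximum(ax, 1e-38)) +
                mitchell_log2(np.maximum(aw, 1e-38)))
        return sign * np.where((ax == 0) | (aw == 0), 0.0, mitchell_exp2(logs))

    def shift_mul(x, w):
        # Shift-based baseline: round |w| to the nearest power of two, so the
        # multiplication degenerates to an exponent addition (a bit shift).
        k = np.round(np.log2(np.maximum(np.abs(w), 1e-38)))
        return x * np.sign(w) * np.exp2(k)

    # Example: forward pass of a dense layer with approximate multiplication.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((4, 8))    # batch of 4 inputs
    W = rng.standard_normal((8, 3))    # layer weights
    exact = x @ W
    approx = approx_mul(x[:, :, None], W[None, :, :]).sum(axis=1)
    print(np.max(np.abs(exact - approx)))  # small, bounded approximation error

Mitchell's scheme bounds the relative error of each product at roughly 11%, which is the kind of small, systematic error the abstract argues neural networks can absorb; a hardware implementation would use leading-one detection and fixed-point adders rather than the floating-point operations used here for clarity.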

File: Download file