A Novel Throughput Enhancement Method for Deep Learning Applications on Mobile Devices With Heterogeneous Processors
Blog Article
Contemporary smartphones integrate dedicated AI accelerators alongside CPUs and GPUs in response to the growing demand for deep learning applications. While existing software development kits (SDKs) for these devices provide neural network optimization techniques, they often lack system-level optimizations, specifically the distribution of layers across heterogeneous processors. This paper introduces a novel approach that enhances the throughput of deep learning applications through quantization and pipelining. The proposed technique applies different quantization schemes to activation data and filter weights to minimize the accuracy drop. A genetic algorithm explores the extensive design space of layer-wise mapping and pipelining to find the best pipelining solution.
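To make the search concrete, here is a minimal Python sketch of how a genetic algorithm might explore layer-to-processor assignments for a pipelined network. The processor names, layer count, genetic operators, and the mock fitness function are illustrative assumptions, not the paper's implementation (which scores candidates by measuring real on-device execution time, as described below).

```python
# Hypothetical GA over layer-wise mappings: each chromosome assigns every
# layer to one processor; consecutive layers on the same processor form a
# pipeline stage. All names and parameters here are assumptions for the sketch.
import random

PROCESSORS = ["CPU", "GPU", "NPU"]   # assumed heterogeneous processors
NUM_LAYERS = 12                      # assumed network depth

def random_mapping():
    return [random.choice(PROCESSORS) for _ in range(NUM_LAYERS)]

def measure_throughput(mapping):
    """Placeholder fitness. The actual method measures execution time on the
    device; this mock score just keeps the sketch runnable standalone."""
    stages = 1 + sum(mapping[i] != mapping[i - 1] for i in range(1, len(mapping)))
    npu_share = mapping.count("NPU") / len(mapping)
    return npu_share * 100 - abs(stages - len(PROCESSORS)) * 5

def crossover(a, b):
    cut = random.randrange(1, NUM_LAYERS)
    return a[:cut] + b[cut:]

def mutate(mapping, rate=0.1):
    return [random.choice(PROCESSORS) if random.random() < rate else p
            for p in mapping]

def evolve(pop_size=20, generations=30):
    population = [random_mapping() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=measure_throughput, reverse=True)
        parents = population[: pop_size // 2]            # elitist selection
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=measure_throughput)

if __name__ == "__main__":
    best = evolve()
    print("best mapping:", best)
```

In this kind of encoding, pipeline stage boundaries fall out of the mapping itself (wherever the assigned processor changes), so the GA searches mapping and pipelining jointly rather than in two separate passes.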
To estimate the performance of each candidate solution, the actual execution time of the application is measured on the device, accounting for smartphone-specific characteristics such as dynamic voltage and frequency scaling (DVFS) and OS scheduling. The impact of thermal throttling on throughput is also investigated by running benchmark applications continuously for 10 minutes. The proposed technique is validated through experiments on a Google Pixel 6 and a Samsung Galaxy S22, where throughput enhancements were observed on both devices.
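The sketch below illustrates measurement-based evaluation in that spirit: a candidate model is timed on the target rather than estimated analytically, so that DVFS, scheduling, and thermal state are reflected in the score. The TensorFlow Lite usage, model path, and warm-up/run counts are assumptions made to keep the example self-contained; the paper's evaluation runs on the phones themselves across heterogeneous processors.

```python
# Minimal sketch of measurement-based fitness evaluation (assumed setup).
import time
import numpy as np
import tensorflow as tf

def measured_fps(model_path="candidate.tflite", warmup=20, runs=200):
    interpreter = tf.lite.Interpreter(model_path=model_path, num_threads=4)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    dummy = np.zeros(inp["shape"], dtype=inp["dtype"])

    # Warm-up lets DVFS governors and caches settle before timing starts.
    for _ in range(warmup):
        interpreter.set_tensor(inp["index"], dummy)
        interpreter.invoke()

    start = time.perf_counter()
    for _ in range(runs):
        interpreter.set_tensor(inp["index"], dummy)
        interpreter.invoke()
    elapsed = time.perf_counter() - start
    return runs / elapsed  # inferences per second

# A thermal-throttling study would repeat measured_fps() back to back for
# roughly 10 minutes and record how the returned rate degrades over time.
```

Timing real invocations, rather than summing per-layer latency estimates, is what lets the search account for effects such as frequency scaling and background scheduling that an analytical model would miss.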