Efficient edge inference benchmarking for AI-driven applications
Deep learning (DL) algorithms have achieved phenomenal success across AI applications in recent years. Training DL algorithms requires huge computational resources, so the cloud or high-performance computing at the edge is the obvious choice for this task. During inference, however, cloud computing is not a suitable choice because of latency: billions of devices and sensors are connected to the Internet, and the data they generate cannot be transferred to and processed in geographically distant cloud data centers without incurring delays. Consequently, computation is increasingly being brought to the edge of the network, near the data source, using intelligent edge devices. Edge devices, though, face significant constraints on energy use, size, and cost; these constraints point back to a need for effective performance analysis, which in turn requires an effective benchmark. Several benchmarks exist in the literature for evaluating the performance of AI applications on edge devices, and each has made unique contributions. An effective benchmark should reflect standard practices so that the ecosystem can choose among hardware solutions based on power-usage constraints and inference-performance requirements for efficient edge AI deployments.
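To make the measurement problem concrete, the following is a minimal sketch of the kind of inference-latency benchmark loop such tools build on. All names here (`run_inference`, the warmup/iteration counts, the synthetic workload) are hypothetical stand-ins, not part of any benchmark discussed above; a real harness would load an actual model and also meter power.

```python
import time
import statistics

def run_inference(x):
    # Hypothetical stand-in for a real edge model's forward pass.
    return sum(i * i for i in range(1000))

def benchmark(fn, warmup=10, iters=100):
    """Time repeated single-sample inferences; report latency percentiles."""
    for _ in range(warmup):
        fn(None)  # warm caches before measuring
    samples_ms = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn(None)
        samples_ms.append((time.perf_counter() - t0) * 1000.0)
    samples_ms.sort()
    return {
        "p50_ms": statistics.median(samples_ms),
        "p99_ms": samples_ms[int(0.99 * (iters - 1))],
        "throughput_ips": iters / (sum(samples_ms) / 1000.0),
    }

stats = benchmark(run_inference)
print(stats)
```

Reporting tail latency (p99) alongside the median matters on constrained devices, where thermal throttling or background activity can make worst-case inference much slower than the typical case.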