Jingwen's Homepage

Low-Latency Proactive Continuous Vision
Yiming Gan, Yuxian Qiu, Lele Chen, Jingwen Leng, Yuhao Zhu
In International Conference on Parallel Architectures and Compilation Techniques (PACT), 2020

Continuous vision is the cornerstone of a diverse range of intelligent applications on emerging computing platforms such as autonomous machines and Augmented Reality glasses. A critical issue in today's continuous vision systems is their long end-to-end frame latency, which significantly impacts system agility and user experience. We find that this latency is fundamentally caused by the serialized execution model of today's continuous vision pipeline, whose key stages---sensing, imaging, and vision computation---execute sequentially.

This paper seeks to reduce the end-to-end latency of continuous vision tasks. Our key idea is a new proactive vision execution model that breaks the sequential execution of the vision pipeline. Specifically, we allow the pipeline front-end (sensing and imaging) to predict future frames; the pipeline back-end (vision algorithms) then predictively operates on those future frames to reduce frame latency. While the proactive execution model is generally applicable to any vision system, we demonstrate its effectiveness with an implementation on resource-constrained mobile systems-on-a-chip (SoCs). Our system incorporates two techniques to overcome key challenges in deploying proactive vision on mobile systems: it enables multiple outstanding speculative frames by exploiting the hardware heterogeneity of mobile SoCs, and it reduces the energy overhead of prediction by exploiting the error-resilient nature of vision algorithms. We show that our system reduces frame latency by up to 92% under the same energy budget.
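The latency benefit of the proactive model can be illustrated with a toy timing simulation: when the front-end's frame prediction hits, the vision back-end runs concurrently with sensing and imaging instead of after them. All stage durations and the hit rate below are illustrative assumptions, not measurements from the paper.

```python
import random

# Hypothetical per-stage latencies in milliseconds (assumed values).
SENSE, IMAGE, VISION = 10, 5, 20

def serialized_latency(num_frames):
    # Baseline: sensing -> imaging -> vision run back to back per frame.
    return [SENSE + IMAGE + VISION for _ in range(num_frames)]

def proactive_latency(num_frames, hit_rate=0.9, seed=0):
    # Proactive model: the back-end speculatively processes a predicted
    # frame while the front-end captures the real one. On a prediction
    # hit, the frame finishes when the slower of the two parallel paths
    # does; on a miss, the back-end re-runs on the real frame, so the
    # frame pays the full serialized cost.
    rng = random.Random(seed)
    latencies = []
    for _ in range(num_frames):
        if rng.random() < hit_rate:
            latencies.append(max(SENSE + IMAGE, VISION))
        else:
            latencies.append(SENSE + IMAGE + VISION)
    return latencies
```

With these assumed numbers, a hit cuts per-frame latency from 35 ms to 20 ms, so average latency falls roughly in proportion to the prediction hit rate.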