Abstract
Recent advances in computer vision---in the form of deep neural networks---have made it possible to query increasing volumes of video data with high accuracy. However, neural network inference is computationally expensive at scale: applying a state-of-the-art object detector in real time (i.e., 30+ frames per second) to a single video requires a $4000 GPU. In response, we present NoScope, a system for querying videos that can reduce the cost of neural network video analysis by up to three orders of magnitude via inference-optimized model search.
Given a target video, object to detect, and reference neural network, NoScope automatically searches for and trains a sequence, or cascade, of models that preserves the accuracy of the reference network but is specialized to the target video and is therefore far less computationally expensive. NoScope cascades two types of models: specialized models that forego the full generality of the reference model but faithfully mimic its behavior for the target video and object; and difference detectors that highlight temporal differences across frames. We show that the optimal cascade architecture differs across videos and objects, so NoScope uses an efficient cost-based optimizer to search across models and cascades. With this approach, NoScope achieves two- to three-order-of-magnitude speed-ups (265-15,500x real-time) on binary classification tasks over fixed-angle webcam and surveillance video while maintaining accuracy within 1--5% of state-of-the-art neural networks.
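The cascade structure described in the abstract can be illustrated with a minimal sketch. The model interfaces, thresholds, and frame shapes below are hypothetical placeholders rather than NoScope's actual API: a cheap difference detector reuses the previous label for near-identical frames, a small specialized model labels clear-cut frames, and only ambiguous frames fall back to the expensive reference network.

```python
# Hypothetical sketch of a NoScope-style cascade, not the system's real API.
import numpy as np

def cascade_label(frames, diff_detector, specialized, reference,
                  diff_thresh=0.1, low_thresh=0.2, high_thresh=0.8):
    """Return a binary label (object present / absent) for each frame."""
    labels = []
    prev_frame, prev_label = None, False
    for frame in frames:
        # Stage 1: difference detector -- reuse the previous label if the
        # frame has barely changed (e.g., small mean squared pixel difference).
        if prev_frame is not None and diff_detector(frame, prev_frame) < diff_thresh:
            labels.append(prev_label)
            continue
        # Stage 2: specialized model -- a small network trained to mimic the
        # reference network on this specific video and object.
        score = specialized(frame)
        if score >= high_thresh:
            label = True
        elif score <= low_thresh:
            label = False
        else:
            # Stage 3: only frames the specialized model is unsure about are
            # sent to the full reference network.
            label = reference(frame)
        labels.append(label)
        prev_frame, prev_label = frame, label
    return labels

# Toy usage with stand-in models on random "frames".
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [rng.random((270, 480, 3)) for _ in range(10)]
    diff = lambda a, b: float(np.mean((a - b) ** 2))
    specialized = lambda f: float(f.mean())        # placeholder score in [0, 1]
    reference = lambda f: bool(f.mean() > 0.5)     # placeholder "expensive" model
    print(cascade_label(frames, diff, specialized, reference))
```

In NoScope itself, the cost-based optimizer mentioned above selects the specialized model and the cascade thresholds so as to maximize throughput while keeping accuracy close to that of the reference network; the fixed thresholds in this sketch stand in for that search.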