Person Re-identification Using Multiple Egocentric Views

Anirban Chakraborty, Bappaditya Mandal, Junsong Yuan
IEEE Transactions on Circuits and Systems for Video Technology (CSVT)
Publication Date: 5 Oct 2016
Strategic Thrust: Media, Intelligence
Development of a robust and scalable multi-camera surveillance system is essential to ensuring public safety and security. The ability to re-identify and track one or more targets over multiple non-overlapping camera fields-of-view in a crowded environment remains an important and challenging problem because of occlusions and large changes in viewpoint and illumination across cameras. However, the rise of wearable imaging devices has opened new avenues for solving the re-identification (re-id) problem. Unlike static cameras, whose views are often restricted or low-resolution and where occlusions are common, egocentric/first-person views (FPVs) mostly capture zoomed-in, un-occluded face images. In this paper, we present a person re-identification framework designed for a network of multiple wearable devices. The proposed framework builds on commonly used facial feature extraction and similarity computation methods between camera pairs and utilizes a data association method to yield globally optimal and consistent re-id results with much improved accuracy. Moreover, to ensure its utility in practical applications, where large numbers of observations are available at every instant, an online scheme is proposed as a direct extension of the batch method; it dynamically associates new observations with already observed and labelled targets in an iterative fashion. We tested both the offline and online methods on realistic FPV video databases collected using multiple wearable cameras in a complex office environment and observed large improvements in performance compared to the state of the art.
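The abstract mentions pairwise similarity computation followed by a data association step that yields a globally optimal assignment of observations to identities. The sketch below is only an illustration of that general idea, not the authors' actual algorithm: it scores query face features against a small gallery with cosine similarity and picks the one-to-one labelling that maximizes total similarity by brute-force enumeration (the function names and the toy feature vectors are hypothetical).

```python
from itertools import permutations
from math import sqrt

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def associate(queries, gallery):
    """Globally optimal one-to-one assignment of query observations to
    gallery identities, maximizing total similarity.

    Brute-force over permutations -- fine for tiny galleries; a real
    system would use an efficient assignment solver instead.
    Returns a list: index i gives the gallery identity for query i.
    """
    sim = [[cosine_sim(q, g) for g in gallery] for q in queries]
    best_score, best_perm = float("-inf"), None
    for perm in permutations(range(len(gallery)), len(queries)):
        score = sum(sim[i][j] for i, j in enumerate(perm))
        if score > best_score:
            best_score, best_perm = score, perm
    return list(best_perm)

# Toy example: each query matches the opposite gallery entry best.
labels = associate([[1.0, 0.0], [0.0, 1.0]],
                   [[0.0, 1.0], [1.0, 0.0]])
print(labels)  # → [1, 0]
```

In practice the exhaustive search would be replaced by a polynomial-time assignment method (e.g. the Hungarian algorithm), but the global objective, maximizing consistent total similarity rather than matching each observation greedily, is what distinguishes this style of association.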