Many applications require knowledge of the location and/or orientation of people or objects moving through space, such as motion tracking for virtual reality. Systems using cameras as sensors have the advantages of non-intrusiveness and immunity to ferromagnetic distortion. A network of cameras allows for a larger working volume, higher tracking accuracy, greater robustness and greater flexibility, but also introduces challenges. The architecture of such systems must scale with the number of cameras, and approaches are needed both for calibrating many cameras into a single global coordinate frame and for placing cameras to optimize tracking performance.
We have designed and implemented M-Track, a scalable architecture for real-time motion tracking with tens of cameras. M-Track enables parallel processing of high-bandwidth image data. A Kalman-filter-based central estimator allows asynchronous integration of information from camera-processor pairs. The architecture also accommodates cameras with different resolutions and frame rates, and supports tracking and automatic labeling of multiple features, even during temporary periods of occlusion. Three applications built upon this architecture demonstrate the usefulness of the system.
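The asynchronous-fusion idea can be illustrated with a minimal sketch: a central Kalman filter predicts the state forward to each measurement's timestamp before folding it in, so cameras with different rates and noise levels can contribute whenever their data arrives. The scalar constant-velocity model, camera rates, and noise values below are illustrative assumptions, not M-Track's actual estimator.

```python
# Minimal sketch of asynchronous measurement fusion with a scalar
# constant-velocity Kalman filter. The state is [position, velocity];
# each update first predicts to the measurement's timestamp.

class AsyncKalman1D:
    def __init__(self, q=0.01):
        self.x = [0.0, 0.0]                 # state: [position, velocity]
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
        self.q = q                          # process-noise intensity
        self.t = 0.0                        # time of last update

    def _predict(self, t):
        dt = t - self.t
        x, P = self.x, self.P
        # x <- F x with F = [[1, dt], [0, 1]]
        self.x = [x[0] + dt * x[1], x[1]]
        # P <- F P F^T + Q (Q approximated as q*dt on the diagonal)
        p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q * dt
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + self.q * dt
        self.P = [[p00, p01], [p10, p11]]
        self.t = t

    def update(self, t, z, r):
        """Fold in a position measurement z with variance r taken at time t."""
        self._predict(t)
        # H = [1, 0]: only position is observed
        s = self.P[0][0] + r                  # innovation variance
        k0 = self.P[0][0] / s                 # Kalman gain
        k1 = self.P[1][0] / s
        y = z - self.x[0]                     # innovation
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p = self.P
        # P <- (I - K H) P
        self.P = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
                  [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]

# Two hypothetical cameras with different rates and noise observe a
# target moving at 1 unit/s; measurements arrive in timestamp order
# but at irregular intervals, as (time, position, variance) tuples.
kf = AsyncKalman1D()
events = sorted([(0.10, 0.10, 0.04), (0.25, 0.25, 0.01),
                 (0.30, 0.30, 0.04), (0.50, 0.50, 0.01)])
for t, z, r in events:
    kf.update(t, z, r)
```

Because each camera's measurement carries its own timestamp and variance, the filter never has to wait for a synchronized frame across all cameras, which is what makes the central-estimator design scale to heterogeneous camera-processor pairs.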
Next, we present a scalable wide-area multi-camera calibration scheme. Many asynchronous cameras can be calibrated into a single consistent coordinate frame by simply waving a bright light in front of them, even when cameras are arranged with non-overlapping working volumes and without initial estimates of camera poses. The construction of a universally visible physical calibration object is not necessary, and the method is easily adaptable to working volumes of variable size and shape.
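One reason the moving bright light works even with non-overlapping working volumes is that only pairwise overlap is needed: cameras that co-observe the light yield pairwise relative poses, which can then be chained through the co-visibility graph to place every camera in one global frame. The 2D rigid transforms and graph below are illustrative assumptions, not the thesis's calibration algorithm.

```python
# Sketch: place all cameras in one global frame by chaining pairwise
# rigid transforms (here 2D: rotation angle plus translation) obtained
# from co-observed calibration-light positions.
import math
from collections import deque

def compose(a, b):
    """Compose 2D rigid transforms a∘b, each given as (theta, tx, ty)."""
    ta, ax, ay = a
    tb, bx, by = b
    c, s = math.cos(ta), math.sin(ta)
    return (ta + tb, ax + c * bx - s * by, ay + s * bx + c * by)

def globalize(edges, root=0):
    """BFS over the co-visibility graph; edges[(i, j)] maps camera j's
    frame into camera i's. Returns each camera's pose in the root frame."""
    adj = {}
    for (i, j), T in edges.items():
        adj.setdefault(i, []).append((j, T))
    poses = {root: (0.0, 0.0, 0.0)}
    queue = deque([root])
    while queue:
        i = queue.popleft()
        for j, T in adj.get(i, []):
            if j not in poses:
                poses[j] = compose(poses[i], T)
                queue.append(j)
    return poses

# Three cameras in a chain: 0-1 and 1-2 overlap, but 0 and 2 never
# see the light at the same time.
edges = {(0, 1): (math.pi / 2, 1.0, 0.0),
         (1, 2): (0.0, 2.0, 0.0)}
poses = globalize(edges)
```

In practice a chaining pass like this would only provide an initial estimate; accumulated drift along long chains is typically reduced afterwards by a global refinement over all observations.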
We then propose a quantitative metric for evaluating the quality of multi-camera placement configurations. Previous work evaluates quality using only the 3D uncertainty caused by limited camera resolution, ignoring occlusion. Our metric considers both camera resolution and the likelihood of target occlusion, and is based on a novel probabilistic model that estimates the dynamic self-occlusion of targets. We verify its validity through experimental data and analysis of various camera placement configurations.
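The flavor of an occlusion-aware placement metric can be conveyed with a toy sketch: each camera contributes information (inverse measurement variance) that grows with its resolution at the target and is discounted by the probability the target's own body blocks the view. The depth-dependent variance and the cosine-based self-occlusion model below are illustrative assumptions, not the probabilistic model from the thesis.

```python
# Sketch of an occlusion-aware placement metric: expected tracking
# uncertainty at a target point, combining per-camera resolution
# uncertainty with the probability that the view is self-occluded.
import math

def resolution_variance(cam_pos, target, pixel_angle=1e-3):
    """Depth-dependent measurement variance: one pixel subtends a
    larger footprint at greater range."""
    d = math.dist(cam_pos, target)
    return (d * pixel_angle) ** 2

def visibility_prob(cam_pos, target, facing):
    """Toy self-occlusion model: cameras behind the target's facing
    direction are likely blocked by the body."""
    v = [c - t for c, t in zip(cam_pos, target)]
    cosang = sum(a * b for a, b in zip(v, facing)) / math.hypot(*v)
    return 0.5 * (1.0 + cosang)   # 1 directly in front, 0 directly behind

def expected_uncertainty(cameras, target, facing):
    """Fuse cameras as independent estimators: information (inverse
    variance) adds, weighted by each camera's visibility probability."""
    info = sum(visibility_prob(c, target, facing) / resolution_variance(c, target)
               for c in cameras)
    return float('inf') if info == 0 else 1.0 / info

# Compare two hypothetical two-camera placements for a target at the
# origin facing +x: both cameras in front vs. one in front, one behind.
target, facing = (0.0, 0.0), (1.0, 0.0)
frontal = [(2.0, 0.5), (2.0, -0.5)]
opposed = [(2.0, 0.0), (-2.0, 0.0)]
u_frontal = expected_uncertainty(frontal, target, facing)
u_opposed = expected_uncertainty(opposed, target, facing)
```

Even this toy version shows why resolution alone is insufficient: the camera directly behind the target contributes almost no information under the occlusion model, so a placement that looks symmetric geometrically can score worse than one that keeps cameras in likely-visible directions.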