Binocular image sequence analysis: integration of stereo disparity and optic flow for improved obstacle detection and tracking
Huang, Yingping and Young, Ken (2008) Binocular image sequence analysis: integration of stereo disparity and optic flow for improved obstacle detection and tracking. EURASIP Journal on Advances in Signal Processing, Vol. 2008 (No. 843232). ISSN 1687-6172
WRAP_Huang_Binocular_843232.pdf - Published Version
Available under License Creative Commons Attribution.
Official URL: http://dx.doi.org/10.1155/2008/843232
Binocular vision systems have been widely used for detecting obstacles in advanced driver assistance systems (ADASs). These systems normally utilise the disparity information extracted from left and right image pairs but ignore the optic flow that can be extracted from the two image sequences. In fact, integrating these two cues can yield distinct benefits. This paper proposes two algorithms that integrate stereovision and motion analysis to improve object detection and tracking. The basic idea is to make full use of the information extracted from the stereo image sequence pairs captured by a stereovision rig. The first algorithm imposes optic flow as an extra constraint on stereo matching. The second uses a Kalman filter as a mixer to combine the distance measurement and the motion displacement measurement for object tracking. The experimental results demonstrate that the proposed methods are effective in improving the quality of stereo matching and three-dimensional object tracking.
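The second algorithm described in the abstract — a Kalman filter mixing a stereo-derived distance measurement with an optic-flow-derived displacement measurement — can be sketched as follows. This is not the authors' implementation; the state model, noise covariances, and measurement values below are illustrative assumptions for a one-dimensional (depth-only) case.

```python
import numpy as np

# Assumed frame interval and constant-velocity state model over [distance, velocity].
dt = 0.1
F = np.array([[1.0, dt],
              [0.0, 1.0]])   # state transition
H = np.eye(2)                # both cues observed: stereo -> distance, flow -> velocity
Q = np.diag([0.01, 0.01])    # process noise (assumed)
R = np.diag([0.5, 0.2])      # measurement noise: stereo distance, flow velocity (assumed)

x = np.array([20.0, -1.0])   # initial state: 20 m away, closing at 1 m/s
P = np.eye(2)                # initial covariance

def kalman_step(x, P, z):
    """One predict/update cycle with measurement z = [distance, velocity]."""
    # Predict with the constant-velocity model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: fuse the stereo and optic-flow measurements.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Simulated consistent measurements: stereo distance plus flow-derived velocity.
for t in range(5):
    z = np.array([20.0 - 1.0 * dt * (t + 1), -1.0])
    x, P = kalman_step(x, P, z)

print(x)  # fused distance/velocity estimate near [19.5, -1.0]
```

The design choice the paper exploits is that the two measurements are complementary: stereo disparity constrains absolute depth while optic flow constrains inter-frame displacement, so the filter's velocity estimate is better conditioned than with either cue alone.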
Item Type: Journal Article
Subjects: Q Science > QA Mathematics > QA76 Electronic computers. Computer science. Computer software
Divisions: Faculty of Science > WMG (Formerly the Warwick Manufacturing Group)
Library of Congress Subject Headings (LCSH): Computer vision; Binocular vision; Automobile driving -- Computer programs
Journal or Publication Title: EURASIP Journal on Advances in Signal Processing
Date: 28 March 2008
Number of Pages: 10
Access rights to Published version: Open Access