We describe a system under development for the 3D fusion of multi-sensor surface surveillance imagery, including electro-optical (EO), IR, SAR, multispectral, and hyperspectral sources. Our approach is founded on biologically inspired image processing algorithms. We have developed an image processing architecture that enables the unified interactive visualization of fused multi-sensor site data, built on a color image fusion algorithm modeled on retinal and cortical processing of color. We have also developed interactive Web-based tools for training neural network search agents that automatically scan site data for the fused multi-sensor signatures of targets and/or surface features of interest. Each search agent is an interactively trained instance of fuzzy ARTMAP, a neural network model of cortical pattern recognition. The use of 3D site models is central to our approach because it enables the accurate multi-platform image registration required both for color image fusion and for the designation of, learning of, and search for multi-sensor fused pixel signatures. Interactive stereo 3D viewing and fly-through tools support efficient and intuitive site exploration and analysis, while Web-based remote visualization, agent training, and search tools facilitate rapid, distributed, and collaborative site exploitation and dissemination of results. © 2000 Int. Soc. Inf. Fusion.
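To make the search-agent idea concrete, the following is a minimal sketch of the unsupervised fuzzy ART category layer that underlies fuzzy ARTMAP (the full supervised model pairs two such modules with a map field; the class name, parameter defaults, and the toy three-band pixel signatures below are illustrative assumptions, not the paper's implementation). Each presented signature either resonates with an existing category that passes the vigilance test or recruits a new category.

```python
import numpy as np

class FuzzyART:
    """Simplified fuzzy ART category layer (illustrative sketch only)."""

    def __init__(self, alpha=0.001, beta=1.0, rho=0.75):
        self.alpha = alpha   # choice parameter (small, breaks ties toward small weights)
        self.beta = beta     # learning rate; 1.0 gives "fast learning"
        self.rho = rho       # vigilance threshold in [0, 1]
        self.w = []          # one weight vector per committed category

    @staticmethod
    def _complement_code(a):
        # Inputs in [0, 1] are complement-coded so that amplitude
        # information is preserved (|I| is constant across inputs).
        a = np.asarray(a, dtype=float)
        return np.concatenate([a, 1.0 - a])

    def train(self, a):
        """Present one fused pixel signature; return the index of the
        category that learns it (a new category if none passes vigilance)."""
        i = self._complement_code(a)
        # Choice function T_j = |I ^ w_j| / (alpha + |w_j|), using the
        # fuzzy AND (component-wise minimum) and the L1 norm.
        scores = [np.minimum(i, w).sum() / (self.alpha + w.sum())
                  for w in self.w]
        for j in np.argsort(scores)[::-1]:      # search in order of choice
            match = np.minimum(i, self.w[j]).sum() / i.sum()
            if match >= self.rho:               # vigilance test passed
                self.w[j] = (self.beta * np.minimum(i, self.w[j])
                             + (1.0 - self.beta) * self.w[j])
                return int(j)
        self.w.append(i.copy())                 # no resonance: new category
        return len(self.w) - 1

net = FuzzyART(rho=0.8)
c1 = net.train([0.90, 0.10, 0.80])   # hypothetical fused EO/IR/SAR signature
c2 = net.train([0.85, 0.15, 0.78])   # similar signature -> same category
c3 = net.train([0.10, 0.90, 0.20])   # dissimilar signature -> new category
```

In an interactive training workflow like the one described above, an analyst's designated pixels would supply the signatures, and the supervised map field of full fuzzy ARTMAP would bind the resulting categories to target labels.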
Ross, W. D.; Waxman, A. M.; Streilein, W. W.; Aguilar, M.; Verly, J.; Liu, F.; Braun, M. I.; Harmon, P.; and Rak, S., "Multi-sensor 3D Image Fusion and Interactive Search" (2000). Research, Publications & Creative Work. 80.