
Learning and Using Models of Geo-Temporal Appearance

Overview

Billions of geotagged, time-stamped images are publicly available on the Internet, providing a rich record of the appearance of people, places, and things across the globe. These images are a largely untapped resource for improving our understanding of the world and how it changes over time. This project develops automated methods for extracting useful information from this imagery and fusing it into high-resolution global models that capture geo-temporal trends. These models are then used to improve performance on computer vision tasks and to make geotagged imagery a usable, navigable resource for education and research in other disciplines. The project also includes an education and outreach component that brings real-world problems to computer science (CS) students, mentors students across the educational spectrum, and makes the research accessible to the public.

To capture these spatial and temporal appearance trends, the research is organized into four main thrusts: (1) investigating novel methods for extracting information from Internet imagery using weakly supervised learning, (2) developing techniques that integrate ground-level imagery with aerial and satellite data to model the expected image appearance anywhere in the world at any time (see the sketch below), (3) evaluating methods for using such models to improve the performance of computer vision algorithms, and (4) automatically creating visual representations that make it possible for novice users to explore the learned geo-temporal trends via the Internet.
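As a concrete illustration of thrust (2), the sketch below shows one way to model expected image appearance as a function of location and time, in the spirit of Learning Geo-Temporal Image Features (BMVC 2018, publication 4 below). This is a minimal, hypothetical example, not the project's actual architecture: a small PyTorch network maps (latitude, longitude, month, hour) to a feature vector that, during training, would be pulled toward the features of a ground-level image taken at that place and time. All layer sizes and names are illustrative assumptions.

import math
import torch
import torch.nn as nn

class GeoTemporalEncoder(nn.Module):
    """Maps (latitude, longitude, month, hour) to a predicted image feature.

    Hypothetical sketch: the real models in the cited work are learned
    jointly with image CNNs; here we only show the context branch.
    """
    def __init__(self, feature_dim: int = 128):
        super().__init__()
        # 7 inputs: normalized latitude plus sin/cos encodings of longitude,
        # month, and hour (cyclic encodings keep December adjacent to
        # January and 23:00 adjacent to 00:00).
        self.net = nn.Sequential(
            nn.Linear(7, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, feature_dim),
        )

    @staticmethod
    def _cyclic(x: torch.Tensor, period: float):
        ang = 2.0 * math.pi * x / period
        return torch.sin(ang), torch.cos(ang)

    def forward(self, lat, lon, month, hour):
        feats = torch.stack(
            [lat / 90.0,                       # latitude is not cyclic
             *self._cyclic(lon, 360.0),
             *self._cyclic(month, 12.0),
             *self._cyclic(hour, 24.0)],
            dim=-1,
        )
        return self.net(feats)

# Training would pull this prediction toward a CNN embedding of a ground-level
# image captured at the same place and time (e.g., an L2 or contrastive loss).
model = GeoTemporalEncoder()
pred = model(torch.tensor([38.03]), torch.tensor([-84.5]),
             torch.tensor([6.0]), torch.tensor([14.0]))
print(pred.shape)  # torch.Size([1, 128])

Once trained, such a model can score how typical an image's appearance is for its claimed location and time, which is the basis for the downstream uses in thrust (3).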

See the NSF Award Announcement for additional details.

Related Publications

  1. Hamraz H., Jacobs N., Contreras MA., Clark CH. 2018. Deep Learning for Conifer/Deciduous Classification of Airborne LiDAR 3D Point Clouds Representing Individual Trees. arXiv preprint arXiv:1802.08872.
  2. Jacobs N., Kraft A., Rafique MU., Sharma RD. 2018. A Weakly Supervised Approach for Estimating Spatial Density Functions from High-Resolution Satellite Imagery. In: ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL).
  3. Schulter S., Zhai M., Jacobs N., Chandraker M. 2018. Learning to Look around Objects for Top-View Representations of Outdoor Scenes. In: European Conference on Computer Vision (ECCV).
  4. Zhai M., Salem T., Greenwell C., Workman S., Pless R., Jacobs N. 2018. Learning Geo-Temporal Image Features. In: British Machine Vision Conference (BMVC).
  5. Greenwell C., Workman S., Jacobs N. 2018. What Goes Where: Predicting Object Distributions from Above. In: IEEE International Geoscience and Remote Sensing Symposium (IGARSS).
  6. Salem T., Zhai M., Workman S., Jacobs N. 2018. A Multimodal Approach to Mapping Soundscapes. In: IEEE International Geoscience and Remote Sensing Symposium (IGARSS).
  7. Song W., Workman S., Hadzic A., Souleyrette R., Green E., Chen M., Zhang X., Jacobs N. 2018. FARSA: Fully Automated Roadway Safety Assessment. In: IEEE Winter Conference on Applications of Computer Vision (WACV).
  8. Vo N., Jacobs N., Hays J. 2017. Revisiting IM2GPS in the Deep Learning Era. In: IEEE International Conference on Computer Vision (ICCV).
  9. Workman S., Zhai M., Crandall D., Jacobs N. 2017. A Unified Model for Near/Remote Sensing. In: IEEE International Conference on Computer Vision (ICCV).
  10. Workman S., Souvenir R., Jacobs N. 2017. Understanding and Mapping Natural Beauty. In: IEEE International Conference on Computer Vision (ICCV).
  11. Zhai M., Bessinger Z., Workman S., Jacobs N. 2017. Predicting Ground-Level Scene Layout from Aerial Imagery. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

Acknowledgements

This material is based upon work supported by the National Science Foundation under Grant No. IIS-1553116. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.