EARTHVISION 2024

June 17, 2024, Seattle, USA

Aims and Scope

Earth Observation (EO) and remote sensing are ever-growing fields of investigation where computer vision, machine learning, and signal/image processing meet. The general objective of the domain is to provide large-scale and consistent information about processes occurring at the surface of the Earth by exploiting data collected by airborne and spaceborne sensors. Earth Observation covers a broad range of tasks, from detection to registration, data mining, and multi-sensor, multi-resolution, multi-temporal, and multi-modality fusion and regression, to name just a few. It is motivated by numerous applications such as location-based services, online mapping services, large-scale surveillance, 3D urban modeling, navigation systems, natural hazard forecast and response, climate change monitoring, virtual habitat modeling, and food security. The sheer amount of data calls for highly automated scene interpretation workflows.

Earth Observation, and in particular the analysis of spaceborne data, directly connects to 34 indicators out of 40 (29 targets and 11 goals) of the Sustainable Development Goals defined by the United Nations (https://sdgs.un.org/goals). EarthVision's aim of advancing the state of the art in machine learning-based analysis of remote sensing data is thus highly relevant. It also connects to other immediate societal challenges such as the monitoring of forest fires and other natural hazards, urban growth, deforestation, and climate change.

A non-exhaustive list of topics of interest includes the following:

  • Super-resolution in the spectral and spatial domain

  • Hyperspectral and multispectral image processing

  • Reconstruction and segmentation of optical and LiDAR 3D point clouds

  • Feature extraction and learning from spatio-temporal data 

  • Analysis of UAV / aerial and satellite images and videos

  • Deep learning tailored for large-scale Earth Observation

  • Domain adaptation, concept drift, and the detection of out-of-distribution data

  • Data-centric machine learning

  • Evaluating models using unlabeled data

  • Self-, weakly-, and unsupervised approaches for learning with spatial data

  • Foundation models and representation learning in the context of EO

  • Human-in-the-loop and active learning

  • Multi-resolution, multi-temporal, multi-sensor, multi-modal processing

  • Fusion of machine learning and physical models

  • Explainable and interpretable machine learning in Earth Observation applications

  • Uncertainty quantification of machine learning-based predictions from EO data

  • Applications for climate change, sustainable development goals, and geoscience

  • Public benchmark datasets: training data standards, testing & evaluation metrics, as well as open-source research and development

All manuscripts will be subject to a double-blind review process. Accepted EarthVision papers will be included in the CVPR 2024 workshop proceedings (published open access on the Computer Vision Foundation website) and submitted to IEEE for publication in IEEE Xplore. Publication in IEEE Xplore will be granted only if the paper meets IEEE publication policies and procedures.

Important Dates

All deadlines are considered end of day anywhere on Earth.

March 8, 2024: Submission deadline
April 5, 2024: Notification to authors
April 12, 2024: Camera-ready deadline
June 17, 2024: Workshop

Organizers

  • Ronny Hänsch, German Aerospace Center, Germany
  • Devis Tuia, EPFL, Switzerland
  • Jan Dirk Wegner, University of Zurich & ETH Zurich, Switzerland
  • Bertrand Le Saux, ESA/ESRIN, Italy
  • Loïc Landrieu, ENPC ParisTech, France
  • Charlotte Pelletier, UBS Vannes, France
  • Hannah Kerner, Arizona State University, USA

Technical Committee

  • Akhil Meethal, ETS Montreal
  • Alexandre Boulch, valeo.ai
  • Amanda Bright, National Geospatial-Intelligence Agency
  • Ankit Jha, Indian Institute of Technology Bombay
  • Begum Demir, TU Berlin
  • Bertrand Le Saux, ESA/Phi-lab
  • Biplab Banerjee, Indian Institute of Technology
  • Caleb Robinson, Microsoft
  • Camille Couprie, Facebook
  • Camille Kurtz, Université Paris Cité
  • Christian Heipke, Leibniz Universität Hannover
  • Christopher Ratto, JHUAPL
  • Claudio Persello, University of Twente
  • Clement Mallet, IGN, France
  • Dalton Lunga, Oak Ridge National Laboratory
  • Damien Robert, IGN
  • Daniel Iordache, VITO, Belgium
  • David Rolnick, McGill University
  • Diego Marcos, Inria
  • Dimitri Gominski, University of Copenhagen
  • Elliot Vincent, École des ponts ParisTech/Inria
  • Emanuele Dalsasso, EPFL
  • Esther Rolf, Google Research
  • Ewelina Rupnik, Univ Gustave Eiffel
  • Ferda Ofli, Qatar Computing Research Institute
  • Flora Weissgerber, ONERA
  • Franz Rottensteiner, Leibniz Universität Hannover, Germany
  • Gabriel Tseng, NASA Harvest
  • Gabriele Moser, Università di Genova
  • Gedeon Muhawenayo, Arizona State University
  • Gemine Vivone, CNR-IMAA
  • Gencer Sumbul, EPFL
  • Georgios Voulgaris, University of Oxford
  • Gülşen Taşkın, İstanbul Teknik Üniversitesi
  • Gustau Camps-Valls, Universitat de València
  • Hamed Alemohammad, Clark University
  • Helmut Mayer, Bundeswehr University Munich
  • Jacob Arndt, Oak Ridge National Laboratory
  • Joëlle Hanna, University of St. Gallen
  • Jonathan Prexl, University of the Bundeswehr Munich
  • Konstantin Klemmer, Microsoft Research
  • Linus Scheibenreif, University of St. Gallen
  • Loic Landrieu, ENPC
  • M. Usman Rafique, Kitware Inc.
  • Manil Maskey, NASA MSFC
  • Marc Rußwurm, École Polytechnique Fédérale de Lausanne
  • Marco Körner, Technical University of Munich
  • Mareike Dorozynski, Institute of Photogrammetry and Geoinformation
  • Martin Weinmann, Karlsruhe Institute of Technology
  • Mathieu Aubry, École des ponts ParisTech
  • Matt Leotta, Kitware
  • Matthieu Molinier, VTT Technical Research Centre of Finland Ltd
  • Michael Schmitt, University of the Bundeswehr Munich
  • Miguel-Ángel Fernández-Torres, Universitat de València
  • Myron Brown, JHU
  • Nicolas Audebert, CNAM
  • Nicolas Gonthier, IGN
  • Nicolas Longepe, ESA
  • Nikolaos Dionelis, ESA
  • Patrick Ebel, ESA
  • Raian Maretto, University of Twente
  • Redouane Lguensat, IPSL
  • Ribana Roscher, Forschungszentrum Jülich
  • Ricardo Torres, Norwegian University of Science and Technology (NTNU)
  • Roberto Interdonato, CIRAD
  • Saurabh Prasad, University of Houston
  • Scott Workman, DZYNE Technologies
  • Seyed Majid Azimi, Ternow AI GmbH
  • Sophie Giffard-Roisin, Univ. Grenoble Alpes
  • Sudipan Saha, Indian Institute of Technology Delhi
  • Sylvain Lobry, Université Paris Cité
  • Tanya Nair, Floodbase
  • Teng Wu, Univ Gustave Eiffel
  • Thibaud Ehret, Centre Borelli
  • Valerio Marsocci, Conservatoire national des arts et métiers
  • Veda Sunkara, Cloud to Street
  • Vincent Lepetit, Université de Bordeaux
  • Wei He, Wuhan University
  • Yifang Ban, KTH Royal Institute of Technology
  • Zhijie Zhang, University of Arizona
  • Zhuangfang Yi, Regrow

Sponsors

Affiliations
Submissions

1. Prepare an anonymous submission of up to 8 pages (excluding references) using the ev2024-template and following the paper guidelines.

2. Submit at cmt3.research.microsoft.com/EarthVision2024.

Policies

A complete paper should be submitted using the EarthVision templates provided above.

Reviewing is double-blind, i.e., authors do not know the names of the reviewers and reviewers do not know the names of the authors. Please read Section 1.7 of the example paper earthvision.pdf for detailed instructions on how to preserve anonymity. Avoid acknowledgments or links that may identify the authors.

Papers are to be submitted using the dedicated submission platform: cmt3.research.microsoft.com/EarthVision2024. The submission deadline is strict.

By submitting a manuscript, the authors guarantee that it has not been previously published or accepted for publication in a substantially similar form. CVPR rules regarding plagiarism, double submission, etc. apply.

Program

9:00 – 9:15

Welcome and Awards Announcement

9:15 – 10:00

Keynote 1 –  Dan Morris, Google AI for Nature and Society

“Geospatial ML work you want to do for conservation”

Abstract

In this talk, I’ll highlight several translational research areas where the work that the EarthVision community is already doing is poised to have an immediate impact on conservation, specifically around EUDR compliance/enforcement, urban forestry, and aerial wildlife surveys, and I’ll try to convince everyone to get involved in translational work. If I don’t give the audience clear calls to action, I owe the whole room fancy coffee.

Bio

Dan Morris is a researcher in the Google AI for Nature and Society program, where he works on AI tools that help conservation scientists spend less time doing boring things and more time doing conservation. This includes tools that accelerate urban forest canopy assessments and image-based wildlife surveys. Prior to joining Google, he directed the AI for Earth program at Microsoft, and prior to that he spent approximately a zillion years in the medical devices group at Microsoft Research, working on signal processing and machine learning tools for wearable devices that supported cardiovascular monitoring, fitness tracking, and gesture interaction. He received his PhD from Stanford, where he worked on haptics and physical simulation for virtual surgery.

10:00 – 10:30

Morning Coffee Break

10:30 – 11:15

Keynote 2 – Marta Yebra, Australian National University

“Remote sensing technologies to support wildfire management and secure our future”

Abstract

As climate change worsens fire weather and increases the number of areas burned by wildfires worldwide, high-tech solutions are critical to changing how wildfires are battled. Remote sensing technologies provide access to dynamic and real-time information that helps fire managers to better understand the likelihood of a catastrophic bushfire, the whereabouts and intensity of active fires and accurately assess wildfire scale and environmental impacts. This remote sensing-derived information is critical to plan, prepare and respond to future wildfires. In my talk, I will provide an overview of the technological trends including sensor integration, planned satellite earth observing missions and state-of-the-art modelling approaches.

Bio

Dr. Yebra is a Professor in Environmental Engineering at the Australian National University (ANU) and the Director of the Bushfire Research Centre of Excellence, which undertakes advanced interdisciplinary research to develop an innovative system that aims to detect bushfires as soon as they start and suppress them within minutes. Her research focuses on developing applications of remote sensing for the management of fire risk and impact. Yebra led the development of the Australian Flammability Monitoring System, which reports landscape flammability across Australia in near real time. She is now designing Australia's first satellite mission to help forecast vulnerable areas where bushfires are at the highest risk of starting or burning out of control. She has served on several government advisory bodies, including the Australian Space Agency's Earth Observation Technical Advisory Group (2019-2021) and the Australian Capital Territory Multi Hazards Council (since 2021). Dr. Yebra has received several awards for her contributions to bushfire management, including the Australian Space Awards Academic of the Year (2023), the Bushfire and Natural Hazards Cooperative Research Centre's Outstanding Achievement in Research Utilization award (2019), and the Inaugural Max Day Fellowship of the Australian Academy of Science (2017).

11:15 – 11:45

Poster spotlights I

  • “SUNDIAL: 3D Satellite Understanding through Direct, Ambient, and Complex Lighting Decomposition”
    Nikhil Behari (Harvard)*; Akshat Dave (MIT Media Lab); Kushagra Tiwary (MIT); William Y Yang (Camera Culture Group, MIT Media Lab); Ramesh Raskar (Massachusetts Institute of Technology)
  • “Radar Fields: An Extension of Radiance Fields to SAR”
    Thibaud Ehret (Centre Borelli, ENS Paris-Saclay)*; Roger Marí (ENS Paris-Saclay); Dawa Derksen (CNES); Nicolas Gasnier (CNES); Gabriele Facciolo (ENS Paris-Saclay)
  • “Let me show you how it’s done – Cross-modal knowledge distillation as pretext task for semantic segmentation”
    Rudhishna Nair (TU Berlin); Ronny Haensch (DLR)*
  • “UrbanSARFloods: Sentinel-1 SLC-Based Benchmark Dataset for Urban and Open-Area Flood Mapping”
    Jie Zhao (Technical University of Munich)*; Zhitong Xiong (TUM); Xiaoxiang Zhu (Technical University of Munich, Germany)
  • “Cross-sensor super-resolution of irregularly sampled Sentinel-2 time series”
    Aimi Okabayashi (Université Bretagne Sud); Nicolas Audebert (IGN); Simon Donike (Université Bretagne Sud); Charlotte Pelletier (Université de Bretagne du Sud)*
  • “Contrastive Pretraining for Visual Concept Explanations of Socioeconomic Outcomes”
    Ivica Obadic (Technical University of Munich)*; Alex Levering (VU Amsterdam); Lars Pennig (TU Munich); Dario Augusto Borges Oliveira (Technische Universität München); Diego Marcos (Inria); Xiaoxiang Zhu (Technical University of Munich, Germany)
  • “(Street) Lights Will Guide You: Georeferencing Nighttime Astronaut Photography of Earth”
    Alex H Stoken (Jacobs/NASA JSC)*; Peter Ilhardt (Jacobs/NASA JSC); Mark D Lambert (Jacobs); Kenton Fisher (NASA)
  • “Exploring Robust Features for Few-Shot Object Detection in Satellite Imagery”
    Xavier Bou Hernandez (Centre Borelli, ENS Paris-Saclay)*; Gabriele Facciolo (ENS Paris-Saclay); Rafael Grompone von Gioi (Centre Borelli, ENS Paris-Saclay); Jean-Michel Morel (City University of Hong Kong); Thibaud Ehret (Centre Borelli, ENS Paris-Saclay)
  • “Efficient local correlation volume for unsupervised optical flow estimation on small moving objects in large satellite images”
    Sarra Khairi (Inria); Etienne Meunier (Inria, Centre Rennes); Renaud Fraisse (Airbus Defence & Space); Patrick Bouthemy (INRIA)*

11:45 – 13:15

Lunch Break

13:15 – 13:45

Poster spotlights II

  • “Implicit Assimilation of Sparse In Situ Data for Dense & Global Storm Surge Forecasting”
    Patrick Ebel (ESA)*; Brandon Victor (La Trobe University); Peter Naylor (ESA); Gabriele Meoni (TU Delft); Federico Serva (Consiglio Nazionale delle Ricerche); Rochelle Schneider (European Space Agency)
  • “GeoSynth: Contextually-Aware High-Resolution Satellite Image Synthesis”
    Srikumar Sastry (Washington University in St Louis)*; Subash Khanal (Washington University in Saint Louis); Aayush Dhakal (Washington University in St Louis); Nathan Jacobs (Washington University in St. Louis)
  • “Detecting Out-Of-Distribution Earth Observation Images with Diffusion Models”
    Georges Le Bellier (Conservatoire National des Arts et Metiers)*; Nicolas Audebert (IGN)
  • “SyntStereo2Real: Edge-Aware GAN for Remote Sensing Image-to-Image Translation while Maintaining Stereo Constraint”
    Vasudha Venkatesan (Albert Ludwigs University of Freiburg)*; Daniel Panangian (DLR); Mario Fuentes Reyes (German Aerospace Center); Ksenia Bittner (German Aerospace Center)
  • “Unsupervised Domain Adaptation Architecture Search with Self-Training for Land Cover Mapping”
    Clifford Broni-Bediako (RIKEN)*; Junshi Xia (RIKEN); Naoto Yokoya (The University of Tokyo)
  • “Good at captioning, bad at counting: Benchmarking GPT-4V on Earth observation data”
    Chenhui Zhang (Institute for Data, Systems, and Society, Massachusetts Institute of Technology)*; Sherrie Wang (MIT)
  • “Charting New Territories: Exploring the Geographic and Geospatial Capabilities of Multimodal LLMs”
    Jonathan Roberts (University of Cambridge)*; Timo Lüddecke (University of Göttingen); Rehan Sheikh (University of Cambridge); Kai Han (The University of Hong Kong); Samuel Albanie (University of Cambridge)
  • “GeoLLM-Engine: A Realistic Environment for Building Geospatial Copilots”
    Simranjit Singh (Microsoft)*; Michael Fore (Microsoft); Dimitrios Stamoulis (Microsoft)

13:45 – 14:15

Best paper presentations

  • “Deep Generative Data Assimilation in Multimodal Setting”
    Yongquan Qu (Columbia University)*; Juan Nathaniel (Columbia University); Shuolin Li (Columbia University); Pierre Gentine (Columbia University)
  • “Sat2Cap: Mapping Fine-Grained Textual Descriptions from Satellite Images”
    Aayush Dhakal (Washington University in St Louis)*; Adeel Ahmad (Taylor Geospatial Institute); Subash Khanal (Washington University in Saint Louis); Srikumar Sastry (Washington University in St Louis); Hannah R Kerner (Arizona State University); Nathan Jacobs (Washington University in St. Louis)

14:15 – 15:15

Poster session

15:15 – 15:45

Afternoon coffee break

15:45 – 17:15

Panel discussion: Foundation models and remote sensing

Moderator – Hannah Kerner, Arizona State University

Speakers – Esther Rolf (Harvard University / CU Boulder), Ritwik Gupta (Berkeley), Begüm Demir (TU Berlin), Gabriel Tseng (NASA Harvest / McGill / Mila), Boran Han (Amazon Research)

Abstract
EV24 will feature a panel discussion on “Unlocking Unlabeled Data: Approaches, Challenges, and Future Work in Remote Sensing Foundation Models”

This panel at the EarthVision workshop at CVPR 2024 will discuss various approaches to developing self-supervised “foundation” models for remote sensing data. The discussion will highlight diverse motivations, approaches, and considerations to model design as well as the challenges that remain for future work in this area.

17:15 – 17:30

Closing remarks

CVPR 2024

CVPR is the premier annual computer vision event, comprising the main conference and several co-located workshops and short courses. With its high quality and low cost, it provides exceptional value for students, academics, and industry researchers.

Learn More: CVPR 2024