EARTHVISION 2023
June 18th, Vancouver, Canada
in conjunction with the Computer Vision and Pattern Recognition (CVPR) 2023 Conference
- Aims and Scope
- Important Dates
- People
- Challenge
- Sponsors
- Submission
- Program
- CVPR 2023
- Previous Workshops
Aims and Scope
Earth Observation (EO) and remote sensing are ever-growing fields of investigation where computer vision, machine learning, and signal/image processing meet. The general objective of the domain is to provide large-scale and consistent information about processes occurring at the surface of the Earth by exploiting data collected by airborne and spaceborne sensors. Earth Observation covers a broad range of tasks, from detection to registration, data mining, and multi-sensor, multi-resolution, multi-temporal, and multi-modality fusion and regression, to name just a few. It is motivated by numerous applications such as location-based services, online mapping services, large-scale surveillance, 3D urban modeling, navigation systems, natural hazard forecast and response, climate change monitoring, virtual habitat modeling, food security, etc. The sheer amount of data calls for highly automated scene interpretation workflows.
Earth Observation, and in particular the analysis of spaceborne data, directly connects to 34 of the 40 indicators (29 targets and 11 goals) of the Sustainable Development Goals defined by the United Nations ( https://sdgs.un.org/goals ). EarthVision's aim of advancing the state of the art in machine learning-based analysis of remote sensing data is thus highly relevant. It also connects to other pressing societal challenges such as the monitoring of forest fires and other natural hazards, urban growth, deforestation, and climate change.
A non-exhaustive list of topics of interest includes the following:
Super-resolution in the spectral and spatial domain
Hyperspectral and multispectral image processing
Reconstruction and segmentation of optical and LiDAR 3D point clouds
Feature extraction and learning from spatio-temporal data
Analysis of UAV / aerial and satellite images and videos
Deep learning tailored for large-scale Earth Observation
Domain adaptation, concept drift, and the detection of out-of-distribution data
Evaluating models using unlabeled data
Self-, weakly, and unsupervised approaches for learning with spatial data
Human-in-the-loop and active learning
Multi-resolution, multi-temporal, multi-sensor, multi-modal processing
Fusion of machine learning and physical models
Explainable and interpretable machine learning in Earth Observation applications
Applications for climate change, sustainable development goals, and geoscience
Public benchmark datasets: training data standards, testing & evaluation metrics, as well as open source research and development.
All manuscripts will be subject to a double-blind review process. Accepted EarthVision papers will be included in the CVPR 2023 workshop proceedings (published open access on the Computer Vision Foundation website) and submitted to IEEE for publication in IEEE Xplore. Publication in IEEE Xplore will be granted only if the paper meets IEEE publication policies and procedures.
Important Dates
- March 9, 2023: Submission deadline
- March 30, 2023: Notification to authors
- April 6, 2023: Camera-ready deadline
- June 18, 2023: Workshop
Organizers
- Ronny Hänsch, German Aerospace Center, Germany
- Devis Tuia, EPFL, Switzerland
- Jan Dirk Wegner, University of Zurich & ETH Zurich, Switzerland
- Bertrand Le Saux, ESA/ESRIN, Italy
- Nathan Jacobs, Washington University in St. Louis, USA
- Loïc Landrieu, ENPC ParisTech, France
- Charlotte Pelletier, UBS Vannes, France
- Hannah Kerner, Arizona State University, USA
- Beth Tellman, University of Arizona, USA
Technical Committee
- Amanda Bright, National Geospatial-Intelligence Agency
- Armin Hadzic, DZYNE Technologies
- Caleb Robinson, Microsoft
- Camille Kurtz, Université Paris Cité
- Christian Heipke, Leibniz Universität Hannover
- Claudio Persello, University of Twente
- Clement Mallet, IGN, France
- Dalton Lunga, Oak Ridge National Laboratory
- Damien Robert, IGN
- Daniel Iordache, VITO, Belgium
- Dimitri Gominski, University of Copenhagen
- Elliot Vincent, École des ponts ParisTech / Inria
- Ewelina Rupnik, IGN France
- Flora Weissgerber, ONERA
- Franz Rottensteiner, Leibniz Universität Hannover, Germany
- Gabriele Moser, Università di Genova
- Gaetan Bahl, NXP
- Gellert Mattyus, Continental ADAS
- Gordon Christie, BlackSky, USA
- Gülşen Taşkın, İstanbul Teknik Üniversitesi
- Gustau Camps-Valls, Universitat de València
- Hamed Alemohammad, Clark University
- Helmut Mayer, Bundeswehr University Munich
- Hoàng-Ân Lê, IRISA
- Jacob Arndt, Oak Ridge National Laboratory
- Javiera Castillo Navarro, EPFL
- Jing Huang, Facebook
- Jonathan Giezendanner, University of Arizona
- Jonathan Sullivan, University of Arizona
- Krishna Regmi, University of Oklahoma
- Kuldeep Kurte, Oak Ridge National Laboratory
- Luc Baudoux, National French mapping agency (IGN)
- M. Usman Rafique, Kitware Inc.
- Manil Maskey, NASA MSFC
- Marc Rußwurm, École Polytechnique Fédérale de Lausanne
- Martin Weinmann, Karlsruhe Institute of Technology
- Martin R. Oswald, ETH Zurich
- Mathieu Bredif, IGN
- Matt Leotta, Kitware
- Matthieu Molinier, VTT Technical Research Centre of Finland Ltd
- Michael Mommert, University of St. Gallen
- Michael Schmitt, Bundeswehr University Munich
- Miguel-Ángel Fernández-Torres, Universitat de València
- Minh-Tan Pham, IRISA
- Myron Brown, JHU
- Nicolas Audebert, CNAM
- Philipe Ambrozio Dias, Oak Ridge National Laboratory
- Redouane Lguensat, IPSL
- Ribana Roscher, University of Bonn
- Ricardo da Silva Torres, Wageningen University and Research
- Roberto Interdonato, CIRAD
- Rodrigo Caye Daudt, ETH Zurich
- Rohit Mukherjee, The University of Arizona
- Roman Loiseau, École des ponts ParisTech
- Ryan Mukherjee, BlackSky
- Sara Beery, Caltech
- Saurabh Prasad, University of Houston
- Scott Workman, DZYNE Technologies
- Subit Chakrabarti, Floodbase
- Sudipan Saha, Indian Institute of Technology Delhi
- Sylvain Lobry, Université Paris Cité
- Tanya Nair, Floodbase
- Valérie Gouet-Brunet, LASTIG/IGN-UGE
- Veda Sunkara, Cloud to Street
- Vincent Lepetit, Université de Bordeaux
- Vivien Sainte Fare Garnot, IGN
- Yakoub Bazi, King Saud University – Riyadh KSA
- Yifang Ban, KTH Royal Institute of Technology
- Zhijie Zhang, University of Arizona
- Zhuangfang Yi, Regrow
Challenge
Reliable, large-scale biomass estimation is a major challenge for the African continent. Solved accurately and cost-efficiently, it could support development across the continent by enabling use cases such as reforestation, sustainable agriculture, and green finance. For this reason, the organizers of the African Biomass Challenge (GIZ, BNETD, data354, University of Zurich, ETH Zurich, and the University of Queensland) have launched one of the largest African AI and data science competitions, whose ultimate goal is to accurately estimate aboveground biomass anywhere on the continent using remote sensing data. For this first edition of the challenge, they have assembled a dataset consisting of ESA Sentinel-2 images, NASA GEDI data, and ground-truth biomass collected in different cocoa plantations in Côte d’Ivoire. All AI practitioners and enthusiasts are invited to take part in the competition organized on Zindi.
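As a rough illustration of the task (not part of the official challenge kit), aboveground biomass estimation from multispectral imagery is commonly framed as supervised regression from per-pixel spectral features to a biomass value. The sketch below fits a random-forest regressor on synthetic stand-in data; the band count, value ranges, and target relationship are all illustrative assumptions, not properties of the actual challenge dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins: 500 pixels with 10 spectral bands (loosely mimicking
# Sentinel-2 surface reflectances) and a simulated biomass target in t/ha.
X = rng.uniform(0.0, 0.4, size=(500, 10))
y = 50.0 + 300.0 * X[:, 7] - 100.0 * X[:, 3] + rng.normal(0.0, 5.0, size=500)

# Fit on the first 400 samples, hold out the last 100 for evaluation.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:400], y[:400])

pred = model.predict(X[400:])
rmse = float(np.sqrt(np.mean((pred - y[400:]) ** 2)))
print(f"hold-out RMSE: {rmse:.1f} t/ha")
```

In practice, challenge entries would replace the synthetic arrays with features derived from the provided Sentinel-2 imagery and use the GEDI-derived and field-collected biomass as the regression target.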
Sponsors
Gold
Silver
Bronze
Affiliations
Submissions
1. Prepare the anonymous, 8-page (references excluded) submission using the ev2023-template and following the paper guidelines.
2. Submit at cmt3.research.microsoft.com/EarthVision2023.
Policies
A complete paper should be submitted using the EarthVision templates provided above.
Reviewing is double blind, i.e. authors do not know the names of the reviewers and reviewers do not know the names of the authors. Please read Section 1.7 of the example paper earthvision.pdf for detailed instructions on how to preserve anonymity. Avoid providing acknowledgments or links that may identify the authors.
Papers are to be submitted using the dedicated submission platform: cmt3.research.microsoft.com/EarthVision2023. The submission deadline is strict.
By submitting a manuscript, the authors guarantee that it has not been previously published or accepted for publication in a substantially similar form. CVPR rules regarding plagiarism, double submission, etc. apply.
Program
In addition to presentations of the accepted papers and the featured ABC Challenge, we are excited to host the following keynote speakers at EarthVision 2023:
9:00 – 9:15 | Welcome and Awards Announcement |
9:15 – 10:00 | Keynote 1 – “Geospatial Distribution Shifts in Ecology: Mapping the Urban Forest” (Sara Beery)
Abstract: Generalization to novel domains is a fundamental challenge for computer vision. Near-perfect accuracy on benchmarks is common, but these models do not work as expected when deployed outside of the training distribution. To build computer vision systems that solve real-world problems at global scale, we need benchmarks that fully capture real-world complexity, including geographic domain shift, long-tailed distributions, and data noise. We propose urban forest monitoring as an ideal testbed for studying and improving upon these computer vision challenges, while working towards filling a crucial environmental and societal need. The Auto Arborist dataset joins public tree censuses from 23 cities with a large collection of street-level and aerial imagery and contains over 2.5M trees from more than 300 genera, enabling the large-scale analysis of generalization with respect to geographic distribution shifts and across data modalities, which is vital for deploying such a system at scale.
Bio: Sara Beery will join MIT as an assistant professor in their Faculty of Artificial Intelligence and Decision-Making in September 2023. She is currently a visiting researcher at Google, working on large-scale urban forest monitoring as part of the Auto Arborist project. Beery received her PhD in Computing and Mathematical Sciences at Caltech in 2022, where she was advised by Pietro Perona and awarded the Amori Doctoral Prize for her thesis. Her research focuses on building computer vision methods that enable global-scale environmental and biodiversity monitoring across data modalities, tackling real-world challenges including geospatial and temporal domain shift, learning from imperfect data, fine-grained categories, and long-tailed distributions. She partners with industry, nongovernmental organizations, and government agencies to deploy her methods in the wild worldwide. She works toward increasing the diversity and accessibility of academic research in artificial intelligence through interdisciplinary capacity building and education; she founded the AI for Conservation Slack community, serves as the Biodiversity Community Lead for Climate Change AI, and founded and directs the Summer Workshop on Computer Vision Methods for Ecology. |
10:00 – 10:30 | Coffee Break |
10:30 – 11:15 | Keynote 2 – “Using time series satellite data to map and monitor Canada’s forests” (Michael Wulder)
Abstract: Satellite remote sensing has developed rapidly in recent years. Image data are increasingly available from multiple, compatible sensors and are increasingly analysis-ready, and algorithms are increasingly amenable to diverse data types and volumes. The forest ecosystems of Canada occupy over 650 million hectares, with thematic interests in land cover, land change, and forest structure to inform sustainable forest management practices, national and international reporting, and science questions. Drawing on experiences from the mapping and multidecadal monitoring of Canada’s forest ecosystems, this talk aims to provide context on our ongoing work (the information needs offering motivation), some conceptual opportunities for refined data processing and manipulation, and some of the actual methods and open data products generated. We see the methods developed and used as provisional and open to update and revision over time as new data and analysis approaches, such as those from the EarthVision community, emerge.
Bio: Dr. Michael Wulder is a Senior Research Scientist with the Canadian Forest Service of Natural Resources Canada. He uses remotely sensed and spatial data to study and monitor forests across Canada, over a range of scales, contributing to national and international programs. Mike received his Ph.D. in 1998 from the University of Waterloo and has worked at the Canadian Forest Service, Pacific Forestry Centre, in Victoria, British Columbia ever since (25 years!). His research spans scales from plot to national, with data sources ranging from LiDAR to a range of optical satellites, leading to novel, open-access map products of Canada’s forest land cover, change, and structure. Dr. Wulder’s major research publications include the books Remote Sensing of Forest Environments: Concepts and Case Studies (2003) and Forest Disturbance and Spatial Pattern: Remote Sensing and GIS Approaches (2006), along with over 400 articles in peer-reviewed journals that have been cited more than 40,000 times, with an h-index of 100 (Google Scholar). He is an adjunct professor in the Department of Forest Resources Management of the University of British Columbia, a member of the USGS/NASA Landsat Science Team (since 2006), played a lead role in initiating the SilviLaser series of international conferences focused on laser remote sensing of forest environments, and co-chaired the 2002 and 2012 editions. |
11:15 – 11:45 | Posters spotlights I
- “Handheld Burst Super-Resolution Meets Multi-Exposure Satellite Imagery” – Lafenetre, Nguyen, Facciolo, Eboli (CVPR Workshops 2023, pp. 2056-2064)
- “Deep Unfolding for Hypersharpening Using a High-Frequency Injection Module” – Mifdal, Tomás-Cruz, Sebastianelli, Coll, Duran (CVPR Workshops 2023, pp. 2106-2115)
- “DeepSim-Nets: Deep Similarity Networks for Stereo Image Matching” – Chebbi, Rupnik, Pierrot-Deseilligny, Lopes (CVPR Workshops 2023, pp. 2097-2105)
- “Multi-Date Earth Observation NeRF: The Detail Is in the Shadows” – Marí, Facciolo, Ehret (CVPR Workshops 2023, pp. 2035-2045)
- “Inferring the Past: A Combined CNN-LSTM Deep Learning Framework To Fuse Satellites for Historical Inundation Mapping” – Giezendanner, Mukherjee, Purri, Thomas, Mauerman, Islam, Tellman (CVPR Workshops 2023, pp. 2155-2165)
- “Seasonal Domain Shift in the Global South: Dataset and Deep Features Analysis” – Voulgaris, Philippides, Dolley, Reffin, Marshall, Quadrianto (CVPR Workshops 2023, pp. 2116-2124)
- “GeoMultiTaskNet: Remote Sensing Unsupervised Domain Adaptation Using Geographical Coordinates” – Marsocci, Gonthier, Garioud, Scardapane, Mallet (CVPR Workshops 2023, pp. 2075-2085) |
11:45 – 13:15 | Lunch Break |
13:15 – 13:45 | Posters spotlights II
- “UnCRtainTS: Uncertainty Quantification for Cloud Removal in Optical Satellite Time Series” – Ebel, Garnot, Schmitt, Wegner, Zhu (CVPR Workshops 2023, pp. 2086-2096)
- “Cascaded Zoom-In Detector for High Resolution Aerial Images” – Meethal, Granger, Pedersoli (CVPR Workshops 2023, pp. 2046-2055)
- “Masked Vision Transformers for Hyperspectral Image Classification” – Scheibenreif, Mommert, Borth (CVPR Workshops 2023, pp. 2166-2176)
- “Sparse Multimodal Vision Transformer for Weakly Supervised Semantic Segmentation” – Hanna, Mommert, Borth (CVPR Workshops 2023, pp. 2145-2154)
- “Comprehensive Quality Assessment of Optical Satellite Imagery Using Weakly Supervised Video Learning” – Pasquarella, Brown, Czerwinski, Rucklidge (CVPR Workshops 2023, pp. 2125-2135)
- “Multi-Modal Multi-Objective Contrastive Learning for Sentinel-1/2 Imagery” – Prexl, Schmitt (CVPR Workshops 2023, pp. 2136-2144)
- “APPLeNet: Visual Attention Parameterized Prompt Learning for Few-Shot Remote Sensing Image Generalization Using CLIP” – Singha, Jha, Solanki, Bose, Banerjee (CVPR Workshops 2023, pp. 2024-2034) |
13:45 – 14:15 | Best paper presentations |
14:15 – 15:15 | Poster session |
15:15 – 15:45 | Afternoon coffee break |
15:45 – 16:30 | Keynote 3 – “Lifelong visual representation learning” |
16:30 – 17:30 | Presentation of the African Biomass Challenge |
17:30 | Closing of the workshop |
CVPR 2023
CVPR is the premier annual computer vision event, comprising the main conference and several co-located workshops and short courses. With its high quality and low cost, it provides exceptional value for students, academics, and industry researchers.