
EARTHVISION 2022

June 19, 2022, New Orleans, Louisiana – hybrid/virtual

Aims and Scope

Earth Observation (EO)/Remote Sensing is an ever-growing field of investigation where computer vision, machine learning, and signal/image processing meet. The general objective of the domain is to provide large-scale, homogeneous information about processes occurring at the surface of the Earth by exploiting data collected by airborne and spaceborne sensors. Earth Observation covers a broad range of tasks, ranging from detection and registration to data mining, regression, and multi-sensor, multi-resolution, multi-temporal, and multi-modality fusion, to name just a few. It is motivated by numerous applications such as location-based services, online mapping services, large-scale surveillance, 3D urban modelling, navigation systems, natural hazard forecast and response, climate change monitoring, virtual habitat modelling, etc. The sheer amount of data calls for highly automated scene interpretation workflows. Earth Observation, and in particular the analysis of spaceborne data, directly connects to 34 indicators out of 40 (29 targets and 11 goals) of the Sustainable Development Goals defined by the United Nations. The aim of EarthVision, to advance the state of the art in machine learning-based analysis of remote sensing data, is thus of high relevance. It also connects to other immediate societal challenges such as the monitoring of forest fires and other natural hazards, urban growth, deforestation, and climate change.

Submissions are invited from all areas of computer vision and image analysis relevant for, or applied to, environmental remote sensing. Topics of interest include, but are not limited to:

  • Super-resolution in the spectral and spatial domain
  • Hyperspectral and multispectral image processing
  • 3D reconstruction from aerial optical and LiDAR acquisitions
  • Feature extraction and learning from spatio-temporal data
  • Semantic classification of UAV / aerial and satellite images and videos
  • Deep learning tailored for large-scale Earth observation
  • Domain adaptation, concept drift, and the detection of out-of-distribution data
  • Self-, weakly, and unsupervised approaches for learning with spatial data
  • Human-in-the-loop and active learning
  • Multi-resolution, multi-temporal, multi-sensor, multi-modal processing
  • Fusion of machine learning and physical models
  • Explainable and interpretable machine learning in Earth Observation applications
  • Applications for climate change, sustainable development goals, and geoscience
  • Public benchmark datasets: training data standards, testing and evaluation metrics, and open-source research and development

All manuscripts will be subject to a double-blind review process. Accepted EarthVision papers will be included in the CVPR 2022 workshop proceedings (published open access on the Computer Vision Foundation website) and submitted to IEEE for publication in IEEE Xplore. Publication in IEEE Xplore will be granted only if the paper meets IEEE publication policies and procedures.

Important Dates

March 16, 2022 – Full paper submission (11:59 pm)
April 11, 2022 – Decision notification to authors
April 19, 2022 – Camera-ready paper
June 19, 2022 – Workshop (full day)

Organizers

  • Ronny Hänsch, German Aerospace Center, Germany
  • Devis Tuia, EPFL, Switzerland
  • Jan Dirk Wegner, University of Zurich & ETH Zurich, Switzerland
  • Bertrand Le Saux, ESA/ESRIN, Italy
  • Naoto Yokoya, Univ. of Tokyo & RIKEN, Japan
  • Nathan Jacobs, Univ. of Kentucky, USA
  • Fabio Pacifici, Maxar, USA
  • Mariko Burgin, NASA JPL, USA
  • Loïc Landrieu, IGN, France
  • Charlotte Pelletier, UBS Vannes, France

Technical Committee

  • Philipe Ambrozio Dias, Oak Ridge National Laboratory
  • Jacob Arndt, Oak Ridge National Laboratory
  • Nicolas Audebert, CNAM
  • Seyed Majid Azimi, Ternow AI GmbH
  • Gaetan Bahl, INRIA
  • Luc Baudoux, French national mapping agency (IGN)
  • Alexandre Boulch, Valeo.ai
  • Amanda Bright, National Geospatial-Intelligence Agency
  • Myron Brown, JHU
  • Javiera Castillo Navarro, ONERA
  • Rodrigo Caye Daudt, ETH Zurich
  • Gordon Christie, JHU
  • Ricardo da Silva Torres, Wageningen University and Research
  • Charles Della Porta, The Johns Hopkins University Applied Physics Lab
  • Begum Demir, TU Berlin
  • Miguel-Ángel Fernández-Torres, Universitat de València
  • Friedrich Fraundorfer, Graz University of Technology
  • Sophie Giffard-Roisin, Univ. Grenoble Alpes
  • Connor Greenwell, University of Kentucky
  • Armin Hadzic, Johns Hopkins University Applied Physics Laboratory
  • Wei He, Wuhan University
  • Jing Huang, Facebook
  • Daniel Iordache, VITO, Belgium
  • Nathan Jacobs, University of Kentucky
  • Benjamin Kellenberger, Ecole Polytechnique Fédérale de Lausanne (EPFL)
  • Marco Körner, Technical University of Munich
  • Kuldeep Kurte, Oak Ridge National Laboratory
  • Loïc Landrieu, IGN
  • Hoàng-Ân Lê, University of South Brittany
  • Matt Leotta, Kitware
  • Sylvain Lobry, Université Paris Cité
  • Romain Loiseau, École des ponts ParisTech
  • Dalton Lunga, Oak Ridge National Laboratory
  • Nikolay Malkin, Mila
  • Murari Mandal, National University of Singapore
  • Diego Marcos, Wageningen University
  • Raian Maretto, University of Twente
  • Manil Maskey, NASA MSFC
  • Helmut Mayer, Bundeswehr University Munich
  • Lichao Mou, DLR&TUM
  • Ryan Mukherjee, BlackSky
  • Ferda Ofli, Qatar Computing Research Institute HBKU
  • Charlotte Pelletier, Université de Bretagne du Sud
  • Claudio Persello, University of Twente
  • Minh-Tan Pham, IRISA
  • Rongjun Qin, The Ohio State University
  • M. Usman Rafique, Kitware Inc.
  • Damien Robert, IGN
  • Caleb Robinson, Microsoft
  • Ribana Roscher, University of Bonn
  • Franz Rottensteiner, Leibniz Universität Hannover, Germany
  • Ewelina Rupnik, IGN France
  • Stefania Russo, ETH Zurich
  • Marc Rußwurm, École Polytechnique Fédérale de Lausanne
  • Rose Rustowicz, Descartes Labs
  • Sudipan Saha, Technical University of Munich
  • Vivien Sainte Fare Garnot, IGN
  • Michael Schmitt, Bundeswehr University Munich
  • Jake Shermeyer, Capella Space
  • Gülşen Taşkın, İstanbul Teknik Üniversitesi
  • Beth Tellman, Cloud to Street
  • Tatsumi Uezato, Hitachi Ltd
  • Ujjwal Verma, MIT MAHE Manipal
  • Martin Weinmann, Karlsruhe Institute of Technology
  • Scott Workman, DZYNE Technologies
  • Junshi Xia, RIKEN
  • Yonghao Xu, Institute of Advanced Research in Artificial Intelligence (IARAI)
  • Quanming Yao, Tsinghua University

Challenge

We are pleased to announce that EarthVision 2022 will feature the upcoming SpaceNet 8 Challenge. Details will be announced soon. Stay tuned!

Sponsors

Affiliations


Submissions

1. Prepare the anonymous, 8-page submission (references excluded) using the ev2022-template and following the paper guidelines.
2. Submit on cmt3.research.microsoft.com/EARTHVISION2022.

Policies

A complete paper should be submitted using the EarthVision templates provided above.

Reviewing is double blind, i.e. authors do not know the names of the reviewers and reviewers do not know the names of the authors. Please read Section 1.7 of the example paper earthvision.pdf for detailed instructions on how to preserve anonymity. Avoid providing acknowledgments or links that may identify the authors.

Papers are to be submitted using the dedicated submission platform: cmt3.research.microsoft.com/EARTHVISION2022. The submission deadline is strict.

By submitting a manuscript, the authors guarantee that it has not been previously published or accepted for publication in a substantially similar form. CVPR rules regarding plagiarism, double submission, etc. apply.

Program

CVPR’22 and therefore also EarthVision’22 are hybrid events, i.e. a mix of in-person and virtual presentations. The times below are given in New Orleans local time (CDT).

8:30 – 8:45

Welcome

8:45 – 9:15  

Keynote 1 – John Quinn, Google
“Building footprints for the African continent” (virtual)

Bio

John Quinn is a Senior Research Software Engineer at Google Research in Ghana, and a director of the non-profit Sunbird AI in Uganda. He has worked on African applications of machine learning and data technology since 2007, having previously been a Senior Lecturer in Computer Science at Makerere University, Uganda, and the Africa technical lead for United Nations Global Pulse, an initiative using data science and machine learning to support UN development and humanitarian work.

9:15 – 10:00

Oral Session 1 – Semantic Analysis of Geospatial Data

“Cross-dataset Learning for Generalizable Land Use Scene Classification”, Dimitri Gominski (University of Copenhagen); Valérie Gouet-Brunet (LASTIG/IGN-UGE); Liming Chen (Ecole Centrale de Lyon); (in person)
(paper)

“Single-Shot End-to-end Road Graph Extraction”, Gaetan Bahl (INRIA); Mehdi Bahri (Imperial College London); Florent Lafarge (INRIA); (in person)
(paper)

“Self-supervised Vision Transformers for Land-cover Segmentation and Classification”, Linus M. Scheibenreif (University of St. Gallen); Joëlle Hanna (University of St. Gallen); Michael Mommert (University of St. Gallen); Damian Borth (University of St. Gallen); (in person)
(paper)

“Fast building segmentation from satellite imagery and few local labels”, Caleb Robinson (Microsoft AI for Good Research Lab); Anthony Ortiz (Microsoft); Hogeun Park (World Bank); Nancy Lozano (World Bank); Jon Kher Kaw (World Bank); Tina Sederholm (Microsoft AI for Good Research Lab); Rahul  Dodhia (Microsoft); Juan M Lavista Ferres (Microsoft); (virtual)
(paper)

10:00 – 10:30

Coffee Break

10:30 – 11:00

Keynote 2 – Rich Caruana, Microsoft Research (virtual)

“Can Machines Learn to Predict the Weather? Using Deep Learning Instead of Physical Simulation for Weather Forecasting”

Abstract

Most weather forecasting is done via computer simulation of massive physical models on supercomputers. Instead of using physical simulation, we train deep neural nets to predict the weather from historical data. Our model uses ensembles of deep convolutional neural nets trained with multitask learning to simultaneously predict the weather at all locations on the globe. Although the accuracy of the model is not yet as good as that of ECMWF, which is currently the best model for 1-14 day forecasts, it lags behind ECMWF by only a few days at two weeks, and is as accurate (or possibly more accurate) for sub-seasonal (2-6 week) forecasts. Remarkably, while physical models such as ECMWF required decades of development and advances in physics and fluid dynamics, the CNN model can be trained on a single GPU in less than a week. Moreover, the deep CNN makes predictions about 100 times faster on a single GPU than the massive supercomputers required for physical weather forecasting, making it possible to generate large ensemble forecasts.
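To make the idea concrete, here is a minimal, purely illustrative PyTorch sketch of the kind of fully convolutional, multitask setup the abstract describes: a shared backbone over a global lat/lon grid with one small prediction head per weather variable. All names, channel counts, and grid sizes are hypothetical and do not reflect the speaker's actual model.

```python
import torch
import torch.nn as nn


class GlobalWeatherCNN(nn.Module):
    """Toy multitask CNN: predicts several weather variables at every grid cell."""

    def __init__(self, in_channels=8, hidden=64, num_targets=3):
        super().__init__()
        # Shared fully convolutional backbone over the lat/lon grid.
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # One 1x1-convolution head per target variable (e.g. temperature,
        # pressure, precipitation), so each task yields a full global map.
        self.heads = nn.ModuleList(
            [nn.Conv2d(hidden, 1, kernel_size=1) for _ in range(num_targets)]
        )

    def forward(self, x):  # x: (batch, in_channels, lat, lon)
        features = self.backbone(x)
        return torch.cat([head(features) for head in self.heads], dim=1)


# Toy usage: 8 historical input fields on a coarse 181 x 360 global grid.
model = GlobalWeatherCNN()
history = torch.randn(2, 8, 181, 360)
forecast = model(history)  # shape: (2, 3, 181, 360), one channel per task
```

In practice, training several such networks with different initializations and summing a per-variable loss over the output channels would give the ensemble, multitask behaviour described in the talk.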

Bio

Rich Caruana is a senior principal researcher at Microsoft Research. Before joining Microsoft, Rich was on the faculty in CS at Cornell University, at UCLA’s Medical School, and at CMU’s Center for Learning and Discovery. Rich’s Ph.D. is from Carnegie Mellon University, where he worked with Tom Mitchell and Herb Simon, and his thesis on Multi-Task Learning helped create interest in a new subfield of machine learning called Transfer Learning. Rich received an NSF CAREER Award in 2004, best paper awards in 2005 (with Alex Niculescu-Mizil), 2007 (with Daria Sorokina), and 2014 (with Todd Kulesza, Saleema Amershi, Danyel Fisher, and Denis Charles), and co-chaired KDD in 2007. His current research focus is on learning for medical decision making, transparent modeling, and deep learning for weather forecasting.

11:00 – 11:45

Oral Session 2 – Emerging Applications in Remote Sensing

“Sat-NeRF: Learning Multi-View Satellite Photogrammetry With Transient Objects and Shadow Modeling Using RPC Cameras”, Roger Marí Molas (ENS Paris-Saclay); Gabriele Facciolo (ENS Paris-Saclay); Thibaud Ehret (Centre Borelli, ENS Paris-Saclay); (in person)
(paper)

“Self-Supervised Learning to Guide Scientifically Relevant Categorization of Martian Terrain Images”, Tejas Panambur (University of Massachusetts, Amherst); Deep Chakraborty (University of Massachusetts, Amherst); Melissa J Meyer (Brown University); Ralph Milliken (Brown University); Erik Learned-Miller (University of Massachusetts, Amherst); Mario Parente (University of Massachusetts, Amherst); (in person)
(paper)

“Prompt-RSVQA: Prompting visual context to a language model for Remote Sensing Visual Question Answering”, Christel Chappuis (EPFL); Valérie Zermatten (EPFL); Sylvain Lobry (Université de Paris); Bertrand Le Saux (European Space Agency (ESA)); Devis Tuia (EPFL); (in person)
(paper)

“Towards assessing agricultural land suitability with causal machine learning”, Georgios Giannarakis (National Observatory of Athens); Vasileios Sitokonstantinou (National Observatory of Athens); Roxanne Suzette Lorilla (National Observatory of Athens); Charalampos Kontoes (National Observatory of Athens); (virtual)
(paper)

 

11:45 – 13:15

Lunch Break

13:15 – 13:45

Keynote 3 – Tanya Berger-Wolf, Ohio State University

“Imageomics: images as the source of information about life” (in person)

Abstract

Images are the most abundant, readily available source for documenting life on the planet. Coming from field studies, camera traps, wildlife surveys, laboratory collections, autonomous vehicles on the land, water, and in the air, as well as tourists’ cameras, citizen scientists’ platforms, and posts on social media, there are millions of images of living organisms. Recent developments in computer vision and machine learning have enabled species classification, individual animal identification, population counting, tracking and much more. But the power of images is yet to be fully harnessed for science and conservation. Even the traits of organisms cannot be readily extracted from them. The analysis of traits, the integrated products of genes and environment, is critical for biologists to predict effects of environmental change or genetic manipulation and to understand the significance of patterns in the four billion year evolutionary history of life.

Bio

Dr. Tanya Berger-Wolf is a Professor of Computer Science and Engineering, Electrical and Computer Engineering, and Evolution, Ecology, and Organismal Biology at the Ohio State University, where she is also the Director of the Translational Data Analytics Institute. As a computational ecologist, her research is at the unique intersection of computer science, wildlife biology, and social sciences. She creates computational solutions to address questions such as how environmental factors affect the behavior of social animals (humans included). Berger-Wolf is also a director and co-founder of the conservation software non-profit Wild Me, home of the Wildbook project, which brings together computer vision, crowdsourcing, and conservation. It has been featured in media, including Forbes, The New York Times, CNN, National Geographic, and most recently The Economist.

Berger-Wolf has given hundreds of talks about her work, including at TEDx and UN/UNESCO AI for the Planet.

Prior to coming to OSU in January 2020, Berger-Wolf was at the University of Illinois at Chicago. Berger-Wolf holds a Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign. She has received numerous awards for her research and mentoring, including University of Illinois Scholar, UIC Distinguished Researcher of the Year, US National Science Foundation CAREER, Association for Women in Science Chicago Innovator, and the UIC Mentor of the Year.

13:45 – 14:30

Oral Session 3 – Time Series, Forecasting, Change

“Generalized Classification of Satellite Image Time Series with Thermal Positional Encoding”, Joachim Nyborg (Aarhus University); Charlotte Pelletier (Université de Bretagne du Sud); Ira Assent (Aarhus University); (in person)
(paper)

“Transforming Temporal Embeddings to Keypoint Heatmaps for Detection of Tiny Vehicles in Wide Area Motion Imagery (WAMI) Sequences”, Farhood Negin (Inria); Mohsen Tabejamaat (Inria); Renaud Fraisse (Airbus Defence & Space); Francois Bremond (Inria Sophia Antipolis, France)
(paper)

“Understanding the Role of Weather Data for Earth Surface Forecasting using a ConvLSTM-based Model”, Codrut-Andrei Diaconu (DLR); Sudipan Saha (Technical University of Munich); Stephan Günnemann (Technical University of Munich); Xiaoxiang Zhu (Technical University of Munich (TUM); German Aerospace Center (DLR)); (virtual)
(paper)

“Unsupervised Change Detection Based on Image Reconstruction Loss”, Hyeoncheol Noh (Hanbat National University); Jingi Ju (Hanbat National University); Minseok Seo (si-analytics); Jongchan Park (Lunit); Dong-Geol Choi (Hanbat National University); (virtual)
(paper)

 

14:30 – 16:00

Coffee Break / Poster Session

16:00 – 16:30

SpaceNet 8 Challenge

16:30 – 17:15

Oral Session 4 – New Benchmarks for Machine Learning in Earth Observation

“Multi-Layer Modeling of Dense Vegetation from Aerial LiDAR Scans”, Ekaterina Kalinicheva (IGN); Loïc Landrieu (IGN); Clément Mallet (IGN, France); Nesrine Chehata (ENSEGID); (in person)
(paper)

“OpenSentinelMap: A Large-Scale Land Use Dataset using OpenStreetMap and Sentinel-2 Imagery”, Noah Johnson (Vision Systems, Inc.); Wayne Treible (Vision Systems, Inc.); Daniel E Crispell (Vision Systems, Inc.); (in person)
(paper)

“Hephaestus: A large scale multitask dataset towards InSAR understanding”, Nikolaos I Bountos (National Observatory of Athens); IOANNIS PAPOUTSIS (National Observatory of Athens); Dimitrios Michail (Harokopio University of Athens); Andreas Karavias (Harokopio University of Athens); Panagiotis Elias (National Observatory of Athens); Isaak Parcharidis (Harokopio University of Athens); (in person)
(paper)

“Urban Building Classification (UBC) – A Dataset for Individual Building Detection and Classification from Satellite Imagery”, Xingliang Huang (Aerospace Information Research Institute, Chinese Academy of Sciences); Libo Ren (Aerospace Information Research Institute, Chinese Academy of Sciences); Chenglong Liu (University of Chinese Academy of Sciences); Yixuan Wang (Technische Universität München); Hongfeng Yu (Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China); Michael Schmitt (Bundeswehr University Munich); Ronny Hänsch (German Aerospace Center); Xian Sun (Aerospace Information Research Institute, Chinese Academy of Sciences); Hai Huang (Bundeswehr University Munich); Helmut Mayer (Bundeswehr University Munich); (virtual)
(paper)

 

17:15 – 17:30

Closing

CVPR 2022

CVPR is the premier annual computer vision event comprising the main conference and several co-located workshops and short courses. With its high quality and low cost, it provides an exceptional value for students, academics and industry researchers.

Learn More: CVPR 2022