
Advanced Topics in Vision-Language Models for Earth Observation
Participants in this seminar acquire knowledge of recent advances in the field of Vision-Language Models (VLMs) for Earth observation. After completing this course, students are able to conduct a literature search, contextualize scientific papers, and present topics related to vision-language models for Earth observation. Moreover, students participate in scientific discussions and have the opportunity to sharpen their critical thinking skills.
The continuous operation of Earth-orbiting satellites generates vast and ever-growing archives of satellite images. Natural language presents an intuitive interface for accessing, querying, and interpreting the data in such archives. The seminar addresses recent topics in vision-language models for Earth observation. In particular, the topics include, but are not limited to: 1) training of large vision-language models; 2) deploying large vision-language models for downstream tasks (e.g., visual question answering, image captioning); 3) adaptation strategies, prompt tuning, and visual instruction tuning; 4) few-shot and continual learning.
- Trainer: Begüm Demir
- Trainer: Martin Hermann Paul Fuchs
- Trainer: Pegah Golchin
- Trainer: Johann-Ludwig Herzog