
Graph neural networks, embeddings, and foundation models in spatial data science
2025-12-10


Graph neural networks:
Basic ideas:
Source: https://distill.pub/2021/gnn-intro/
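A minimal toy sketch of the message-passing idea covered in the source above (my own illustration, not code from the article, and not any specific GNN variant): each node averages its neighbors' feature vectors and updates its own representation through a learned linear layer.
```python
import torch
import torch.nn as nn

# Toy graph: 4 nodes in a ring, undirected edges given as index pairs.
edges = torch.tensor([[0, 1], [1, 2], [2, 3], [3, 0]])
n_nodes, d = 4, 8
x = torch.randn(n_nodes, d)            # initial node features

class MessagePassingLayer(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.linear = nn.Linear(2 * d, d)

    def forward(self, x, edges):
        # Aggregate: mean of neighbor features for every node.
        agg = torch.zeros_like(x)
        deg = torch.zeros(x.size(0), 1)
        for i, j in edges.tolist():
            agg[i] += x[j]; agg[j] += x[i]
            deg[i] += 1; deg[j] += 1
        agg = agg / deg.clamp(min=1)
        # Update: combine each node's own features with the aggregated message.
        return torch.relu(self.linear(torch.cat([x, agg], dim=1)))

layer = MessagePassingLayer(d)
print(layer(x, edges).shape)           # torch.Size([4, 8]) - updated node features
```
Stacking several such layers lets information travel further than one hop, which is the basic mechanism the different GNN types build on.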
Types of GNNs:
Embeddings:
“A bunch of numbers representing an idea” (JN)
Embeddings are created by training models to learn compact representations that capture the essential information in high-dimensional data. They are often a byproduct of foundation models.
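A minimal sketch of that idea (a toy example, not from the talk or the preprint below): train a small network on some task, then reuse its penultimate layer as the embedding, i.e. a compact vector per sample.
```python
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self, in_dim: int = 100, embed_dim: int = 8, n_classes: int = 3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, embed_dim), nn.ReLU(),   # bottleneck = embedding layer
        )
        self.head = nn.Linear(embed_dim, n_classes)

    def forward(self, x):
        return self.head(self.backbone(x))         # task output (e.g. class logits)

    def embed(self, x):
        # The "bunch of numbers representing an idea": one compact vector per sample,
        # produced as a byproduct of training the model for its task.
        return self.backbone(x)

model = SmallNet()
x = torch.randn(4, 100)            # 4 samples of high-dimensional input
print(model(x).shape)              # torch.Size([4, 3]) - task prediction
print(model.embed(x).shape)        # torch.Size([4, 8]) - reusable embedding
```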
Usage:

Preprint: https://arxiv.org/pdf/2507.22291
Advantages:
Challenges:
Foundation models:
Large, pre-trained models that learn general-purpose spatiotemporal and multimodal representations from massive amounts of unlabeled Earth observation data.
Characteristics:
Based on self-supervised learning: pre-trained on large amounts of unlabeled data, so no manual annotation is needed at this stage.
Fine-tuning: small labeled datasets are used to specialize the model for tasks such as land cover mapping, segmentation, change detection, object extraction, and more (see the sketch after this list).
Embeddings: Foundation models output feature vectors reusable across tasks and regions.
Example models: TerraMind, AnySat, Prithvi, AlphaEarth Foundations, etc.
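A minimal sketch of the fine-tuning pattern described above (the encoder here is a generic stand-in, not the actual architecture or API of TerraMind, AnySat, Prithvi, or AlphaEarth Foundations, which ship their own weights and loading code): freeze a pretrained backbone, attach a small task head, and train only the head on a small labeled set, e.g. for land cover classification.
```python
import torch
import torch.nn as nn

# Stand-in for a pretrained foundation-model encoder; in practice the weights
# would be loaded from the model provider.
pretrained_encoder = nn.Sequential(
    nn.Conv2d(6, 16, kernel_size=3, padding=1), nn.ReLU(),   # 6 spectral bands in
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),                   # -> 16-dim embedding
)
for p in pretrained_encoder.parameters():
    p.requires_grad = False                  # keep the pretrained weights frozen

n_land_cover_classes = 5
head = nn.Linear(16, n_land_cover_classes)   # small task-specific head
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Tiny "labeled dataset": 32 image patches with land cover labels (random here).
patches = torch.randn(32, 6, 64, 64)
labels = torch.randint(0, n_land_cover_classes, (32,))

for epoch in range(5):
    embeddings = pretrained_encoder(patches)     # reusable feature vectors
    loss = loss_fn(head(embeddings), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(embeddings.shape)   # torch.Size([32, 16]) - embeddings reusable across tasks
```
The same frozen embeddings could feed a different head for another task or region, which is the reuse property noted above.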
Strengths:
Challenges:
Transformer-based model for tabular data (not only spatial!)
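A minimal sketch of what "transformer-based model for tabular data" can mean (a generic toy model, not the specific system discussed here): each feature value is turned into a token embedding, a transformer encoder mixes the tokens, and a CLS-style token feeds the prediction head.
```python
import torch
import torch.nn as nn

class TabularTransformer(nn.Module):
    def __init__(self, n_features: int, d_model: int = 32, n_classes: int = 2):
        super().__init__()
        # One learned embedding per feature; the scalar value scales it.
        self.feature_embed = nn.Parameter(torch.randn(n_features, d_model) * 0.02)
        self.feature_bias = nn.Parameter(torch.zeros(n_features, d_model))
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=64, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_features) of numeric values
        tokens = x.unsqueeze(-1) * self.feature_embed + self.feature_bias  # (B, F, D)
        cls = self.cls_token.expand(x.size(0), -1, -1)                     # (B, 1, D)
        h = self.encoder(torch.cat([cls, tokens], dim=1))                  # (B, F+1, D)
        return self.head(h[:, 0])                                          # logits from CLS token

model = TabularTransformer(n_features=10)
logits = model(torch.randn(8, 10))     # 8 tabular rows, 10 features each
print(logits.shape)                    # torch.Size([8, 2])
```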

How it works:
(Stated) advantages:
Scope: