Mapping Ecosystem Services Using CORINE

Last updated: 2025-08-29

Knit directory: nbs-workflow/

This page is built with R Markdown and workflowr 1.7.1. See Past versions for the development history.


| File | Version | Author | Date | Message |
|------|---------|--------|------|---------|
| Rmd | 7226f26 | Desmond Lartey | 2025-08-29 | wflow_publish("analysis/Mapping_Ecosystem_services.html.Rmd") |
| html | 76f52ce | Desmond Lartey | 2025-08-29 | Build site. |
| Rmd | 0ce17c3 | Desmond Lartey | 2025-08-29 | wflow_publish("analysis/Mapping_Ecosystem_services.html.Rmd") |

Contributors: Author 1, Author 2, Author 3, Author 5, Author 4, Author 6

This guideline is based on the NBRACER project Deliverable 5.3 - Quantification of Ecosystems and NbS Hotspot Identification. The deliverable adopted two complementary approaches:

  • Coarser roadmap - qualitative/quantitative, using open-source spatial datasets and replicable mapping techniques.
  • Finer roadmap - quantitative, using context-specific datasets and models.
This tutorial focuses on the coarser roadmap approach, designed for practitioners and researchers who wish to map biodiversity and ecosystem services in order to identify potential hotspots for NbS. For a full understanding of the conceptual development behind both approaches, see the Deliverable 5.3 document and the associated scientific publication.

Introduction

Mapping and quantifying the spatial distribution of ecosystem services (ES) is central to climate-resilient policy and to planning nature-based solutions (NbS). Ideally, one would model and assess the supply of ecosystem services (ESS) at each location directly. However, direct monitoring of every service is rarely feasible at larger scales, or even locally where measurement capacity and in-situ data are lacking. Most ecosystem service assessments therefore rely on land cover as a proxy in the absence of primary field-based data.

Several scholarly works (Burkhard et al., 2009; Jacobs et al., 2015; Maes et al., 2012) have established land cover as a reasonable proxy for estimating ecosystem service supply, especially where direct measurements are unavailable. Here, we modify the approach of Burkhard et al. (2009) and present a methodology that leverages the CORINE Land Cover (CLC) inventory with expert-judgement approximations to generate both regulating and provisioning ES layers.

Background & Literature-Informed Rationale

ES mapping traditionally follows four families of approaches (Burkhard et al., 2009; Jacobs et al., 2015; Maes et al., 2012):

  1. Proxy-based methods - assign ES scores to land-cover types.
  2. Process-based models - simulate biophysical flows (e.g., InVEST).
  3. Empirical/statistical models - derive ES from field or RS observations.
  4. Participatory & benefit-transfer approaches - capture socio-cultural values.

Proxy methods dominate large-area studies because they are quick and inexpensive, but they risk sizable errors by assuming uniform service supply within a land-cover class (Eigenbrod et al., 2010a; Eigenbrod et al., 2010b). To mitigate these shortcomings, we (i) use weighted score tables calibrated against recent European studies and best practices for improving ESS mapping (Martínez-Harms et al., 2016; Egoh et al., 2008), (ii) implement optional probability layers following Nedkov & Burkhard (2012), and (iii) provide UI tools for rapid sensitivity checks.

Common ES-Mapping Paradigms

| Paradigm | Typical Data | Core Strength | Key Limitation |
|----------|--------------|---------------|----------------|
| Proxy / Matrix | Land-cover maps (CORINE, GlobCover) | Fast, transparent, scalable | Assumes equal service across class; static |
| Process-based | Climate, DEM, soils, hydrology | Mechanistic; scenario-ready | High data & expertise demand |
| Empirical / ML | Field plots, crowdsourced, RS metrics | Data-driven accuracy | Often local; expensive to upscale |
| Participatory | Workshops, interviews | Cultural relevance; stakeholder buy-in | Subjective; coarse spatial detail |
| Hybrid (this guide) | CORINE + weighted scores + optional probabilities | Continental coverage; reproducible; GEE-native | Relies on proxy assumptions; field validation advised |

Guideline Road-Map

We will walk you through:

  1. Score-Table Design (30 ESS) - link CLC codes to weighted service scores.
  2. Service Grouping - flag each ESS as Regulating or Provisioning.
  3. Reclassification - create per-ESS intensity layers via .remap().
  4. (Advanced) Probabilistic ESS - integrate confidence bands for uncertainty analysis.

Each step is designed to be modular and replicable, allowing you to adapt the workflow to your specific needs or local conditions. To handle these datasets and computations efficiently, we will use Google Earth Engine.

By the end of this guide, we will have produced a replicable pipeline that can be deployed to assess NbS implementation potential, evaluate land-use trade-offs, or track the ES these areas produce, the hazards they can help mitigate, and the hazards that can in turn affect those ecosystems.

Step 1 - Define the Study Area (Administrative Boundaries)

Before mapping ecosystem services (ESS), we need a study area-the geographic extent where our calculations run and our maps render. In GIS terms, this is just a polygon (or set of polygons). We’ll show two common routes:

  1. Use ready-made administrative boundaries (fast and reproducible).
  2. Use your own boundary (a shapefile/GeoJSON you uploaded as an asset).

Why this matters

ESS scoring and any later hazard–mitigation overlays must be computed over a consistent footprint. Keeping a well-defined boundary helps with reproducibility, performance, and interpretation (e.g., comparing results across regions). If you’re new to Earth Engine, a FeatureCollection is simply a set of vector features (points/lines/polygons) with attributes-perfect for holding country/region boundaries.

Data we’ll use

  • geoBoundaries (via a public Earth Engine mirror) for ADM0/ADM1/ADM2 levels.
  • Your own uploaded boundary (optional)-useful if you have official planning units or study zones.
Glossary
  • ADM0/ADM1/ADM2: Administrative levels (country / first-level regions / second-level districts).
  • FeatureCollection: A table of vector features with geometry and properties in Earth Engine.
  • ee.Filter.eq: Builds an equality filter for attribute queries.
  • filterBounds(): Spatial filter-keeps features that intersect a geometry.
  • Map.centerObject(): Recenters the map on a given geometry.

Code: Load admin boundaries and pick Spain → Cantabria

The snippet below loads country and regional boundaries, filters to Spain (ADM0), then narrows to Cantabria (ADM1). This route is great when you don’t yet have a local shapefile.

// =========================================
// 1) Load all boundary levels (geoBoundaries mirror)
// =========================================
var adm0 = ee.FeatureCollection("projects/sat-io/open-datasets/geoboundaries/CGAZ_ADM0");
var adm1 = ee.FeatureCollection("projects/sat-io/open-datasets/geoboundaries/CGAZ_ADM1");
var adm2 = ee.FeatureCollection("projects/sat-io/open-datasets/geoboundaries/CGAZ_ADM2");

// =========================================
// 2) Spain – Cantabria (ADM1 via ADM0)
// =========================================
var spain = adm0.filter(ee.Filter.eq('shapeName', 'Spain'));        // attribute filter
var adm1_spain = adm1.filterBounds(spain);                          // spatial filter (intersects Spain)
var cantabria = adm1_spain.filter(ee.Filter.eq('shapeName', 'Cantabria')); // attribute filter

// =========================================
// 3) Display Cantabria boundary
// =========================================
Map.addLayer(cantabria, {color: 'blue'}, 'Spain – Cantabria (ADM1)', false);
Map.centerObject(cantabria, 10);

What did we just do?

  • Filtered by attributes using ee.Filter.eq('shapeName','Spain') to isolate Spain.
  • Filtered by spatial intersection using filterBounds(spain) so we only keep ADM1 regions that touch Spain’s geometry.
  • Filtered again by name to grab Cantabria specifically.

Alternative: Use your own boundary asset

If you already uploaded a shapefile/GeoJSON (see Earth Engine guide: Upload tables as assets), load it directly. Important: Asset paths must be strings.
Wrong: ee.FeatureCollection(projects/ee-desmond/assets/cantabria)
Right: ee.FeatureCollection("projects/ee-desmond/assets/cantabria")

// If you have an uploaded boundary asset, load it like this:
var cantabria = ee.FeatureCollection("projects/ee-desmond/assets/cantabria");

// (Optional) Visualize
Map.addLayer(cantabria, {color: 'blue'}, 'Cantabria (asset)', false);
Map.centerObject(cantabria, 10);
Tip - Names vs. Codes: Attribute names (e.g., shapeName) can vary by dataset. Use the Code Editor’s Inspector to click a feature and confirm property keys before filtering.
Performance note: Keep boundaries as simple as needed. Overly detailed geometries slow down reclassification and zonal statistics later. Consider simplifying geometries (carefully!) when working over large regions or at continental scales.

How this connects to ESS mapping

With the study area fixed, every subsequent step-linking land cover to ESS scores, raster reclassification, uncertainty bands, and hazard overlays-will be clipped and summarized against this boundary. That ensures your ESS intensity maps and your future NbS hotspot analysis remain spatially consistent and comparable across regions.

Step 2 - Build the Relational Table - from concept to implementation

At the heart of NbS spatial planning is a relational table linking land-cover (CORINE) codes to ecosystem services (ESS), each scored for its potential to mitigate specific hazards such as floods, erosion, or heat. This is the widely used matrix approach (Burkhard et al., 2009), where land-use classes are mapped to ESS with a score representing their functional capacity.

Design principles (keep it defensible)

  • Transparency: Store the mapping as a table (JSON/CSV/FeatureCollection) with comments and literature sources.
  • Consistency: Use official CLC codes and names (CORINE docs).
  • Expert-informed: Base scores on literature; note where expert knowledge adjusts them (e.g., local wetlands outperforming the average).
  • Hazard-aware: Explicitly link each ESS to relevant hazards (e.g., “Hydrological flow regulation → flood”).

We use a 0–1 scale (0 = no potential, 1 = very high potential) for scoring to ensure clarity and compatibility with later weighting and combination steps. Each ESS will form a separate raster band representing its spatial distribution of potential.
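Because all ESS share the 0–1 scale, later weighting and combination steps reduce to simple arithmetic. A minimal plain-JavaScript sketch (outside Earth Engine; the weights and pixel values are illustrative, not from the deliverable):

```javascript
// Combine 0–1 ESS scores with illustrative weights that sum to 1.
// Because every score shares the 0–1 scale, the weighted sum stays in 0–1.
function combineScores(scores, weights) {
  var total = 0;
  for (var key in weights) {
    total += (scores[key] || 0) * weights[key]; // missing scores count as 0
  }
  return total;
}

// Hypothetical pixel: flood 0.95, erosion 0.60, heat 0.70
var pixel = {flood: 0.95, erosion: 0.60, heat: 0.70};
var weights = {flood: 0.5, erosion: 0.3, heat: 0.2};
console.log(combineScores(pixel, weights)); // 0.5*0.95 + 0.3*0.60 + 0.2*0.70 ≈ 0.795
```

The same property is what lets us stack one band per ESS and later blend bands without rescaling.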

Workflow preview: (a) Load CORINE; (b) Map codes → ESS scores (0–1 scale); (c) Create one band per ESS; (d) Export/visualize layers; (e) Overlay with hazard layers to identify NbS opportunity hotspots.
Common practice vs. best practice:
Normally, you can (i) assign scores ad-hoc in code, (ii) skip documenting the rationale, and (iii) leave missing classes undefined. That works, but it’s hard to audit and reproduce. For best practice, we will:
  1. Use a consistent 0–1 scale across hazards/ESS,
  2. Comment each important code with the ecological reason, and
  3. Set a safe default of 0 for any unmapped class (explicit in remap(..., 0)).
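The default-of-0 rule in point 3 can be mimicked outside Earth Engine with a small plain-JavaScript lookup (a sketch of the idea behind `remap(..., 0)`; the excerpted scores are illustrative):

```javascript
// Score lookup with an explicit default of 0 for unmapped CLC codes.
var floodScore = {411: 0.95, 311: 0.75, 112: 0.10}; // excerpt of a score table

function scoreFor(code, table) {
  // hasOwnProperty guards against inherited keys; unmapped codes get 0
  return table.hasOwnProperty(code) ? table[code] : 0;
}

console.log(scoreFor(411, floodScore)); // 0.95 (inland marshes)
console.log(scoreFor(999, floodScore)); // 0 (unmapped class, conservative default)
```

Making the default explicit keeps unmapped classes from silently inflating hotspot scores.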

2a. Pick the scope (hazards/ESS)

We start with five mitigation themes commonly linked to regulating services: Flood, Fire, Drought, Heat wave, and Erosion. You can add more later (e.g., water quality regulation, coastal protection). See matrix-style mapping ideas in Nedkov & Burkhard (2012).

2b. Load CORINE and clip to the study area

// =========================================
// CORINE Land Cover 2018 (100 m) - clip to your boundary
// Docs: https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_CORINE_V20_100m_2018
// =========================================
var corine = ee.Image('COPERNICUS/CORINE/V20/100m/2018').select('landcover').clip(cantabria);

// (Optional) quick look
Map.addLayer(corine, {}, 'CORINE 2018 (raw)', false);

2c. Define 0–1 mitigation scores (adjustable)

We encode the scores as JavaScript objects. Every value reflects potential (not measured flow). Unlisted classes are mapped to 0 by default to avoid accidental inflation. Use the official CLC code list to extend or refine (reference).

// =========================================
// FLOOD mitigation (0–1)
// =========================================
var floodMitigationScore = {
  411: 0.95, // Inland marshes - storage, roughness, attenuation
  412: 0.95, // Peatbogs - high retention
  421: 0.75, // Salt marshes
  311: 0.75, // Broad-leaved forest - canopy, infiltration
  312: 0.60, // Coniferous forest
  213: 0.40, // Rice fields - temporary water retention
  244: 0.60, // Agroforestry - perennial cover increases infiltration
  321: 0.60, // Natural grasslands - surface roughness
  323: 0.75, // Sclerophyllous vegetation - structure, soils
  324: 0.40, // Transitional woodland/shrub - mixed, variable
  523: 0.90, // Estuaries - storage/expansion areas
  521: 0.75, // Coastal lagoons
  511: 0.40, // Water courses - conveyance; limited attenuation on-channel
  141: 0.20, // Green urban areas - local infiltration/cooling
  112: 0.10  // Discontinuous urban fabric - some permeable fraction
};

// =========================================
// FIRE mitigation (0–1)
// =========================================
var fireMitigationScore = {
  231: 0.40, // Pastures - lower fuel continuity vs dense forest
  241: 0.30, // Annual crops - seasonal moisture
  243: 0.40, // Mosaic vegetation - breaks continuity
  311: 0.20, // Broad-leaved forest - can carry fire under drought
  322: 0.50, // Heathland - mixed; can reduce spread with mosaics
  324: 0.30, // Transitional woodland/shrub
  411: 0.80, // Inland marshes - high moisture
  512: 0.90, // Lakes - strong barrier
  141: 0.30, // Green urban areas - breaks, irrigated
  112: 0.20  // Discontinuous urban - mixed materials
};

// =========================================
// DROUGHT mitigation (0–1)
// =========================================
var droughtMitigationScore = {
  411: 0.95, // Marshes - buffering water availability
  412: 0.95, // Peat bogs - storage
  311: 0.80, // Broad-leaved forest - rooting depth/ET shading
  312: 0.70, // Coniferous forest
  321: 0.60, // Natural grasslands
  241: 0.75, // Annual crops associated with permanent crops - perennial cover buffers demand
  211: 0.20, // Arable land - exposure/high demand
  142: 0.05, // Sports/leisure - manicured turf, water demand
  521: 0.70  // Coastal lagoons - local buffering
};

// =========================================
// HEAT WAVE mitigation (0–1)
// =========================================
var heatWaveMitigationScore = {
  311: 0.90, // Broad-leaved forest - canopy cooling
  312: 0.80, // Coniferous forest
  313: 0.85, // Mixed forest
  141: 0.60, // Green urban areas - shading/ET
  142: 0.50, // Sports/leisure
  112: 0.30, // Discontinuous urban fabric - partial greening
  243: 0.40, // Mosaic vegetation
  321: 0.65, // Natural grasslands
  324: 0.50, // Transitional woodland/shrub
  322: 0.40, // Heathland
  512: 0.80, // Lakes - evap cooling
  411: 0.70  // Inland marshes - ET cooling
};

// =========================================
// EROSION mitigation (0–1)
// =========================================
var erosionMitigationScore = {
  244: 0.60, // Agroforestry - root systems, cover
  321: 0.60, // Natural grasslands - continuous cover
  323: 0.75, // Sclerophyllous - deep roots, structure
  324: 0.40, // Transitional woodland/shrub
  211: 0.20, // Arable land - exposed soil
  243: 0.50, // Mosaic crops/natural - patchiness reduces runoff
  222: 0.25, // Permanently irrigated - management-dependent
  231: 0.30, // Pastures
  312: 0.50, // Coniferous forest
  311: 0.60, // Broad-leaved forest
  313: 0.55  // Mixed forest
};

2d. Remap helper & build multi-band mitigation image

We will now convert the dictionaries to parallel lists and use image.remap(). Note the explicit default 0 for any classes not in the table (transparent and conservative).

// =========================================
// Helper: remap CORINE codes → scores (default 0 if missing)
// =========================================
function remapMitigation(image, scoreDict, newName) {
  var fromCodes = Object.keys(scoreDict).map(function(k){ return parseInt(k, 10); });
  var toScores  = fromCodes.map(function(k){ return scoreDict[k]; });
  return image.remap(fromCodes, toScores, 0).rename(newName);
}
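To see what the helper does per pixel, here is a plain-JavaScript analogue (a sketch, not Earth Engine code) that applies the same parallel-list conversion to an array of CORINE codes:

```javascript
// Plain-JS analogue of image.remap(fromCodes, toScores, 0)
function remapValues(values, scoreDict) {
  var fromCodes = Object.keys(scoreDict).map(function (k) { return parseInt(k, 10); });
  var toScores  = fromCodes.map(function (k) { return scoreDict[k]; });
  return values.map(function (v) {
    var i = fromCodes.indexOf(v);
    return i >= 0 ? toScores[i] : 0; // default 0 for unlisted classes
  });
}

// Three hypothetical pixels: inland marsh (411), arable (211, unlisted here), forest (311)
var flood = {411: 0.95, 311: 0.75};
console.log(remapValues([411, 211, 311], flood)); // [0.95, 0, 0.75]
```

Earth Engine performs this per pixel on the server side; the sketch is only for reasoning about edge cases such as unlisted codes.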

// =========================================
// Base (per-hazard) layers
// =========================================
var floodBase    = remapMitigation(corine, floodMitigationScore,    'Flood_Mitigation');
var fireBase     = remapMitigation(corine, fireMitigationScore,     'Fire_Mitigation');
var droughtBase  = remapMitigation(corine, droughtMitigationScore,  'Drought_Mitigation');
var heatBase     = remapMitigation(corine, heatWaveMitigationScore, 'HeatWave_Mitigation');
var erosionBase  = remapMitigation(corine, erosionMitigationScore,  'Erosion_Mitigation');

// Stack into a single multi-band image
var mitigationImage = floodBase.addBands([fireBase, droughtBase, heatBase, erosionBase]);
// Bands: Flood_Mitigation, Fire_Mitigation, Drought_Mitigation, HeatWave_Mitigation, Erosion_Mitigation

2e. Quick visualization (0–1)

Keep a consistent palette and {min:0, max:1} for comparability across hazards.

// =========================================
// Visualization (0–1). Toggle in Layers panel.
// =========================================
var vis01 = {min: 0, max: 1, palette: ['#f7fbff','#deebf7','#c6dbef','#9ecae1','#6baed6','#4292c6','#2171b5','#084594']};

Map.addLayer(mitigationImage.select('Flood_Mitigation'),    vis01, 'Mitigation · Flood', true);
Map.addLayer(mitigationImage.select('Fire_Mitigation'),     vis01, 'Mitigation · Fire', false);
Map.addLayer(mitigationImage.select('Drought_Mitigation'),  vis01, 'Mitigation · Drought', false);
Map.addLayer(mitigationImage.select('HeatWave_Mitigation'), vis01, 'Mitigation · Heat Wave', false);
Map.addLayer(mitigationImage.select('Erosion_Mitigation'),  vis01, 'Mitigation · Erosion', false);

2f. Interpreting 0–1 scores

Values are relative potentials inferred from land cover - not measured service flows. Use them for screening (e.g., locating NbS opportunities) and comparative analysis. Where decisions carry high stakes, combine with process-based or empirical checks (e.g., hydrologic models, local monitoring).

Tip - Sensitivity testing: try ±0.1 on key classes (e.g., 411/412/311) and inspect how hotspot patterns shift. This informs how robust your prioritization is to expert judgement.
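Such a check can be prototyped off-platform before touching the Earth Engine script. A plain-JavaScript sketch (the `perturb` helper is hypothetical, not part of the workflow code):

```javascript
// Perturb a single class score by +/- delta, clamped to the 0–1 scale
function perturb(table, code, delta) {
  var copy = Object.assign({}, table); // leave the original table untouched
  copy[code] = Math.min(1, Math.max(0, copy[code] + delta));
  return copy;
}

var base = {411: 0.95, 311: 0.75};
var plus  = perturb(base, 311, 0.1);  // 311 -> ~0.85
var minus = perturb(base, 411, 0.1);  // 411 -> 1 (clamped at the 0–1 ceiling)
console.log(plus[311], minus[411]);
```

Re-running the remap with each perturbed table and comparing hotspot maps shows how robust the prioritization is to a given score.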
Optional: Remote-sensing workflows sometimes attach probability distributions to classifications, and you can apply the same logic here (we do not cover it in this workflow). If you are unsure whether a given LULC-to-ESS link can mitigate a certain hazard, include a confidence property or band for each LULC or ESS you map. Then, even where multiple ESS mitigate the same hazard, you can determine which contributes most to that hazard, or which LULC supplies most of that ESS. You can encode this as confidence bands per hazard (e.g., 0.33/0.67/1.0).
Glossary
  • Matrix approach: A lookup between LULC classes and ecosystem service supply; see Burkhard et al., 2009.
  • Potential vs. flow: Potential is capacity inferred from landscape unit/traits; flow is realized service under demand/conditions.
  • Image.remap: Reclassifies pixel values using lists of from and to values.
  • Zonal statistics: calculates statistics of a raster dataset within the boundaries of zones defined by another dataset (we’ll use this later).

Step 3 - Enrich the Matrix with Spatial Modifiers (Elevation · Slope · Aspect)

Assigning 0–1 scores with expert judgement already yields a useful relative map of ESS potential or habitat ranking. However, where a service sits in the landscape also matters: the same class performs differently on a floodplain than on a ridge. This is the key critique of pure proxy mapping highlighted by Eigenbrod et al., 2010a and 2010b.

Common practice vs. best practice:
Normally, you can apply a single score per LULC class everywhere (simple and fast). That’s fine for screening. For better realism without full biophysical models, we’ll modulate scores using easy, defensible spatial covariates: elevation, slope, and aspect.

3a. Topographic data (SRTM) and basic derivatives

We use 30 m SRTM for elevation, then derive slope and aspect using Earth Engine’s ee.Terrain.slope and ee.Terrain.aspect. We also build simple elevation zones to reflect lowland vs. upland dynamics.

// =============================
// 4) Topographic data
// =============================
var srtm  = ee.Image('USGS/SRTMGL1_003').clip(cantabria); // elevation (m)
var slope = ee.Terrain.slope(srtm);                        // degrees
var aspect = ee.Terrain.aspect(srtm);                      // degrees from north

// (Optional preview)
Map.addLayer(srtm,  {min:0, max:2500, palette:['blue','green','brown','white']}, 'Elevation (m)', false);
Map.addLayer(slope, {min:0, max:60,   palette:['white','yellow','red']},         'Slope (°)', false);
Map.addLayer(aspect,{min:0, max:360,  palette:['blue','green','yellow','red','blue']}, 'Aspect (°)', false);

// Normalize slope to 0–1 (assume 60° ~ "very steep" cap)
var slopeNorm = slope.divide(60).clamp(0, 1);

// Aspect to radians and derive northness/eastness in [0,1]
// (northness=1 at north-facing, 0 at south-facing)
var aspectRad = aspect.multiply(Math.PI/180);
var northness = aspectRad.cos().unitScale(-1, 1); // [-1..1] → [0..1]
var eastness  = aspectRad.sin().unitScale(-1, 1); // [-1..1] → [0..1]

// Elevation zones (tune thresholds for your region)
var elev = srtm;
var lowland = elev.lt(300);
var midland = elev.gte(300).and(elev.lte(800));
var upland  = elev.gt(800);

// Hazard-specific elevation weights (multiplicative factors)
// Note: each pixel falls in exactly one zone, so the summed masks act as a piecewise multiplier
function elevWeight(hazard) {
  if (hazard === 'Flood') {
    // Higher flood-mitigation relevance in lowlands (floodplains)
    return lowland.multiply(1.10).add(midland.multiply(1.00)).add(upland.multiply(0.80));
  }
  if (hazard === 'Erosion') {
    // Erosion control more critical in uplands/steep terrains
    return lowland.multiply(0.90).add(midland.multiply(1.00)).add(upland.multiply(1.20));
  }
  if (hazard === 'Fire') {
    // Slightly higher importance away from valley bottoms
    return lowland.multiply(0.95).add(midland.multiply(1.05)).add(upland.multiply(1.10));
  }
  if (hazard === 'Drought') {
    // Some uplift in mid/uplands (cooler/more storage), penalize low hot flats
    return lowland.multiply(0.90).add(midland.multiply(1.05)).add(upland.multiply(1.10));
  }
  if (hazard === 'Heat') {
    // Slight boost in low/mid elevations (where people are most heat-stressed), neutral in uplands
    return lowland.multiply(1.05).add(midland.multiply(1.05)).add(upland.multiply(1.00));
  }
}
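As a sanity check on the aspect transforms above, the same math in plain JavaScript (a sketch; `unitScale(-1, 1)` is equivalent to (x + 1) / 2):

```javascript
// Northness/eastness from aspect in degrees, rescaled from [-1, 1] to [0, 1]
function northness(aspectDeg) {
  return (Math.cos(aspectDeg * Math.PI / 180) + 1) / 2;
}
function eastness(aspectDeg) {
  return (Math.sin(aspectDeg * Math.PI / 180) + 1) / 2;
}

console.log(northness(0));   // 1   (north-facing)
console.log(northness(180)); // 0   (south-facing)
console.log(eastness(90));   // 1   (east-facing)
console.log(northness(90));  // 0.5 (east-facing is neutral for northness)
```

Checking the cardinal directions like this confirms the convention noted later: 1 = north-facing, 0 = south-facing.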

3b. Apply combined adjustments (multiplicative, intuitive)

We start from the 0–1 base layers and apply: (i) a slope factor, (ii) an aspect factor (where relevant), and (iii) the elevation-zone weight. Factors are gentle (±5–50%) to avoid overpowering the base LULC signal. Finally, we clamp to a reasonable upper bound (e.g., 1.5).

// =============================
// 5) Apply combined adjustments
// =============================

// Flood: flatter terrain ↑ (reduce with slope), lowlands ↑ (elevation weight)
var floodAdj = floodBase
  .multiply(ee.Image(1).subtract(slopeNorm.multiply(0.50)))  // steeper → lower potential
  .multiply(elevWeight('Flood'))
  .clamp(0, 1.5);

// Erosion: steeper terrain ↑ (increase with slope), uplands ↑
var erosionAdj = erosionBase
  .multiply(ee.Image(1).add(slopeNorm.multiply(0.50)))       // steeper → higher mitigation need/value
  .multiply(elevWeight('Erosion'))
  .clamp(0, 1.5);

// Fire: steep slopes ↓; south-facing (warmer/drier) ↓; slight elev boost mid/uplands
// northness ∈ [0..1]: 1=north-facing (cooler), 0=south-facing (warmer)
// Factor varies ~ [0.9 .. 1.1] by aspect; adjust weights if your region differs.
var fireAdj = fireBase
  .multiply(ee.Image(1).subtract(slopeNorm.multiply(0.20)))                  // steep → lower effectiveness
  .multiply(ee.Image(0.90).add(northness.multiply(0.20)))                    // boost north-facing, reduce south
  .multiply(elevWeight('Fire'))
  .clamp(0, 1.5);

// Drought: north-facing (cooler/moister) ↑; mid/uplands ↑
var droughtAdj = droughtBase
  .multiply(ee.Image(1).add(northness.multiply(0.10)))
  .multiply(elevWeight('Drought'))
  .clamp(0, 1.5);

// Heat wave: north & east aspects ↑ (shading/wind), low/mid slight ↑
var heatAdj = heatBase
  .multiply(ee.Image(1).add(northness.multiply(0.05)))
  .multiply(ee.Image(1).add(eastness.multiply(0.05)))
  .multiply(elevWeight('Heat'))
  .clamp(0, 1.5);

// =============================
// 6) Stack adjusted layers (bands keep the same names)
// =============================
var allMitigation = floodAdj
  .addBands([fireAdj, droughtAdj, heatAdj, erosionAdj])
  .rename([
    'Flood_Mitigation',
    'Fire_Mitigation',
    'Drought_Mitigation',
    'HeatWave_Mitigation',
    'Erosion_Mitigation'
  ]);

3c. Visualize adjusted mitigation (0–1.5)

We keep a simple 2-color ramp here for contrast. You can reuse the sequential palette from Step 2 as well.

// =============================
// 7) Display
// =============================
var twoTone = ['white', 'red'];
Map.addLayer(floodAdj,   {min:0, max:1.5, palette: twoTone}, 'Flood Mitigation (Adj)',   true);
Map.addLayer(fireAdj,    {min:0, max:1.5, palette: twoTone}, 'Fire Mitigation (Adj)',    false);
Map.addLayer(droughtAdj, {min:0, max:1.5, palette: twoTone}, 'Drought Mitigation (Adj)', false);
Map.addLayer(heatAdj,    {min:0, max:1.5, palette: twoTone}, 'HeatWave Mitigation (Adj)',false);
Map.addLayer(erosionAdj, {min:0, max:1.5, palette: twoTone}, 'Erosion Mitigation (Adj)', false);

Method notes & guardrails

  • Multiplicative logic: We scale the base 0–1 score by gentle factors (e.g., 1±0.05..0.50) so land-cover still drives the signal, while terrain nuances nudge it toward realism.
  • Ranges & clamping: Because factors can lift values above 1, we cap at 1.5 for display. For indices, you can renormalize later to 0–1 if needed.
  • Aspect convention: northness here is rescaled to [0,1] where 1 = north-facing, 0 = south-facing. Tweak aspect weights for your climate (e.g., in Southern Hemisphere, invert the sign).
  • Zones are simple by design: Thresholds (e.g., 300/800 m) are placeholders; replace with local hypsometry or floodplain delineations for better realism.
Best practice: Keep a small table next to your code documenting each factor and its rationale (with citations, e.g., Eigenbrod 2010a). This makes adjustments auditable and easier to tune with stakeholders.
Glossary
  • Northness/Eastness: Cosine/sine transforms of aspect that capture orientation relative to cardinal directions.
  • Clamping: For visualization, capping a value range to avoid extreme colors and ease comparison across maps.
  • Proxy vs. process: Proxy maps infer capacity from land cover; process models (e.g., hydrological) simulate mechanisms.

Step 4 - Export Results & Compute Summary Statistics

Now that we have adjusted mitigation layers (0–1, optionally up to 1.5 after modifiers), let’s extract region-level statistics and export both tables (CSV) and maps (GeoTIFF). This makes it easy to report results, plot charts elsewhere, and archive the layers for your team.

Common practice vs. best practice:
Normally, you can print a single number in the Console and move on. That’s fine for a quick look. For reproducible analysis, we’ll export: (1) mean mitigation scores per hazard, (2) a simple “effective contributing area %” (see note below), and (3) the multi-band mitigation raster clipped to the study area.
What does “effective contributing area %” mean?

It’s the mean of mitigation scores across pixels that have any mitigation signal (>0), expressed as a percentage. Concretely, we sum pixel scores (0–1), divide by the count of pixels with score > 0, and multiply by 100. This is NOT a land-area percentage of a strict class; rather, it’s a normalized measure of how much of the “maximum possible” mitigation is realized across the pixels that contribute at all. If you instead want area above a threshold (e.g., % of pixels with score ≥ 0.7), we show how to modify the reducer in a note below.
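The definition can be written out in plain JavaScript over an array of pixel scores (a sketch of the statistic itself, not Earth Engine code):

```javascript
// 100 * mean(score | score > 0), guarding against no contributing pixels
function effectiveContributionPercent(scores) {
  var contributing = scores.filter(function (s) { return s > 0; });
  if (contributing.length === 0) return 0; // avoid division by zero
  var sum = contributing.reduce(function (a, b) { return a + b; }, 0);
  return (sum / contributing.length) * 100;
}

// Four pixels: two contribute (0.5, 0.75), two do not (0, 0)
console.log(effectiveContributionPercent([0.5, 0.75, 0, 0])); // 62.5
console.log(effectiveContributionPercent([0, 0]));            // 0
```

Note that the zero-score pixels drop out of the denominator entirely, which is exactly why this is not an area percentage.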

4.1 Select the boundary (single feature)

// =============================
// 1) Cantabria boundary (single feature)
// =============================
var adm1 = ee.FeatureCollection("projects/sat-io/open-datasets/geoboundaries/CGAZ_ADM1");
var cantabria = ee.Feature(
  adm1.filter(ee.Filter.eq('shapeName', 'Cantabria')).first()
);

4.2 Mean mitigation scores (per hazard)

We compute region-wide means for all mitigation bands using reduceRegion with Reducer.mean().

// =============================
// 2) Compute mean mitigation scores for Cantabria
// =============================
var mitigationStats = cantabria.set(
  allMitigation.reduceRegion({
    reducer: ee.Reducer.mean(),
    geometry: cantabria.geometry(),
    scale: 100,
    maxPixels: 1e9
  })
);
print('Mitigation scores – Cantabria (mean per band):', mitigationStats);

4.3 Export the means to CSV

// =============================
// 3) Export mean scores to CSV
// =============================
Export.table.toDrive({
  collection: ee.FeatureCollection([mitigationStats]),
  description: 'Mitigation_Scores_Cantabria',
  folder: 'desirmed',              // change or remove as you prefer
  fileFormat: 'CSV'
});

4.4 “Effective contributing area %” per hazard

The helper below computes the percentage defined as: 100 × mean(score | score > 0). It’s a compact indicator of how strong the mitigation signal is across pixels that contribute at all.

// =============================
// 4) Function: "Effective contributing area %" per hazard
//     = 100 * mean(score | score > 0)
// =============================
function computeMitigationContribution(region, hazardImage, bandName) {
  // Sum of scores over the region
  var stats = hazardImage.reduceRegion({
    reducer: ee.Reducer.sum(),
    geometry: region.geometry(),
    scale: 100,
    maxPixels: 1e9
  });
  var contributingSum = ee.Number(stats.get(bandName)); // sum of 0..1 values

  // Count of pixels that actually contribute (>0)
  var countContrib = ee.Number(
    hazardImage.select(bandName).gt(0).reduceRegion({
      reducer: ee.Reducer.count(),
      geometry: region.geometry(),
      scale: 100,
      maxPixels: 1e9
    }).get(bandName)
  );

  // 100 * mean(score | score > 0). Guard against division by zero.
  var percent = ee.Algorithms.If(
    countContrib.gt(0),
    contributingSum.divide(countContrib).multiply(100),
    0
  );

  return region.set(bandName + '_Percent', percent);
}

4.5 Apply to all mitigation bands & export CSV

// =============================
// 5) Apply to all hazard bands
// =============================
var allBands = [
  'Flood_Mitigation',
  'Fire_Mitigation',
  'Drought_Mitigation',
  'HeatWave_Mitigation',
  'Erosion_Mitigation'
];

var cantabriaPercent = cantabria;
allBands.forEach(function(band) {
  cantabriaPercent = computeMitigationContribution(
    cantabriaPercent,
    allMitigation.select(band),
    band
  );
});

print('Hazard mitigation contribution percentages – Cantabria:', cantabriaPercent);

// =============================
// 6) Export percentages to CSV
// =============================
Export.table.toDrive({
  collection: ee.FeatureCollection([cantabriaPercent]),
  description: 'Hazard_Mitigation_Contribution_Percentages_Cantabria',
  folder: 'desirmed',
  fileFormat: 'CSV'
});
Alternative: export % of area above a threshold (e.g., score ≥ 0.7)
// Example: percent of area with score >= 0.7
function percentAreaAboveThreshold(region, hazardImage, bandName, thr) {
  var pxArea = ee.Image.pixelArea(); // m²/pixel
  var band = hazardImage.select(bandName);

  var areaAbove = pxArea.updateMask(band.gte(thr)).reduceRegion({
    reducer: ee.Reducer.sum(),
    geometry: region.geometry(),
    scale: 100, maxPixels: 1e13
  }).get('area');

  var areaAll = pxArea.updateMask(band.gte(0)).reduceRegion({
    reducer: ee.Reducer.sum(),
    geometry: region.geometry(),
    scale: 100, maxPixels: 1e13
  }).get('area');

  // Guard against an empty/zero denominator, mirroring the helper above
  return ee.Number(ee.Algorithms.If(
    ee.Number(areaAll).gt(0),
    ee.Number(areaAbove).divide(ee.Number(areaAll)).multiply(100),
    0
  ));
}

4.6 Export the multiband mitigation map (GeoTIFF)

Export the adjusted, multi-band mitigation image for use in GIS or modelling pipelines. You can include a CRS if you need a specific projection.

// =============================
// 7) Export multiband mitigation GeoTIFF for Cantabria
// =============================
Export.image.toDrive({
  image: allMitigation.clip(cantabria.geometry()),
  description: 'Mitigation_Map_Cantabria',
  folder: 'desirmed',
  fileNamePrefix: 'Mitigation_Map_Cantabria',
  region: cantabria.geometry().bounds(),
  scale: 100,
  maxPixels: 1e13,
  fileFormat: 'GeoTIFF'
  // , crs: 'EPSG:3035' // (optional) set a CRS if your workflow requires it
});
Best practice tips:
  • Use pixelArea() when you truly want area in m² or hectares (as in the threshold example).
  • Document thresholds (e.g., “≥ 0.7 = high potential”) and keep them consistent across regions.
  • Keep raw & adjusted exports so others can trace how modifiers affected the signal.
Up next: NbS hotspot identification. We’ll intersect mitigation potential with hazard layers (e.g., flood extent/depth, heat anomalies, erosion susceptibility), introduce confidence bands, and build a clear, explainable hotspot index for prioritizing nature-based solutions.

Step 5 - NbS Hotspot Identification

Purpose: Hazard layers are a crucial component in spatial ecosystem service (ESS) assessments because they reveal where valuable ecosystem functions are at risk from natural disturbances. By combining satellite-derived hazard footprints with ESS maps, we can identify locations where ecosystem benefits (e.g., flood regulation, habitat provision) are both present and historically exposed to hazards such as floods or wildfires. This enables more targeted Nature-based Solutions (NbS) planning that prioritises areas where interventions can simultaneously protect ecosystem function and reduce hazard impacts.

In this section, we integrate two global hazard datasets:

  • MODIS Global Flood Database (GFD) – A record of 913 flood events (2000–2018) mapped from MODIS satellite imagery, with per-event duration and extent (Tellman et al., 2021).
  • MODIS Burned Area (MCD64A1) – Global monthly fire/burn scars from MODIS (2000–present), mapped using changes in surface reflectance (Giglio et al., 2018).
These datasets are processed into binary hazard masks for integration with ESS maps.

Code Implementation with Step-by-Step Explanations

// 1. Load MODIS Global Flood Database (Tellman et al., 2021)
var gfd = ee.ImageCollection('GLOBAL_FLOOD_DB/MODIS_EVENTS/V1');

// 2. Select a single flood event (Hurricane Isaac, USA, 2012)
//    The 'id' property is a unique Dartmouth Flood Observatory identifier
var hurricaneIsaacDartmouthId = 3977;
var hurricaneIsaacUsa = ee.Image(
    gfd.filterMetadata('id', 'equals', hurricaneIsaacDartmouthId).first()
);

// Set map view and basemap
Map.setOptions('SATELLITE');
Map.setCenter(-90.2922, 29.4064, 9);

// 3. Flood extent (binary water mask for event duration)
Map.addLayer(
  hurricaneIsaacUsa.select('flooded').selfMask(),
  {min: 0, max: 1, palette: '001133'},
  'Hurricane Isaac - Inundation Extent'
);

// 4. Flood duration (number of days flooded during the event)
var durationPalette = ['c3effe', '1341e8', '051cb0', '001133'];
Map.addLayer(
  hurricaneIsaacUsa.select('duration').selfMask(),
  {min: 0, max: 4, palette: durationPalette},
  'Hurricane Isaac - Duration'
);

// 5. Aggregated historical flood footprint (sum of all events, 2000–2018)
var gfdFloodedSum = gfd.select('flooded').sum();
Map.addLayer(
  gfdFloodedSum.selfMask(),
  {min: 0, max: 10, palette: durationPalette},
  'GFD Satellite Observed Flood Plain'
);

// 6. Permanent water from JRC Global Surface Water dataset (to mask out rivers/lakes)
var jrc = gfd.select('jrc_perm_water').sum().gte(1);
Map.addLayer(
  jrc.selfMask(),
  {min: 0, max: 1, palette: 'C3EFFE'},
  'JRC Permanent Water'
);

// 7. Load MODIS Burned Area Product (MCD64A1) for 2000–2018 (the filter end date below is exclusive)
//    The BurnDate band gives day-of-year of burn; we reclassify to binary
var burned = ee.ImageCollection("MODIS/061/MCD64A1")
  .select('BurnDate')
  .filterDate('2000-01-01', '2019-01-01')
  .map(function(img) {
    return img.gt(0).selfMask();  // Keep burned pixels only
  })
  .sum()
  .gte(1);  // Burned at least once in the period

// 8. Burned area visualisation
Map.addLayer(burned.selfMask(), {palette: ['orange']}, 'Burned Area');

// 9. Create binary mask for overlay with ESS maps
var burnBinary = burned.selfMask();

What is Happening in Each Step?

  • Data Filtering: We subset the GFD by event ID to focus on a single flood, then extract relevant hazard bands ('flooded', 'duration').
  • Masking: selfMask() removes non-hazard pixels, leaving a clean binary raster for spatial intersection with ESS layers.
  • Temporal Aggregation: Summing the binary 'flooded' masks (or the thresholded 'BurnDate' masks) across years reveals multi-year hazard footprints, useful for identifying recurrently affected zones.
  • Permanent Water Removal: Permanent water bodies are excluded so that hazard overlays capture only temporary inundation or burn areas.
  • Binary Hazard Masks: These enable precise spatial intersection analysis with ESS maps - counting pixels of each ESS class that fall within hazard zones.

Technical Dataset Notes

  • MODIS GFD: 250 m resolution, derived from NASA MOD09 surface reflectance. Contains per-event flood extent, duration, and associated metadata. Temporal coverage: 2000–2018.
  • MODIS MCD64A1 Burned Area: 500 m resolution, global coverage, monthly temporal granularity. Derived using the MODIS surface reflectance change detection algorithm.
  • JRC GSW Permanent Water: 30 m resolution, derived from Landsat archive, used here to mask permanent water from flood hazard maps.

Spatial Operations Emphasis

  • Hazard Footprint Extraction: Isolate hazard-affected areas for specific events or long-term summaries.
  • Masking & Intersection: Enable pixel-wise comparisons between hazard exposure and ESS distribution.
  • Gap Identification: By comparing hazard layers with ESS maps, we identify high-ESS but high-hazard areas where NbS could reduce risk and maintain service delivery.
  • Decision Support: This integrated approach provides spatial evidence for prioritising climate adaptation and ecosystem protection measures.

5a Distinct by Hazard

In this step we combine observed hazard intensity (historical floods, fires) with our mitigation capacity layers to find where Nature-based Solutions (NbS) can have the biggest impact. The guiding idea is a simple, transparent gap model:

Priority (per hazard) = Hazard_Intensity × (1 − Mitigation_Capacity)
Intuition: a place is high priority if the hazard is strong and the current capacity to mitigate it is weak (i.e., there’s a big “gap”).
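The arithmetic behind this gap model is easy to sanity-check outside Earth Engine. A minimal plain-JavaScript sketch with hypothetical pixel values (the function and numbers are illustrative, not part of the workflow code):

```javascript
// Gap model: priority is high only when hazard is high AND capacity is low.
// Both inputs are assumed to be pre-normalized to 0..1.
function priorityScore(hazard01, mitigation01) {
  var gap = 1 - mitigation01;   // mitigation deficit
  return hazard01 * gap;        // 0..1
}

console.log(priorityScore(0.9, 0.1)); // strong hazard, weak capacity: ≈ 0.81
console.log(priorityScore(0.9, 0.8)); // strong hazard, strong capacity: ≈ 0.18
console.log(priorityScore(0.1, 0.1)); // weak hazard: low priority either way
```

Note the asymmetry: high capacity suppresses priority even under strong hazard, which is exactly what routes such pixels toward the protect/enhance category instead of the intervention list.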

Hotspot → Spatial overlap of high ecosystem service supply and high hazard risk. In this workflow: ESS_mask AND Hazard_mask. It’s a diagnostic layer showing where opportunities exist, without yet ranking or filtering for feasibility.

Priority → A subset of hotspots where the calculated score meets the decision cut-off. Spatially, this is: Hotspot_mask AND (Priority_score ≥ threshold). A pixel might be a hotspot but not a priority if:

  • Hazard intensity is below the threshold.
  • Mitigation capacity is already high (so the “gap” is small).
  • Weighted score is below the applied cut-off.

Protect / Enhance → High ESS areas with low hazard risk. Spatially: ESS_mask AND NOT(Hazard_mask). These are not urgent NbS intervention sites for hazard reduction but should be maintained or enhanced to sustain ecosystem function. (Note that the hazard-specific variant in Step 5.7 implements protect/enhance differently, as high hazard combined with high mitigation capacity - ecosystems already doing useful work under stress.)

Note: This method is purely spatial. In reality, deciding where to implement NbS also depends on non-spatial factors such as community acceptance, land tenure, policy context, governance capacity, technical feasibility, and budget constraints. These must be considered before moving from maps to action.
Key terms (click to expand)
  • Hazard intensity: how often/strongly a location has experienced a hazard (e.g., flood counts, burned months).
  • Mitigation capacity: our 0–1 scaled layer derived from LULC and spatial modifiers (forests, wetlands, etc.).
  • Gap: 1 − mitigation. A value near 1 means low capacity (large deficit); near 0 means high capacity.
  • Hotspot: the top fraction (percentile) of priority scores for a hazard.
  • Protect/Enhance: places with both high hazard and high capacity - keep and strengthen.

5.1 Base map & boundary (for readable maps)

We render a hillshade (ee.Terrain.hillshade) and an outline to make overlays readable. This is cosmetic, but helpful for interpretation.

5.2 Build hazard intensity layers

What we do: create two hazard layers, both as counts: Flood Hazard Intensity (FHI) from summed flood detections in the Global Flood Database, and Fire Hazard from summed burned months in MODIS MCD64A1. We mask permanent water for floods so lakes/rivers don’t inflate the signal.

Why counts? They’re simple, comparable across space, and robust to single extreme events. Counts approximate “exposure frequency,” which is useful for screening NbS opportunities.

5.3 Adaptive percentiles (robust scaling)

Raw counts can have long tails (a few places with very high values). We scale hazards to 0–1 using positive-only percentiles: compute the p90/p95 among pixels with value > 0, then divide by that number. This (a) ignores zeros, (b) reduces outlier sensitivity, and (c) keeps the top ~5–10% near 1. If nothing is positive, we safely fall back to the max.
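To make the scaling concrete, here is a plain-JavaScript sketch using a simple nearest-rank percentile on hypothetical flood counts. It illustrates the idea only; ee.Reducer.percentile uses its own estimator on masked pixels:

```javascript
// Nearest-rank percentile over positive values only (zeros ignored).
function positivePercentile(values, pct) {
  var pos = values.filter(function (v) { return v > 0; })
                  .sort(function (a, b) { return a - b; });
  if (pos.length === 0) return Math.max(1, Math.max.apply(null, values)); // fallback
  var idx = Math.ceil((pct / 100) * pos.length) - 1;
  return pos[idx];
}

// Scale to 0..1 by the positive-only percentile, clamping the tail at 1.
function scaleCounts(values, pct) {
  var p = positivePercentile(values, pct);
  return values.map(function (v) { return Math.min(v / p, 1); });
}

// Hypothetical counts with one extreme pixel (40):
var counts = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 40];
console.log(positivePercentile(counts, 90)); // 5 (the outlier no longer sets the scale)
console.log(scaleCounts(counts, 90));        // the 40 clamps to 1; a 4 maps to 0.8
```

Dividing by max() instead would give p = 40, compressing every ordinary pixel toward zero; the positive-only percentile keeps the bulk of the distribution spread across 0–1.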

5.4 Normalize mitigation capacity & compute the gap

Our mitigation layers (floodAdj, fireAdj) were allowed to go up to ~1.5 after spatial modifiers (slope, aspect, elevation). We normalize them to 0–1 by dividing by 1.5, then compute Gap = 1 − mitigation. Conceptually, the gap is a mitigation deficit: high where ecosystems currently do less to reduce the hazard.

Interpretation tips:
  • A pixel with gap ≈ 1 has little capacity - a prime candidate for NbS if hazard is high.
  • A pixel with gap ≈ 0 already has high capacity - more a protect/enhance case.

5.5 Compute priority (hazard × gap)

This multiplies exposure by deficit. It’s deliberately simple and explainable: places score high only when both hazard and gap are high. Values are 0–1 after normalization, which makes thresholds intuitive.

5.6 Extract hotspots (top tail)

We convert the continuous priority surface into a binary hotspot mask using positive-only percentiles: p60 for floods, p80 for fires (tune to your planning brief). The idea is to flag the top X% by data-driven thresholds, not arbitrary absolute cutoffs.

5.7 Identify protect/enhance zones

We also tag areas with high hazard and high mitigation (both above adaptive thresholds). These are places where ecosystems are already doing useful work under stress - high-value targets for protection or gentle enhancement.
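Taken together, Steps 5.5–5.7 amount to a simple per-pixel decision rule. A plain-JavaScript sketch (the thresholds below are hypothetical; in the workflow they come from positive-only percentiles):

```javascript
// Classify one pixel from its normalized hazard and mitigation (both 0..1).
function classifyPixel(hazard, mitigation, hazardTh, mitTh, prioTh) {
  var priority = hazard * (1 - mitigation);     // Step 5.5
  if (priority >= prioTh) return 'hotspot';     // Step 5.6: top tail of priority
  if (hazard >= hazardTh && mitigation >= mitTh) {
    return 'protect/enhance';                   // Step 5.7: working under stress
  }
  return 'other';
}

console.log(classifyPixel(0.9, 0.1, 0.6, 0.6, 0.5)); // 'hotspot'
console.log(classifyPixel(0.9, 0.9, 0.6, 0.6, 0.5)); // 'protect/enhance'
console.log(classifyPixel(0.2, 0.3, 0.6, 0.6, 0.5)); // 'other'
```

In the Earth Engine implementation the hotspot and protect masks are computed independently; ordering the checks as above is one way to make the classes mutually exclusive for reporting.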

Quality checks & knobs (click to expand)
  • Debug prints (min/max/mean) confirm inputs look sane before thresholding.
  • Percentiles (p90/p95 for scaling; p55–p80 for thresholds) are policy dials. Document your choices.
  • Scales: flood 250 m, fire 500 m, mitigation 100 m. For multi-hazard indices, resample to a common scale/grid.
// ======================================================
// NBS HOTSPOT PRIORITIZATION - DISTINCT BY HAZARD
// Hazards: Flood (GFD) and Fire (MODIS MCD64A1)
// Capacity: your floodAdj & fireAdj mitigation layers (0..~1.5)
// Logic per hazard: Priority = Hazard_Intensity × (1 − Mitigation_Capacity)
// ======================================================


// Base map + boundary
var hill = ee.Terrain.hillshade(srtm)
  .visualize({min:150, max:255, forceRgbOutput:true})
  .multiply(0.9);
var outline = cantabria.style({color:'000000', width:2, fillColor:'00000000'});

// ---------------- STEP 1. Build HAZARD intensity layers ----------------
var gfd = ee.ImageCollection('GLOBAL_FLOOD_DB/MODIS_EVENTS/V1');

// Flood hazard = count of satellite-observed floods (exclude permanent water)
var FHI_raw = gfd.select('flooded').sum().rename('FHI').clip(cantabria);
var JRC_perm = gfd.select('jrc_perm_water').sum().gte(1);
var FHI = FHI_raw.updateMask(JRC_perm.not()); // mask-out permanent water

// Fire hazard = months burned at least once (2000–2018; filter end date exclusive)
var burnedMonthly = ee.ImageCollection('MODIS/061/MCD64A1')
  .select('BurnDate')
  .filterDate('2000-01-01','2019-01-01')
  .map(function(img){ return img.gt(0); });
var FIRE = burnedMonthly.sum().rename('FIRE').clip(cantabria);

// Debug prints
print('STEP 1 · Flood hazard (count) sample stats:',
  FHI.reduceRegion({
    reducer: ee.Reducer.minMax().combine({reducer2: ee.Reducer.mean(), sharedInputs:true}),
    geometry: cantabria.geometry(), scale: 250, maxPixels: 1e9, bestEffort: true, tileScale: 2
  })
);
print('STEP 1 · Fire hazard (count) sample stats:',
  FIRE.reduceRegion({
    reducer: ee.Reducer.minMax().combine({reducer2: ee.Reducer.mean(), sharedInputs:true}),
    geometry: cantabria.geometry(), scale: 500, maxPixels: 1e9, bestEffort: true, tileScale: 2
  })
);

// Optional viz of raw counts
Map.addLayer(hill.blend(FHI.visualize({min:0, max:10, palette:['c3effe','1341e8','051cb0','001133'], opacity:0.9})),
             {}, 'STEP 1a · Flood Hazard (count)');
Map.addLayer(hill.blend(FIRE.visualize({min:0, max:24, palette:['fff5eb','fd8d3c','e6550d','a63603'], opacity:0.9})),
             {}, 'STEP 1b · Fire Hazard (count)');

// ---------------- Helpers for adaptive percentiles (robust keys) ----------------
function dictFirstNumber_(dict, fallback) {
  dict = ee.Dictionary(dict);
  var keys = dict.keys();
  var hasAny = keys.size().gt(0);
  var firstKey = ee.String(ee.Algorithms.If(hasAny, keys.get(0), ''));
  return ee.Number(ee.Algorithms.If(hasAny, dict.get(firstKey), fallback));
}
function getP_(dict, band, pct, fallback) {
  dict = ee.Dictionary(dict);
  var key1 = ee.String('p').cat(ee.Number(pct).format());               // 'p95'
  var key2 = ee.String(band).cat('_p').cat(ee.Number(pct).format());    // 'FHI_p95'
  var has1 = dict.contains(key1);
  var has2 = dict.contains(key2);
  return ee.Number(ee.Algorithms.If(
    has1, dict.get(key1),
    ee.Algorithms.If(
      has2, dict.get(key2),
      dictFirstNumber_(dict, fallback)
    )
  ));
}
function posPercentile(img, pct, geom, scale, fallback) {
  img = ee.Image(img);
  var band = ee.String(img.bandNames().get(0));
  var d = img.updateMask(img.gt(0)).reduceRegion({
    reducer: ee.Reducer.percentile([pct]),
    geometry: geom, scale: scale, maxPixels: 1e13, bestEffort: true, tileScale: 2
  });
  return getP_(d, band, pct, fallback);
}
function posPctlThreshold(img01, pct, geom, scale, fallback) {
  img01 = ee.Image(img01);
  var band = ee.String(img01.bandNames().get(0));
  var d = img01.updateMask(img01.gt(0)).reduceRegion({
    reducer: ee.Reducer.percentile([pct]),
    geometry: geom, scale: scale, maxPixels: 1e13, bestEffort: true, tileScale: 2
  });
  return getP_(d, band, pct, fallback);
}

// --------------- STEP 2. Normalize hazards (0..1) with positive-only p95 ---------------
var FHI_max = ee.Number(
  FHI.reduceRegion({reducer: ee.Reducer.max(), geometry: cantabria.geometry(),
                    scale: 250, maxPixels: 1e13, bestEffort: true, tileScale: 2}).get('FHI')
).max(1);
var FIRE_max = ee.Number(
  FIRE.reduceRegion({reducer: ee.Reducer.max(), geometry: cantabria.geometry(),
                     scale: 500, maxPixels: 1e13, bestEffort: true, tileScale: 2}).get('FIRE')
).max(1);

// Positive-only percentile scalers (p90 for flood, p95 for fire); fall back to max if empty
var FHI_p90 = posPercentile(FHI, 90, cantabria.geometry(), 250, FHI_max);
var FIRE_p95 = posPercentile(FIRE, 95, cantabria.geometry(), 500, FIRE_max);
print('STEP 2 · Positive-only scalers → Flood (p90):', FHI_p90, 'Fire (p95):', FIRE_p95);

// Normalize to 0..1
var floodHaz = FHI.divide(FHI_p90).clamp(0, 1).rename('flood_haz');
var fireHaz  = FIRE.divide(FIRE_p95).clamp(0, 1).rename('fire_haz');

Map.addLayer(hill.blend(floodHaz.visualize({min:0, max:1, palette:['e0f3ff','74a9cf','0570b0'], opacity:0.9})),
             {}, 'STEP 2a · Flood Hazard (0–1)');
Map.addLayer(hill.blend(fireHaz.visualize({min:0, max:1, palette:['fee8c8','fdbb84','e34a33'], opacity:0.9})),
             {}, 'STEP 2b · Fire Hazard (0–1)');

// --------------- STEP 3. Normalize MITIGATION capacity (0..1) ---------------
var floodMit = floodAdj.divide(1.5).clamp(0, 1).rename('flood_mit');
var fireMit  = fireAdj.divide(1.5).clamp(0, 1).rename('fire_mit');

print('STEP 3 · Mitigation stats (flood):',
  floodMit.reduceRegion({
    reducer: ee.Reducer.minMax().combine({reducer2: ee.Reducer.mean(), sharedInputs:true}),
    geometry: cantabria.geometry(), scale: 100, maxPixels: 1e9, bestEffort: true, tileScale: 2
  })
);
print('STEP 3 · Mitigation stats (fire):',
  fireMit.reduceRegion({
    reducer: ee.Reducer.minMax().combine({reducer2: ee.Reducer.mean(), sharedInputs:true}),
    geometry: cantabria.geometry(), scale: 100, maxPixels: 1e9, bestEffort: true, tileScale: 2
  })
);

Map.addLayer(hill.blend(floodMit.visualize({min:0, max:1, palette:['f7fcf5','a6dba0','1b7837'], opacity:0.9})),
             {}, 'STEP 3a · Flood Mitigation (0–1)');
Map.addLayer(hill.blend(fireMit.visualize({min:0, max:1, palette:['f7fcf5','a6dba0','1b7837'], opacity:0.9})),
             {}, 'STEP 3b · Fire Mitigation (0–1)');

// --------------- STEP 4. Mitigation GAP (1 − mitigation) ---------------
var floodGap = ee.Image(1).subtract(floodMit).rename('flood_gap');
var fireGap  = ee.Image(1).subtract(fireMit).rename('fire_gap');

Map.addLayer(hill.blend(floodGap.visualize({min:0, max:1, palette:['ffffff','fde0ef','d6604d','b2182b'], opacity:0.9})),
             {}, 'STEP 4a · Flood Gap (low capacity)');
Map.addLayer(hill.blend(fireGap.visualize({min:0, max:1, palette:['ffffff','fde0ef','d6604d','b2182b'], opacity:0.9})),
             {}, 'STEP 4b · Fire Gap (low capacity)');

// --------------- STEP 5. PRIORITY per hazard ---------------
var prioFlood = floodHaz.multiply(floodGap).rename('prio_flood');  // 0..1
var prioFire  = fireHaz.multiply(fireGap).rename('prio_fire');     // 0..1

var prioPal = ['ffffcc','ffeda0','feb24c','fd8d3c','f03b20','bd0026'];
Map.addLayer(hill.blend(prioFlood.visualize({min:0, max:1, palette: prioPal, opacity:0.95})),
             {}, 'STEP 5a · NbS Priority (FLOOD)');
Map.addLayer(hill.blend(prioFire.visualize({min:0, max:1, palette: prioPal, opacity:0.95})),
             {}, 'STEP 5b · NbS Priority (FIRE)');

// --------------- STEP 6. HOTSPOTS per hazard - positive-only percentile thresholds ---------------
var thFlood = posPctlThreshold(prioFlood, 60, cantabria.geometry(), 100, 0.2);
var thFire  = posPctlThreshold(prioFire,  80, cantabria.geometry(), 100, 0.2);
print('STEP 6 · Hotspot thresholds (positive-only) → Flood (p60):', thFlood, 'Fire (p80):', thFire);

var hotspotFlood = prioFlood.gte(thFlood).selfMask().rename('hotspot_flood');
var hotspotFire  = prioFire.gte(thFire).selfMask().rename('hotspot_fire');

Map.addLayer(hotspotFlood.visualize({palette:['#ff0000'], opacity:0.65}).clip(cantabria),
             {}, 'STEP 6a · HOTSPOT (FLOOD)');
Map.addLayer(hotspotFire.visualize({palette:['#ff7f00'], opacity:0.65}).clip(cantabria),
             {}, 'STEP 6b · HOTSPOT (FIRE)');

// --------------- STEP 7. PROTECT / ENHANCE zones - adaptive (percentiles) ---------------
var hzFloodP55  = posPercentile(floodHaz, 55, cantabria.geometry(), 250, 0.6);
var mitFloodP55 = posPercentile(floodMit, 55, cantabria.geometry(), 100, 0.6);
var hzFireP70   = posPercentile(fireHaz,  70, cantabria.geometry(), 500, 0.6);
var mitFireP70  = posPercentile(fireMit,  70, cantabria.geometry(), 100, 0.6);
print('STEP 7 (adaptive) · thresholds → Flood (hz p55, mit p55):', hzFloodP55, mitFloodP55,
      ' Fire (hz p70, mit p70):', hzFireP70, mitFireP70);

var protectFlood = floodHaz.gte(hzFloodP55).and(floodMit.gte(mitFloodP55)).selfMask();
var protectFire  = fireHaz.gte(hzFireP70).and(fireMit.gte(mitFireP70)).selfMask();

Map.addLayer(protectFlood.visualize({palette:['#2ca25f'], opacity:0.85}).clip(cantabria),
             {}, 'STEP 7a · Protect/Enhance (FLOOD)');
Map.addLayer(protectFire.visualize({palette:['#238b45'], opacity:0.85}).clip(cantabria),
             {}, 'STEP 7b · Protect/Enhance (FIRE)');
Normally vs. best practice - a simpler route is to scale hazards by max() and pick a fixed hotspot threshold (e.g., ≥ 0.7). That works, but it is sensitive to outliers and regional differences. We instead use positive-only percentiles for both scaling and thresholding, so the method adapts to each study area’s distribution and remains reproducible.
Up next: exporting priority and hotspot rasters, plus a combined multi-hazard NbS index (with weights and optional confidence bands) for planning dashboards.

Interpreting the outputs

Hotspot → Spatial overlap of high ESS and high hazard risk. In our workflow: ESS_mask AND Hazard_mask. This is a diagnostic layer showing where the opportunity/problem exists - it’s always computed first.

Priority → Hotspots ranked using the formula Hazard_Intensity × (1 − Mitigation_Capacity). A pixel may be a hotspot but not a priority if hazard intensity is below the threshold, mitigation capacity is already high, or the weighted score falls below the cut-off. Every priority area is a hotspot, but not every hotspot will be a priority.

Protect / Enhance → High ESS areas with low hazard risk (ESS_mask AND NOT(Hazard_mask)). These areas are important for maintaining ecosystem services even if immediate hazard mitigation is not required.

Important: This workflow is purely spatial. In reality, NbS siting also depends on non-spatial factors such as governance capacity, land tenure, legal constraints, financing, technical feasibility, and community acceptance. These must be considered before moving from maps to on-the-ground implementation.

Step 6 - Exports, Multi-Hazard NbS Index & Integrating Local Data

You now have per-hazard priority rasters (prio_flood, prio_fire) and hotspot masks (hotspot_flood, hotspot_fire), plus protect/enhance masks. This section covers (A) exporting those layers, (B) creating a combined multi-hazard NbS index with adjustable weights and optional confidence bands, (C) sensible tweaks for different contexts, and (D) how to swap in local hazard/ESS/LULC data to rerun the same roadmap.

Docs: exporting & projections

6a. Export per-hazard priority, hotspots, and protect/enhance

Exports are useful for dashboards, GIS sharing, or further modelling. Use Export.image.toDrive for rasters; Export.table.toDrive for zonal summaries. Set a consistent CRS, scale, and region.

// ---------- STACK & EXPORT RASTERS (GeoTIFF) ----------
var prioStack = ee.Image()
  .addBands(prioFlood)       // 0–1
  .addBands(prioFire)        // 0–1
  .addBands(hotspotFlood)    // 0/1 (masked)
  .addBands(hotspotFire)     // 0/1 (masked)
  .addBands(protectFlood)    // 0/1 (masked)
  .addBands(protectFire)     // 0/1 (masked)
  .rename([
    'prio_flood','prio_fire',
    'hotspot_flood','hotspot_fire',
    'protect_flood','protect_fire'
  ])
  .clip(cantabria);

// Optional: set a common projection (e.g., 100 m in EPSG:3035 for Europe)
var targetCRS = 'EPSG:3035';
var targetScale = 100;
prioStack = prioStack
  .resample('bilinear')
  .reproject({crs: targetCRS, scale: targetScale});

Export.image.toDrive({
  image: prioStack,
  description: 'NbS_Priority_Hotspots_Cantabria',
  folder: 'desirmed',
  fileNamePrefix: 'NbS_Priority_Hotspots_Cantabria',
  region: cantabria.geometry().bounds(1),
  scale: targetScale,
  crs: targetCRS,
  maxPixels: 1e13
});

// ---------- OPTIONAL: ZONAL STATS AS TABLE ----------
var meanStats = prioStack.reduceRegion({
  reducer: ee.Reducer.mean(),
  geometry: cantabria.geometry(),
  scale: targetScale,
  maxPixels: 1e13,
  bestEffort: true
});

var outFeat = ee.Feature(null, meanStats).set('region', 'Cantabria');
Export.table.toDrive({
  collection: ee.FeatureCollection([outFeat]),
  description: 'NbS_Priority_Means_Cantabria',
  folder: 'desirmed',
  fileFormat: 'CSV'
});

6b. Build a Multi-Hazard NbS Index (weights & optional confidence)

Many planners prefer a single combined surface for screening. We build an index as a weighted sum of per-hazard priorities. Weights (wFlood, wFire) should sum to 1 (or we normalize them). If you’ve created confidence bands per hazard (e.g., from expert certainty or validation), you can also propagate them to the final index.

// ---------- WEIGHTED COMBINATION ----------
var wFlood = 0.6;   // emphasize floods (example)
var wFire  = 0.4;   // fire gets 40%
var wSum   = wFlood + wFire;
var wF     = wFlood / wSum;
var wFi    = wFire  / wSum;

var nbsIndex = prioFlood.multiply(wF)
  .add(prioFire.multiply(wFi))
  .rename('nbs_index')               // 0–1 (approx)
  .clip(cantabria);

// ---------- OPTIONAL: CONFIDENCE (if available) ----------
// Example: confidence bands per hazard on a 1–3 scale (low/med/high)
// Suppose you have floodConf, fireConf (ee.Image, same footprint).
// Convert the 1–3 scale to 0–1 with unitScale(1, 3): 1→0, 2→0.5, 3→1. Then weighted-combine.
var hasConfidence = false; // flip to true if you have these layers
var nbsConfidence = ee.Image(1); // placeholder default

if (hasConfidence) {
  var floodConf01 = floodConf.unitScale(1,3);  // -> 0..1
  var fireConf01  = fireConf.unitScale(1,3);   // -> 0..1
  nbsConfidence = floodConf01.multiply(wF).add(fireConf01.multiply(wFi))
                  .rename('nbs_confidence');
}

// ---------- EXPORT COMBINED INDEX ----------
var toExport = hasConfidence ? nbsIndex.addBands(nbsConfidence) : nbsIndex;
Export.image.toDrive({
  image: toExport.reproject({crs: targetCRS, scale: targetScale}),
  description: 'NbS_Index_Combined',
  folder: 'desirmed',
  fileNamePrefix: 'NbS_Index_Combined',
  region: cantabria.geometry().bounds(1),
  scale: targetScale,
  crs: targetCRS,
  maxPixels: 1e13
});
Interpretation: the combined nbs_index highlights places with consistently high gap-weighted hazard priority. Use the (optional) confidence band to communicate evidence strength, following good practice in IPBES or IPCC-style confidence framing.

6c. Knobs & Tweaks (what you can tune safely)

  • Percentiles: p95 (scaling), p60/p80 (hotspots), p70 (protect). Use positive-only percentiles to keep thresholds adaptive and reproducible.
  • Normalization caps: If your mitigation adj layers top at ~1.3 (not 1.5), divide by 1.3 instead.
  • Common scale & CRS: For multi-hazard overlays, resample to a shared grid (e.g., 100 m EPSG:3035 in the EU) - see projections & resampling.
  • Hotspot area targeting: Instead of fixed percentiles, threshold by area (e.g., top 10,000 ha) via iterative search or histograms.
  • Masks: Excluding permanent water from floods (JRC) avoids misclassifying rivers/lakes as “flood hazard.” Adjust masks to your context.
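The area-targeting knob can be sketched in plain JavaScript: sort scores descending and read off the score at the pixel count matching the target area. This assumes equal-area pixels (e.g., a 100 m grid is 1 ha per pixel); inside Earth Engine you would approximate the same search with a histogram or percentiles. The function and values below are illustrative:

```javascript
// Find the score threshold above which roughly `targetHa` of area lies.
function thresholdForTargetArea(scores, pixelAreaHa, targetHa) {
  var sorted = scores.slice().sort(function (a, b) { return b - a; }); // descending
  var nPixels = Math.min(sorted.length, Math.round(targetHa / pixelAreaHa));
  return nPixels > 0 ? sorted[nPixels - 1] : Infinity;
}

// Hypothetical priority scores on 1 ha pixels; keep the top 3 ha:
var scores = [0.1, 0.9, 0.4, 0.7, 0.2, 0.8];
var th = thresholdForTargetArea(scores, 1, 3);
console.log(th);                                                     // 0.7
console.log(scores.filter(function (s) { return s >= th; }).length); // 3 pixels → 3 ha
```

Unlike a fixed percentile, this guarantees a planning-relevant footprint (e.g., "the top 10,000 ha") regardless of how the score distribution shifts between regions.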

6d. Integrating Local Data (LULC, hazards, ESS)

You can plug in local datasets seamlessly. Two common cases:

Case 1 - Local hazard rasters (e.g., flood depth, fire susceptibility)

Replace the hazard intensity with your local raster(s). Normalize to 0–1 (e.g., divide by positive-only p95), then proceed with the same gap logic. Ensure alignment (CRS/grid) with mitigation layers.

// Load your local hazard (e.g., flood depth raster in meters)
var floodDepthLocal = ee.Image('projects/your-project/assets/flood_depth_100y')
  .reproject({crs: targetCRS, scale: targetScale});

// Positive-only p95 scaling to 0–1
var fd_p95 = floodDepthLocal.updateMask(floodDepthLocal.gt(0))
  .reduceRegion({
    reducer: ee.Reducer.percentile([95]),
    geometry: cantabria.geometry(),
    scale: targetScale, maxPixels: 1e13, bestEffort: true
  }).values().get(0);

var floodHazLocal = floodDepthLocal.divide(ee.Number(fd_p95)).clamp(0,1).rename('flood_haz_local');

// Priority with existing flood mitigation gap:
var floodGapLocal = ee.Image(1).subtract(floodMit);           // reuse normalized mitigation (0–1)
var prioFloodLocal = floodHazLocal.multiply(floodGapLocal);   // same formula

Case 2 - Local LULC/ESS layers (your own classification)

If you have a local land-use/land-cover (LULC) map or direct ESS layers, build mitigation exactly as before: remap → normalize → adjust with terrain (optional). Keep values in 0–1 for easy integration.

// Example: local LULC (categorical) and your own mitigation scores (0..1)
var lulcLocal = ee.Image('projects/your-project/assets/lulc_local_2023').rename('lulc');

// Dictionary: class → mitigation score (0..1)
var mitDict = { 10:0.75, 11:0.60, 20:0.50, 30:0.20 }; // example codes
var from = Object.keys(mitDict).map(function(k){return parseInt(k, 10);});
var to   = from.map(function(k){return mitDict[k];});

var mitLocal = lulcLocal.remap(from, to).rename('mit_local'); // 0..1

// Optionally, apply spatial modifiers (slope/aspect/elevation) as in Step 4
var mitLocalAdj = mitLocal
  .multiply(ee.Image(1).subtract(slope.divide(60).multiply(0.2))) // gentle slope penalty
  .clamp(0,1)
  .rename('mit_local_adj');
Tip: When swapping in local data, always harmonize scale & CRS and re-run the percentile thresholds on the local distributions (don’t reuse thresholds calibrated for other regions).

6e. Summary

We’ve taken you from proxy-based, spatially enriched mitigation layers to hazard-aware NbS prioritization, then exported per-hazard priorities, hotspots, and a combined multi-hazard index suitable for decision dashboards. The method remains explainable: Priority = Hazard × (1 − Mitigation), with adaptive thresholds (positive-only percentiles) and optional confidence bands for transparent uncertainty communication.

Where to go next: It is worth stressing again that deciding where to site an NbS is complex and goes beyond spatially aggregating the risks to be mitigated. Revisiting the method presented here with local experts, local knowledge, and the socio-economic and governance parameters relevant to the risk(s) in question will make it far more meaningful for implementation.

Recommended Readings

Additional key literature:

  • Eigenbrod, F., et al. (2010). The impact of proxy‐based methods on mapping the distribution of ecosystem services. Journal of Applied Ecology, 47(2), 377–385 - Empirically tests the spatial mismatches introduced by proxy-based ES maps.
  • Keeler, B.L., et al. (2012). Linking water quality and well-being for improved assessment and valuation of ecosystem services. PNAS, 109(45), 18619–18624 - Demonstrates integration of ecological data with human well-being indicators, relevant for interpreting hazard–ESS overlaps.
  • Tellman, B., et al. (2021). Satellite imaging reveals increased proportion of population exposed to floods. Nature, 596, 80–86 - Underpins the Global Flood Database; important for understanding dataset derivation and accuracy.
  • Giglio, L., et al. (2018). The Collection 6 MODIS burned area mapping algorithm. Remote Sensing of Environment, 217, 72–85 - Technical background on the burned area product’s algorithm, temporal resolution, and limitations.
  • Türkay, Z., & Tezer, A. Contribution of integrated ecosystem services to urban planning tools: Can it be more functional for the sustainability of ecosystems? One Ecosystem, 9, e121553 - Explains how hazard-aware ESS maps can be embedded into decision-making processes at the local level.

sessionInfo()
R version 4.4.0 (2024-04-24 ucrt)
Platform: x86_64-w64-mingw32/x64
Running under: Windows 11 x64 (build 22631)

Matrix products: default


locale:
[1] LC_COLLATE=English_Germany.utf8  LC_CTYPE=English_Germany.utf8   
[3] LC_MONETARY=English_Germany.utf8 LC_NUMERIC=C                    
[5] LC_TIME=English_Germany.utf8    

time zone: Europe/Berlin
tzcode source: internal

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] terra_1.8-5     sf_1.0-19       workflowr_1.7.1

loaded via a namespace (and not attached):
 [1] jsonlite_1.8.8     compiler_4.4.0     promises_1.3.0     Rcpp_1.0.13       
 [5] stringr_1.5.1      git2r_0.33.0       callr_3.7.6        later_1.3.2       
 [9] jquerylib_0.1.4    yaml_2.3.10        fastmap_1.2.0      R6_2.5.1          
[13] classInt_0.4-10    knitr_1.48         tibble_3.2.1       units_0.8-5       
[17] rprojroot_2.0.4    DBI_1.2.3          bslib_0.8.0        pillar_1.9.0      
[21] rlang_1.1.4        utf8_1.2.4         cachem_1.1.0       stringi_1.8.4     
[25] httpuv_1.6.15      xfun_0.47          getPass_0.2-4      fs_1.6.4          
[29] sass_0.4.9         cli_3.6.3          magrittr_2.0.3     class_7.3-22      
[33] ps_1.8.1           grid_4.4.0         digest_0.6.36      processx_3.8.4    
[37] rstudioapi_0.16.0  lifecycle_1.0.4    vctrs_0.6.5        KernSmooth_2.23-22
[41] proxy_0.4-27       evaluate_0.24.0    glue_1.7.0         whisker_0.4.1     
[45] codetools_0.2-20   e1071_1.7-16       fansi_1.0.6        rmarkdown_2.28    
[49] httr_1.4.7         tools_4.4.0        pkgconfig_2.0.3    htmltools_0.5.8.1