Last updated: 2025-08-29
Knit directory: nbs-workflow/
This page is built with R Markdown and workflowr 1.7.1. See Past versions for the development history.
This guideline is based on the NBRACER project Deliverable 5.3 (Quantification of Ecosystems and NbS Hotspot Identification), which adopted two complementary approaches: quantifying ecosystem services from land cover, and identifying NbS hotspots where those services intersect with hazards.
Mapping and quantifying the spatial distribution of ecosystem services (ES) supports climate-resilient policy and planning for nature-based solutions (NbS). Normally, this is done by modelling and assessing the supply of ecosystem services (ESS) at a given location. However, direct monitoring of every service is rarely feasible at larger scales, or even locally where measurement capacity or in-situ data are lacking. Ecosystem service assessments therefore often rely on land cover as a proxy in the absence of primary field-based data.
Several scholarly works (Burkhard et al., 2009; Jacobs et al., 2015; Maes et al., 2012) have established land cover as a reasonable proxy for estimating ecosystem service supply, especially where direct measurements are unavailable. Here, we modify the approach of Burkhard et al. (2009) and present a methodology that leverages the CORINE Land Cover (CLC) inventory with expert-judgement approximations to generate both regulating and provisioning ES layers.
ES mapping traditionally follows four families of approaches (Burkhard et al., 2009; Jacobs et al., 2015; Maes et al., 2012):
Proxy methods dominate large-area studies because they are quick and inexpensive, but they risk sizable errors by assuming uniform service supply within a land-cover class (Eigenbrod et al., 2010a; Eigenbrod et al., 2010b). To mitigate those shortcomings, we (i) use weighted score tables calibrated from recent European studies and best practices for improving ESS mapping (Martínez-Harms et al., 2016; Egoh et al., 2008), (ii) implement optional probability layers following Nedkov & Burkhard (2012), and (iii) provide UI tools for rapid sensitivity checks.
Paradigm | Typical Data | Core Strength | Key Limitation |
---|---|---|---|
Proxy / Matrix | Land-cover maps (CORINE, GlobCover) | Fast, transparent, scalable | Assumes equal service across class; static |
Process-based | Climate, DEM, soils, hydrology | Mechanistic; scenario-ready | High data & expertise demand |
Empirical / ML | Field plots, crowdsourced, RS metrics | Data-driven accuracy | Often local; expensive to upscale |
Participatory | Workshops, interviews | Cultural relevance; stakeholder buy-in | Subjective; coarse spatial detail |
Hybrid (this guide) | CORINE + weighted scores + optional probabilities | Continental coverage; reproducible; GEE-native | Relies on proxy assumptions; field validation advised |
We will walk you through:
- defining a study area (country/region boundaries or your own asset),
- linking CORINE land-cover classes to ESS scores and reclassifying them with `.remap()`,
- adjusting the scores with terrain modifiers (slope, aspect, elevation),
- extracting zonal statistics and exporting tables and rasters, and
- overlaying hazard data to prioritize NbS hotspots.
Each step is designed to be modular and replicable, allowing you to adapt the workflow to your specific needs or local conditions. To handle these datasets and computations efficiently, we use Google Earth Engine throughout.
At the end of this guide, you will have a replicable pipeline that can be deployed to screen potential NbS implementation sites, evaluate land-use trade-offs, and track the ES these areas produce, together with the hazards they can help mitigate and the hazards that can affect those ecosystems.
Before mapping ecosystem services (ESS), we need a study area: the geographic extent where our calculations run and our maps render. In GIS terms, this is just a polygon (or set of polygons). We’ll show two common routes:
ESS scoring and any later hazard–mitigation overlays must be computed over a consistent footprint. Keeping a well-defined boundary helps with reproducibility, performance, and interpretation (e.g., comparing results across regions). If you’re new to Earth Engine, a FeatureCollection is simply a set of vector features (points/lines/polygons) with attributes, perfect for holding country/region boundaries.
- `ee.Filter.eq`: builds an equality filter for attribute queries.
- `filterBounds()`: spatial filter; keeps features that intersect a geometry.
- `Map.centerObject()`: recenters the map on a given geometry.

The snippet below loads country and regional boundaries, filters to Spain (ADM0), then narrows to Cantabria (ADM1). This route is great when you don’t yet have a local shapefile.
// =========================================
// 1) Load all boundary levels (geoBoundaries mirror)
// =========================================
var adm0 = ee.FeatureCollection("projects/sat-io/open-datasets/geoboundaries/CGAZ_ADM0");
var adm1 = ee.FeatureCollection("projects/sat-io/open-datasets/geoboundaries/CGAZ_ADM1");
var adm2 = ee.FeatureCollection("projects/sat-io/open-datasets/geoboundaries/CGAZ_ADM2");
// =========================================
// 2) Spain – Cantabria (ADM1 via ADM0)
// =========================================
var spain = adm0.filter(ee.Filter.eq('shapeName', 'Spain')); // attribute filter
var adm1_spain = adm1.filterBounds(spain); // spatial filter (intersects Spain)
var cantabria = adm1_spain.filter(ee.Filter.eq('shapeName', 'Cantabria')); // attribute filter
// =========================================
// 3) Display Cantabria boundary
// =========================================
Map.addLayer(cantabria, {color: 'blue'}, 'Spain – Cantabria (ADM1)', false);
Map.centerObject(cantabria, 10);
We use `ee.Filter.eq('shapeName','Spain')` to isolate Spain, then `filterBounds(spain)` so we only keep ADM1 regions that touch Spain’s geometry.
If you already uploaded a shapefile/GeoJSON (see Earth Engine guide: Upload tables as assets), load it directly.
Important: Asset paths must be strings.
Wrong: ee.FeatureCollection(projects/ee-desmond/assets/cantabria)
Right: ee.FeatureCollection("projects/ee-desmond/assets/cantabria")
// If you have an uploaded boundary asset, load it like this:
var cantabria = ee.FeatureCollection("projects/ee-desmond/assets/cantabria");
// (Optional) Visualize
Map.addLayer(cantabria, {color: 'blue'}, 'Cantabria (asset)', false);
Map.centerObject(cantabria, 10);
Property names (e.g., `shapeName`) can vary by dataset.
Use the Code Editor’s Inspector to click a feature and confirm property keys before filtering.
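If you prefer to check the keys programmatically rather than clicking in the Inspector, here is a minimal sketch using the collections loaded above:
// Print the property keys of one ADM1 feature and the region names within Spain
print('ADM1 property keys:', ee.Feature(adm1.first()).propertyNames());
print('ADM1 names in Spain:', adm1_spain.aggregate_array('shapeName'));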
With the study area fixed, every subsequent step (linking land cover to ESS scores, raster reclassification, uncertainty bands, and hazard overlays) will be clipped and summarized against this boundary. That ensures your ESS intensity maps and your future NbS hotspot analysis remain spatially consistent and comparable across regions.
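As a quick sanity check on the boundary itself, you can print its area before moving on (a minimal sketch; the `1` passed to `area()` is the allowed error in metres):
// Report the study-area size in km² to confirm the boundary loaded as expected
print('Study area (km²):', cantabria.geometry().area(1).divide(1e6));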
At the heart of NbS spatial planning is a relational table linking land-cover (CORINE) codes to ecosystem services (ESS), each scored for its potential to mitigate specific hazards such as floods, erosion, or heat. This is the widely used matrix approach (Burkhard et al., 2009), where land-use classes are mapped to ESS with a score representing their functional capacity.
We use a 0–1 scale (0 = no potential, 1 = very high potential) for scoring to ensure clarity and compatibility with later weighting and combination steps. Each ESS will form a separate raster band representing its spatial distribution of potential.
Unlisted classes default to 0 (via `remap(..., 0)`). We start with five mitigation themes commonly linked to regulating services: Flood, Fire, Drought, Heat wave, and Erosion. You can add more later (e.g., water quality regulation, coastal protection). See matrix-style mapping ideas in Nedkov & Burkhard (2012).
// =========================================
// CORINE Land Cover 2018 (100 m) - clip to your boundary
// Docs: https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_CORINE_V20_100m_2018
// =========================================
var corine = ee.Image('COPERNICUS/CORINE/V20/100m/2018').select('landcover').clip(cantabria);
// (Optional) quick look
Map.addLayer(corine, {}, 'CORINE 2018 (raw)', false);
We encode the scores as JavaScript objects. Every value reflects potential (not a measured flow).
Unlisted classes are mapped to `0` by default to avoid accidental inflation.
Use the official CLC code list to extend or refine the tables (reference); a short extension sketch follows the score tables below.
// =========================================
// FLOOD mitigation (0–1)
// =========================================
var floodMitigationScore = {
411: 0.95, // Inland marshes - storage, roughness, attenuation
412: 0.95, // Peatbogs - high retention
421: 0.75, // Salt marshes
311: 0.75, // Broad-leaved forest - canopy, infiltration
312: 0.60, // Coniferous forest
213: 0.40, // Rice fields - temporary water retention
244: 0.60, // Agroforestry - perennial cover increases infiltration
321: 0.60, // Natural grasslands - surface roughness
323: 0.75, // Sclerophyllous vegetation - structure, soils
324: 0.40, // Transitional woodland/shrub - mixed, variable
523: 0.90, // Estuaries - storage/expansion areas
521: 0.75, // Coastal lagoons
511: 0.40, // Water courses - conveyance; limited attenuation on-channel
141: 0.20, // Green urban areas - local infiltration/cooling
112: 0.10 // Discontinuous urban fabric - some permeable fraction
};
// =========================================
// FIRE mitigation (0–1)
// =========================================
var fireMitigationScore = {
231: 0.40, // Pastures - lower fuel continuity vs dense forest
241: 0.30, // Annual crops - seasonal moisture
243: 0.40, // Mosaic vegetation - breaks continuity
311: 0.20, // Broad-leaved forest - can carry fire under drought
322: 0.50, // Heathland - mixed; can reduce spread with mosaics
324: 0.30, // Transitional woodland/shrub
411: 0.80, // Inland marshes - high moisture
512: 0.90, // Lakes - strong barrier
141: 0.30, // Green urban areas - breaks, irrigated
112: 0.20 // Discontinuous urban - mixed materials
};
// =========================================
// DROUGHT mitigation (0–1)
// =========================================
var droughtMitigationScore = {
411: 0.95, // Marshes - buffering water availability
412: 0.95, // Peat bogs - storage
311: 0.80, // Broad-leaved forest - rooting depth/ET shading
312: 0.70, // Coniferous forest
321: 0.60, // Natural grasslands
241: 0.75, // Agroforestry / perennial systems
211: 0.20, // Arable land - exposure/high demand
142: 0.05, // Sports/leisure - manicured turf, water demand
521: 0.70 // Coastal lagoons - local buffering
};
// =========================================
// HEAT WAVE mitigation (0–1)
// =========================================
var heatWaveMitigationScore = {
311: 0.90, // Broad-leaved forest - canopy cooling
312: 0.80, // Coniferous forest
313: 0.85, // Mixed forest
141: 0.60, // Green urban areas - shading/ET
142: 0.50, // Sports/leisure
112: 0.30, // Discontinuous urban fabric - partial greening
243: 0.40, // Mosaic vegetation
321: 0.65, // Natural grasslands
324: 0.50, // Transitional woodland/shrub
322: 0.40, // Heathland
512: 0.80, // Lakes - evap cooling
411: 0.70 // Inland marshes - ET cooling
};
// =========================================
// EROSION mitigation (0–1)
// =========================================
var erosionMitigationScore = {
244: 0.60, // Agroforestry - root systems, cover
321: 0.60, // Natural grasslands - continuous cover
323: 0.75, // Sclerophyllous - deep roots, structure
324: 0.40, // Transitional woodland/shrub
211: 0.20, // Arable land - exposed soil
243: 0.50, // Mosaic crops/natural - patchiness reduces runoff
222: 0.25, // Permanently irrigated - management-dependent
231: 0.30, // Pastures
312: 0.50, // Coniferous forest
311: 0.60, // Broad-leaved forest
313: 0.55 // Mixed forest
};
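If you want to extend one of the tables with an additional CLC class, simply add a key before the remapping step. A minimal sketch with a hypothetical score (choose the actual value with expert judgement and the CLC code list):
// Hypothetical extension: add CLC 313 (Mixed forest) to the flood table
floodMitigationScore[313] = 0.65; // illustrative value only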
We now convert the dictionaries to parallel lists and apply `image.remap()`. Note the explicit default `0` for any classes not in the table (transparent and conservative).
// =========================================
// Helper: remap CORINE codes → scores (default 0 if missing)
// =========================================
function remapMitigation(image, scoreDict, newName) {
var fromCodes = Object.keys(scoreDict).map(function(k){ return parseInt(k, 10); });
var toScores = fromCodes.map(function(k){ return scoreDict[k]; });
return image.remap(fromCodes, toScores, 0).rename(newName);
}
// =========================================
// Base (per-hazard) layers
// =========================================
var floodBase = remapMitigation(corine, floodMitigationScore, 'Flood_Mitigation');
var fireBase = remapMitigation(corine, fireMitigationScore, 'Fire_Mitigation');
var droughtBase = remapMitigation(corine, droughtMitigationScore, 'Drought_Mitigation');
var heatBase = remapMitigation(corine, heatWaveMitigationScore, 'HeatWave_Mitigation');
var erosionBase = remapMitigation(corine, erosionMitigationScore, 'Erosion_Mitigation');
// Stack into a single multi-band image
var mitigationImage = floodBase.addBands([fireBase, droughtBase, heatBase, erosionBase]);
// Bands: Flood_Mitigation, Fire_Mitigation, Drought_Mitigation, HeatWave_Mitigation, Erosion_Mitigation
Keep a consistent palette and `{min:0, max:1}` for comparability across hazards.
// =========================================
// Visualization (0–1). Toggle in Layers panel.
// =========================================
var vis01 = {min: 0, max: 1, palette: ['#f7fbff','#deebf7','#c6dbef','#9ecae1','#6baed6','#4292c6','#2171b5','#084594']};
Map.addLayer(mitigationImage.select('Flood_Mitigation'), vis01, 'Mitigation · Flood', true);
Map.addLayer(mitigationImage.select('Fire_Mitigation'), vis01, 'Mitigation · Fire', false);
Map.addLayer(mitigationImage.select('Drought_Mitigation'), vis01, 'Mitigation · Drought', false);
Map.addLayer(mitigationImage.select('HeatWave_Mitigation'), vis01, 'Mitigation · Heat Wave', false);
Map.addLayer(mitigationImage.select('Erosion_Mitigation'), vis01, 'Mitigation · Erosion', false);
Values are relative potentials inferred from land cover - not measured service flows. Use them for screening (e.g., locating NbS opportunities) and comparative analysis. Where decisions carry high stakes, combine with process-based or empirical checks (e.g., hydrologic models, local monitoring).
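One lightweight check you can run directly in the Code Editor is a score-perturbation test: shift all table values by a fixed amount and see how much the resulting layer changes. A minimal sketch for the flood table (the +0.1 perturbation is an assumption you should tune):
// Sensitivity sketch: perturb flood scores by +0.1 (capped at 1) and map the difference
var floodScoresUp = {};
Object.keys(floodMitigationScore).forEach(function(k) {
  floodScoresUp[k] = Math.min(floodMitigationScore[k] + 0.1, 1);
});
var floodUp = remapMitigation(corine, floodScoresUp, 'Flood_Mitigation_up');
var floodDelta = floodUp.subtract(floodBase).rename('Flood_delta');
Map.addLayer(floodDelta, {min: 0, max: 0.1, palette: ['white', 'purple']}, 'Flood score sensitivity (+0.1)', false);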
- `Image.remap`: reclassifies pixel values using lists of from and to values.
- Zonal statistics: calculates statistics of a raster dataset within the boundaries of zones defined by another dataset (we’ll use this later).

Assigning 0–1 scores with expert judgement already yields a useful relative map of ESS potential or habitat ranking. However, where a service sits in the landscape also matters: the same class performs differently on a floodplain than on a ridge. This is the key critique of pure proxy mapping highlighted by Eigenbrod et al., 2010a and 2010b.
We use 30 m SRTM for elevation, then derive slope and aspect using Earth Engine’s `ee.Terrain.slope` and `ee.Terrain.aspect`. We also build simple elevation zones to reflect lowland vs. upland dynamics.
// =============================
// 4) Topographic data
// =============================
var srtm = ee.Image('USGS/SRTMGL1_003').clip(cantabria); // elevation (m)
var slope = ee.Terrain.slope(srtm); // degrees
var aspect = ee.Terrain.aspect(srtm); // degrees from north
// (Optional preview)
Map.addLayer(srtm, {min:0, max:2500, palette:['blue','green','brown','white']}, 'Elevation (m)', false);
Map.addLayer(slope, {min:0, max:60, palette:['white','yellow','red']}, 'Slope (°)', false);
Map.addLayer(aspect,{min:0, max:360, palette:['blue','green','yellow','red','blue']}, 'Aspect (°)', false);
// Normalize slope to 0–1 (assume 60° ~ "very steep" cap)
var slopeNorm = slope.divide(60).clamp(0, 1);
// Aspect to radians and derive northness/eastness in [0,1]
// (northness=1 at north-facing, 0 at south-facing)
var aspectRad = aspect.multiply(Math.PI/180);
var northness = aspectRad.cos().unitScale(-1, 1); // [-1..1] → [0..1]
var eastness = aspectRad.sin().unitScale(-1, 1); // [-1..1] → [0..1]
// Elevation zones (tune thresholds for your region)
var elev = srtm;
var lowland = elev.lt(300);
var midland = elev.gte(300).and(elev.lte(800));
var upland = elev.gt(800);
// Hazard-specific elevation weights (multiplicative factors)
// Note: returns ee.Image with zone-wise multipliers that sum "piecewise"
function elevWeight(hazard) {
if (hazard === 'Flood') {
// Higher flood-mitigation relevance in lowlands (floodplains)
return lowland.multiply(1.10).add(midland.multiply(1.00)).add(upland.multiply(0.80));
}
if (hazard === 'Erosion') {
// Erosion control more critical in uplands/steep terrains
return lowland.multiply(0.90).add(midland.multiply(1.00)).add(upland.multiply(1.20));
}
if (hazard === 'Fire') {
// Slightly higher importance away from valley bottoms
return lowland.multiply(0.95).add(midland.multiply(1.05)).add(upland.multiply(1.10));
}
if (hazard === 'Drought') {
// Some uplift in mid/uplands (cooler/more storage), penalize low hot flats
return lowland.multiply(0.90).add(midland.multiply(1.05)).add(upland.multiply(1.10));
}
if (hazard === 'Heat') {
// Slight boost in low/mid (where people heat-stressed), neutral in high uplands
return lowland.multiply(1.05).add(midland.multiply(1.05)).add(upland.multiply(1.00));
}
}
We start from the 0–1 base layers and apply: (i) a slope factor, (ii) an aspect factor (where relevant), and (iii) the elevation-zone weight. Factors are gentle (±5–50%) to avoid overpowering the base LULC signal. Finally, we `clamp` to a reasonable upper bound (e.g., 1.5).
// =============================
// 5) Apply combined adjustments
// =============================
// Flood: flatter terrain ↑ (reduce with slope), lowlands ↑ (elevation weight)
var floodAdj = floodBase
.multiply(ee.Image(1).subtract(slopeNorm.multiply(0.50))) // steeper → lower potential
.multiply(elevWeight('Flood'))
.clamp(0, 1.5);
// Erosion: steeper terrain ↑ (increase with slope), uplands ↑
var erosionAdj = erosionBase
.multiply(ee.Image(1).add(slopeNorm.multiply(0.50))) // steeper → higher mitigation need/value
.multiply(elevWeight('Erosion'))
.clamp(0, 1.5);
// Fire: steep slopes ↓; south-facing (warmer/drier) ↓; slight elev boost mid/uplands
// northness ∈ [0..1]: 1=north-facing (cooler), 0=south-facing (warmer)
// Factor varies ~ [0.9 .. 1.1] by aspect; adjust weights if your region differs.
var fireAdj = fireBase
.multiply(ee.Image(1).subtract(slopeNorm.multiply(0.20))) // steep → lower effectiveness
.multiply(ee.Image(0.90).add(northness.multiply(0.20))) // boost north-facing, reduce south
.multiply(elevWeight('Fire'))
.clamp(0, 1.5);
// Drought: north-facing (cooler/moister) ↑; mid/uplands ↑
var droughtAdj = droughtBase
.multiply(ee.Image(1).add(northness.multiply(0.10)))
.multiply(elevWeight('Drought'))
.clamp(0, 1.5);
// Heat wave: north & east aspects ↑ (shading/wind), low/mid slight ↑
var heatAdj = heatBase
.multiply(ee.Image(1).add(northness.multiply(0.05)))
.multiply(ee.Image(1).add(eastness.multiply(0.05)))
.multiply(elevWeight('Heat'))
.clamp(0, 1.5);
// =============================
// 6) Stack adjusted layers (bands keep the same names)
// =============================
var allMitigation = floodAdj
.addBands([fireAdj, droughtAdj, heatAdj, erosionAdj])
.rename([
'Flood_Mitigation',
'Fire_Mitigation',
'Drought_Mitigation',
'HeatWave_Mitigation',
'Erosion_Mitigation'
]);
We keep a simple 2-color ramp here for contrast. You can reuse the sequential palette from Step 2 as well.
// =============================
// 7) Display
// =============================
var twoTone = ['white', 'red'];
Map.addLayer(floodAdj, {min:0, max:1.5, palette: twoTone}, 'Flood Mitigation (Adj)', true);
Map.addLayer(fireAdj, {min:0, max:1.5, palette: twoTone}, 'Fire Mitigation (Adj)', false);
Map.addLayer(droughtAdj, {min:0, max:1.5, palette: twoTone}, 'Drought Mitigation (Adj)', false);
Map.addLayer(heatAdj, {min:0, max:1.5, palette: twoTone}, 'HeatWave Mitigation (Adj)',false);
Map.addLayer(erosionAdj, {min:0, max:1.5, palette: twoTone}, 'Erosion Mitigation (Adj)', false);
- Modifiers are multiplicative and gentle (factors within `1±0.05..0.50`) so land cover still drives the signal, while terrain nuances nudge it toward realism.
- We clamp to `1.5` for display. For indices, you can renormalize later to 0–1 if needed.
- `northness` here is rescaled to [0,1], where 1 = north-facing and 0 = south-facing. Tweak aspect weights for your climate (e.g., in the Southern Hemisphere, invert the sign).

Now that we have adjusted mitigation layers (0–1, optionally up to 1.5 after modifiers), let’s extract region-level statistics and export both tables (CSV) and maps (GeoTIFF). This makes it easy to report results, plot charts elsewhere, and archive the layers for your team.
The reported percentage is the mean of mitigation scores across pixels that have any mitigation signal (>0), expressed as a percentage. Concretely, we sum pixel scores (0–1), divide by the count of pixels with score > 0, and multiply by 100. This is NOT a land-area percentage of a strict class; rather, it’s a normalized measure of how much of the “maximum possible” mitigation is realized across the pixels that contribute at all. If you instead want area above a threshold (e.g., % of pixels with score ≥ 0.7), we show how to modify the reducer in a note below.
// =============================
// 1) Cantabria boundary (single feature)
// =============================
var adm1 = ee.FeatureCollection("projects/sat-io/open-datasets/geoboundaries/CGAZ_ADM1");
var cantabria = ee.Feature(
adm1.filter(ee.Filter.eq('shapeName', 'Cantabria')).first()
);
We compute region-wide means for all mitigation bands using reduceRegion
with Reducer.mean()
.
// =============================
// 2) Compute mean mitigation scores for Cantabria
// =============================
var mitigationStats = cantabria.set(
allMitigation.reduceRegion({
reducer: ee.Reducer.mean(),
geometry: cantabria.geometry(),
scale: 100,
maxPixels: 1e9
})
);
print('Mitigation scores – Cantabria (mean per band):', mitigationStats);
// =============================
// 3) Export mean scores to CSV
// =============================
Export.table.toDrive({
collection: ee.FeatureCollection([mitigationStats]),
description: 'Mitigation_Scores_Cantabria',
folder: 'desirmed', // change or remove as you prefer
fileFormat: 'CSV'
});
The helper below computes the percentage defined as `100 × mean(score | score > 0)`. It’s a compact indicator of how strong the mitigation signal is across pixels that contribute at all.
// =============================
// 4) Function: "Effective contributing area %" per hazard
// = 100 * mean(score | score > 0)
// =============================
function computeMitigationContribution(region, hazardImage, bandName) {
// Sum of scores over the region
var stats = hazardImage.reduceRegion({
reducer: ee.Reducer.sum(),
geometry: region.geometry(),
scale: 100,
maxPixels: 1e9
});
var contributingSum = ee.Number(stats.get(bandName)); // sum of 0..1 values
// Count of pixels that actually contribute (>0)
var countContrib = ee.Number(
hazardImage.select(bandName).gt(0).reduceRegion({
reducer: ee.Reducer.count(),
geometry: region.geometry(),
scale: 100,
maxPixels: 1e9
}).get(bandName)
);
// 100 * mean(score | score > 0). Guard against division by zero.
var percent = ee.Algorithms.If(
countContrib.gt(0),
contributingSum.divide(countContrib).multiply(100),
0
);
return region.set(bandName + '_Percent', percent);
}
// =============================
// 5) Apply to all hazard bands
// =============================
var allBands = [
'Flood_Mitigation',
'Fire_Mitigation',
'Drought_Mitigation',
'HeatWave_Mitigation',
'Erosion_Mitigation'
];
var cantabriaPercent = cantabria;
allBands.forEach(function(band) {
cantabriaPercent = computeMitigationContribution(
cantabriaPercent,
allMitigation.select(band),
band
);
});
print('Hazard mitigation contribution percentages – Cantabria:', cantabriaPercent);
// =============================
// 6) Export percentages to CSV
// =============================
Export.table.toDrive({
collection: ee.FeatureCollection([cantabriaPercent]),
description: 'Hazard_Mitigation_Contribution_Percentages_Cantabria',
folder: 'desirmed',
fileFormat: 'CSV'
});
// Example: percent of area with score >= 0.7
function percentAreaAboveThreshold(region, hazardImage, bandName, thr) {
var pxArea = ee.Image.pixelArea(); // m²/pixel
var band = hazardImage.select(bandName);
var areaAbove = pxArea.updateMask(band.gte(thr)).reduceRegion({
reducer: ee.Reducer.sum(),
geometry: region.geometry(),
scale: 100, maxPixels: 1e13
}).get('area');
var areaAll = pxArea.updateMask(band.gte(0)).reduceRegion({
reducer: ee.Reducer.sum(),
geometry: region.geometry(),
scale: 100, maxPixels: 1e13
}).get('area');
return ee.Number(areaAbove).divide(ee.Number(areaAll)).multiply(100);
}
Export the adjusted, multi-band mitigation image for use in GIS or modelling pipelines. You can include a CRS if you need a specific projection.
// =============================
// 7) Export multiband mitigation GeoTIFF for Cantabria
// =============================
Export.image.toDrive({
image: allMitigation.clip(cantabria.geometry()),
description: 'Mitigation_Map_Cantabria',
folder: 'desirmed',
fileNamePrefix: 'Mitigation_Map_Cantabria',
region: cantabria.geometry().bounds(),
scale: 100,
maxPixels: 1e13,
fileFormat: 'GeoTIFF'
// , crs: 'EPSG:3035' // (optional) set a CRS if your workflow requires it
});
Use `pixelArea()` when you truly want area in m²/ha (as in the threshold example).

Purpose: Hazard layers are a crucial component in spatial ecosystem service (ESS) assessments because they reveal where valuable ecosystem functions are at risk from natural disturbances. By combining satellite-derived hazard footprints with ESS maps, we can identify locations where ecosystem benefits (e.g., flood regulation, habitat provision) are both present and historically exposed to hazards such as floods or wildfires. This enables more targeted Nature-based Solutions (NbS) planning that prioritises areas where interventions can simultaneously protect ecosystem function and reduce hazard impacts.
In this section, we integrate two global hazard datasets: the Global Flood Database (MODIS-derived flood events, 2000–2018; Tellman et al., 2021) and the MODIS MCD64A1 burned area product (2000–2019).
// 1. Load MODIS Global Flood Database (Tellman et al., 2021)
var gfd = ee.ImageCollection('GLOBAL_FLOOD_DB/MODIS_EVENTS/V1');
// 2. Select a single flood event (Hurricane Isaac, USA, 2012)
// The 'id' property is a unique Dartmouth Flood Observatory identifier
var hurricaneIsaacDartmouthId = 3977;
var hurricaneIsaacUsa = ee.Image(
  gfd.filterMetadata('id', 'equals', hurricaneIsaacDartmouthId).first()
);
// Set map view and basemap
Map.setOptions('SATELLITE');
Map.setCenter(-90.2922, 29.4064, 9);
// 3. Flood extent (binary water mask for event duration)
Map.addLayer(
  hurricaneIsaacUsa.select('flooded').selfMask(),
  {min: 0, max: 1, palette: '001133'},
  'Hurricane Isaac - Inundation Extent'
);
// 4. Flood duration (number of days flooded during the event)
var durationPalette = ['c3effe', '1341e8', '051cb0', '001133'];
Map.addLayer(
  hurricaneIsaacUsa.select('duration').selfMask(),
  {min: 0, max: 4, palette: durationPalette},
  'Hurricane Isaac - Duration'
);
// 5. Aggregated historical flood footprint (sum of all events, 2000–2018)
var gfdFloodedSum = gfd.select('flooded').sum();
Map.addLayer(
  gfdFloodedSum.selfMask(),
  {min: 0, max: 10, palette: durationPalette},
  'GFD Satellite Observed Flood Plain'
);
// 6. Permanent water from JRC Global Surface Water dataset (to mask out rivers/lakes)
var jrc = gfd.select('jrc_perm_water').sum().gte(1);
Map.addLayer(
  jrc.selfMask(),
  {min: 0, max: 1, palette: 'C3EFFE'},
  'JRC Permanent Water'
);
// 7. Load MODIS Burned Area Product (MCD64A1) for 2000–2019
// The BurnDate band gives day-of-year of burn; we reclassify to binary
var burned = ee.ImageCollection("MODIS/061/MCD64A1")
  .select('BurnDate')
  .filterDate('2000-01-01', '2019-01-01')
  .map(function(img) {
    return img.gt(0).selfMask(); // Keep burned pixels only
  })
  .sum()
  .gte(1); // Burned at least once in the period
// 8. Burned area visualisation
Map.addLayer(burned.selfMask(), {palette: ['orange']}, 'Burned Area');
// 9. Create binary mask for overlay with ESS maps
var burnBinary = burned.selfMask();
`selfMask()` removes non-hazard pixels, leaving a clean binary raster for spatial intersection with ESS layers.

In this step we combine observed hazard intensity (historical floods, fires) with our mitigation capacity layers to find where Nature-based Solutions (NbS) can have the biggest impact. The guiding idea is a simple, transparent gap model:
`Priority = Hazard_Intensity × (1 − Mitigation_Capacity)`
Hotspot → Spatial overlap of high ecosystem service supply and high hazard risk. In this workflow: `ESS_mask AND Hazard_mask`. It’s a diagnostic layer showing where opportunities exist, without yet ranking or filtering for feasibility (see the short overlap sketch after these definitions).

Priority → A subset of hotspots where the calculated score meets the decision cut-off. Spatially, this is: `Hotspot_mask AND (Priority_score ≥ threshold)`. A pixel might be a hotspot but not a priority if hazard intensity is below the threshold, mitigation capacity is already high, or the weighted score falls below the cut-off.

Protect / Enhance → High ESS areas with low hazard risk. Spatially: `ESS_mask AND NOT(Hazard_mask)`. These are not urgent NbS intervention sites for hazard reduction but should be maintained or enhanced to sustain ecosystem function.
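To make the hotspot overlap concrete, here is a minimal diagnostic sketch. It reuses `floodBase` from the scoring step and reloads the Global Flood Database; the 0.6 ESS threshold is an illustrative assumption, not a calibrated value:
// Diagnostic overlap: high flood-mitigation supply AND observed flood exposure
var essMask = floodBase.gte(0.6); // illustrative ESS threshold (assumption)
var floodedOnce = ee.ImageCollection('GLOBAL_FLOOD_DB/MODIS_EVENTS/V1')
  .select('flooded').sum().gte(1); // flooded at least once (2000–2018)
var essHazardOverlap = essMask.and(floodedOnce).selfMask();
Map.addLayer(essHazardOverlap.clip(cantabria), {palette: ['purple']},
             'Diagnostic · ESS × Flood exposure', false);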
Gap = `1 − mitigation`. A value near 1 means low capacity (large deficit); near 0 means high capacity.
We render a hillshade (`ee.Terrain.hillshade`) and an outline to make overlays readable. This is cosmetic, but helpful for interpretation.
What we do: create two hazard layers, both as counts: Flood Hazard Intensity (FHI) from summed flood detections in the Global Flood Database, and Fire Hazard from summed burned months in MODIS MCD64A1. We mask permanent water for floods so lakes/rivers don’t inflate the signal.
Raw counts can have long tails (a few places with very high values). We scale hazards to 0–1 using positive-only percentiles: compute the p90/p95 among pixels with value > 0, then divide by that number. This (a) ignores zeros, (b) reduces outlier sensitivity, and (c) keeps the top ~5–10% near 1. If nothing is positive, we safely fall back to the max.
Our mitigation layers (`floodAdj`, `fireAdj`) were allowed to go up to ~1.5 after spatial modifiers (slope, aspect, elevation). We normalize them to 0–1 by dividing by 1.5, then compute `Gap = 1 − mitigation`. Conceptually, the gap is a mitigation deficit: high where ecosystems currently do less to reduce the hazard.
- A pixel with `gap ≈ 1` has little capacity: a prime candidate for NbS if hazard is high.
- A pixel with `gap ≈ 0` already has high capacity: more a protect/enhance case.

The priority score multiplies exposure by deficit. It’s deliberately simple and explainable: places score high only when both hazard and gap are high. Values are 0–1 after normalization, which makes thresholds intuitive.
We convert the continuous priority surface into a binary hotspot mask using positive-only percentiles: p60 for floods, p80 for fires (tune to your planning brief). The idea is to flag the top X% by data-driven thresholds, not arbitrary absolute cutoffs.
We also tag areas with high hazard and high mitigation (both above adaptive thresholds). These are places where ecosystems are already doing useful work under stress - high-value targets for protection or gentle enhancement.
// ======================================================
// NBS HOTSPOT PRIORITIZATION - DISTINCT BY HAZARD
// Hazards: Flood (GFD) and Fire (MODIS MCD64A1)
// Capacity: your floodAdj & fireAdj mitigation layers (0..~1.5)
// Logic per hazard: Priority = Hazard_Intensity × (1 − Mitigation_Capacity)
// ======================================================
// Base map + boundary
var hill = ee.Terrain.hillshade(srtm)
.visualize({min:150, max:255, forceRgbOutput:true})
.multiply(0.9);
var outline = cantabria.style({color:'000000', width:2, fillColor:'00000000'});
// ---------------- STEP 1. Build HAZARD intensity layers ----------------
var gfd = ee.ImageCollection('GLOBAL_FLOOD_DB/MODIS_EVENTS/V1');
// Flood hazard = count of satellite-observed floods (exclude permanent water)
var FHI_raw = gfd.select('flooded').sum().rename('FHI').clip(cantabria);
var JRC_perm = gfd.select('jrc_perm_water').sum().gte(1);
var FHI = FHI_raw.updateMask(JRC_perm.not()); // mask-out permanent water
// Fire hazard = months burned at least once (2000–2019)
var burnedMonthly = ee.ImageCollection('MODIS/061/MCD64A1')
.select('BurnDate')
.filterDate('2000-01-01','2019-01-01')
.map(function(img){ return img.gt(0); });
var FIRE = burnedMonthly.sum().rename('FIRE').clip(cantabria);
// Debug prints
print('STEP 1 · Flood hazard (count) sample stats:',
FHI.reduceRegion({
reducer: ee.Reducer.minMax().combine({reducer2: ee.Reducer.mean(), sharedInputs:true}),
geometry: cantabria.geometry(), scale: 250, maxPixels: 1e9, bestEffort: true, tileScale: 2
})
);
print('STEP 1 · Fire hazard (count) sample stats:',
FIRE.reduceRegion({
reducer: ee.Reducer.minMax().combine({reducer2: ee.Reducer.mean(), sharedInputs:true}),
geometry: cantabria.geometry(), scale: 500, maxPixels: 1e9, bestEffort: true, tileScale: 2
})
);
// Optional viz of raw counts
Map.addLayer(hill.blend(FHI.visualize({min:0, max:10, palette:['c3effe','1341e8','051cb0','001133'], opacity:0.9})),
{}, 'STEP 1a · Flood Hazard (count)');
Map.addLayer(hill.blend(FIRE.visualize({min:0, max:24, palette:['fff5eb','fd8d3c','e6550d','a63603'], opacity:0.9})),
{}, 'STEP 1b · Fire Hazard (count)');
// ---------------- Helpers for adaptive percentiles (robust keys) ----------------
function dictFirstNumber_(dict, fallback) {
dict = ee.Dictionary(dict);
var keys = dict.keys();
var hasAny = keys.size().gt(0);
var firstKey = ee.String(ee.Algorithms.If(hasAny, keys.get(0), ''));
return ee.Number(ee.Algorithms.If(hasAny, dict.get(firstKey), fallback));
}
function getP_(dict, band, pct, fallback) {
dict = ee.Dictionary(dict);
var key1 = ee.String('p').cat(ee.Number(pct).format()); // 'p95'
var key2 = ee.String(band).cat('_p').cat(ee.Number(pct).format()); // 'FHI_p95'
var has1 = dict.contains(key1);
var has2 = dict.contains(key2);
return ee.Number(ee.Algorithms.If(
has1, dict.get(key1),
ee.Algorithms.If(
has2, dict.get(key2),
dictFirstNumber_(dict, fallback)
)
));
}
function posPercentile(img, pct, geom, scale, fallback) {
img = ee.Image(img);
var band = ee.String(img.bandNames().get(0));
var d = img.updateMask(img.gt(0)).reduceRegion({
reducer: ee.Reducer.percentile([pct]),
geometry: geom, scale: scale, maxPixels: 1e13, bestEffort: true, tileScale: 2
});
return getP_(d, band, pct, fallback);
}
function posPctlThreshold(img01, pct, geom, scale, fallback) {
img01 = ee.Image(img01);
var band = ee.String(img01.bandNames().get(0));
var d = img01.updateMask(img01.gt(0)).reduceRegion({
reducer: ee.Reducer.percentile([pct]),
geometry: geom, scale: scale, maxPixels: 1e13, bestEffort: true, tileScale: 2
});
return getP_(d, band, pct, fallback);
}
// --------------- STEP 2. Normalize hazards (0..1) with positive-only p95 ---------------
var FHI_max = ee.Number(
FHI.reduceRegion({reducer: ee.Reducer.max(), geometry: cantabria.geometry(),
scale: 250, maxPixels: 1e13, bestEffort: true, tileScale: 2}).get('FHI')
).max(1);
var FIRE_max = ee.Number(
FIRE.reduceRegion({reducer: ee.Reducer.max(), geometry: cantabria.geometry(),
scale: 500, maxPixels: 1e13, bestEffort: true, tileScale: 2}).get('FIRE')
).max(1);
// Positive-only percentiles (p90 for flood, p95 for fire); fall back to max if empty
var FHI_scale = posPercentile(FHI, 90, cantabria.geometry(), 250, FHI_max);
var FIRE_scale = posPercentile(FIRE, 95, cantabria.geometry(), 500, FIRE_max);
print('STEP 2 · Positive-only scaling percentiles → Flood (p90):', FHI_scale, 'Fire (p95):', FIRE_scale);
// Normalize to 0..1
var floodHaz = FHI.divide(FHI_scale).clamp(0, 1).rename('flood_haz');
var fireHaz = FIRE.divide(FIRE_scale).clamp(0, 1).rename('fire_haz');
Map.addLayer(hill.blend(floodHaz.visualize({min:0, max:1, palette:['e0f3ff','74a9cf','0570b0'], opacity:0.9})),
{}, 'STEP 2a · Flood Hazard (0–1)');
Map.addLayer(hill.blend(fireHaz.visualize({min:0, max:1, palette:['fee8c8','fdbb84','e34a33'], opacity:0.9})),
{}, 'STEP 2b · Fire Hazard (0–1)');
// --------------- STEP 3. Normalize MITIGATION capacity (0..1) ---------------
var floodMit = floodAdj.divide(1.5).clamp(0, 1).rename('flood_mit');
var fireMit = fireAdj.divide(1.5).clamp(0, 1).rename('fire_mit');
print('STEP 3 · Mitigation stats (flood):',
floodMit.reduceRegion({
reducer: ee.Reducer.minMax().combine({reducer2: ee.Reducer.mean(), sharedInputs:true}),
geometry: cantabria.geometry(), scale: 100, maxPixels: 1e9, bestEffort: true, tileScale: 2
})
);
print('STEP 3 · Mitigation stats (fire):',
fireMit.reduceRegion({
reducer: ee.Reducer.minMax().combine({reducer2: ee.Reducer.mean(), sharedInputs:true}),
geometry: cantabria.geometry(), scale: 100, maxPixels: 1e9, bestEffort: true, tileScale: 2
})
);
Map.addLayer(hill.blend(floodMit.visualize({min:0, max:1, palette:['f7fcf5','a6dba0','1b7837'], opacity:0.9})),
{}, 'STEP 3a · Flood Mitigation (0–1)');
Map.addLayer(hill.blend(fireMit.visualize({min:0, max:1, palette:['f7fcf5','a6dba0','1b7837'], opacity:0.9})),
{}, 'STEP 3b · Fire Mitigation (0–1)');
// --------------- STEP 4. Mitigation GAP (1 − mitigation) ---------------
var floodGap = ee.Image(1).subtract(floodMit).rename('flood_gap');
var fireGap = ee.Image(1).subtract(fireMit).rename('fire_gap');
Map.addLayer(hill.blend(floodGap.visualize({min:0, max:1, palette:['ffffff','fde0ef','d6604d','b2182b'], opacity:0.9})),
{}, 'STEP 4a · Flood Gap (low capacity)');
Map.addLayer(hill.blend(fireGap.visualize({min:0, max:1, palette:['ffffff','fde0ef','d6604d','b2182b'], opacity:0.9})),
{}, 'STEP 4b · Fire Gap (low capacity)');
// --------------- STEP 5. PRIORITY per hazard ---------------
var prioFlood = floodHaz.multiply(floodGap).rename('prio_flood'); // 0..1
var prioFire = fireHaz.multiply(fireGap).rename('prio_fire'); // 0..1
var prioPal = ['ffffcc','ffeda0','feb24c','fd8d3c','f03b20','bd0026'];
Map.addLayer(hill.blend(prioFlood.visualize({min:0, max:1, palette: prioPal, opacity:0.95})),
{}, 'STEP 5a · NbS Priority (FLOOD)');
Map.addLayer(hill.blend(prioFire.visualize({min:0, max:1, palette: prioPal, opacity:0.95})),
{}, 'STEP 5b · NbS Priority (FIRE)');
// --------------- STEP 6. HOTSPOTS per hazard - positive-only percentile thresholds (p60 flood, p80 fire) ---------------
var thFlood = posPctlThreshold(prioFlood, 60, cantabria.geometry(), 100, 0.2);
var thFire = posPctlThreshold(prioFire, 80, cantabria.geometry(), 100, 0.2);
print('STEP 6 · Hotspot thresholds (positive-only percentiles) → Flood (p60):', thFlood, 'Fire (p80):', thFire);
var hotspotFlood = prioFlood.gte(thFlood).selfMask().rename('hotspot_flood');
var hotspotFire = prioFire.gte(thFire).selfMask().rename('hotspot_fire');
Map.addLayer(hotspotFlood.visualize({palette:['#ff0000'], opacity:0.65}).clip(cantabria),
{}, 'STEP 6a · HOTSPOT (FLOOD)');
Map.addLayer(hotspotFire.visualize({palette:['#ff7f00'], opacity:0.65}).clip(cantabria),
{}, 'STEP 6b · HOTSPOT (FIRE)');
// --------------- STEP 7. PROTECT / ENHANCE zones - adaptive percentile thresholds ---------------
var hzFloodThr  = posPercentile(floodHaz, 55, cantabria.geometry(), 250, 0.6);
var mitFloodThr = posPercentile(floodMit, 55, cantabria.geometry(), 100, 0.6);
var hzFireThr   = posPercentile(fireHaz, 70, cantabria.geometry(), 500, 0.6);
var mitFireThr  = posPercentile(fireMit, 70, cantabria.geometry(), 100, 0.6);
print('STEP 7 · Adaptive thresholds → Flood (hz, mit):', hzFloodThr, mitFloodThr,
      ' Fire (hz, mit):', hzFireThr, mitFireThr);
var protectFlood = floodHaz.gte(hzFloodThr).and(floodMit.gte(mitFloodThr)).selfMask();
var protectFire  = fireHaz.gte(hzFireThr).and(fireMit.gte(mitFireThr)).selfMask();
Map.addLayer(protectFlood.visualize({palette:['#2ca25f'], opacity:0.85}).clip(cantabria),
{}, 'STEP 7a · Protect/Enhance (FLOOD)');
Map.addLayer(protectFire.visualize({palette:['#238b45'], opacity:0.85}).clip(cantabria),
{}, 'STEP 7b · Protect/Enhance (FIRE)');
A simpler alternative is to normalize by the regional `max()` and pick a fixed threshold (e.g., ≥ 0.7) for hotspots. That works but is sensitive to outliers and regional differences. We use positive-only percentiles for scaling and thresholding so the method adapts to each study area’s distribution and remains reproducible.
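For comparison, a minimal sketch of that simpler max-based variant for the flood priority surface (the 0.7 cut-off is the illustrative value mentioned above):
// Outlier-sensitive alternative: scale by the regional max, then apply a fixed cut-off
var prioFloodMax = ee.Number(prioFlood.reduceRegion({
  reducer: ee.Reducer.max(),
  geometry: cantabria.geometry(),
  scale: 100, maxPixels: 1e13, bestEffort: true
}).get('prio_flood')).max(1e-6); // guard against an all-zero region
var hotspotFloodFixed = prioFlood.divide(prioFloodMax).gte(0.7).selfMask();
Map.addLayer(hotspotFloodFixed.clip(cantabria), {palette: ['#800026']},
             'ALT · Hotspot (FLOOD, fixed ≥ 0.7)', false);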
Hotspot → Spatial overlap of high ESS and high hazard risk. In our workflow: `ESS_mask AND Hazard_mask`. This is a diagnostic layer showing where the opportunity/problem exists - it’s always computed first.

Priority → Hotspots ranked using the formula `Hazard_Intensity × (1 − Mitigation_Capacity)`. A pixel may be a hotspot but not a priority if hazard intensity is below the threshold, mitigation capacity is already high, or the weighted score falls below the cut-off. Every priority area is a hotspot, but not every hotspot will be a priority.

Protect / Enhance → High ESS areas with low hazard risk (`ESS_mask AND NOT(Hazard_mask)`). These areas are important for maintaining ecosystem services even if immediate hazard mitigation is not required.
You now have per-hazard priority rasters (`prio_flood`, `prio_fire`) and hotspot masks (`hotspot_flood`, `hotspot_fire`), plus protect/enhance masks. This section covers (A) exporting those layers, (B) creating a combined multi-hazard NbS index with adjustable weights and optional confidence bands, (C) sensible tweaks for different contexts, and (D) how to swap in local hazard/ESS/LULC data to rerun the same roadmap.
Exports are useful for dashboards, GIS sharing, or further modelling. Use `Export.image.toDrive` for rasters and `Export.table.toDrive` for zonal summaries. Set a consistent CRS, scale, and region.
// ---------- STACK & EXPORT RASTERS (GeoTIFF) ----------
var prioStack = prioFlood      // 0–1
  .addBands(prioFire)          // 0–1
  .addBands(hotspotFlood)      // 0/1 (masked)
  .addBands(hotspotFire)       // 0/1 (masked)
  .addBands(protectFlood)      // 0/1 (masked)
  .addBands(protectFire)       // 0/1 (masked)
  .rename([
    'prio_flood','prio_fire',
    'hotspot_flood','hotspot_fire',
    'protect_flood','protect_fire'
  ])
  .clip(cantabria);
// Optional: set a common projection (e.g., 100 m in EPSG:3035 for Europe)
var targetCRS = 'EPSG:3035';
var targetScale = 100;
prioStack = prioStack
.resample('bilinear')
.reproject({crs: targetCRS, scale: targetScale});
Export.image.toDrive({
image: prioStack,
description: 'NbS_Priority_Hotspots_Cantabria',
folder: 'desirmed',
fileNamePrefix: 'NbS_Priority_Hotspots_Cantabria',
region: cantabria.geometry().bounds(1),
scale: targetScale,
crs: targetCRS,
maxPixels: 1e13
});
// ---------- OPTIONAL: ZONAL STATS AS TABLE ----------
var meanStats = prioStack.reduceRegion({
reducer: ee.Reducer.mean(),
geometry: cantabria.geometry(),
scale: targetScale,
maxPixels: 1e13,
bestEffort: true
});
var outFeat = ee.Feature(null, meanStats).set('region', 'Cantabria');
Export.table.toDrive({
collection: ee.FeatureCollection([outFeat]),
description: 'NbS_Priority_Means_Cantabria',
folder: 'desirmed',
fileFormat: 'CSV'
});
Many planners prefer a single combined surface for screening. We build an index as a weighted sum of per-hazard priorities. Weights (`wFlood`, `wFire`) should sum to 1 (or we normalize them). If you’ve created confidence bands per hazard (e.g., from expert certainty or validation), you can also propagate them to the final index.
// ---------- WEIGHTED COMBINATION ----------
var wFlood = 0.6; // emphasize floods (example)
var wFire = 0.4; // fire gets 40%
var wSum = wFlood + wFire;
var wF = wFlood / wSum;
var wFi = wFire / wSum;
var nbsIndex = prioFlood.multiply(wF)
.add(prioFire.multiply(wFi))
.rename('nbs_index') // 0–1 (approx)
.clip(cantabria);
// ---------- OPTIONAL: CONFIDENCE (if available) ----------
// Example: confidence bands per hazard on a 1–3 scale (low/med/high)
// Suppose you have floodConf, fireConf (ee.Image, same footprint).
// Convert to 0–1 (1=low, 3=high -> map to 0.33..1), then weighted combine.
var hasConfidence = false; // flip to true if you have these layers
var nbsConfidence = ee.Image(1); // placeholder default
if (hasConfidence) {
var floodConf01 = floodConf.unitScale(1,3); // -> 0..1
var fireConf01 = fireConf.unitScale(1,3); // -> 0..1
nbsConfidence = floodConf01.multiply(wF).add(fireConf01.multiply(wFi))
.rename('nbs_confidence');
}
// ---------- EXPORT COMBINED INDEX ----------
var toExport = hasConfidence ? nbsIndex.addBands(nbsConfidence) : nbsIndex;
Export.image.toDrive({
image: toExport.reproject({crs: targetCRS, scale: targetScale}),
description: 'NbS_Index_Combined',
folder: 'desirmed',
fileNamePrefix: 'NbS_Index_Combined',
region: cantabria.geometry().bounds(1),
scale: targetScale,
crs: targetCRS,
maxPixels: 1e13
});
A high `nbs_index` highlights places with consistently high gap-weighted hazard priority. Use the (optional) confidence band to communicate evidence strength, following good practice in IPBES- or IPCC-style confidence framing.
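For reporting, you may want to translate the continuous index into a few communication classes. A minimal sketch with assumed fixed break points (0.33 / 0.66); swap in percentile-based breaks if you prefer adaptive classes:
// Bin the combined index into low / medium / high classes for dashboards
var nbsClass = ee.Image(1)
  .where(nbsIndex.gte(0.33), 2)
  .where(nbsIndex.gte(0.66), 3)
  .updateMask(nbsIndex.mask())
  .rename('nbs_class');
Map.addLayer(nbsClass.clip(cantabria),
             {min: 1, max: 3, palette: ['#ffffb2', '#fd8d3c', '#bd0026']},
             'NbS index · low/medium/high', false);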
You can plug in local datasets seamlessly. Two common cases:
Replace the hazard intensity with your local raster(s). Normalize to 0–1 (e.g., divide by positive-only p95), then proceed with the same gap logic. Ensure alignment (CRS/grid) with mitigation layers.
// Load your local hazard (e.g., flood depth raster in meters)
var floodDepthLocal = ee.Image('projects/your-project/assets/flood_depth_100y')
.reproject({crs: targetCRS, scale: targetScale});
// Positive-only p95 scaling to 0–1
var fd_p95 = floodDepthLocal.updateMask(floodDepthLocal.gt(0))
.reduceRegion({
reducer: ee.Reducer.percentile([95]),
geometry: cantabria.geometry(),
scale: targetScale, maxPixels: 1e13, bestEffort: true
}).values().get(0);
var floodHazLocal = floodDepthLocal.divide(ee.Number(fd_p95)).clamp(0,1).rename('flood_haz_local');
// Priority with existing flood mitigation gap:
var floodGapLocal = ee.Image(1).subtract(floodMit); // reuse normalized mitigation (0–1)
var prioFloodLocal = floodHazLocal.multiply(floodGapLocal); // same formula
If you have a local land-use/land-cover (LULC) map or direct ESS layers, build mitigation exactly as before: remap → normalize → adjust with terrain (optional). Keep values in 0–1 for easy integration.
// Example: local LULC (categorical) and your own mitigation scores (0..1)
var lulcLocal = ee.Image('projects/your-project/assets/lulc_local_2023').rename('lulc');
// Dictionary: class → mitigation score (0..1)
var mitDict = { 10:0.75, 11:0.60, 20:0.50, 30:0.20 }; // example codes
var from = Object.keys(mitDict).map(function(k){return parseInt(k);});
var to = from.map(function(k){return mitDict[k];});
var mitLocal = lulcLocal.remap(from, to).rename('mit_local'); // 0..1
// Optionally, apply spatial modifiers (slope/aspect/elevation) as in Step 4
var mitLocalAdj = mitLocal
.multiply(ee.Image(1).subtract(slope.divide(60).multiply(0.2))) // gentle slope penalty
.clamp(0,1)
.rename('mit_local_adj');
We’ve taken you from proxy-based, spatially enriched mitigation layers to hazard-aware NbS prioritization, then exported per-hazard priorities, hotspots, and a combined multi-hazard index suitable for decision dashboards. The method remains explainable: `Priority = Hazard × (1 − Mitigation)`, with adaptive thresholds (positive-only percentiles) and optional confidence bands for transparent uncertainty communication.
Additional key literature:
sessionInfo()
R version 4.4.0 (2024-04-24 ucrt)
Platform: x86_64-w64-mingw32/x64
Running under: Windows 11 x64 (build 22631)
Matrix products: default
locale:
[1] LC_COLLATE=English_Germany.utf8 LC_CTYPE=English_Germany.utf8
[3] LC_MONETARY=English_Germany.utf8 LC_NUMERIC=C
[5] LC_TIME=English_Germany.utf8
time zone: Europe/Berlin
tzcode source: internal
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] terra_1.8-5       sf_1.0-19         workflowr_1.7.1
loaded via a namespace (and not attached):
[1] jsonlite_1.8.8 compiler_4.4.0 promises_1.3.0 Rcpp_1.0.13
[5] stringr_1.5.1 git2r_0.33.0 callr_3.7.6 later_1.3.2
[9] jquerylib_0.1.4 yaml_2.3.10 fastmap_1.2.0 R6_2.5.1
[13] classInt_0.4-10 knitr_1.48 tibble_3.2.1 units_0.8-5
[17] rprojroot_2.0.4 DBI_1.2.3 bslib_0.8.0 pillar_1.9.0
[21] rlang_1.1.4 utf8_1.2.4 cachem_1.1.0 stringi_1.8.4
[25] httpuv_1.6.15 xfun_0.47 getPass_0.2-4 fs_1.6.4
[29] sass_0.4.9 cli_3.6.3 magrittr_2.0.3 class_7.3-22
[33] ps_1.8.1 grid_4.4.0 digest_0.6.36 processx_3.8.4
[37] rstudioapi_0.16.0 lifecycle_1.0.4 vctrs_0.6.5 KernSmooth_2.23-22
[41] proxy_0.4-27 evaluate_0.24.0 glue_1.7.0 whisker_0.4.1
[45] codetools_0.2-20 e1071_1.7-16 fansi_1.0.6 rmarkdown_2.28
[49] httr_1.4.7 tools_4.4.0 pkgconfig_2.0.3 htmltools_0.5.8.1