
Feedback analysis using radiative kernels


Note

This content is under construction!


Overview

This notebook details all of the steps required to perform a radiative kernel analysis.

Prerequisites

Concepts                             Importance   Notes
Loading CMIP6 data with Intake-ESM   Helpful
Intro to Xarray                      Necessary
  • Time to learn: 60 minutes

Imports

import xarray as xr
import numpy as np
import matplotlib.pyplot as plt
import xesmf as xe
import intake
import s3fs
import fsspec

Loading in required data

Climate model output

In this example, we will perform the analysis on a single model from the CMIP6 ensemble, CESM2. The simplest way to calculate feedbacks is to take differences between two climate states, as opposed to regressions. Here we use runs with:

  • preindustrial conditions (piControl) as the control climate
  • instantaneously quadrupled CO2 (abrupt-4xCO2) as the perturbed climate

We will use CMIP6 data hosted on Pangeo’s Google Cloud Storage:

cat_url = 'https://storage.googleapis.com/cmip6/pangeo-cmip6.json'
col = intake.open_esm_datastore(cat_url)

The fields (and CMIP names) required to calculate each feedback are:

  • Albedo: upwelling and downwelling SW radiation at the surface (rsus and rsds)
  • Temperature (Planck and lapse rate): air temperature (ta) and surface temperature (ts)
  • Water vapor: specific humidity (hus) and air temperature
  • SW CRE: net SW radiation at TOA (downwelling rsdt minus upwelling rsut) and its clear-sky version (the same downwelling minus clear-sky upwelling rsutcs)
  • LW CRE: outgoing LW radiation at TOA (rlut) and its clear-sky version (rlutcs)

The cloud feedbacks require the results from the other feedbacks to correct for noncloud contributions to the CREs.

We will also need near-surface air temperature (tas) for calculating the change in global mean surface temperature (GMST).

cat = col.search(activity_id='CMIP', experiment_id=['piControl', 'abrupt-4xCO2'], table_id='Amon', source_id='CESM2', 
                 variable_id=['rsus', 'rsds', 'ta', 'ts', 'hus', 'rsdt', 'rsut', 'rsutcs', 'rlut', 'rlutcs', 'tas'])
cat.df
dset_dict = cat.to_dataset_dict(zarr_kwargs={'consolidated': True})
ctrl = dset_dict['CMIP.NCAR.CESM2.piControl.Amon.gn']
ctrl
pert = dset_dict['CMIP.NCAR.CESM2.abrupt-4xCO2.Amon.gn']
pert

Radiative forcing data

To calculate the cloud feedback, we will need the effective radiative forcing (ERF) resulting from 4xCO2, which we can find under another project, RFMIP:

rf_cat = col.search(activity_id='RFMIP', experiment_id='piClim-4xCO2', source_id='CESM2')
rf_cat.df
rf_dset_dict = rf_cat.to_dataset_dict(zarr_kwargs={'consolidated': True})
forcing = rf_dset_dict['RFMIP.NCAR.CESM2.piClim-4xCO2.Amon.gn']
forcing

Radiative kernels

We will load in radiative kernels from Project Pythia’s storage on JetStream2. We will be using the ERA5 kernels (Huang & Huang 2023).

URL = 'https://js2.jetstream-cloud.org:8001/'
path = 'pythia/ClimKern'
fs = fsspec.filesystem("s3", anon=True, client_kwargs=dict(endpoint_url=URL))
patternKern = f's3://{path}/kernels/ERA5/*.nc'
patternTutorial = f's3://{path}/tutorial_data/*.nc'
filesKern = sorted(fs.glob(patternKern))
filesTutorial = sorted(fs.glob(patternTutorial))
filesetKern = [fs.open(file) for file in filesKern]
filesetTutorial = [fs.open(file) for file in filesTutorial]

We will use the more common TOA kernels.

filesKern
['pythia/ClimKern/kernels/ERA5/SFC_ERA5_Kerns.nc', 'pythia/ClimKern/kernels/ERA5/TOA_ERA5_Kerns.nc']

As we will see in a moment, the naming convention for months as calculated from Xarray’s .groupby() method (“month”, 1-12) is different from what is used in the kernel dataset (“time”, 0-11), so we need to align them:

# File-like objects need an explicit engine; h5netcdf reads netCDF4/HDF5 files
data_kern = xr.open_dataset(filesetKern[1], engine='h5netcdf')
data_kern['time'] = data_kern['time'] + 1
data_kern = data_kern.rename({'time': 'month'})
data_kern
data_kern

Let’s take a look at some of these kernels. First, the albedo kernel, annually averaged:

data_kern.sw_a.mean(dim='month').plot.contourf(levels=20)

And the LW water vapor kernel, annually and zonally averaged:

data_kern.lw_q.mean(dim=['month', 'lon']).plot.contourf(levels=40, yincrease=False)

Preparing data for analysis

Define a function for taking global averages:

def global_average(data):
    weights = np.cos(np.deg2rad(data.lat))
    data_weighted = data.weighted(weights)
    return data_weighted.mean(dim=['lat', 'lon'], skipna=True)

To get a general idea of how the climate changes in these runs, we can plot the 12-month rolling GMST for the two runs:

gmst_ctrl = global_average(ctrl.tas.rolling(time=12, center=True).mean())
gmst_ctrl.sel(time=slice('1150', '1200')).plot()
gmst_pert = global_average(pert.tas.rolling(time=12, center=True).mean())
gmst_pert.sel(time=slice('0949', '0999')).plot()

Calculating the two climate states

Ideally, we would compare two equilibrated climates, but since that is usually not possible with standard-length CMIP experiments, we will simply use the monthly climatology over the last 50 years available for each run, by which time the perturbed run is evolving slowly enough to approximate equilibrium. The pressure coordinates are in Pa, so let’s convert them to hPa to match the kernels:

ctrl_state = ctrl.sel(time=slice('1150', '1200')).groupby('time.month').mean(dim='time').squeeze()
pert_state = pert.sel(time=slice('0949', '0999')).groupby('time.month').mean(dim='time').squeeze()

ctrl_state['plev'] = ctrl_state['plev']/100
pert_state['plev'] = pert_state['plev']/100
pert_state

We will need the change in GMST in order to calculate the feedbacks:

dgmst = global_average(pert_state.tas - ctrl_state.tas).mean(dim='month')
dgmst.load()

Regridding

The model output and kernels are not on the same grid, so we will regrid the kernel dataset to the model’s grid using the regridding package xesmf. For reusability, let’s define a function to regrid data:

def regrid(ds_in, regrid_to, method='bilinear'):
    regridder = xe.Regridder(ds_in, regrid_to, method=method, periodic=True, ignore_degenerate=True)
    ds_out = regridder(ds_in)
    return ds_out
regr_kernels = regrid(data_kern, pert_state)
regr_kernels

Check that the kernels look as expected:

regr_kernels.sw_a.mean(dim='month').plot.contourf(levels=20)
regr_kernels.lw_q.mean(dim=['month', 'lon']).plot.contourf(levels=40, yincrease=False)

Calculating feedbacks

For all noncloud feedbacks, we will calculate both the all-sky and clear-sky (indicated by clr in the kernel dataset here) change in TOA radiation $\Delta R$. To do this, we just use the clear-sky version of the kernel in place of the all-sky kernel. The differences $\Delta R^\mathrm{clear}_X - \Delta R^\mathrm{all}_X$ for each feedback $X$ will be used to adjust the CREs into proper cloud feedbacks.

For the feedbacks involving fields that are a function of pressure (i.e., temperature and water vapor), we need to mask out the stratosphere. In this notebook, we approximate the tropopause with a function that is 100 hPa at the equator and descends linearly to 300 hPa at the poles, masking out everything above it:

def tropo_mask(ds):
    return ds.where(ds.plev > ((200/90)*abs(ds.lat) + 100))

Compare:

ctrl_state.ta.mean(dim=['month', 'lon']).plot(yincrease=False)
tropo_mask(ctrl_state.ta.mean(dim=['month', 'lon'])).plot(yincrease=False)

Surface albedo

The first step is to calculate the change in albedo between the control and perturbed climates. The surface albedo α is defined as the fraction of downwelling solar radiation at the surface that is reflected back up. That is,

$$\alpha = \frac{S^\uparrow_\mathrm{s}}{S^\downarrow_\mathrm{s}} = \frac{\mathtt{rsus}}{\mathtt{rsds}}$$
alb_ctrl = (ctrl_state.rsus/ctrl_state.rsds.where(ctrl_state.rsds > 0)).fillna(0)
alb_pert = (pert_state.rsus/pert_state.rsds.where(pert_state.rsds > 0)).fillna(0)

Note that we avoided dividing by zero by masking regions where rsds = 0 (e.g., polar night) and filling the resulting NaNs with 0. Now, take the difference:

dalb = alb_pert - alb_ctrl
dalb.mean(dim='month').plot()

By multiplying the change in albedo with the albedo kernel, we get the change in TOA radiation (in units W m-2) resulting from that change in albedo:

$$\Delta R_\alpha = K_\alpha \cdot \Delta\alpha$$

The kernel is defined per 1% change in albedo, so we also multiply the fractional albedo change by 100:

dSW_alb = regr_kernels.sw_a * dalb * 100
dSW_alb_clr = regr_kernels.swclr_a * dalb * 100
dSW_alb.mean(dim='month').plot()

The feedback in units W m-2 K-1 is then calculated by normalizing this change in TOA radiation by the change in GMST:

$$\lambda_\alpha = \frac{\Delta R_\alpha}{\Delta T_\mathrm{s}}$$
alb_feedback = dSW_alb/dgmst
alb_feedback.mean(dim='month').plot()

Taking the global and annual average, we get a final value of about 0.35 W m-2 K-1:

global_average(alb_feedback.mean(dim='month')).load()

Temperature

The temperature feedback is decomposed into Planck and lapse rate components, but we begin by calculating the total change in TOA LW radiation resulting from changes in surface (ts) and air (ta) temperature. The first step is computing the temperature changes themselves:

dts = pert_state.ts - ctrl_state.ts
dta = pert_state.ta - ctrl_state.ta

Then the change in TOA radiation using the kernels:

$$\Delta R_{T_\mathrm{s}} = K_{T_\mathrm{s}} \cdot \Delta T_\mathrm{s}$$

And for feedbacks that involve functions of pressure, we sum over the pressure levels:

$$\Delta R_T = \sum_p K_T(p) \cdot \Delta T(p)$$
dLW_ts = regr_kernels.lw_ts * dts
dLW_ts_clr = regr_kernels.lwclr_ts * dts

dLW_ta = (regr_kernels.lw_t * tropo_mask(dta)).sum(dim='plev', skipna=True)
dLW_ta_clr = (regr_kernels.lwclr_t * tropo_mask(dta)).sum(dim='plev', skipna=True)
dLW_ts.mean(dim='month').plot()
dLW_ta.mean(dim='month').plot()

The full temperature feedback is the sum of these, normalized by the change in GMST:

$$\lambda_T = \frac{\Delta R_{T_\mathrm{s}} + \Delta R_T}{\Delta T_\mathrm{s}}$$
t_feedback = (dLW_ta + dLW_ts)/dgmst
t_feedback.mean(dim='month').plot()

For the annual and global average, we get about -4.3 W m-2 K-1:

global_average(t_feedback.mean(dim='month')).load()

Planck

There are two components to the Planck feedback:

  1. The change in TOA radiation due to the surface warming (calculated in the previous section)
  2. The change in TOA radiation due to a vertically-uniform warming

We assume the vertically-uniform warming is equal to the surface warming, so we need to project dts onto the model’s vertical levels. One way to do this with Xarray is:

dts_3d = dts.expand_dims(dim={'plev': dta.plev})

Check the vertical profile:

dts_3d.mean(dim=['month', 'lon']).plot(yincrease=False)
dLW_dts_3d = (regr_kernels.lw_t * tropo_mask(dts_3d)).sum(dim='plev', skipna=True)

The sum of the two components divided by the change in GMST is the Planck feedback:

planck_feedback = (dLW_ts + dLW_dts_3d)/dgmst
planck_feedback.mean(dim='month').plot()

The annual and global average is about -3.6 W m-2 K-1:

global_average(planck_feedback.mean(dim='month')).load()

Lapse rate
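
Given the decomposition above, the lapse rate component is the TOA response to the departure of the air temperature change from the vertically-uniform warming; by construction, the Planck and lapse rate feedbacks then sum to the total temperature feedback. A minimal sketch using quantities already computed:

# Deviation of air warming from vertically-uniform (surface) warming
dLW_lapse = (regr_kernels.lw_t * tropo_mask(dta - dts_3d)).sum(dim='plev', skipna=True)
lapse_feedback = dLW_lapse/dgmst
lapse_feedback.mean(dim='month').plot()

As a sanity check, planck_feedback + lapse_feedback should reproduce t_feedback.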

Water vapor
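
Water vapor kernels are commonly normalized per kelvin of warming at constant relative humidity, so the simulated moistening must first be converted into an equivalent temperature change. A minimal sketch of that setup, assuming this convention holds for the ClimKern kernels and approximating the constant-RH moistening rate with a fixed Clausius-Clapeyron value (a crude shortcut; a proper analysis derives this rate from the control-state temperature):

cc_rate = 0.07  # assumed ~7% per K moistening rate; a stand-in for d(ln q_sat)/dT
dlnq = np.log(pert_state.hus) - np.log(ctrl_state.hus)
dq_equiv = dlnq/cc_rate  # moistening expressed as an equivalent warming (K)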

Shortwave water vapor
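
With the equivalent warming in hand, the SW component follows the same kernel pattern as before; sw_q and its clear-sky counterpart swclr_q are assumed names mirroring the temperature and albedo kernels:

dSW_q = (regr_kernels.sw_q * tropo_mask(dq_equiv)).sum(dim='plev', skipna=True)
dSW_q_clr = (regr_kernels.swclr_q * tropo_mask(dq_equiv)).sum(dim='plev', skipna=True)
dSW_q.mean(dim='month').plot()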

Longwave water vapor
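
Likewise for the LW component, using lw_q (plotted earlier) and an assumed clear-sky counterpart lwclr_q; the full water vapor feedback is then the normalized sum of both components:

dLW_q = (regr_kernels.lw_q * tropo_mask(dq_equiv)).sum(dim='plev', skipna=True)
dLW_q_clr = (regr_kernels.lwclr_q * tropo_mask(dq_equiv)).sum(dim='plev', skipna=True)
q_feedback = (dSW_q + dLW_q)/dgmst
q_feedback.mean(dim='month').plot()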

Cloud

Cloud radiative effect (CRE)
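
The CRE is the difference between the all-sky and clear-sky net fluxes at TOA. For SW, the downwelling component rsdt is identical in both, so the SW CRE reduces to the clear-sky minus all-sky upwelling flux: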

ctrl_SWCRE = ctrl_state.rsutcs - ctrl_state.rsut  # clear-sky minus all-sky upwelling SW
pert_SWCRE = pert_state.rsutcs - pert_state.rsut
dSWCRE = pert_SWCRE - ctrl_SWCRE
dSWCRE.mean(dim='month').plot()
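
The LW counterpart uses rlut and rlutcs from the field list above; since the net LW flux at TOA is just the (negated) outgoing flux, the LW CRE is rlutcs minus rlut. A minimal sketch:

ctrl_LWCRE = ctrl_state.rlutcs - ctrl_state.rlut  # clear-sky minus all-sky outgoing LW
pert_LWCRE = pert_state.rlutcs - pert_state.rlut
dLWCRE = pert_LWCRE - ctrl_LWCRE
dLWCRE.mean(dim='month').plot()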

Cloud feedback adjustments
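
As noted earlier, the CRE changes contain noncloud contributions that must be removed using the clear-sky minus all-sky differences from each feedback (Soden et al. 2008). A minimal sketch, assuming the water vapor terms from the sketch above; a complete analysis would also add the forcing masking term (clear-sky minus all-sky ERF, derived from the RFMIP output loaded earlier):

dSW_cloud = (dSWCRE
             + (dSW_alb_clr - dSW_alb)  # surface albedo correction
             + (dSW_q_clr - dSW_q))     # SW water vapor correction (assumed above)
dLW_cloud = (dLWCRE
             + (dLW_ts_clr - dLW_ts)    # surface temperature correction
             + (dLW_ta_clr - dLW_ta)    # air temperature correction
             + (dLW_q_clr - dLW_q))     # LW water vapor correction (assumed above)
sw_cloud_feedback = dSW_cloud/dgmst
lw_cloud_feedback = dLW_cloud/dgmst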

Net
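
The net cloud feedback is then the sum of the adjusted SW and LW components from the sketch above:

cloud_feedback = sw_cloud_feedback + lw_cloud_feedback
global_average(cloud_feedback.mean(dim='month')).load()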


Summary

What’s next?

Resources and references