MITgcm ECCOv4 Example¶
Overview¶
This Jupyter notebook demonstrates how to use xarray and xgcm to analyze data from the ECCO v4r3 ocean state estimate.
Load ECCO zarr data and convert it to an xarray dataset
Visualize ocean depth using cartopy
Index and select data using xarray
Use a dask cluster to speed up reading the data
Calculate and plot the horizontally integrated heat content anomaly
Use xgcm to compute the time-mean convergence of vertically-integrated heat fluxes
Prerequisites¶
| Concepts | Importance | Notes |
|---|---|---|
| Intro to Cartopy | Helpful | |
| Xarray | Helpful | Slicing, indexing, basic statistics |
| Dask | Helpful | |
Time to learn: 1 hour
Imports¶
import xarray as xr
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
import intake
import cartopy as cart
import pyresample
from dask_gateway import GatewayCluster
from dask.distributed import Client
import xgcm
Load the data¶
The ECCOv4r3 data was converted from its raw MDS (.data / .meta file) format to zarr format using the xmitgcm package. Zarr is a powerful data storage format that can be thought of as an alternative to HDF. In contrast to HDF, zarr works very well with cloud object storage. Zarr is currently usable in Python, Java, C++, and Julia. It is likely that zarr will form the basis of the next major version of the netCDF library.
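As a minimal sketch of what reading zarr looks like (the store path here is hypothetical; any local directory or cloud bucket holding a zarr store works the same way):

import xarray as xr

# open a zarr store lazily; consolidated metadata avoids many small reads
ds_example = xr.open_zarr("path/to/store.zarr", consolidated=True)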
If you’re curious, here are some resources to learn more about zarr:

https://speakerdeck.com/rabernat/pangeo-zarr-cloud-data-storage
https://mrocklin.github.com/blog/work/2018/02/06/hdf-in-the-cloud
The ECCO zarr data currently lives in Google Cloud Storage as part of the Pangeo Data Catalog. This means we can open the whole dataset using one line of code.
This takes a bit of time to run because the metadata must be downloaded and parsed. The type of object returned is an Xarray dataset.
cat = intake.open_catalog("https://raw.githubusercontent.com/pangeo-data/pangeo-datastore/master/intake-catalogs/ocean.yaml")
ds = cat.ECCOv4r3.to_dask()
ds
Note that no data has actually been downloaded yet. Xarray uses the approach of lazy evaluation, in which loading of data and execution of computations are delayed as long as possible (i.e. until data is actually needed for a plot). The data are represented symbolically as dask arrays. For example:
SALT (time, k, face, j, i) float32 dask.array<shape=(288, 50, 13, 90, 90), chunksize=(1, 50, 13, 90, 90)>
The full shape of the array is (288, 50, 13, 90, 90), quite large. But the chunksize is (1, 50, 13, 90, 90). Here the chunks correspond to the individual granules of data (objects) in cloud storage. The chunk is the minimum amount of data we can read at one time.
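To see this chunk structure for yourself, you can inspect the underlying dask array directly (a quick sketch; the variable names match the dataset above):

# each chunk holds one month of the full 3D, 13-face field
print(ds.SALT.data.chunksize)     # (1, 50, 13, 90, 90)
print(ds.SALT.data.nbytes / 1e9)  # total size in GB, before any data is read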
# a trick to make things work a bit faster:
# split the static grid geometry into its own dataset
coords = ds.coords.to_dataset().reset_coords()
ds = ds.reset_coords(drop=True)
Visualizing Data¶
A Direct Plot¶
Let’s try to visualize something simple: the Depth variable. Here is how the data are stored:
Depth (face, j, i) float32 dask.array<shape=(13, 90, 90), chunksize=(13, 90, 90)>
Although depth is a 2D field, there is an extra dimension (face) corresponding to the LLC face number. Let’s use xarray’s built-in plotting functions to plot each face individually.
coords.Depth.plot(col='face', col_wrap=5)
This view is not the most useful. It reflects how the data are arranged logically, rather than geographically.
A Pretty Map¶
To make plotting easier, we can define a small helper class to plot the data in a more geographically friendly way. Eventually these plotting functions may be provided by the gcmplots package: https://
class LLCMapper:

    def __init__(self, ds, dx=0.25, dy=0.25):
        # Extract the LLC 2D cell-center coordinates
        lons_1d = ds.XC.values.ravel()
        lats_1d = ds.YC.values.ravel()

        # Define the original (source) grid
        self.orig_grid = pyresample.geometry.SwathDefinition(lons=lons_1d, lats=lats_1d)

        # Longitudes and latitudes to which we will interpolate
        lon_tmp = np.arange(-180, 180, dx) + dx/2
        lat_tmp = np.arange(-90, 90, dy) + dy/2

        # Define the regular lat/lon target grid
        self.new_grid_lon, self.new_grid_lat = np.meshgrid(lon_tmp, lat_tmp)
        self.new_grid = pyresample.geometry.GridDefinition(lons=self.new_grid_lon,
                                                           lats=self.new_grid_lat)

    def __call__(self, da, ax=None, projection=cart.crs.Robinson(), lon_0=-60, **plt_kwargs):
        assert set(da.dims) == set(['face', 'j', 'i']), "da must have dimensions ['face', 'j', 'i']"

        # Create a new figure and axes only if the caller didn't supply one
        if ax is None:
            fig, ax = plt.subplots(figsize=(12, 6), subplot_kw={'projection': projection})

        # Resample the LLC data onto the regular grid with a nearest-neighbor lookup
        field = pyresample.kd_tree.resample_nearest(self.orig_grid, da.values,
                                                    self.new_grid,
                                                    radius_of_influence=100000,
                                                    fill_value=None)

        vmax = plt_kwargs.pop('vmax', field.max())
        vmin = plt_kwargs.pop('vmin', field.min())

        x, y = self.new_grid_lon, self.new_grid_lat

        # Find the index where the data must be split for mapping
        split_lon_idx = round(x.shape[1]/(360/(lon_0 if lon_0 > 0 else lon_0 + 360)))

        # Plot the two halves separately to avoid artifacts at the split longitude
        p = ax.pcolormesh(x[:, :split_lon_idx], y[:, :split_lon_idx], field[:, :split_lon_idx],
                          vmax=vmax, vmin=vmin, transform=cart.crs.PlateCarree(), zorder=1, **plt_kwargs)
        p = ax.pcolormesh(x[:, split_lon_idx:], y[:, split_lon_idx:], field[:, split_lon_idx:],
                          vmax=vmax, vmin=vmin, transform=cart.crs.PlateCarree(), zorder=2, **plt_kwargs)

        ax.add_feature(cart.feature.LAND, facecolor='0.5', zorder=3)

        label = ''
        if da.name is not None:
            label = da.name
        if 'units' in da.attrs:
            label += ' [%s]' % da.attrs['units']

        cb = plt.colorbar(p, shrink=0.4, label=label)
        return ax
mapper = LLCMapper(coords)
mapper(coords.Depth);
We can use this with any 2D cell-centered LLC variable.
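For instance, a quick sketch using another (face, j, i) field from the grid geometry:

# plot the grid cell surface area, another 2D cell-centered field
mapper(coords.rA, cmap='viridis');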
Selecting data¶
The entire ECCOv4r3 dataset is contained in a single Xarray.Dataset object. How do we find and view specific pieces of data? This is handled by Xarray’s indexing and selecting functions. To get the SST from January 2000, we do this:
sst = ds.THETA.sel(time='2000-01-15', k=0)
sst
Still no data has been actually downloaded. That doesn’t happen until we call .load() explicitly or try to make a plot.
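To make the distinction concrete, here is a small sketch: inspecting the variable is free, while computing it is what actually moves bytes over the network.

# still lazy: the underlying data is a dask array, not numbers in memory
print(type(sst.data))

# this is the point where data is actually downloaded
sst_in_memory = sst.compute()
print(type(sst_in_memory.data))  # now a plain numpy array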
mapper(sst, cmap='RdBu_r');
Do some Calculations¶
Now let’s start doing something besides just plotting the existing data. For example, let’s calculate the time-mean SST.
mean_sst = ds.THETA.sel(k=0).mean(dim='time')
mean_sst
As usual, no data was loaded. Instead, mean_sst is a symbolic representation of the data that needs to be pulled and the computations that need to be executed to produce the desired result. In this case, the 288 original chunks all need to be read from cloud storage. Dask coordinates this automatically for us. But it does take some time.
%time mean_sst.load()
mapper(mean_sst, cmap='RdBu_r');
Speeding things up with a Dask Cluster¶
How can we speed things up? In general, the main bottleneck for this type of data analysis is the speed with which we can read the data. With cloud storage, access is highly parallelizable.
From a Pangeo environment, we can create a Dask cluster to spread the work out amongst many compute nodes. This works on both HPC and cloud. In the cloud, the compute nodes are provisioned on the fly and can be shut down as soon as we are done with our analysis.
The code below will create a cluster with five compute nodes. It can take a few minutes to provision our nodes.
cluster = GatewayCluster()
cluster.scale(5)
client = Client(cluster)
cluster
Now we re-run the mean calculation. Note how the dashboard helps us visualize what the cluster is doing.
%time ds.THETA.isel(k=0).mean(dim='time').load()
Spatially-Integrated Heat Content Anomaly¶
Now let’s do something harder. We will calculate the horizontally integrated heat content anomaly for the full 3D model domain.
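Concretely, the next cell evaluates (a sketch of the quantity, written out for clarity), where $\theta'$ is the monthly temperature anomaly, $rA$ the cell area, $hFacC$ the open fraction of each cell, and $\rho_0$, $c_p$ the reference density and heat capacity defined below:

$$\mathrm{OHC}'(t, k) = \rho_0 \, c_p \sum_{\mathrm{face},\, j,\, i} \theta'(t, k, \mathrm{face}, j, i)\; rA \; hFacC$$

Summing over the horizontal dimensions leaves a function of time and depth level.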
# the monthly climatology
theta_clim = ds.THETA.groupby('time.month').mean(dim='time')
# the anomaly
theta_anom = ds.THETA.groupby('time.month') - theta_clim
rho0 = 1029
cp = 3994
ohc = rho0 * cp * (theta_anom *
coords.rA *
coords.hFacC).sum(dim=['face', 'j', 'i'])
ohc
# actually load the data
ohc.load()
# put the depth coordinate back for plotting purposes
ohc.coords['Z'] = coords.Z
ohc.swap_dims({'k': 'Z'}).transpose().plot(vmax=1e20)
Spatial Derivatives: Heat Budget¶
As our final exercise, we will do something much more complicated. We will compute the time-mean convergence of vertically-integrated heat fluxes. This is hard for several reasons.
The first reason it is hard is because it involves variables located at different grid points.
Following MITgcm conventions, xmitgcm (which produced this dataset) labels the center point with the coordinates j, i, the u-velocity point as j, i_g, and the v-velocity point as j_g, i.
The horizontal advective heat flux variables are
ADVx_TH (time, k, face, j, i_g) float32 dask.array<shape=(288, 50, 13, 90, 90), chunksize=(1, 50, 13, 90, 90)>
ADVy_TH (time, k, face, j_g, i) float32 dask.array<shape=(288, 50, 13, 90, 90), chunksize=(1, 50, 13, 90, 90)>
Xarray won’t combine variables that have different dimensions in a meaningful way — it treats i/i_g and j/j_g as unrelated dimensions — and xarray by itself doesn’t understand how to transform from one grid position to another.
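A quick sketch of the problem: naively adding the two flux components raises no error, but xarray broadcasts the staggered axes instead of aligning them.

# the result has both i/i_g and j/j_g dimensions -- a meaningless
# higher-dimensional array, not a sum over matching grid points
naive_sum = ds.ADVx_TH + ds.ADVy_TH
print(naive_sum.dims)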
That’s why xgcm was created.
Xgcm allows us to create a Grid object, which understands how to interpolate and take differences in a way that is compatible with finite volume models such as MITgcm. Xgcm also works with many other models, including ROMS, POP, MOM5/6, NEMO, etc.
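Here is a minimal sketch of the idea on a toy 1D grid (everything here is made up for illustration; the coordinate attributes follow the comodo conventions that xgcm uses to detect grid axes):

import numpy as np
import xarray as xr
import xgcm

# toy 1D axis: 4 cell centers (x) and the 4 left cell faces (x_g)
ds_toy = xr.Dataset(
    {'T': ('x', np.array([10., 11., 13., 16.]))},
    coords={'x': ('x', np.arange(0.5, 4.), {'axis': 'X'}),
            'x_g': ('x_g', np.arange(0., 4.),
                    {'axis': 'X', 'c_grid_axis_shift': -0.5})})

grid_toy = xgcm.Grid(ds_toy, periodic=True)

# the difference of a cell-centered field lands on the cell faces (x_g)
print(grid_toy.diff(ds_toy.T, 'X'))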
A second reason this is hard is because of the complex topology connecting the different MITgcm faces. Fortunately xgcm also supports this.
# define the connectivity between faces
face_connections = {'face':
{0: {'X': ((12, 'Y', False), (3, 'X', False)),
'Y': (None, (1, 'Y', False))},
1: {'X': ((11, 'Y', False), (4, 'X', False)),
'Y': ((0, 'Y', False), (2, 'Y', False))},
2: {'X': ((10, 'Y', False), (5, 'X', False)),
'Y': ((1, 'Y', False), (6, 'X', False))},
3: {'X': ((0, 'X', False), (9, 'Y', False)),
'Y': (None, (4, 'Y', False))},
4: {'X': ((1, 'X', False), (8, 'Y', False)),
'Y': ((3, 'Y', False), (5, 'Y', False))},
5: {'X': ((2, 'X', False), (7, 'Y', False)),
'Y': ((4, 'Y', False), (6, 'Y', False))},
6: {'X': ((2, 'Y', False), (7, 'X', False)),
'Y': ((5, 'Y', False), (10, 'X', False))},
7: {'X': ((6, 'X', False), (8, 'X', False)),
'Y': ((5, 'X', False), (10, 'Y', False))},
8: {'X': ((7, 'X', False), (9, 'X', False)),
'Y': ((4, 'X', False), (11, 'Y', False))},
9: {'X': ((8, 'X', False), None),
'Y': ((3, 'X', False), (12, 'Y', False))},
10: {'X': ((6, 'Y', False), (11, 'X', False)),
'Y': ((7, 'Y', False), (2, 'X', False))},
11: {'X': ((10, 'X', False), (12, 'X', False)),
'Y': ((8, 'Y', False), (1, 'X', False))},
12: {'X': ((11, 'X', False), None),
'Y': ((9, 'Y', False), (0, 'X', False))}}}
# create the grid object
grid = xgcm.Grid(ds, periodic=False, face_connections=face_connections)
grid
Now we can use the grid object we created to take the divergence of a 2D vector.
# vertical integral and time mean of horizontal advective heat flux
advx_th_vint = ds.ADVx_TH.sum(dim='k').mean(dim='time')
advy_th_vint = ds.ADVy_TH.sum(dim='k').mean(dim='time')
# difference in the x and y directions
diff_ADV_th = grid.diff_2d_vector({'X': advx_th_vint, 'Y': advy_th_vint}, boundary='fill')
# convergence
conv_ADV_th = -diff_ADV_th['X'] - diff_ADV_th['Y']
conv_ADV_th
# vertical integral and time mean of horizontal diffusive heat flux
difx_th_vint = ds.DFxE_TH.sum(dim='k').mean(dim='time')
dify_th_vint = ds.DFyE_TH.sum(dim='k').mean(dim='time')
# difference in the x and y directions
diff_DIF_th = grid.diff_2d_vector({'X': difx_th_vint, 'Y': dify_th_vint}, boundary='fill')
# convergence
conv_DIF_th = -diff_DIF_th['X'] - diff_DIF_th['Y']
conv_DIF_th
# convert to Watts / m^2 and load
mean_adv_conv = rho0 * cp * (conv_ADV_th/coords.rA).fillna(0.).load()
mean_dif_conv = rho0 * cp * (conv_DIF_th/coords.rA).fillna(0.).load()
ax = mapper(mean_adv_conv, cmap='RdBu_r', vmax=300, vmin=-300);
ax.set_title(r'Convergence of Advective Flux (W/m$^2$)');
ax = mapper(mean_dif_conv, cmap='RdBu_r', vmax=300, vmin=-300)
ax.set_title(r'Convergence of Diffusive Flux (W/m$^2$)');
ax = mapper(mean_dif_conv + mean_adv_conv, cmap='RdBu_r', vmax=300, vmin=-300)
ax.set_title(r'Convergence of Net Horizontal Flux (W/m$^2$)');
ax = mapper(ds.TFLUX.mean(dim='time').load(), cmap='RdBu_r', vmax=300, vmin=-300);
ax.set_title(r'Surface Heat Flux (W/m$^2$)');
Summary¶
In this example we used xarray and cartopy to visualize ocean depth and ocean heat content anomalies. Then, we used xgcm to easily work with variables that have different dimensions.
What’s next?¶
In our last example, we will visualize ocean currents.
Resources and references¶
This notebook is based on the ECCOv4 example from the Pangeo physical oceanography gallery: http://gallery.pangeo.io/repos/pangeo-gallery/physical-oceanography/04_eccov4.html