Tile Generation Benchmarks for Varied Chunk Sizes

Explanation

In this notebook we compare tile generation performance for artificially generated Zarr data with varied chunk sizes. The CMIP6 data provides an excellent real-world dataset, but it is relatively low resolution. To study the impact of higher-resolution data, we generated synthetic Zarr stores and used them to explore the relationship between tile generation time and chunk size.
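
Such a store can be produced with xarray; here is a minimal sketch of generating one single-chunk synthetic store, assuming random float64 data on a regular lat/lon grid (the dimensions and file name are illustrative; the real generation code lives in ../01-generate-datasets):

# Minimal sketch: build a single-chunk synthetic Zarr store.
# Requires numpy, xarray, dask, and zarr; all names here are illustrative.
import numpy as np
import xarray as xr

lat, lon = 1024, 2048
ds = xr.Dataset(
    {"data": (("lat", "lon"), np.random.rand(lat, lon))},
    coords={
        "lat": np.linspace(-90, 90, lat),
        "lon": np.linspace(-180, 180, lon),
    },
)
# A single chunk spanning the whole array, so chunk size grows with resolution
ds.chunk({"lat": lat, "lon": lon}).to_zarr(
    f"single_chunk_store_lat{lat}_lon{lon}.zarr", mode="w"
)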

Setup

# External modules
import hvplot.pandas
import holoviews as hv
import json
import pandas as pd
pd.options.plotting.backend = 'holoviews'
import warnings
warnings.filterwarnings('ignore')

# Local modules
import sys; sys.path.append('..')
import helpers.dataframe as dataframe_helpers
import helpers.eodc_hub_role as eodc_hub_role
from xarray_tile_test import XarrayTileTest
credentials = eodc_hub_role.fetch_and_set_credentials()

Load the fake datasets, which have increasingly fine spatial resolution and thus increasingly large chunk sizes. We keep only the single-chunk stores, where chunk size scales directly with resolution.

# Run 5 iterations of each setting
iterations = 5
zooms = range(6)
with open('../01-generate-datasets/fake-datasets.json') as f:
    all_zarr_datasets = json.load(f)
# Keep only the single-chunk stores
zarr_datasets = {k: v for k, v in all_zarr_datasets.items() if 'single_chunk' in k}
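
Each value in this mapping is the set of keyword arguments unpacked into XarrayTileTest below. An entry in fake-datasets.json presumably looks something like the following (illustrative only; the exact keys are defined by the generation notebook):

{
  "single_chunk_store_lat1024_lon2048.zarr": {
    "dataset_url": "s3://nasa-eodc-data-store/.../single_chunk_store_lat1024_lon2048.zarr",
    "variable": "data"
  }
}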

Run Tests

results = []

for zarr_dataset_id, zarr_dataset in zarr_datasets.items():
    zarr_tile_test = XarrayTileTest(
        dataset_id=zarr_dataset_id,
        **zarr_dataset
    )

    # Run it 5 times for each zoom level
    for zoom in zooms:
        zarr_tile_test.run_batch({'zoom': zoom}, batch_size=iterations)

    results.append(zarr_tile_test.store_results(credentials))
Wrote instance data to s3://nasa-eodc-data-store/test-results/20230919222720_XarrayTileTest_single_chunk_store_lat1024_lon2048.zarr.json
Wrote instance data to s3://nasa-eodc-data-store/test-results/20230919222734_XarrayTileTest_single_chunk_store_lat1448_lon2896.zarr.json
Wrote instance data to s3://nasa-eodc-data-store/test-results/20230919222759_XarrayTileTest_single_chunk_store_lat2048_lon4096.zarr.json
Wrote instance data to s3://nasa-eodc-data-store/test-results/20230919222845_XarrayTileTest_single_chunk_store_lat2896_lon5792.zarr.json
Wrote instance data to s3://nasa-eodc-data-store/test-results/20230919223015_XarrayTileTest_single_chunk_store_lat4096_lon8192.zarr.json
Wrote instance data to s3://nasa-eodc-data-store/test-results/20230919223021_XarrayTileTest_single_chunk_store_lat512_lon1024.zarr.json
Wrote instance data to s3://nasa-eodc-data-store/test-results/20230919223027_XarrayTileTest_single_chunk_store_lat724_lon1448.zarr.json
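
The store names above encode the grid dimensions, and because each store is a single chunk, chunk size follows directly from resolution. A back-of-the-envelope check, assuming float64 data (the actual dtype used by the generation notebook sets the true constant):

# Chunk size of a single-chunk store is the full array size.
# Assumes 8-byte float64 values; dimensions taken from the store names above.
dims = [(512, 1024), (724, 1448), (1024, 2048), (1448, 2896),
        (2048, 4096), (2896, 5792), (4096, 8192)]
for lat, lon in dims:
    print(f"lat{lat}_lon{lon}: {lat * lon * 8 / 1024**2:.0f} MB")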

Read and Plot Results

# Load all stored results from S3 and expand per-tile timings into rows
all_df = dataframe_helpers.load_all_into_dataframe(credentials, results)
expanded_df = dataframe_helpers.expand_timings(all_df)
expanded_df = expanded_df.sort_values('chunk_size_mb')
cmap = ["#E66100", "#5D3A9B"]
plt_opts = {"width": 400, "height": 300}

plts = []

# One box plot of render times per zoom level, grouped by chunk size
for zoom_level in zooms:
    df_level = expanded_df[expanded_df["zoom"] == zoom_level]
    plts.append(
        df_level.hvplot.box(
            y="time",
            by=["chunk_size_mb"],
            c="chunk_size_mb",
            cmap=cmap,
            ylabel="Time to render (ms)",
            xlabel="Chunk size (MB)",
            legend=False,
            title=f"Zoom level {zoom_level}",
        ).opts(**plt_opts)
    )
hv.Layout(plts).cols(2)
# Persist the expanded results for later analysis
expanded_df.to_csv('results-csvs/03-chunk-size-results.csv')
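
As a complement to the box plots, a compact numeric summary makes the trend easier to scan; a minimal sketch using the columns already present in the expanded frame:

# Median render time per (zoom, chunk size) pair; assumes the 'zoom',
# 'chunk_size_mb', and 'time' columns produced by expand_timings
summary = (
    expanded_df.groupby(["zoom", "chunk_size_mb"])["time"]
    .median()
    .unstack("chunk_size_mb")
)
print(summary)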