feat(RFC): Adds altair.datasets
#3631
Draft
dangotbanned wants to merge 219 commits into main from vega-datasets
+3,937 −12
Commits (219)
7933771
wip
dangotbanned b30081e
feat(DRAFT): Minimal reimplementation
dangotbanned 279586b
refactor: Make version accessible via `data.source_tag`
dangotbanned 32150ad
refactor: `ext_fn` -> `Dataset.read_fn`
dangotbanned f1d18a2
docs: Add trailing docs to long literals
dangotbanned 4d3c550
docs: Add module-level doc
dangotbanned 7e65841
Merge branch 'main' into vega-datasets
dangotbanned 05773af
Merge branch 'main' into vega-datasets
dangotbanned 4fff80a
Merge branch 'main' into vega-datasets
dangotbanned 3a284a5
feat: Adds `.arrow` support
dangotbanned 22a5039
feat: Add support for caching metadata
dangotbanned a618ffc
feat: Support env var `VEGA_GITHUB_TOKEN`
dangotbanned 1792340
feat: Add support for multi-version metadata
dangotbanned fa2c9e7
refactor: Renaming, docs, reorganize
dangotbanned 24cd7d7
feat: Support collecting release tags
dangotbanned 7dd461f
feat: Adds `refresh_tags`
dangotbanned 9768495
feat(DRAFT): Adds `url_from`
dangotbanned c38c235
fix: Wrap all requests with auth
dangotbanned a22cc8a
chore: Remove `DATASET_NAMES_USED`
dangotbanned 1181860
feat: Major `GitHub` rewrite, handle rate limiting
dangotbanned 31eeb20
feat(DRAFT): Partial implement `data("name")`
dangotbanned 511a845
fix(typing): Resolve some `mypy` errors
dangotbanned c76cfd4
Merge branch 'main' into vega-datasets
dangotbanned d3f0497
Merge branch 'main' into vega-datasets
dangotbanned 1b3390b
Merge branch 'main' into vega-datasets
dangotbanned a770ba9
fix(ruff): Apply `3.8` fixes
dangotbanned 686a485
docs(typing): Add `WorkInProgress` marker to `data(...)`
dangotbanned ba4491d
Merge branch 'main' into vega-datasets
dangotbanned 1a4e107
Merge branch 'main' into vega-datasets
dangotbanned 989b9b7
Merge remote-tracking branch 'upstream/main' into vega-datasets
dangotbanned 0bbf2e9
feat(DRAFT): Add a source for available `npm` versions
dangotbanned 9c386e2
refactor: Bake `"v"` prefix into `tags_npm`
dangotbanned 1937f2b
refactor: Move `_npm_metadata` into a class
dangotbanned 66fa6d1
chore: Remove unused, add todo
dangotbanned 937aa01
Merge remote-tracking branch 'upstream/main' into vega-datasets
dangotbanned 21b2edd
feat: Adds `app` context for github<->npm
dangotbanned 6527305
fix: Invalidate old trees
dangotbanned 336eeca
chore: Remove early test files#
dangotbanned 225be0a
refactor: Rename `metadata_full` -> `metadata`
dangotbanned e91baab
refactor: `tools.vendor_datasets` -> `tools.datasets` package
dangotbanned 7782925
refactor: Move `TypedDict`, `NamedTuple`(s) -> `datasets.models`
dangotbanned bc86ca1
refactor: Move, rename `semver`-related tools
dangotbanned a6f5645
refactor: Remove `write_schema` from `_Npm`, `_GitHub`
dangotbanned 07a8342
refactor: Rename, split `_Npm`, `_GitHub` into own modules
dangotbanned b89e6dc
refactor: Move `DataLoader.__call__` -> `DataLoader.url()`
dangotbanned 7b0fe29
feat(typing): Generate annotations based on known datasets
dangotbanned 572d069
refactor(typing): Utilize `datasets._typing`
dangotbanned 07dcc0b
feat: Adds `Npm.dataset` for remote reading]
dangotbanned d8f3791
refactor: Remove dead code
dangotbanned 4642a23
refactor: Replace `name_js`, `name_py` with `dataset_name`
dangotbanned 65f87fc
fix: Remove invalid `semver.sort` op
dangotbanned 6349b0f
fix: Add missing init path for `refresh_trees`
dangotbanned f1d610c
refactor: Move public interface to `_io`
dangotbanned c4ef112
refactor(perf): Don't recreate path mapping on every attribute access
dangotbanned eb876eb
refactor: Split `Reader._url_from` into `url`, `_query`
dangotbanned 661a385
feat(DRAFT): Adds `GitHubUrl.BLOBS`
dangotbanned 22dcb17
feat: Store `sha` instead of `github_url`
dangotbanned 669df02
feat(perf): Adds caching to `ALTAIR_DATASETS_DIR`
dangotbanned 2051410
feat(DRAFT): Adds initial generic backends
dangotbanned 0ea4e21
feat: Generate and move `Metadata` (`TypedDict`) to `datasets._typing`
dangotbanned a2e9baa
feat: Adds optional backends, `polars[pyarrow]`, `with_backend`
dangotbanned c8a1429
feat: Adds `pyarrow` backend
dangotbanned 279fea9
docs: Update `.with_backend()`
dangotbanned 7d6c7ca
chore: Remove `duckdb` comment
dangotbanned 0bb4210
ci(typing): Add `pyarrow-stubs` to `dev` dependencies
dangotbanned 8984425
refactor: `generate_datasets_typing` -> `Application.generate_typing`
dangotbanned 9d062c8
refactor: Split `datasets` into public/private packages
dangotbanned a17d674
refactor: Provide `npm` url to `GitHub(...)`
dangotbanned 69a619c
refactor: Rename `ext` -> `suffix`
dangotbanned a259b10
refactor: Remove unimplemented `tag="latest"`
dangotbanned 88968c8
feat: Rename `_datasets_dir`, make configurable, add docs
dangotbanned b987308
docs: Adds examples to `Loader.with_backend`
dangotbanned 4a2a2e0
refactor: Clean up requirements -> imports
dangotbanned e6dd27e
docs: Add basic example to `Loader` class
dangotbanned 2a7bc4f
refactor: Reorder `alt.datasets` module
dangotbanned c572180
docs: Fill out `Loader.url`
dangotbanned 9ab9463
feat: Adds `_Reader._read_metadata`
dangotbanned dd3edd6
refactor: Rename `(reader|scanner_from()` -> `(read|scan)_fn()`
dangotbanned 146cb50
refactor(typing): Replace some explicit casts
dangotbanned 94ad0d1
refactor: Shorten and document request delays
dangotbanned 4093383
feat(DRAFT): Make `[tag]` a `pl.Enum`
dangotbanned 76cdd45
fix: Handle `pyarrow` scalars conversion
dangotbanned bb7bc17
test: Adds `test_datasets`
dangotbanned ebc1bfa
fix(DRAFT): hotfix `pyarrow` read
dangotbanned fe0ae88
fix(DRAFT): Treat `polars` as exception, invalidate cache
dangotbanned 7089f2a
test: Skip `pyarrow` tests on `3.9`
dangotbanned e1290d4
refactor: Tidy up changes from last 4 commits
dangotbanned 9d88e1b
refactor: Rework `_readers.py`
dangotbanned 60d39f5
test: Adds tests for missing dependencies
dangotbanned d6f0e45
test: Adds `test_dataset_not_found`
dangotbanned b7d57a0
test: Adds `test_reader_cache`
dangotbanned 5c2e581
Merge remote-tracking branch 'upstream/main' into vega-datasets
dangotbanned b70aef8
docs: Finish `_Reader`, fill parameters of `Loader.__call__`
dangotbanned 403b787
refactor: Rename `backend` -> `backend_name`, `get_backend` -> `backend`
dangotbanned 3fbc759
fix(DRAFT): Add multiple fallbacks for `pyarrow` JSON
dangotbanned 4f5b4de
test: Remove `pandas` fallback for `pyarrow`
dangotbanned 69a72b6
test: Adds `test_all_datasets`
dangotbanned 08101cc
refactor: Remove `_Reader._response`
dangotbanned 90428a6
fix: Correctly handle no remote connection
dangotbanned 8ad78c1
docs: Align `_typing.Metadata` and `Loader.(url|__call__)` descriptions
dangotbanned e650454
feat: Update to `v2.10.0`, fix tag inconsistency
dangotbanned 72296b0
refactor: Tidying up `tools.datasets`
dangotbanned ca1b500
revert: Remove tags schema files
dangotbanned 5bd70d1
ci: Introduce `datasets` refresh to `generate_schema_wrapper`
dangotbanned 012f98b
docs: Add `tools.datasets.Application` doc
dangotbanned bc0f42c
Merge remote-tracking branch 'upstream/main' into vega-datasets
dangotbanned 5e677c0
revert: Remove comment
dangotbanned a99d2c9
docs: Add a table preview to `Metadata`
dangotbanned 7e6da39
docs: Add examples for `Loader.__call__`
dangotbanned b49e679
refactor: Rename `DatasetName` -> `Dataset`, `VersionTag` -> `Version`
dangotbanned 7a14394
fix: Ensure latest `[tag]` appears first
dangotbanned 99f823e
refactor: Misc `models.py` updates
dangotbanned dcef1d9
docs: Update `tools.datasets.__init__.py`
dangotbanned 173f3d6
test: Fix `@datasets_debug` selection
dangotbanned 3f5a805
test: Add support for overrides in `test_all_datasets`
dangotbanned 4fc8446
test: Adds `test_metadata_columns`
dangotbanned 882af33
Merge remote-tracking branch 'upstream/main' into vega-datasets
dangotbanned 9e9deeb
fix: Warn instead of raise for hit rate limit
dangotbanned 88d4491
Merge remote-tracking branch 'upstream/main' into vega-datasets
dangotbanned ebc8dec
Merge branch 'main' into vega-datasets
dangotbanned f2823b4
Merge branch 'main' into vega-datasets
dangotbanned fa5bea8
feat: Update for `v2.11.0`
dangotbanned 95582df
feat: Always use `pl.read_csv(try_parse_dates=True)`
dangotbanned dc4a230
feat: Adds `_pl_read_json_roundtrip`
dangotbanned 7ddb2a8
feat(DRAFT): Adds infer-based `altair.datasets.load`
dangotbanned 9544d9b
refactor: Rename `Loader.with_backend` -> `Loader.from_backend`
dangotbanned 7b3a89e
feat(DRAFT): Add optional `backend` parameter for `load(...)`
dangotbanned c835c13
feat(DRAFT): Adds `altair.datasets.url`
dangotbanned 0817ff8
feat: Support `url(...)` without dependencies
dangotbanned e01fdd7
fix(DRAFT): Don't generate csv on refresh
dangotbanned 0c5195e
test: Replace rogue `NotImplementedError`
dangotbanned 5595d90
fix: Omit `.gz` last modification time header
dangotbanned 9f62151
docs: Add doc for `Application.write_csv_gzip`
dangotbanned 1bd4552
revert: Remove `"polars[pyarrow]" backend
dangotbanned 11da9c8
test: Add a complex `xfail` for `test_load_call`
dangotbanned 694ada0
refactor: Renaming/recomposing `_readers.py`
dangotbanned 6f41c7e
build: Generate `VERSION_LATEST`
dangotbanned 88d06a6
feat: Adds `_cache.py` for `UrlCache`, `DatasetCache`
dangotbanned a0d2df4
Merge remote-tracking branch 'upstream/main' into vega-datasets
dangotbanned f21b52b
ci(ruff): Ignore `0.8.0` violations
dangotbanned de03046
Merge remote-tracking branch 'upstream/main' into vega-datasets
dangotbanned e7974d9
fix: Use stable `narwhals` imports
dangotbanned 8ba48a9
Merge branch 'main' into vega-datasets
dangotbanned 9d97096
Merge branch 'main' into vega-datasets
dangotbanned a698de9
Merge remote-tracking branch 'upstream/main' into vega-datasets
dangotbanned c907dc5
revert(ruff): Ignore `0.8.0` violations
dangotbanned a3b38c4
revert: Remove `_readers._filter`
dangotbanned a6c5096
feat: Adds example and tests for disabling caching
dangotbanned 71423ea
refactor: Tidy up `DatasetCache`
dangotbanned 7dd9c18
docs: Finish `Loader.cache`
dangotbanned a982759
refactor(typing): Use `Mapping` instead of `dict`
dangotbanned d20e9c1
perf: Use `to_list()` for all backends
dangotbanned 909e7d0
feat(DRAFT): Utilize `datapackage` schemas in `pandas` backends
dangotbanned d93fda1
Merge remote-tracking branch 'upstream/main' into vega-datasets
dangotbanned 9274284
refactor(ruff): Apply `TC006` fixes in new code
dangotbanned 8e232b8
docs(DRAFT): Add notes on `datapackage.features_typing`
dangotbanned 9330895
docs: Update `Loader.from_backend` example w/ dtypes
dangotbanned caf534d
feat: Use `_pl_read_json_roundtrip` instead of `pl.read_json` for `py…
dangotbanned 75bf2ba
docs: Replace example dataset
dangotbanned 9e1fd09
Merge branch 'main' into vega-datasets
dangotbanned d4930e7
fix(ruff): resolve `RUF043` warnings
dangotbanned 5a31333
build: run `generate-schema-wrapper`
dangotbanned 6080116
chore: update schemas
dangotbanned 897e8f9
feat(typing): Update `frictionless` model hierarchy
dangotbanned 7e7b303
Merge remote-tracking branch 'upstream/main' into vega-datasets
dangotbanned fdffed0
chore: Freeze all metadata
dangotbanned e259fba
feat: Support and extract `hash` from `datapackage.json`
dangotbanned 3fa7cac
feat: Build dataset url with `datapackage.json`
dangotbanned 34b869e
revert: Removes `is_name_collision`
dangotbanned 5af3701
build: Re-enable and generate `datapackage_features.parquet`
dangotbanned c3139f1
feat: add temp `_Reader.*_dpkg` methods
dangotbanned 6035b39
test: Remove/replace all `tag` based tests
dangotbanned 5d8b6db
revert: Remove all `tag` based features
dangotbanned ff12199
Merge branch 'main' into vega-datasets
dangotbanned df26bc2
feat: Source version from `tool.altair.vega.vega-datasets`
dangotbanned 9f23ccd
refactor(DRAFT): Migrate to `datapackage.json` only
dangotbanned d297d7e
docs: Update `Metadata` example
dangotbanned 64b80ff
docs: Add missing descriptions to `Metadata`
dangotbanned a0f7585
refactor: Renaming/reorganize in `tools/`
dangotbanned 0df79b0
test: Skip `is_image` datasets
dangotbanned ee0d381
refactor: Make caching **opt-out**, use `$XDG_CACHE_HOME`
dangotbanned 138ede6
refactor(typing): Add `_iter_results` helper
dangotbanned 1a4f1c1
feat(DRAFT): Replace `UrlCache` w/ `CsvCache`
dangotbanned 32fd0f9
refactor: Misc reworking caching
dangotbanned a1839df
chore: Include `.parquet` in `metadata.csv.gz`
dangotbanned 2db8daf
feat: Extend `_extract_suffix` to support `Metadata`
dangotbanned c265e1d
refactor(typing): Simplify `Dataset` import
dangotbanned 5503e0b
fix: Convert `str` to correct types in `CsvCache`
dangotbanned 3c7c571
feat: Support `pandas` w/o a `.parquet` reader
dangotbanned c23805d
refactor: Reduce repetition w/ `_Reader._download`
dangotbanned 056f96d
feat(DRAFT): `Metadata`-based error handling
dangotbanned 9c5db19
Merge remote-tracking branch 'upstream/main' into vega-datasets
dangotbanned e168948
chore(ruff): Remove unused `0.9.2` ignores
dangotbanned 7d6b81d
Merge remote-tracking branch 'upstream/main' into vega-datasets
dangotbanned a752b3c
Merge remote-tracking branch 'upstream/main' into vega-datasets
dangotbanned 5975a8b
Merge remote-tracking branch 'upstream/main' into vega-datasets
dangotbanned 7fd1f4d
refactor: clean up, standardize `_exceptions.py`
dangotbanned 5dc227e
test: Refactor decorators, test new errors
dangotbanned ba01af1
docs: Replace outdated docs
dangotbanned 80647b6
Merge remote-tracking branch 'upstream/main' into vega-datasets
dangotbanned ad4c747
Merge remote-tracking branch 'upstream/main' into vega-datasets
dangotbanned 63f4be0
refactor: Clean up `tools.datasets`
dangotbanned 7433eb8
test: `test_datasets` overhaul
dangotbanned d64dbee
refactor: Reuse `tools.fs` more, fix `app.(read|scan)`
dangotbanned 0c72435
feat(typing): Set `"polars"` as default in `Loader.from_backend`
dangotbanned 8e4c168
docs: Adds module-level doc to `altair.datasets`
dangotbanned 106f8bb
test: Clean up `test_datasets`
dangotbanned c3c2eda
docs: Make `sphinx` happy with docs
dangotbanned d3b3ef2
refactor: Add `find_spec` fastpath to `is_available`
dangotbanned b606a7d
feat(DRAFT): Private API overhaul
dangotbanned 2203972
refactor: Simplify obsolete paths in `CsvCache`
dangotbanned e68ab89
chore: add workaround for `narwhals` bug
dangotbanned 576a9b4
feat(typing): replace `(Read|Scan)Impl` classes with aliases
dangotbanned 91562d5
feat: Rename, docs `unwrap_or` -> `unwrap_or_skip`
dangotbanned 1628cbd
refactor: Replace `._contents` w/ `.__str__()`
dangotbanned cbd04e3
fix: Use correct type for `pyarrow.csv.read_csv`
dangotbanned c0a92a6
docs: Add docs for `Read`, `Scan`, `BaseImpl`
dangotbanned 2b8bf5e
docs: Clean up `_merge_kwds`, `_solve`
dangotbanned 755ab4f
refactor(typing): Include all suffixes in `Extension`
dangotbanned
@@ -0,0 +1,151 @@
"""
Load example datasets *remotely* from `vega-datasets`_.

Provides over **70+** datasets, used throughout our `Example Gallery`_.

You can learn more about each dataset at `datapackage.md`_.

Examples
--------
Load a dataset as a ``DataFrame``/``Table``::

    from altair.datasets import load

    load("cars")

.. note::
    Requires installation of either `polars`_, `pandas`_, or `pyarrow`_.

Get the remote address of a dataset and use directly in a :class:`altair.Chart`::

    import altair as alt
    from altair.datasets import url

    source = url("co2-concentration")
    alt.Chart(source).mark_line(tooltip=True).encode(x="Date:T", y="CO2:Q")

.. note::
    Works without any additional dependencies.

For greater control over the backend library use::

    from altair.datasets import Loader

    load = Loader.from_backend("polars")
    load("penguins")
    load.url("penguins")

This method also provides *precise* <kbd>Tab</kbd> completions on the returned object::

    load("cars").<Tab>
    # bottom_k
    # drop
    # drop_in_place
    # drop_nans
    # dtypes
    # ...

.. _vega-datasets:
    https://github.com/vega/vega-datasets
.. _Example Gallery:
    https://altair-viz.github.io/gallery/index.html#example-gallery
.. _datapackage.md:
    https://github.com/vega/vega-datasets/blob/main/datapackage.md
.. _polars:
    https://docs.pola.rs/user-guide/installation/
.. _pandas:
    https://pandas.pydata.org/docs/getting_started/install.html
.. _pyarrow:
    https://arrow.apache.org/docs/python/install.html
"""

from __future__ import annotations

from typing import TYPE_CHECKING

from altair.datasets._loader import Loader

if TYPE_CHECKING:
    import sys
    from typing import Any

    if sys.version_info >= (3, 11):
        from typing import LiteralString
    else:
        from typing_extensions import LiteralString

    from altair.datasets._loader import _Load
    from altair.datasets._typing import Dataset, Extension


__all__ = ["Loader", "load", "url"]


load: _Load[Any, Any]
"""
Get a remote dataset and load as tabular data.

For full <kbd>Tab</kbd> completions, instead use::

    from altair.datasets import Loader

    load = Loader.from_backend("polars")
    cars = load("cars")
    movies = load("movies")

Alternatively, specify ``backend`` during a call::

    from altair.datasets import load

    cars = load("cars", backend="polars")
    movies = load("movies", backend="polars")
"""


def url(
    name: Dataset | LiteralString,
    suffix: Extension | None = None,
    /,
) -> str:
    """
    Return the address of a remote dataset.

    Parameters
    ----------
    name
        Name of the dataset/`Path.stem`_.
    suffix
        File extension/`Path.suffix`_.

        .. note::
            Only needed if ``name`` is available in multiple formats.

    Returns
    -------
    ``str``

    .. _Path.stem:
        https://docs.python.org/3/library/pathlib.html#pathlib.PurePath.stem
    .. _Path.suffix:
        https://docs.python.org/3/library/pathlib.html#pathlib.PurePath.suffix
    """
    from altair.datasets._exceptions import AltairDatasetsError

    try:
        from altair.datasets._loader import load

        url = load.url(name, suffix)
    except AltairDatasetsError:
        from altair.datasets._cache import csv_cache

        url = csv_cache.url(name)

    return url


def __getattr__(name):
    if name == "load":
        from altair.datasets._loader import load

        return load
    else:
        msg = f"module {__name__!r} has no attribute {name!r}"
        raise AttributeError(msg)

Review comment on lines +121 to +128: "I'm not a fan of Returns sections, but Truly surprised this solved the issue"
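The `url()` function above tries the backend-powered loader first and falls back to a lightweight cached lookup when the package's own error is raised. A generic, self-contained sketch of that fallback strategy — all names here (`DatasetsError`, `_CSV_CACHE`, `_load_url`) are hypothetical stand-ins, not altair API:

```python
class DatasetsError(Exception):
    """Stand-in for the package's own error (e.g. no backend installed)."""


# Hypothetical pre-baked lookup standing in for the csv-based cache.
_CSV_CACHE = {"co2-concentration": "https://example.invalid/data/co2-concentration.csv"}


def _load_url(name: str) -> str:
    # Simulate the full loader failing because no dataframe backend is installed.
    raise DatasetsError("no supported backend found")


def url(name: str) -> str:
    # Primary path first; fall back to the cache only on the known error type,
    # so unrelated exceptions still propagate.
    try:
        return _load_url(name)
    except DatasetsError:
        return _CSV_CACHE[name]


print(url("co2-concentration"))  # -> https://example.invalid/data/co2-concentration.csv
```

Catching only the package-specific exception keeps genuine bugs (typos, network errors of other kinds) visible rather than silently routed to the cache.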
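The module-level `__getattr__` in the diff is the lazy-attribute pattern from PEP 562: the expensive import of `load` is deferred until the attribute is first accessed. A minimal stdlib-only sketch — the module name `lazy_demo` and the sentinel value are hypothetical stand-ins:

```python
import sys
import types

# Build a throwaway module object and register it, so a normal import
# statement below resolves to it.
lazy_demo = types.ModuleType("lazy_demo")


def _module_getattr(name: str):
    if name == "load":
        # The real module would import altair.datasets._loader.load here;
        # a plain string keeps this sketch self-contained.
        return "loaded-on-demand"
    msg = f"module {lazy_demo.__name__!r} has no attribute {name!r}"
    raise AttributeError(msg)


# PEP 562: a "__getattr__" entry in a module's __dict__ is called for
# attribute lookups that would otherwise fail.
lazy_demo.__getattr__ = _module_getattr
sys.modules["lazy_demo"] = lazy_demo

import lazy_demo  # resolves to the module registered above

print(lazy_demo.load)  # -> loaded-on-demand
```

Raising `AttributeError` with the standard message for unknown names (as the real `__getattr__` does) keeps `hasattr()` and tooling behaving as they would for an eagerly-populated module.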
Related
- altair.datasets #3631 (comment)
- altair.datasets #3631 (comment)
- altair.datasets #3631 (comment)