Competitions feature #121

Merged: 48 commits from feat/competitions into main, Aug 19, 2024

Commits
1700882  competition wip (Andrewq11, Apr 11, 2024)
1cdd330  wip (Andrewq11, Apr 15, 2024)
cb60c04  wip (Andrewq11, Apr 26, 2024)
252bae0  Merge branch 'main' into feat/competitions (Andrewq11, Apr 26, 2024)
9e14e89  adding methods for interfacing w/ competitions (Andrewq11, Apr 26, 2024)
3a55b27  Continuing to integrate polaris client with the Hub for comps (Andrewq11, Apr 29, 2024)
b09fd08  comp wip (Andrewq11, May 16, 2024)
26210b2  updating date serializer (Andrewq11, May 24, 2024)
6704b6f  Competition evaluation (#103) (kiramclean, May 27, 2024)
54c42e2  Use evaluation logic directly in hub, no need for wrapper (#109) (kiramclean, May 30, 2024)
9edd693  light formatting updates (Andrewq11, May 31, 2024)
818ff12  updating fallback version for dev build (Andrewq11, Jun 5, 2024)
40c17f6  integrating results for comps (#111) (Andrewq11, Jun 7, 2024)
8c2daae  updates to enable fetching & interacting with comps (Andrewq11, Jun 8, 2024)
a4cfcbe  updating requirement for eval name (Andrewq11, Jun 12, 2024)
44c5d7f  Feat/competition/eval (#114) (kiramclean, Jun 21, 2024)
66c1913  test that all rows of a competition test set will have at least a val… (kiramclean, Jun 27, 2024)
83a77a5  Merge branch 'main' into feat/competitions (kirahowe, Jun 27, 2024)
36f04c0  update competition evaluation to support y_prob (kirahowe, Jun 28, 2024)
173a8e3  Merge remote-tracking branch 'refs/remotes/origin/feat/competitions' … (kirahowe, Jun 28, 2024)
fd16b1b  Merge branch 'main' into feat/competitions (kirahowe, Jul 29, 2024)
d50a48d  run ruff on all files and fix issues (kirahowe, Jul 31, 2024)
efd739b  fix wrong url printout after upload (kirahowe, Jul 31, 2024)
c8c462e  Clarifying typing for nested types (Andrewq11, Aug 1, 2024)
03ed6df  removing if_exists arg from comps (Andrewq11, Aug 1, 2024)
352a6d5  raising error for trying to make zarr comp (Andrewq11, Aug 1, 2024)
1169c15  updating name of ArtifactType to ArtifactSubtype (Andrewq11, Aug 1, 2024)
09f5e7d  updating comments & removing redundant class attributes (Andrewq11, Aug 1, 2024)
c7c5fb7  moving split validator logic from comp spec to benchmark spec (Andrewq11, Aug 1, 2024)
7096a0f  removing redundant checks from CompetitionDataset class (Andrewq11, Aug 1, 2024)
fd009f2  creating pydantic model for comp predictions (Andrewq11, Aug 1, 2024)
250c612  split validator logic, redundant pydantic checks, comp pred pydantic … (Andrewq11, Aug 2, 2024)
b774242  changes for comps wrap up (Andrewq11, Aug 6, 2024)
ef94203  Adding CompetitionsPredictionsType (Andrewq11, Aug 6, 2024)
7378d1f  adding conversion validator for comp prediction type (Andrewq11, Aug 6, 2024)
d5179f3  setting predictions validator as class method (Andrewq11, Aug 6, 2024)
ea4cb22  Using self instead of cls for field validators (Andrewq11, Aug 6, 2024)
889f147  removing model validation on fetch from hub (Andrewq11, Aug 7, 2024)
5686411  Creating HubOwner object in comp result eval method (Andrewq11, Aug 8, 2024)
363938b  Documentation & tutorials for competitions (Andrewq11, Aug 10, 2024)
ce7b4d5  Removing create comp method, fixing failing tests, updating benchmark… (Andrewq11, Aug 13, 2024)
6b3bfe3  Updating docs for create comp & benchmark pred structure (Andrewq11, Aug 13, 2024)
0005c78  tiny wording change in competition tutorial (Andrewq11, Aug 14, 2024)
802d3bd  Addressing PR feedback (Andrewq11, Aug 15, 2024)
54533da  fixing tests & removing dataset redefinition from CompetitionDataset … (Andrewq11, Aug 15, 2024)
d6f2472  Commenting out line in tutorial to fix test (Andrewq11, Aug 15, 2024)
fa39613  fixing formatting (Andrewq11, Aug 15, 2024)
4bac9d2  small fixes & depending on tableContent for dataset storage info (Andrewq11, Aug 16, 2024)
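
Several commits above (fd009f2 "creating pydantic model for comp predictions", 7378d1f "adding conversion validator for comp prediction type", d5179f3 "setting predictions validator as class method") describe the validation pattern behind competition predictions. As a rough, hypothetical sketch of that pattern, not the actual polaris implementation, a Pydantic "before" validator can coerce a raw NumPy array into a serializable payload:

import numpy as np
from pydantic import BaseModel, field_validator


# Illustrative model only; the real class is polaris.evaluate.CompetitionPredictions
class PredictionsSketch(BaseModel):
    name: str
    predictions: list[float]

    @field_validator("predictions", mode="before")
    @classmethod
    def _coerce_predictions(cls, value):
        # Accept the raw output of model.predict() (a NumPy array) and
        # convert it to a plain list so the payload is JSON-serializable.
        if isinstance(value, np.ndarray):
            return value.tolist()
        return value


preds = PredictionsSketch(name="demo", predictions=np.array([0.1, 0.2]))

Running the validator in "before" mode means the coercion happens prior to type checking, which is why callers can pass arrays directly.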
5 changes: 5 additions & 0 deletions docs/api/competition.dataset.md
@@ -0,0 +1,5 @@
::: polaris.dataset.CompetitionDataset
options:
filters: ["!^_"]

---
7 changes: 7 additions & 0 deletions docs/api/competition.evaluation.md
@@ -0,0 +1,7 @@
::: polaris.evaluate.CompetitionPredictions

---

::: polaris.evaluate.CompetitionResults

---
3 changes: 3 additions & 0 deletions docs/api/competition.md
@@ -0,0 +1,3 @@
::: polaris.competition.CompetitionSpecification

---
9 changes: 9 additions & 0 deletions docs/api/evaluation.md
@@ -1,3 +1,12 @@
::: polaris.evaluate.ResultsMetadata
options:
filters: ["!^_"]

---

::: polaris.evaluate.EvaluationResult

---

::: polaris.evaluate.BenchmarkResults

253 changes: 253 additions & 0 deletions docs/tutorials/competition.participate.ipynb
@@ -0,0 +1,253 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "40f99374-b47e-4f84-bdb9-148a11f9c07d",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": ""
},
"tags": []
},
"source": [
"# Participating in a Competition\n",
"\n",
"<div class=\"admonition abstract highlight\">\n",
" <p class=\"admonition-title\">In short</p>\n",
" <p>This tutorial walks you through how to fetch an active competition from Polaris, prepare your predictions and then submit them for secure evaluation by the Polaris Hub.</p>\n",
"</div>\n",
"\n",
"Participating in a competition on Polaris is very similar to participating in a standard benchmark. The main difference lies in how predictions are prepared and how they are evaluated. We'll touch on each of these topics later in the tutorial. \n",
"\n",
"Before continuing, please ensure you are logged into Polaris."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "3d66f466",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": ""
},
"tags": [
"remove_cell"
]
},
"outputs": [],
"source": [
"# Note: Cell is tagged to not show up in the mkdocs build\n",
"%load_ext autoreload\n",
"%autoreload 2"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "9b465ea4-7c71-443b-9908-3f9e567ee4c4",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"\u001b[32m2024-08-09 18:05:23.205\u001b[0m | \u001b[32m\u001b[1mSUCCESS \u001b[0m | \u001b[36mpolaris.hub.client\u001b[0m:\u001b[36mlogin\u001b[0m:\u001b[36m267\u001b[0m - \u001b[32m\u001b[1mYou are successfully logged in to the Polaris Hub.\u001b[0m\n"
]
}
],
"source": [
"import polaris as po\n",
"from polaris.hub.client import PolarisHubClient\n",
"\n",
"# Don't forget to add your Polaris Hub username below!\n",
"MY_POLARIS_USERNAME = \"\"\n",
"\n",
"client = PolarisHubClient()\n",
"client.login()"
]
},
{
"cell_type": "markdown",
"id": "5edee39f-ce29-4ae6-91ce-453d9190541b",
"metadata": {},
"source": [
"## Fetching a Competition\n",
"\n",
"As with standard benchmarks, Polaris provides simple APIs that allow you to quickly fetch a competition from the Polaris Hub. All you need is the unique identifier for the competition which follows the format of `competition_owner`/`competition_name`."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "4e004589-6c48-4232-b353-b1700536dde6",
"metadata": {},
"outputs": [],
"source": [
"competition_id = \"polaris/hello-world-competition\"\n",
"competition = po.load_competition(competition_id)"
]
},
{
"cell_type": "markdown",
"id": "36f3e829",
"metadata": {},
"source": [
"## Participate in the Competition\n",
"The Polaris library is designed to make it easy to participate in a competition. In just a few lines of code, we can get the train and test partition, access the associated data in various ways and evaluate our predictions. There's two main API endpoints. \n",
"\n",
"- `get_train_test_split()`: For creating objects through which we can access the different dataset partitions.\n",
"- `evaluate()`: For evaluating a set of predictions in accordance with the competition protocol."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d8605928",
"metadata": {},
"outputs": [],
"source": [
"train, test = competition.get_train_test_split()"
]
},
{
"cell_type": "markdown",
"id": "e78bf878",
"metadata": {},
"source": [
"The created test and train objects support various flavours to access the data."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7b17bb31",
"metadata": {},
"outputs": [],
"source": [
"# The objects are iterable\n",
"for x, y in train:\n",
" pass\n",
"\n",
"# The objects can be indexed\n",
"for i in range(len(train)):\n",
" x, y = train[i]\n",
"\n",
"# The objects have properties to access all data at once\n",
"x = train.inputs\n",
"y = train.targets"
]
},
{
"cell_type": "markdown",
"id": "5ec12825",
"metadata": {},
"source": [
"Now, let's create some predictions against the official Polaris `hello-world-competition`. We will train a simple random forest model on the ECFP representation through scikit-learn and datamol, and then we will submit our results for secure evaluation by the Polaris Hub."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "902353bc",
"metadata": {},
"outputs": [],
"source": [
"import datamol as dm\n",
"from sklearn.ensemble import RandomForestRegressor\n",
"\n",
"# Load the competition (automatically loads the underlying dataset as well)\n",
"competition = po.load_competition(\"polaris/hello-world-benchmark\")\n",
"\n",
"# Get the split and convert SMILES to ECFP fingerprints by specifying an featurize function.\n",
"train, test = competition.get_train_test_split(featurization_fn=dm.to_fp)\n",
"\n",
"# Define a model and train\n",
"model = RandomForestRegressor(max_depth=2, random_state=0)\n",
"model.fit(train.X, train.y)\n",
"\n",
"predictions = model.predict(test.X)"
]
},
{
"cell_type": "markdown",
"id": "1a36e334",
"metadata": {},
"source": [
"Now that we have created some predictions, we can construct a `CompetitionPredictions` object that will prepare our predictions for evaluation by the Polaris Hub. Here, you can also add metadata to your predictions to better describe your results and how you achieved them. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2b36e09b",
"metadata": {},
"outputs": [],
"source": [
"from polaris.evaluate import CompetitionPredictions\n",
"\n",
"competition_predictions = CompetitionPredictions(\n",
" name=\"hello-world-result\",\n",
" predictions=predictions,\n",
" github_url=\"https://github.com/polaris-hub/polaris-hub\",\n",
" paper_url=\"https://polarishub.io/\",\n",
" description=\"Hello, World!\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "5ff06a9c",
"metadata": {},
"source": [
"Once your `CompetitionPredictions` object is created, you're ready to submit them for evaluation! This will automatically save your result to the Polaris Hub, but it will be private. You can choose to make it public through the Polaris web application. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e684c611",
"metadata": {},
"outputs": [],
"source": [
"results = competition.evaluate(competition_predictions)\n",
"\n",
"client.close()"
]
},
{
"cell_type": "markdown",
"id": "44973556",
"metadata": {},
"source": [
"That's it! Just like that you have partaken in your first Polaris competition. Keep an eye on that leaderboard and best of luck in your future competitions!\n",
"\n",
"The End.\n",
"\n",
"---"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
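
The tutorial above targets a regression competition. Commit 36f04c0 updates competition evaluation to also support y_prob, so probability-based metrics should be possible too. Below is a minimal, hedged sketch of what a classification submission might look like, assuming a hypothetical competition id and that probabilities travel through the same `predictions` field (the exact interface may differ):

import datamol as dm
from sklearn.ensemble import RandomForestClassifier

import polaris as po
from polaris.evaluate import CompetitionPredictions

# Hypothetical competition id, for illustration only
competition = po.load_competition("polaris/hello-world-classification")
train, test = competition.get_train_test_split(featurization_fn=dm.to_fp)

# Train a simple classifier on ECFP fingerprints
clf = RandomForestClassifier(random_state=0)
clf.fit(train.X, train.y)

# Probability of the positive class, as used by metrics such as AUROC
y_prob = clf.predict_proba(test.X)[:, 1]

competition_predictions = CompetitionPredictions(
    name="hello-world-classification-result",
    predictions=y_prob,  # assumption: probabilities are passed via `predictions`
)
results = competition.evaluate(competition_predictions)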
12 changes: 6 additions & 6 deletions docs/tutorials/custom_dataset_benchmark.ipynb
@@ -393,7 +393,7 @@
},
"outputs": [],
"source": [
"from polaris.hub.client import PolarisHubClient\n",
"# from polaris.hub.client import PolarisHubClient\n",
"\n",
"# NOTE: Commented out to not flood the DB\n",
"# with PolarisHubClient() as client:\n",
@@ -491,11 +491,11 @@
"evalue": "1 validation error for MultiTaskBenchmarkSpecification\ntarget_cols\n Value error, A multi-task benchmark should specify at least two target columns [type=value_error, input_value='LOG SOLUBILITY PH 6.8 (ug/mL)', input_type=str]\n For further information visit https://errors.pydantic.dev/2.4/v/value_error",
"output_type": "error",
"traceback": [
"\u001B[0;31m---------------------------------------------------------------------------\u001B[0m",
"\u001B[0;31mValidationError\u001B[0m Traceback (most recent call last)",
"\u001B[1;32m/Users/cas.wognum/Documents/repositories/polaris/docs/tutorials/custom_dataset_benchmark.ipynb Cell 25\u001B[0m line \u001B[0;36m3\n\u001B[1;32m <a href='vscode-notebook-cell:/Users/cas.wognum/Documents/repositories/polaris/docs/tutorials/custom_dataset_benchmark.ipynb#X33sZmlsZQ%3D%3D?line=0'>1</a>\u001B[0m \u001B[39mfrom\u001B[39;00m \u001B[39mpolaris\u001B[39;00m\u001B[39m.\u001B[39;00m\u001B[39mbenchmark\u001B[39;00m \u001B[39mimport\u001B[39;00m MultiTaskBenchmarkSpecification\n\u001B[0;32m----> <a href='vscode-notebook-cell:/Users/cas.wognum/Documents/repositories/polaris/docs/tutorials/custom_dataset_benchmark.ipynb#X33sZmlsZQ%3D%3D?line=2'>3</a>\u001B[0m benchmark \u001B[39m=\u001B[39m MultiTaskBenchmarkSpecification(\n\u001B[1;32m <a href='vscode-notebook-cell:/Users/cas.wognum/Documents/repositories/polaris/docs/tutorials/custom_dataset_benchmark.ipynb#X33sZmlsZQ%3D%3D?line=3'>4</a>\u001B[0m dataset\u001B[39m=\u001B[39;49mdataset,\n\u001B[1;32m <a href='vscode-notebook-cell:/Users/cas.wognum/Documents/repositories/polaris/docs/tutorials/custom_dataset_benchmark.ipynb#X33sZmlsZQ%3D%3D?line=4'>5</a>\u001B[0m target_cols\u001B[39m=\u001B[39;49m\u001B[39m\"\u001B[39;49m\u001B[39mLOG SOLUBILITY PH 6.8 (ug/mL)\u001B[39;49m\u001B[39m\"\u001B[39;49m,\n\u001B[1;32m <a href='vscode-notebook-cell:/Users/cas.wognum/Documents/repositories/polaris/docs/tutorials/custom_dataset_benchmark.ipynb#X33sZmlsZQ%3D%3D?line=5'>6</a>\u001B[0m input_cols\u001B[39m=\u001B[39;49m\u001B[39m\"\u001B[39;49m\u001B[39mSMILES\u001B[39;49m\u001B[39m\"\u001B[39;49m,\n\u001B[1;32m <a href='vscode-notebook-cell:/Users/cas.wognum/Documents/repositories/polaris/docs/tutorials/custom_dataset_benchmark.ipynb#X33sZmlsZQ%3D%3D?line=6'>7</a>\u001B[0m split\u001B[39m=\u001B[39;49msplit,\n\u001B[1;32m <a href='vscode-notebook-cell:/Users/cas.wognum/Documents/repositories/polaris/docs/tutorials/custom_dataset_benchmark.ipynb#X33sZmlsZQ%3D%3D?line=7'>8</a>\u001B[0m metrics\u001B[39m=\u001B[39;49m\u001B[39m\"\u001B[39;49m\u001B[39mmean_absolute_error\u001B[39;49m\u001B[39m\"\u001B[39;49m,\n\u001B[1;32m <a href='vscode-notebook-cell:/Users/cas.wognum/Documents/repositories/polaris/docs/tutorials/custom_dataset_benchmark.ipynb#X33sZmlsZQ%3D%3D?line=8'>9</a>\u001B[0m )\n",
"File \u001B[0;32m~/micromamba/envs/polaris/lib/python3.11/site-packages/pydantic/main.py:164\u001B[0m, in \u001B[0;36mBaseModel.__init__\u001B[0;34m(__pydantic_self__, **data)\u001B[0m\n\u001B[1;32m 162\u001B[0m \u001B[39m# `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks\u001B[39;00m\n\u001B[1;32m 163\u001B[0m __tracebackhide__ \u001B[39m=\u001B[39m \u001B[39mTrue\u001B[39;00m\n\u001B[0;32m--> 164\u001B[0m __pydantic_self__\u001B[39m.\u001B[39;49m__pydantic_validator__\u001B[39m.\u001B[39;49mvalidate_python(data, self_instance\u001B[39m=\u001B[39;49m__pydantic_self__)\n",
"\u001B[0;31mValidationError\u001B[0m: 1 validation error for MultiTaskBenchmarkSpecification\ntarget_cols\n Value error, A multi-task benchmark should specify at least two target columns [type=value_error, input_value='LOG SOLUBILITY PH 6.8 (ug/mL)', input_type=str]\n For further information visit https://errors.pydantic.dev/2.4/v/value_error"
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mValidationError\u001b[0m Traceback (most recent call last)",
"\u001b[1;32m/Users/cas.wognum/Documents/repositories/polaris/docs/tutorials/custom_dataset_benchmark.ipynb Cell 25\u001b[0m line \u001b[0;36m3\n\u001b[1;32m <a href='vscode-notebook-cell:/Users/cas.wognum/Documents/repositories/polaris/docs/tutorials/custom_dataset_benchmark.ipynb#X33sZmlsZQ%3D%3D?line=0'>1</a>\u001b[0m \u001b[39mfrom\u001b[39;00m \u001b[39mpolaris\u001b[39;00m\u001b[39m.\u001b[39;00m\u001b[39mbenchmark\u001b[39;00m \u001b[39mimport\u001b[39;00m MultiTaskBenchmarkSpecification\n\u001b[0;32m----> <a href='vscode-notebook-cell:/Users/cas.wognum/Documents/repositories/polaris/docs/tutorials/custom_dataset_benchmark.ipynb#X33sZmlsZQ%3D%3D?line=2'>3</a>\u001b[0m benchmark \u001b[39m=\u001b[39m MultiTaskBenchmarkSpecification(\n\u001b[1;32m <a href='vscode-notebook-cell:/Users/cas.wognum/Documents/repositories/polaris/docs/tutorials/custom_dataset_benchmark.ipynb#X33sZmlsZQ%3D%3D?line=3'>4</a>\u001b[0m dataset\u001b[39m=\u001b[39;49mdataset,\n\u001b[1;32m <a href='vscode-notebook-cell:/Users/cas.wognum/Documents/repositories/polaris/docs/tutorials/custom_dataset_benchmark.ipynb#X33sZmlsZQ%3D%3D?line=4'>5</a>\u001b[0m target_cols\u001b[39m=\u001b[39;49m\u001b[39m\"\u001b[39;49m\u001b[39mLOG SOLUBILITY PH 6.8 (ug/mL)\u001b[39;49m\u001b[39m\"\u001b[39;49m,\n\u001b[1;32m <a href='vscode-notebook-cell:/Users/cas.wognum/Documents/repositories/polaris/docs/tutorials/custom_dataset_benchmark.ipynb#X33sZmlsZQ%3D%3D?line=5'>6</a>\u001b[0m input_cols\u001b[39m=\u001b[39;49m\u001b[39m\"\u001b[39;49m\u001b[39mSMILES\u001b[39;49m\u001b[39m\"\u001b[39;49m,\n\u001b[1;32m <a href='vscode-notebook-cell:/Users/cas.wognum/Documents/repositories/polaris/docs/tutorials/custom_dataset_benchmark.ipynb#X33sZmlsZQ%3D%3D?line=6'>7</a>\u001b[0m split\u001b[39m=\u001b[39;49msplit,\n\u001b[1;32m <a href='vscode-notebook-cell:/Users/cas.wognum/Documents/repositories/polaris/docs/tutorials/custom_dataset_benchmark.ipynb#X33sZmlsZQ%3D%3D?line=7'>8</a>\u001b[0m metrics\u001b[39m=\u001b[39;49m\u001b[39m\"\u001b[39;49m\u001b[39mmean_absolute_error\u001b[39;49m\u001b[39m\"\u001b[39;49m,\n\u001b[1;32m <a href='vscode-notebook-cell:/Users/cas.wognum/Documents/repositories/polaris/docs/tutorials/custom_dataset_benchmark.ipynb#X33sZmlsZQ%3D%3D?line=8'>9</a>\u001b[0m )\n",
"File \u001b[0;32m~/micromamba/envs/polaris/lib/python3.11/site-packages/pydantic/main.py:164\u001b[0m, in \u001b[0;36mBaseModel.__init__\u001b[0;34m(__pydantic_self__, **data)\u001b[0m\n\u001b[1;32m 162\u001b[0m \u001b[39m# `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks\u001b[39;00m\n\u001b[1;32m 163\u001b[0m __tracebackhide__ \u001b[39m=\u001b[39m \u001b[39mTrue\u001b[39;00m\n\u001b[0;32m--> 164\u001b[0m __pydantic_self__\u001b[39m.\u001b[39;49m__pydantic_validator__\u001b[39m.\u001b[39;49mvalidate_python(data, self_instance\u001b[39m=\u001b[39;49m__pydantic_self__)\n",
"\u001b[0;31mValidationError\u001b[0m: 1 validation error for MultiTaskBenchmarkSpecification\ntarget_cols\n Value error, A multi-task benchmark should specify at least two target columns [type=value_error, input_value='LOG SOLUBILITY PH 6.8 (ug/mL)', input_type=str]\n For further information visit https://errors.pydantic.dev/2.4/v/value_error"
]
}
],
13 changes: 7 additions & 6 deletions docs/tutorials/optimization.ipynb
@@ -72,7 +72,7 @@
"metadata": {},
"outputs": [],
"source": [
"# Let's create a dummy dataset with two columns \n",
"# Let's create a dummy dataset with two columns\n",
"rng = np.random.default_rng(0)\n",
"col_a = rng.choice(list(range(100)), 10000)\n",
"col_b = rng.random(10000)\n",
@@ -195,10 +195,10 @@
"import zarr\n",
"from tempfile import mkdtemp\n",
"\n",
"tmpdir = mkdtemp()\n",
"tmpdir = mkdtemp()\n",
"\n",
"# For the ones familiar with Zarr, this is not optimized at all. \n",
"# If you wouldn't want to convert to NumPy, you would want to \n",
"# For the ones familiar with Zarr, this is not optimized at all.\n",
"# If you wouldn't want to convert to NumPy, you would want to\n",
"# optimize the chunking / compression.\n",
"\n",
"path = os.path.join(tmpdir, \"data.zarr\")\n",
@@ -276,7 +276,7 @@
],
"source": [
"%%timeit\n",
"for batch in dataloader: \n",
"for batch in dataloader:\n",
" pass"
]
},
@@ -314,7 +314,7 @@
],
"source": [
"%%timeit\n",
"for batch in dataloader: \n",
"for batch in dataloader:\n",
" pass"
]
},
@@ -336,6 +336,7 @@
"outputs": [],
"source": [
"from shutil import rmtree\n",
"\n",
"rmtree(tmpdir)"
]
},
6 changes: 6 additions & 0 deletions mkdocs.yml
@@ -23,13 +23,19 @@ nav:
- Zarr Datasets: tutorials/dataset_zarr.ipynb
- Dataset Factory: tutorials/dataset_factory.ipynb
- Optimization: tutorials/optimization.ipynb
- Competitions:
- tutorials/competition.participate.ipynb
- API Reference:
- Load: api/load.md
- Core:
- Dataset: api/dataset.md
- Benchmark: api/benchmark.md
- Subset: api/subset.md
- Evaluation: api/evaluation.md
- Competitions:
- Competition Dataset: api/competition.dataset.md
- Competition: api/competition.md
- Competition Evaluation: api/competition.evaluation.md
- Hub:
- Client: api/hub.client.md
- PolarisFileSystem: api/hub.polarisfs.md
4 changes: 2 additions & 2 deletions polaris/__init__.py
@@ -4,9 +4,9 @@
from loguru import logger

from ._version import __version__
from .loader import load_benchmark, load_dataset
from .loader import load_benchmark, load_dataset, load_competition

__all__ = ["load_dataset", "load_benchmark", "__version__"]
__all__ = ["load_dataset", "load_benchmark", "__version__", "load_competition"]

# Configure the default logging level
os.environ["LOGURU_LEVEL"] = os.environ.get("LOGURU_LEVEL", "INFO")
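
With `load_competition` exported from the top-level package, the participation flow from the tutorial reduces to a few lines:

import polaris as po

# New top-level entry point added by this PR
competition = po.load_competition("polaris/hello-world-competition")

# Same split-and-evaluate workflow as standard benchmarks
train, test = competition.get_train_test_split()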