From 325c5df722d83919bb96cd894249769e64a9b356 Mon Sep 17 00:00:00 2001
From: Andy Stoneberg
Date: Thu, 9 Jan 2025 14:04:14 -0500
Subject: [PATCH] fix(test): ensure papermill tests run successfully for all supported notebooks

Our testing framework had a fundamental issue in that it would, for certain images, naively run the test suite of parent images against a child image. In the case of `tensorflow` images, this caused problems because the (child) `tensorflow` image would **purposefully** _downgrade_ a package version given compatibility issues. Furthermore, when attempting to verify the changes that address this core issue, numerous other bugs were encountered that needed to be fixed.

Significant changes introduced in this commit:
- logic of the `test-%` Makefile target was moved into a shell script named `test_jupyter_with_papermill.sh`
- test resources that need to be copied into the pod are now retrieved from the locally checked out repo
  - previously they were pulled from a remote branch via `wget` (defaulting to the `main` branch)
  - this ensures our PR checks always exercise any updated files
- `test_notebook.ipynb` files now expect an `expected_versions.json` file to exist within the same directory
  - the `expected_versions.json` file is derived from the relevant `...-notebook-imagestream.yaml` manifest and leveraged when asserting on dependency versions
  - admittedly the duplication of various helper functions across all the notebook files is annoying, but it helps keep the `.ipynb` files self-contained
- `...-notebook-imagestream.yaml` manifests had annotations updated to include any package whose version is asserted by a unit test in `test_notebook.ipynb`
- the CUDA tensorflow unit test that converts to ONNX was updated to be actually functional

Minor changes introduced in this commit:
- use `ifdef OS` in the Makefile to avoid warnings about an undefined variable
- all `test_notebook.ipynb` files:
  - have an `id` attribute defined in metadata
  - specify the `nbformat` as `4.5`
- the more compute-intensive notebooks had `readinessProbe` and `livenessProbe` settings updated to be less aggressive
  - liveness checks were observed sporadically failing while the notebook was being tested, and this update seemed to help
- the `trustyai` notebook now runs the minimal and datascience papermill tests (similar to `tensorflow` and `pytorch`) instead of including that test code within its own `test_notebook.ipynb` file
- various "quality of life" improvements were introduced into `test_jupyter_with_papermill.sh`
  - properly invoke tests for any valid/supported target
    - previously certain test targets required manual intervention in order to run end to end
  - improved logging (particularly when running with the `-x` flag)
  - more modular structure to hopefully improve readability
  - script now honors proper shell return code semantics (i.e. returns `0` on success)

It should also be noted that while most `intel` notebooks are now passing the papermill tests, there are still issues with the `intel` `tensorflow` unit tests. More detail is captured in the JIRA ticket related to this commit. However, we are imminently removing `intel` logic from the `notebooks` repo entirely, so I didn't want to burn any more time trying to get that last notebook to pass, as it will be removed shortly!
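For reference, a minimal sketch of the helper pattern each test notebook now embeds to consume `expected_versions.json` (described in detail below). The function names mirror the notebook diffs in this patch; the sample JSON content shown in the comment is illustrative only, not the exact manifest data:

```python
# Sketch, assuming expected_versions.json sits next to the notebook and has the
# shape {"Python": "3.11", "PyTorch": "2.4", ...} derived from the imagestream.
from pathlib import Path
import json
import re


def get_major_minor(s: str) -> str:
    # '3.11.7' -> '3.11'
    return '.'.join(s.split('.')[:2])


def load_expected_versions() -> dict:
    # expected_versions.json is copied into the pod alongside test_notebook.ipynb
    with open(Path('./expected_versions.json'), 'r') as file:
        return json.load(file)


def get_expected_version(dependency_name: str) -> str:
    # Strip any non-numeric prefix (e.g. 'v2.4') before reducing to major.minor
    raw_value = expected_versions.get(dependency_name)
    raw_version = re.sub(r'^\D+', '', raw_value)
    return get_major_minor(raw_version)


expected_versions = load_expected_versions()
print(get_expected_version('Python'))  # e.g. '3.11'
```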
Further details on `expected_versions.json`:
- `yq` (which is installed via the `Makefile`) is used to:
  - query the relevant imagestream manifest to parse out the `["opendatahub.io/notebook-software"]` and `["opendatahub.io/notebook-python-dependencies"]` annotations from the first element of the `spec.tags` attribute
  - inject name/version elements for `nbdime` and `nbgitpuller` (which are asserted against in `minimal` notebook tests)
  - convert this `yaml` list to a JSON object of the form `{<name>: <version>}`
- this JSON object is then copied into the running notebook workload, into the same directory where `test_notebook.ipynb` resides
- each `test_notebook.ipynb` has a couple of helper functions defined to interact with this file:
  - `def load_expected_versions() -> dict:`
  - `def get_expected_version(dependency_name: str) -> str:`
- the argument provided to the `get_expected_version` function should match the `name` attribute of the JSON structure defined in the imagestream manifest

Related-to: https://issues.redhat.com/browse/RHOAIENG-16587
---
 .gitignore | 1 + Makefile | 71 +-- .../ubi9-python-3.11/test/test_notebook.ipynb | 37 +- .../kustomize/base/statefulset.yaml | 10 +- .../ubi9-python-3.11/test/test_notebook.ipynb | 52 ++- .../kustomize/base/statefulset.yaml | 10 +- .../ubi9-python-3.11/test/test_notebook.ipynb | 37 +- .../test/test_notebook_cpu.ipynb | 41 +- .../kustomize/base/statefulset.yaml | 18 +- .../ubi9-python-3.11/test/test_notebook.ipynb | 44 +- .../test/test_notebook_cpu.ipynb | 46 +- .../ubi9-python-3.11/test/test_notebook.ipynb | 28 +- .../kustomize/base/statefulset.yaml | 10 +- .../ubi9-python-3.11/test/test_notebook.ipynb | 29 +- .../ubi9-python-3.11/test/test_notebook.ipynb | 35 +- .../kustomize/base/statefulset.yaml | 10 +- .../ubi9-python-3.11/test/test_notebook.ipynb | 35 +- .../ubi9-python-3.11/test/test_notebook.ipynb | 55 ++- .../ubi9-python-3.11/test/test_notebook.ipynb | 208 +-------- ...yter-datascience-notebook-imagestream.yaml | 1 + .../jupyter-pytorch-notebook-imagestream.yaml | 1 + ...ter-rocm-pytorch-notebook-imagestream.yaml | 1 + ...-rocm-tensorflow-notebook-imagestream.yaml | 1 + ...pyter-tensorflow-notebook-imagestream.yaml | 1 + ...jupyter-trustyai-notebook-imagestream.yaml | 1 + ...jupyter-intel-ml-notebook-imagestream.yaml | 5 +- ...er-intel-pytorch-notebook-imagestream.yaml | 5 +- ...intel-tensorflow-notebook-imagestream.yaml | 3 +- scripts/test_jupyter_with_papermill.sh | 430 ++++++++++++++++++ 29 files changed, 846 insertions(+), 380 deletions(-) create mode 100755 scripts/test_jupyter_with_papermill.sh diff --git a/.gitignore b/.gitignore index be7d7a0b7..552631d2e 100644 --- a/.gitignore +++ b/.gitignore @@ -128,6 +128,7 @@ venv/ ENV/ env.bak/ venv.bak/ +.DS_store # Spyder project settings .spyderproject diff --git a/Makefile b/Makefile index 996a0e1bc..515fc7eee 100644 --- a/Makefile +++ b/Makefile @@ -28,13 +28,15 @@ BUILD_DEPENDENT_IMAGES ?= yes PUSH_IMAGES ?= yes # OS dependant: Generate date, select appropriate cmd to locate container engine -ifeq ($(OS), Windows_NT) - DATE ?= $(shell powershell -Command "Get-Date -Format 'yyyyMMdd'") - WHERE_WHICH ?= where -else - DATE ?= $(shell date +'%Y%m%d') - WHERE_WHICH ?= which +ifdef OS + ifeq ($(OS), Windows_NT) + DATE ?= $(shell powershell -Command "Get-Date -Format 'yyyyMMdd'") + WHERE_WHICH ?= where + endif endif +DATE ?= $(shell date +'%Y%m%d') +WHERE_WHICH ?= which + # linux/amd64 or darwin/arm64 OS_ARCH=$(shell go env GOOS)/$(shell go env GOARCH) @@ -340,64 +342,11 @@ undeploy-c9s-%: bin/kubectl $(info 
# Undeploying notebook from $(NOTEBOOK_DIR) directory...) $(KUBECTL_BIN) delete -k $(NOTEBOOK_DIR) -# Function for testing a notebook with papermill -# ARG 1: Notebook name -# ARG 1: UBI flavor -# ARG 1: Python kernel -define test_with_papermill - $(eval PREFIX_NAME := $(subst /,-,$(1)_$(2))) - $(KUBECTL_BIN) exec $(FULL_NOTEBOOK_NAME) -- /bin/sh -c "python3 -m pip install papermill" - if ! $(KUBECTL_BIN) exec $(FULL_NOTEBOOK_NAME) -- /bin/sh -c "wget ${NOTEBOOK_REPO_BRANCH_BASE}/jupyter/$(1)/$(2)-$(3)/test/test_notebook.ipynb -O test_notebook.ipynb && python3 -m papermill test_notebook.ipynb $(PREFIX_NAME)_output.ipynb --kernel python3 --stderr-file $(PREFIX_NAME)_error.txt" ; then - echo "ERROR: The $(1) $(2) notebook encountered a failure. To investigate the issue, you can review the logs located in the ocp-ci cluster on 'artifacts/notebooks-e2e-tests/jupyter-$(1)-$(2)-$(3)-test-e2e' directory or run 'cat $(PREFIX_NAME)_error.txt' within your container. The make process has been aborted." - exit 1 - fi - if $(KUBECTL_BIN) exec $(FULL_NOTEBOOK_NAME) -- /bin/sh -c "cat $(PREFIX_NAME)_error.txt | grep --quiet FAILED" ; then - echo "ERROR: The $(1) $(2) notebook encountered a failure. The make process has been aborted." - $(KUBECTL_BIN) exec $(FULL_NOTEBOOK_NAME) -- /bin/sh -c "cat $(PREFIX_NAME)_error.txt" - exit 1 - fi -endef - # Verify the notebook's readiness by pinging the /api endpoint and executing the corresponding test_notebook.ipynb file in accordance with the build chain logic. .PHONY: test test-%: bin/kubectl - # Verify the notebook's readiness by pinging the /api endpoint - $(eval NOTEBOOK_NAME := $(subst .,-,$(subst cuda-,,$*))) - $(eval PYTHON_VERSION := $(shell echo $* | sed 's/.*-python-//')) - $(info # Running tests for $(NOTEBOOK_NAME) notebook...) 
- $(KUBECTL_BIN) wait --for=condition=ready pod -l app=$(NOTEBOOK_NAME) --timeout=600s - $(KUBECTL_BIN) port-forward svc/$(NOTEBOOK_NAME)-notebook 8888:8888 & curl --retry 5 --retry-delay 5 --retry-connrefused http://localhost:8888/notebook/opendatahub/jovyan/api ; EXIT_CODE=$$?; echo && pkill --full "^$(KUBECTL_BIN).*port-forward.*" - $(eval FULL_NOTEBOOK_NAME = $(shell ($(KUBECTL_BIN) get pods -l app=$(NOTEBOOK_NAME) -o custom-columns=":metadata.name" | tr -d '\n'))) - - # Tests notebook's functionalities - if echo "$(FULL_NOTEBOOK_NAME)" | grep -q "minimal-ubi9"; then - $(call test_with_papermill,minimal,ubi9,python-$(PYTHON_VERSION)) - elif echo "$(FULL_NOTEBOOK_NAME)" | grep -q "intel-tensorflow-ubi9"; then - $(call test_with_papermill,intel/tensorflow,ubi9,python-$(PYTHON_VERSION)) - elif echo "$(FULL_NOTEBOOK_NAME)" | grep -q "intel-pytorch-ubi9"; then - $(call test_with_papermill,intel/pytorch,ubi9,python-$(PYTHON_VERSION)) - elif echo "$(FULL_NOTEBOOK_NAME)" | grep -q "datascience-ubi9"; then - $(MAKE) validate-ubi9-datascience PYTHON_VERSION=$(PYTHON_VERSION) -e FULL_NOTEBOOK_NAME=$(FULL_NOTEBOOK_NAME) - elif echo "$(FULL_NOTEBOOK_NAME)" | grep -q "pytorch-ubi9"; then - $(MAKE) validate-ubi9-datascience PYTHON_VERSION=$(PYTHON_VERSION) -e FULL_NOTEBOOK_NAME=$(FULL_NOTEBOOK_NAME) - $(call test_with_papermill,pytorch,ubi9,python-$(PYTHON_VERSION)) - elif echo "$(FULL_NOTEBOOK_NAME)" | grep -q "tensorflow-ubi9"; then - $(MAKE) validate-ubi9-datascience PYTHON_VERSION=$(PYTHON_VERSION) -e FULL_NOTEBOOK_NAME=$(FULL_NOTEBOOK_NAME) - $(call test_with_papermill,tensorflow,ubi9,python-$(PYTHON_VERSION)) - elif echo "$(FULL_NOTEBOOK_NAME)" | grep -q "intel-ml-ubi9"; then - $(call test_with_papermill,intel/ml,ubi9,python-$(PYTHON_VERSION)) - elif echo "$(FULL_NOTEBOOK_NAME)" | grep -q "trustyai-ubi9"; then - $(call test_with_papermill,trustyai,ubi9,python-$(PYTHON_VERSION)) - elif echo "$(FULL_NOTEBOOK_NAME)" | grep -q "anaconda"; then - echo "There is no test notebook implemented yet for Anaconda Notebook...." - else - echo "No matching condition found for $(FULL_NOTEBOOK_NAME)." - fi - -.PHONY: validate-ubi9-datascience -validate-ubi9-datascience: - $(call test_with_papermill,minimal,ubi9,python-$(PYTHON_VERSION)) - $(call test_with_papermill,datascience,ubi9,python-$(PYTHON_VERSION)) + $(info # Running tests for $* notebook...) 
+ @./scripts/test_jupyter_with_papermill.sh $* # Validate that runtime image meets minimum criteria # This validation is created from subset of https://github.com/elyra-ai/elyra/blob/9c417d2adc9d9f972de5f98fd37f6945e0357ab9/Makefile#L325 diff --git a/jupyter/datascience/ubi9-python-3.11/test/test_notebook.ipynb b/jupyter/datascience/ubi9-python-3.11/test/test_notebook.ipynb index 90b18b843..fe8917433 100644 --- a/jupyter/datascience/ubi9-python-3.11/test/test_notebook.ipynb +++ b/jupyter/datascience/ubi9-python-3.11/test/test_notebook.ipynb @@ -7,6 +7,9 @@ "metadata": {}, "outputs": [], "source": [ + "from pathlib import Path\n", + "import json\n", + "import re\n", "import unittest\n", "from unittest import mock\n", "from platform import python_version\n", @@ -24,22 +27,35 @@ "import kafka\n", "from kafka import KafkaConsumer, KafkaProducer, TopicPartition\n", "from kafka.producer.buffer import SimpleBufferPool\n", - "from kafka import KafkaConsumer\n", "from kafka.errors import KafkaConfigurationError\n", "import boto3\n", "\n", "def get_major_minor(s):\n", " return '.'.join(s.split('.')[:2])\n", "\n", + "def load_expected_versions() -> dict:\n", + " lock_file = Path('./expected_versions.json')\n", + " data = {}\n", + "\n", + " with open(lock_file, 'r') as file:\n", + " data = json.load(file)\n", + "\n", + " return data \n", + "\n", + "def get_expected_version(dependency_name: str) -> str:\n", + " raw_value = expected_versions.get(dependency_name)\n", + " raw_version = re.sub(r'^\\D+', '', raw_value)\n", + " return get_major_minor(raw_version)\n", + "\n", "class TestPythonVersion(unittest.TestCase):\n", " def test_version(self):\n", - " expected_major_minor = '3.11'\n", + " expected_major_minor = expected_major_minor = get_expected_version('Python')\n", " actual_major_minor = get_major_minor(python_version())\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", "class TestPandas(unittest.TestCase):\n", " def test_version(self):\n", - " expected_major_minor = '2.2'\n", + " expected_major_minor = get_expected_version('Pandas')\n", " actual_major_minor = get_major_minor(pd.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", @@ -94,7 +110,7 @@ "\n", "class TestNumpy(unittest.TestCase):\n", " def test_version(self):\n", - " expected_major_minor = '2.1'\n", + " expected_major_minor = get_expected_version('Numpy')\n", " actual_major_minor = get_major_minor(np.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", @@ -119,7 +135,7 @@ "\n", "class TestScipy(unittest.TestCase):\n", " def test_version(self):\n", - " expected_major_minor = '1.14'\n", + " expected_major_minor = get_expected_version('Scipy')\n", " actual_major_minor = get_major_minor(scipy.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", @@ -136,7 +152,7 @@ "\n", "class TestSKLearn(unittest.TestCase):\n", " def test_version(self):\n", - " expected_major_minor = '1.5'\n", + " expected_major_minor = get_expected_version('Scikit-learn')\n", " actual_major_minor = get_major_minor(sklearn.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", @@ -160,7 +176,7 @@ "class TestMatplotlib(unittest.TestCase):\n", "\n", " def test_version(self):\n", - " expected_major_minor = '3.9'\n", + " expected_major_minor = get_expected_version('Matplotlib')\n", " actual_major_minor = 
get_major_minor(matplotlib.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", @@ -171,7 +187,7 @@ "class TestKafkaPython(unittest.TestCase):\n", "\n", " def test_version(self):\n", - " expected_major_minor = '2.2'\n", + " expected_major_minor = get_expected_version('Kafka-Python-ng')\n", " actual_major_minor = get_major_minor(kafka.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", @@ -193,7 +209,7 @@ "class TestBoto3(unittest.TestCase):\n", "\n", " def test_version(self):\n", - " expected_major_minor = '1.35'\n", + " expected_major_minor = get_expected_version('Boto3')\n", " actual_major_minor = get_major_minor(boto3.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", @@ -212,6 +228,7 @@ "\n", " self.assertEqual(boto3.DEFAULT_SESSION, session)\n", "\n", + "expected_versions = load_expected_versions()\n", "unittest.main(argv=[''], verbosity=2, exit=False)" ] } @@ -223,5 +240,5 @@ "orig_nbformat": 4 }, "nbformat": 4, - "nbformat_minor": 2 + "nbformat_minor": 5 } diff --git a/jupyter/intel/ml/ubi9-python-3.11/kustomize/base/statefulset.yaml b/jupyter/intel/ml/ubi9-python-3.11/kustomize/base/statefulset.yaml index b06644e2d..59e0c568b 100644 --- a/jupyter/intel/ml/ubi9-python-3.11/kustomize/base/statefulset.yaml +++ b/jupyter/intel/ml/ubi9-python-3.11/kustomize/base/statefulset.yaml @@ -36,8 +36,9 @@ spec: livenessProbe: tcpSocket: port: notebook-port - initialDelaySeconds: 5 - periodSeconds: 5 + initialDelaySeconds: 15 + periodSeconds: 10 + timeoutSeconds: 5 successThreshold: 1 failureThreshold: 3 readinessProbe: @@ -45,8 +46,9 @@ spec: path: /notebook/opendatahub/jovyan/api port: notebook-port scheme: HTTP - initialDelaySeconds: 10 - periodSeconds: 5 + initialDelaySeconds: 15 + periodSeconds: 10 + timeoutSeconds: 5 successThreshold: 1 failureThreshold: 3 resources: diff --git a/jupyter/intel/ml/ubi9-python-3.11/test/test_notebook.ipynb b/jupyter/intel/ml/ubi9-python-3.11/test/test_notebook.ipynb index 2f3439e18..3e54bd15c 100644 --- a/jupyter/intel/ml/ubi9-python-3.11/test/test_notebook.ipynb +++ b/jupyter/intel/ml/ubi9-python-3.11/test/test_notebook.ipynb @@ -3,11 +3,15 @@ { "cell_type": "code", "execution_count": null, + "id": "bac7ee1b-65c4-4515-8006-7ba01e843906", "metadata": { "tags": [] }, "outputs": [], "source": [ + "from pathlib import Path\n", + "import json\n", + "import re\n", "import unittest\n", "from unittest import mock\n", "from platform import python_version\n", @@ -27,7 +31,6 @@ "from kafka.producer.buffer import SimpleBufferPool\n", "from kafka import KafkaConsumer\n", "from kafka.errors import KafkaConfigurationError\n", - "import boto3\n", "import kfp_tekton\n", "import kfp\n", "from kfp import LocalClient, run_pipeline_func_locally\n", @@ -37,15 +40,29 @@ "def get_major_minor(s):\n", " return '.'.join(s.split('.')[:2])\n", "\n", + "def load_expected_versions() -> dict:\n", + " lock_file = Path('./expected_versions.json')\n", + " data = {}\n", + "\n", + " with open(lock_file, 'r') as file:\n", + " data = json.load(file)\n", + "\n", + " return data \n", + "\n", + "def get_expected_version(dependency_name: str) -> str:\n", + " raw_value = expected_versions.get(dependency_name)\n", + " raw_version = re.sub(r'^\\D+', '', raw_value)\n", + " return get_major_minor(raw_version)\n", + " \n", "class TestPythonVersion(unittest.TestCase):\n", " def test_version(self):\n", - " expected_major_minor = 
'3.11'\n", + " expected_major_minor = get_expected_version('Python')\n", " actual_major_minor = get_major_minor(python_version())\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", "class TestModin(unittest.TestCase):\n", " def test_version(self):\n", - " expected_major_minor = '0.24'\n", + " expected_major_minor = get_expected_version('Modin')\n", " actual_major_minor = get_major_minor(pdm.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", @@ -61,7 +78,7 @@ "\n", "class TestPandas(unittest.TestCase):\n", " def test_version(self):\n", - " expected_major_minor = '2.1'\n", + " expected_major_minor = get_expected_version('Pandas')\n", " actual_major_minor = get_major_minor(pd.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", @@ -116,7 +133,7 @@ "\n", "class TestNumpy(unittest.TestCase):\n", " def test_version(self):\n", - " expected_major_minor = '1.24'\n", + " expected_major_minor = get_expected_version('Numpy')\n", " actual_major_minor = get_major_minor(np.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", @@ -141,7 +158,7 @@ "\n", "class TestScipy(unittest.TestCase):\n", " def test_version(self):\n", - " expected_major_minor = '1.11'\n", + " expected_major_minor = get_expected_version('Scipy')\n", " actual_major_minor = get_major_minor(scipy.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", @@ -158,7 +175,7 @@ "\n", "class TestSKLearn(unittest.TestCase):\n", " def test_version(self):\n", - " expected_major_minor = '1.3'\n", + " expected_major_minor = get_expected_version('Scikit-learn')\n", " actual_major_minor = get_major_minor(sklearn.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", @@ -182,7 +199,7 @@ "class TestMatplotlib(unittest.TestCase):\n", "\n", " def test_version(self):\n", - " expected_major_minor = '3.6'\n", + " expected_major_minor = get_expected_version('Matplotlib')\n", " actual_major_minor = get_major_minor(matplotlib.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", @@ -193,7 +210,7 @@ "class TestKafkaPython(unittest.TestCase):\n", "\n", " def test_version(self):\n", - " expected_major_minor = '2.0'\n", + " expected_major_minor = get_expected_version('Kafka-Python')\n", " actual_major_minor = get_major_minor(kafka.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", @@ -215,19 +232,16 @@ "class TestKFPTekton(unittest.TestCase):\n", "\n", " def test_version(self):\n", - " expected_major_minor = '1.6.0'\n", + " unsupported_version = '1.6'\n", "\n", - " self.assertLess(kfp_tekton.__version__, expected_major_minor)\n", + " expected_major_minor = get_expected_version('KFP-Tekton')\n", "\n", + " self.assertLess(expected_major_minor, unsupported_version)\n", + " self.assertLess(kfp_tekton.__version__, unsupported_version)\n", + "\n", + "expected_versions = load_expected_versions()\n", "unittest.main(argv=[''], verbosity=2, exit=False)" ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [] } ], "metadata": { @@ -250,5 +264,5 @@ } }, "nbformat": 4, - "nbformat_minor": 4 + "nbformat_minor": 5 } diff --git 
a/jupyter/intel/pytorch/ubi9-python-3.11/kustomize/base/statefulset.yaml b/jupyter/intel/pytorch/ubi9-python-3.11/kustomize/base/statefulset.yaml index 71eecacdf..b96eaf89d 100644 --- a/jupyter/intel/pytorch/ubi9-python-3.11/kustomize/base/statefulset.yaml +++ b/jupyter/intel/pytorch/ubi9-python-3.11/kustomize/base/statefulset.yaml @@ -36,8 +36,9 @@ spec: livenessProbe: tcpSocket: port: notebook-port - initialDelaySeconds: 5 - periodSeconds: 5 + initialDelaySeconds: 15 + periodSeconds: 10 + timeoutSeconds: 5 successThreshold: 1 failureThreshold: 3 readinessProbe: @@ -45,8 +46,9 @@ spec: path: /notebook/opendatahub/jovyan/api port: notebook-port scheme: HTTP - initialDelaySeconds: 10 - periodSeconds: 5 + initialDelaySeconds: 15 + periodSeconds: 10 + timeoutSeconds: 5 successThreshold: 1 failureThreshold: 3 resources: diff --git a/jupyter/intel/pytorch/ubi9-python-3.11/test/test_notebook.ipynb b/jupyter/intel/pytorch/ubi9-python-3.11/test/test_notebook.ipynb index c86ed1f54..6829650cf 100644 --- a/jupyter/intel/pytorch/ubi9-python-3.11/test/test_notebook.ipynb +++ b/jupyter/intel/pytorch/ubi9-python-3.11/test/test_notebook.ipynb @@ -9,6 +9,9 @@ }, "outputs": [], "source": [ + "from pathlib import Path\n", + "import json\n", + "import re\n", "import unittest\n", "from unittest import mock\n", "from platform import python_version\n", @@ -20,16 +23,30 @@ "def get_major_minor(s):\n", " return '.'.join(s.split('.')[:2])\n", "\n", + "def load_expected_versions() -> dict:\n", + " lock_file = Path('./expected_versions.json')\n", + " data = {}\n", + "\n", + " with open(lock_file, 'r') as file:\n", + " data = json.load(file)\n", + "\n", + " return data \n", + "\n", + "def get_expected_version(dependency_name: str) -> str:\n", + " raw_value = expected_versions.get(dependency_name)\n", + " raw_version = re.sub(r'^\\D+', '', raw_value)\n", + " return get_major_minor(raw_version) \n", + "\n", "class TestPythonVersion(unittest.TestCase):\n", " def test_version(self):\n", - " expected_major_minor = '3.11'\n", + " expected_major_minor = get_expected_version('Python')\n", " actual_major_minor = get_major_minor(python_version())\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", - "class TestPandas(unittest.TestCase):\n", + "class TestIPEX(unittest.TestCase):\n", " def test_ipex_version(self):\n", - " expected_major_minor = '2.1'\n", - " actual_major_minor = get_major_minor(torch.__version__)\n", + " expected_major_minor = get_expected_version('Intel-PyTorch')\n", + " actual_major_minor = get_major_minor(ipex.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", " def test_ipex_mode(self):\n", @@ -37,21 +54,25 @@ " actual_mode = ipex.__version__.split('+')[1]\n", " self.assertEqual(actual_mode, expected_mode, \"incorrect mode\")\n", "\n", + "class TestTorch(unittest.TestCase):\n", " def test_torch_version(self):\n", - " expected_major_minor = '2.1'\n", - " actual_major_minor = get_major_minor(ipex.__version__)\n", + " expected_major_minor = get_expected_version('Intel-PyTorch')\n", + " actual_major_minor = get_major_minor(torch.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", + "class TestTorchVision(unittest.TestCase):\n", " def test_torchvision_version(self):\n", - " expected_major_minor = '0.16'\n", + " expected_major_minor = get_expected_version('Intel-PyTorch-Vision')\n", " actual_major_minor = get_major_minor(torchvision.__version__)\n", " 
self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", + "class TestTorchAudio(unittest.TestCase):\n", " def test_torchvision_version(self):\n", - " expected_major_minor = '2.1'\n", + " expected_major_minor = get_expected_version('Intel-PyTorch-Audio')\n", " actual_major_minor = get_major_minor(torchaudio.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", + "expected_versions = load_expected_versions()\n", "unittest.main(argv=[''], verbosity=2, exit=False)" ] } diff --git a/jupyter/intel/pytorch/ubi9-python-3.11/test/test_notebook_cpu.ipynb b/jupyter/intel/pytorch/ubi9-python-3.11/test/test_notebook_cpu.ipynb index d9d9171da..8d3c776a3 100644 --- a/jupyter/intel/pytorch/ubi9-python-3.11/test/test_notebook_cpu.ipynb +++ b/jupyter/intel/pytorch/ubi9-python-3.11/test/test_notebook_cpu.ipynb @@ -3,11 +3,15 @@ { "cell_type": "code", "execution_count": null, + "id": "31daa11f-994d-48ad-8367-14dce71abf08", "metadata": { "tags": [] }, "outputs": [], "source": [ + "from pathlib import Path\n", + "import json\n", + "import re\n", "import unittest\n", "from unittest import mock\n", "from platform import python_version\n", @@ -19,16 +23,30 @@ "def get_major_minor(s):\n", " return '.'.join(s.split('.')[:2])\n", "\n", + "def load_expected_versions() -> dict:\n", + " lock_file = Path('./expected_versions.json')\n", + " data = {}\n", + "\n", + " with open(lock_file, 'r') as file:\n", + " data = json.load(file)\n", + "\n", + " return data \n", + "\n", + "def get_expected_version(dependency_name: str) -> str:\n", + " raw_value = expected_versions.get(dependency_name)\n", + " raw_version = re.sub(r'^\\D+', '', raw_value)\n", + " return get_major_minor(raw_version) \n", + "\n", "class TestPythonVersion(unittest.TestCase):\n", " def test_version(self):\n", - " expected_major_minor = '3.11'\n", + " expected_major_minor = get_expected_version('Python')\n", " actual_major_minor = get_major_minor(python_version())\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", - "class TestPandas(unittest.TestCase):\n", + "class TestIPEX(unittest.TestCase):\n", " def test_ipex_version(self):\n", - " expected_major_minor = '2.1'\n", - " actual_major_minor = get_major_minor(torch.__version__)\n", + " expected_major_minor = get_expected_version('Intel-PyTorch')\n", + " actual_major_minor = get_major_minor(ipex.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", " def test_ipex_mode(self):\n", @@ -36,21 +54,26 @@ " actual_mode = ipex.__version__.split('+')[1]\n", " self.assertEqual(actual_mode, expected_mode, \"incorrect mode\")\n", "\n", + "class TestTorch(unittest.TestCase):\n", " def test_torch_version(self):\n", - " expected_major_minor = '2.1'\n", - " actual_major_minor = get_major_minor(ipex.__version__)\n", + " expected_major_minor = get_expected_version('Intel-PyTorch')\n", + " actual_major_minor = get_major_minor(torch.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", + "class TestTorchVision(unittest.TestCase):\n", " def test_torchvision_version(self):\n", - " expected_major_minor = '0.16'\n", + " expected_major_minor = get_expected_version('Intel-PyTorch-Vision')\n", " actual_major_minor = get_major_minor(torchvision.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", + "class 
TestTorchAudio(unittest.TestCase):\n", " def test_torchvision_version(self):\n", - " expected_major_minor = '2.1'\n", + " expected_major_minor = get_expected_version('Intel-PyTorch-Audio')\n", " actual_major_minor = get_major_minor(torchaudio.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", + "\n", + "expected_versions = load_expected_versions()\n", "unittest.main(argv=[''], verbosity=2, exit=False)" ] } @@ -75,5 +98,5 @@ } }, "nbformat": 4, - "nbformat_minor": 4 + "nbformat_minor": 5 } diff --git a/jupyter/intel/tensorflow/ubi9-python-3.11/kustomize/base/statefulset.yaml b/jupyter/intel/tensorflow/ubi9-python-3.11/kustomize/base/statefulset.yaml index 1e84a7267..d7ad43bce 100644 --- a/jupyter/intel/tensorflow/ubi9-python-3.11/kustomize/base/statefulset.yaml +++ b/jupyter/intel/tensorflow/ubi9-python-3.11/kustomize/base/statefulset.yaml @@ -36,8 +36,9 @@ spec: livenessProbe: tcpSocket: port: notebook-port - initialDelaySeconds: 5 - periodSeconds: 5 + initialDelaySeconds: 15 + periodSeconds: 10 + timeoutSeconds: 5 successThreshold: 1 failureThreshold: 3 readinessProbe: @@ -45,14 +46,15 @@ spec: path: /notebook/opendatahub/jovyan/api port: notebook-port scheme: HTTP - initialDelaySeconds: 10 - periodSeconds: 5 + initialDelaySeconds: 15 + periodSeconds: 10 + timeoutSeconds: 5 successThreshold: 1 failureThreshold: 3 resources: limits: - cpu: 500m - memory: 2Gi + cpu: 5000m + memory: 8Gi requests: - cpu: 500m - memory: 2Gi + cpu: 5000m + memory: 8Gi diff --git a/jupyter/intel/tensorflow/ubi9-python-3.11/test/test_notebook.ipynb b/jupyter/intel/tensorflow/ubi9-python-3.11/test/test_notebook.ipynb index 874561cbe..6cb561881 100644 --- a/jupyter/intel/tensorflow/ubi9-python-3.11/test/test_notebook.ipynb +++ b/jupyter/intel/tensorflow/ubi9-python-3.11/test/test_notebook.ipynb @@ -9,6 +9,9 @@ }, "outputs": [], "source": [ + "from pathlib import Path\n", + "import json\n", + "import re\n", "import unittest\n", "import tensorflow as tf\n", "import intel_extension_for_tensorflow as itex\n", @@ -16,21 +19,40 @@ "import tf2onnx\n", "from platform import python_version\n", "\n", + "def get_major_minor(s):\n", + " return '.'.join(s.split('.')[:2])\n", + "\n", + "def load_expected_versions() -> dict:\n", + " lock_file = Path('./expected_versions.json')\n", + " data = {}\n", + "\n", + " with open(lock_file, 'r') as file:\n", + " data = json.load(file)\n", + "\n", + " return data \n", + "\n", + "def get_expected_version(dependency_name: str) -> str:\n", + " raw_value = expected_versions.get(dependency_name)\n", + " raw_version = re.sub(r'^\\D+', '', raw_value)\n", + " return get_major_minor(raw_version) \n", + "\n", + "# See https://issues.redhat.com/browse/RHOAIENG-16587?focusedId=26417096&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-26417096\n", + "# for analysis of why this test is failing\n", "class TestTensorflowNotebook(unittest.TestCase):\n", "\n", " def test_python_version(self):\n", - " expected_major_minor = '3.11' # Set the expected version (x.y)\n", - " actual_major_minor = '.'.join(python_version().split('.')[:2])\n", + " expected_major_minor = get_expected_version('Python')\n", + " actual_major_minor = get_major_minor(python_version())\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", " def test_tensorflow_version(self):\n", - " expected_major_minor = '2.14' # Set the expected version (x.y)\n", - " actual_major_minor = 
'.'.join(tf.__version__.split('.')[:2])\n", + " expected_major_minor = get_expected_version('Intel-TensorFlow')\n", + " actual_major_minor = get_major_minor(tf.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", " def test_itex_version(self):\n", - " expected_major_minor = '2.14' # Set the expected version (x.y)\n", - " actual_major_minor = '.'.join(itex.__version__.split('.')[:2])\n", + " expected_major_minor = get_expected_version('Intel-TensorFlow')\n", + " actual_major_minor = get_major_minor(itex.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", " def test_tf2onnx_conversion(self):\n", @@ -82,17 +104,11 @@ " # Train the model\n", " model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_callback])\n", "\n", + "expected_versions = load_expected_versions()\n", + "\n", "suite = unittest.TestLoader().loadTestsFromTestCase(TestTensorflowNotebook)\n", "unittest.TextTestRunner().run(suite)\n" ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "39e1d04e-2b13-4528-9132-fa2bee12e4a8", - "metadata": {}, - "outputs": [], - "source": [] } ], "metadata": { diff --git a/jupyter/intel/tensorflow/ubi9-python-3.11/test/test_notebook_cpu.ipynb b/jupyter/intel/tensorflow/ubi9-python-3.11/test/test_notebook_cpu.ipynb index 0c089bb4d..950cfd8b0 100644 --- a/jupyter/intel/tensorflow/ubi9-python-3.11/test/test_notebook_cpu.ipynb +++ b/jupyter/intel/tensorflow/ubi9-python-3.11/test/test_notebook_cpu.ipynb @@ -3,11 +3,15 @@ { "cell_type": "code", "execution_count": null, + "id": "662f869d-1165-492b-87ea-2d74a41ea029", "metadata": { "tags": [] }, "outputs": [], "source": [ + "from pathlib import Path\n", + "import json\n", + "import re\n", "import unittest\n", "import tensorflow as tf\n", "import intel_extension_for_tensorflow as itex\n", @@ -15,21 +19,40 @@ "import tf2onnx\n", "from platform import python_version\n", "\n", + "def get_major_minor(s):\n", + " return '.'.join(s.split('.')[:2])\n", + "\n", + "def load_expected_versions() -> dict:\n", + " lock_file = Path('./expected_versions.json')\n", + " data = {}\n", + "\n", + " with open(lock_file, 'r') as file:\n", + " data = json.load(file)\n", + "\n", + " return data \n", + "\n", + "def get_expected_version(dependency_name: str) -> str:\n", + " raw_value = expected_versions.get(dependency_name)\n", + " raw_version = re.sub(r'^\\D+', '', raw_value)\n", + " return get_major_minor(raw_version) \n", + "\n", + "# See https://issues.redhat.com/browse/RHOAIENG-16587?focusedId=26417096&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-26417096\n", + "# for analysis of why this test is failing\n", "class TestTensorflowNotebook(unittest.TestCase):\n", "\n", " def test_python_version(self):\n", - " expected_major_minor = '3.11' # Set the expected version (x.y)\n", - " actual_major_minor = '.'.join(python_version().split('.')[:2])\n", + " expected_major_minor = get_expected_version('Python')\n", + " actual_major_minor = get_major_minor(python_version())\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", " def test_tensorflow_version(self):\n", - " expected_major_minor = '2.14' # Set the expected version (x.y)\n", - " actual_major_minor = '.'.join(tf.__version__.split('.')[:2])\n", + " expected_major_minor = get_expected_version('Intel-TensorFlow')\n", + " actual_major_minor = get_major_minor(tf.__version__)\n", " self.assertEqual(actual_major_minor, 
expected_major_minor, \"incorrect version\")\n", "\n", " def test_itex_version(self):\n", - " expected_major_minor = '2.14' # Set the expected version (x.y)\n", - " actual_major_minor = '.'.join(itex.__version__.split('.')[:2])\n", + " expected_major_minor = get_expected_version('Intel-TensorFlow')\n", + " actual_major_minor = get_major_minor(itex.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", " def test_tf2onnx_conversion(self):\n", @@ -81,16 +104,11 @@ " # Train the model\n", " model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_callback])\n", "\n", + "expected_versions = load_expected_versions()\n", + "\n", "suite = unittest.TestLoader().loadTestsFromTestCase(TestTensorflowNotebook)\n", "unittest.TextTestRunner().run(suite)\n" ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [] } ], "metadata": { @@ -113,5 +131,5 @@ } }, "nbformat": 4, - "nbformat_minor": 4 + "nbformat_minor": 5 } diff --git a/jupyter/minimal/ubi9-python-3.11/test/test_notebook.ipynb b/jupyter/minimal/ubi9-python-3.11/test/test_notebook.ipynb index 078e664a7..075551919 100644 --- a/jupyter/minimal/ubi9-python-3.11/test/test_notebook.ipynb +++ b/jupyter/minimal/ubi9-python-3.11/test/test_notebook.ipynb @@ -7,6 +7,9 @@ "metadata": {}, "outputs": [], "source": [ + "from pathlib import Path\n", + "import json\n", + "import re\n", "import unittest\n", "import jupyterlab as jp\n", "from platform import python_version\n", @@ -16,28 +19,43 @@ "def get_major_minor(s):\n", " return '.'.join(s.split('.')[:2])\n", "\n", + "def load_expected_versions() -> dict:\n", + " lock_file = Path('./expected_versions.json')\n", + " data = {}\n", + "\n", + " with open(lock_file, 'r') as file:\n", + " data = json.load(file)\n", + "\n", + " return data \n", + "\n", + "def get_expected_version(dependency_name: str) -> str:\n", + " raw_value = expected_versions.get(dependency_name)\n", + " raw_version = re.sub(r'^\\D+', '', raw_value)\n", + " return get_major_minor(raw_version)\n", + "\n", "class TestPythonVersion(unittest.TestCase):\n", " def test_version(self):\n", - " expected_major_minor = '3.11'\n", + " expected_major_minor = get_expected_version('Python')\n", " actual_major_minor = get_major_minor(python_version())\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", "class TestDependenciesVersions(unittest.TestCase):\n", " def test_jupyter_version(self):\n", - " expected_major_minor = '4.2'\n", + " expected_major_minor = get_expected_version('JupyterLab')\n", " actual_major_minor = get_major_minor(jp.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", " def test_nbgitpuller_version(self):\n", - " expected_major_minor = '1.2'\n", + " expected_major_minor = get_expected_version('nbgitpuller')\n", " actual_major_minor = get_major_minor(nbgitpuller.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", " def test_nbdime_version(self):\n", - " expected_major_minor = '4.0'\n", + " expected_major_minor = get_expected_version('nbdime')\n", " actual_major_minor = get_major_minor(nbdime.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", + "expected_versions = load_expected_versions()\n", "unittest.main(argv=[''], verbosity=2, exit=False)" ] } @@ -49,5 +67,5 @@ "orig_nbformat": 4 }, "nbformat": 4, - 
"nbformat_minor": 2 + "nbformat_minor": 5 } diff --git a/jupyter/pytorch/ubi9-python-3.11/kustomize/base/statefulset.yaml b/jupyter/pytorch/ubi9-python-3.11/kustomize/base/statefulset.yaml index e5e15a3f1..960e3708a 100644 --- a/jupyter/pytorch/ubi9-python-3.11/kustomize/base/statefulset.yaml +++ b/jupyter/pytorch/ubi9-python-3.11/kustomize/base/statefulset.yaml @@ -36,8 +36,9 @@ spec: livenessProbe: tcpSocket: port: notebook-port - initialDelaySeconds: 5 - periodSeconds: 5 + initialDelaySeconds: 15 + periodSeconds: 10 + timeoutSeconds: 5 successThreshold: 1 failureThreshold: 3 readinessProbe: @@ -45,8 +46,9 @@ spec: path: /notebook/opendatahub/jovyan/api port: notebook-port scheme: HTTP - initialDelaySeconds: 10 - periodSeconds: 5 + initialDelaySeconds: 15 + periodSeconds: 10 + timeoutSeconds: 5 successThreshold: 1 failureThreshold: 3 resources: diff --git a/jupyter/pytorch/ubi9-python-3.11/test/test_notebook.ipynb b/jupyter/pytorch/ubi9-python-3.11/test/test_notebook.ipynb index 30b8dc455..05512bfdb 100644 --- a/jupyter/pytorch/ubi9-python-3.11/test/test_notebook.ipynb +++ b/jupyter/pytorch/ubi9-python-3.11/test/test_notebook.ipynb @@ -3,9 +3,13 @@ { "cell_type": "code", "execution_count": null, + "id": "114ce821-59db-4b45-9e0e-abbeb7c7ac13", "metadata": {}, "outputs": [], "source": [ + "from pathlib import Path\n", + "import json\n", + "import re\n", "import unittest\n", "import torch\n", "import math\n", @@ -16,15 +20,32 @@ "from platform import python_version\n", "import torchvision.models as models\n", "\n", + "def get_major_minor(s):\n", + " return '.'.join(s.split('.')[:2])\n", + "\n", + "def load_expected_versions() -> dict:\n", + " lock_file = Path('./expected_versions.json')\n", + " data = {}\n", + "\n", + " with open(lock_file, 'r') as file:\n", + " data = json.load(file)\n", + "\n", + " return data \n", + "\n", + "def get_expected_version(dependency_name: str) -> str:\n", + " raw_value = expected_versions.get(dependency_name)\n", + " raw_version = re.sub(r'^\\D+', '', raw_value)\n", + " return get_major_minor(raw_version)\n", + "\n", "class TestPytorchNotebook(unittest.TestCase):\n", " \n", " def test_python_version(self):\n", - " expected_major_minor = '3.11' # Set the expected version (x.y)\n", + " expected_major_minor = get_expected_version('Python')\n", " actual_major_minor = '.'.join(python_version().split('.')[:2]) \n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", " \n", " def test_torch_version(self):\n", - " expected_major_minor = '2.4' # Set the expected version (x.y)\n", + " expected_major_minor = get_expected_version('PyTorch')\n", " actual_major_minor = '.'.join(torch.__version__.split('.')[:2]) \n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", " \n", @@ -147,6 +168,8 @@ " # Check if the ONNX file exists\n", " self.assertTrue(os.path.exists(onnx_path), f\"ONNX file {onnx_path} not found\")\n", " \n", + "expected_versions = load_expected_versions()\n", + "\n", "suite = unittest.TestLoader().loadTestsFromTestCase(TestPytorchNotebook)\n", "unittest.TextTestRunner().run(suite)" ] @@ -159,5 +182,5 @@ "orig_nbformat": 4 }, "nbformat": 4, - "nbformat_minor": 2 + "nbformat_minor": 5 } diff --git a/jupyter/rocm/pytorch/ubi9-python-3.11/test/test_notebook.ipynb b/jupyter/rocm/pytorch/ubi9-python-3.11/test/test_notebook.ipynb index 1b76043ed..a9422975e 100644 --- a/jupyter/rocm/pytorch/ubi9-python-3.11/test/test_notebook.ipynb +++ 
b/jupyter/rocm/pytorch/ubi9-python-3.11/test/test_notebook.ipynb @@ -3,9 +3,13 @@ { "cell_type": "code", "execution_count": null, + "id": "daa55561-f6b1-4c00-a59f-df9086148d58", "metadata": {}, "outputs": [], "source": [ + "from pathlib import Path\n", + "import json\n", + "import re\n", "import unittest\n", "import torch\n", "import math\n", @@ -16,16 +20,33 @@ "from platform import python_version\n", "import torchvision.models as models\n", "\n", + "def get_major_minor(s):\n", + " return '.'.join(s.split('.')[:2])\n", + "\n", + "def load_expected_versions() -> dict:\n", + " lock_file = Path('./expected_versions.json')\n", + " data = {}\n", + "\n", + " with open(lock_file, 'r') as file:\n", + " data = json.load(file)\n", + "\n", + " return data \n", + "\n", + "def get_expected_version(dependency_name: str) -> str:\n", + " raw_value = expected_versions.get(dependency_name)\n", + " raw_version = re.sub(r'^\\D+', '', raw_value)\n", + " return get_major_minor(raw_version) \n", + "\n", "class TestPytorchNotebook(unittest.TestCase):\n", " \n", " def test_python_version(self):\n", - " expected_major_minor = '3.11' # Set the expected version (x.y)\n", - " actual_major_minor = '.'.join(python_version().split('.')[:2]) \n", + " expected_major_minor = get_expected_version('Python')\n", + " actual_major_minor = get_major_minor(python_version()) \n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", " \n", " def test_torch_version(self):\n", - " expected_major_minor = '2.3' # Set the expected version (x.y)\n", - " actual_major_minor = '.'.join(torch.__version__.split('.')[:2]) \n", + " expected_major_minor = get_expected_version('ROCm-PyTorch')\n", + " actual_major_minor = get_major_minor(torch.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", " \n", " def test_tensor_creation(self):\n", @@ -146,7 +167,9 @@ "\n", " # Check if the ONNX file exists\n", " self.assertTrue(os.path.exists(onnx_path), f\"ONNX file {onnx_path} not found\")\n", - " \n", + "\n", + "expected_versions = load_expected_versions()\n", + "\n", "suite = unittest.TestLoader().loadTestsFromTestCase(TestPytorchNotebook)\n", "unittest.TextTestRunner().run(suite)" ] @@ -159,5 +182,5 @@ "orig_nbformat": 4 }, "nbformat": 4, - "nbformat_minor": 2 + "nbformat_minor": 5 } diff --git a/jupyter/rocm/tensorflow/ubi9-python-3.11/kustomize/base/statefulset.yaml b/jupyter/rocm/tensorflow/ubi9-python-3.11/kustomize/base/statefulset.yaml index 46a07cced..3f23b2c93 100644 --- a/jupyter/rocm/tensorflow/ubi9-python-3.11/kustomize/base/statefulset.yaml +++ b/jupyter/rocm/tensorflow/ubi9-python-3.11/kustomize/base/statefulset.yaml @@ -36,8 +36,9 @@ spec: livenessProbe: tcpSocket: port: notebook-port - initialDelaySeconds: 5 - periodSeconds: 5 + initialDelaySeconds: 15 + periodSeconds: 10 + timeoutSeconds: 5 successThreshold: 1 failureThreshold: 3 readinessProbe: @@ -45,8 +46,9 @@ spec: path: /notebook/opendatahub/jovyan/api port: notebook-port scheme: HTTP - initialDelaySeconds: 10 - periodSeconds: 5 + initialDelaySeconds: 15 + periodSeconds: 10 + timeoutSeconds: 5 successThreshold: 1 failureThreshold: 3 resources: diff --git a/jupyter/rocm/tensorflow/ubi9-python-3.11/test/test_notebook.ipynb b/jupyter/rocm/tensorflow/ubi9-python-3.11/test/test_notebook.ipynb index 7ca1c73c7..65dd25cb1 100644 --- a/jupyter/rocm/tensorflow/ubi9-python-3.11/test/test_notebook.ipynb +++ b/jupyter/rocm/tensorflow/ubi9-python-3.11/test/test_notebook.ipynb @@ -3,25 +3,46 @@ { "cell_type": 
"code", "execution_count": null, + "id": "d309383b-511b-440d-b680-e732933ba444", "metadata": {}, "outputs": [], "source": [ + "from pathlib import Path\n", + "import json\n", + "import re\n", "import unittest\n", "import tensorflow as tf\n", "import tensorboard\n", "import tf2onnx\n", "from platform import python_version\n", "\n", + "def get_major_minor(s):\n", + " return '.'.join(s.split('.')[:2])\n", + "\n", + "def load_expected_versions() -> dict:\n", + " lock_file = Path('./expected_versions.json')\n", + " data = {}\n", + "\n", + " with open(lock_file, 'r') as file:\n", + " data = json.load(file)\n", + "\n", + " return data \n", + "\n", + "def get_expected_version(dependency_name: str) -> str:\n", + " raw_value = expected_versions.get(dependency_name)\n", + " raw_version = re.sub(r'^\\D+', '', raw_value)\n", + " return get_major_minor(raw_version) \n", + "\n", "class TestTensorflowNotebook(unittest.TestCase):\n", " \n", " def test_python_version(self):\n", - " expected_major_minor = '3.11' # Set the expected version (x.y)\n", - " actual_major_minor = '.'.join(python_version().split('.')[:2]) \n", + " expected_major_minor = get_expected_version('Python')\n", + " actual_major_minor = get_major_minor(python_version()) \n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", " \n", " def test_tensorflow_version(self):\n", - " expected_major_minor = '2.14' # Set the expected version (x.y)\n", - " actual_major_minor = '.'.join(tf.__version__.split('.')[:2]) \n", + " expected_major_minor = get_expected_version('ROCm-TensorFlow')\n", + " actual_major_minor = get_major_minor(tf.__version__) \n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", " \n", " def test_tf2onnx_conversion(self):\n", @@ -72,7 +93,9 @@ " tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir)\n", " # Train the model\n", " model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_callback])\n", - " \n", + "\n", + "expected_versions = load_expected_versions()\n", + "\n", "suite = unittest.TestLoader().loadTestsFromTestCase(TestTensorflowNotebook)\n", "unittest.TextTestRunner().run(suite)\n" ] @@ -85,5 +108,5 @@ "orig_nbformat": 4 }, "nbformat": 4, - "nbformat_minor": 2 + "nbformat_minor": 5 } diff --git a/jupyter/tensorflow/ubi9-python-3.11/test/test_notebook.ipynb b/jupyter/tensorflow/ubi9-python-3.11/test/test_notebook.ipynb index f86d63152..aac2e9517 100644 --- a/jupyter/tensorflow/ubi9-python-3.11/test/test_notebook.ipynb +++ b/jupyter/tensorflow/ubi9-python-3.11/test/test_notebook.ipynb @@ -3,32 +3,69 @@ { "cell_type": "code", "execution_count": null, + "id": "07ae2840-42b4-49a9-92da-250e03bdb13f", "metadata": {}, "outputs": [], "source": [ + "from pathlib import Path\n", + "import json\n", + "import re\n", "import unittest\n", "import tensorflow as tf\n", "import tensorboard\n", "import tf2onnx\n", "from platform import python_version\n", "\n", + "def get_major_minor(s):\n", + " return '.'.join(s.split('.')[:2])\n", + "\n", + "def load_expected_versions() -> dict:\n", + " lock_file = Path('./expected_versions.json')\n", + " data = {}\n", + "\n", + " with open(lock_file, 'r') as file:\n", + " data = json.load(file)\n", + "\n", + " return data \n", + "\n", + "def get_expected_version(dependency_name: str) -> str:\n", + " raw_value = expected_versions.get(dependency_name)\n", + " raw_version = re.sub(r'^\\D+', '', raw_value)\n", + " return get_major_minor(raw_version)\n", + "\n", "class 
TestTensorflowNotebook(unittest.TestCase):\n", " \n", " def test_python_version(self):\n", - " expected_major_minor = '3.11' # Set the expected version (x.y)\n", + " expected_major_minor = get_expected_version('Python')\n", " actual_major_minor = '.'.join(python_version().split('.')[:2]) \n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", " \n", " def test_tensorflow_version(self):\n", - " expected_major_minor = '2.17' # Set the expected version (x.y)\n", + " expected_major_minor = get_expected_version('TensorFlow')\n", " actual_major_minor = '.'.join(tf.__version__.split('.')[:2]) \n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", " \n", " def test_tf2onnx_conversion(self):\n", - " # Replace this with an actual TensorFlow model conversion using tf2onnx\n", - " model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(10,))])\n", - " onnx_model = tf2onnx.convert.from_keras(model)\n", - " \n", + " # Sometime around TF 2.17 - some weird issue was introduced w.r.t the interplay between TF Keras and tf2onnx\n", + " # - naively defining a Sequential model doesn't seem to work\n", + " # - https://github.com/tensorflow/tensorflow/issues/63867\n", + " # - https://github.com/onnx/tensorflow-onnx/issues/2319\n", + " # - input_signature required on from_keras function\n", + " # https://github.com/onnx/tensorflow-onnx/issues/2329\n", + "\n", + " # Define the input layer\n", + " inputs = tf.keras.Input(shape=(10,))\n", + "\n", + " # Define the model layers\n", + " flatten_layer = tf.keras.layers.Flatten()(inputs)\n", + " outputs = tf.keras.layers.Dense(1)(flatten_layer)\n", + "\n", + " # Create the model\n", + " model = tf.keras.Model(inputs=inputs, outputs=outputs) \n", + "\n", + " # Export the model to ONNX format\n", + " onnx_model = tf2onnx.convert.from_keras(model, input_signature=[tf.TensorSpec(model.inputs[0].shape)])\n", + "\n", " self.assertTrue(onnx_model is not None)\n", "\n", " def test_mnist_model(self):\n", @@ -72,7 +109,9 @@ " tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir)\n", " # Train the model\n", " model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_callback])\n", - " \n", + "\n", + "expected_versions = load_expected_versions()\n", + "\n", "suite = unittest.TestLoader().loadTestsFromTestCase(TestTensorflowNotebook)\n", "unittest.TextTestRunner().run(suite)\n" ] @@ -85,5 +124,5 @@ "orig_nbformat": 4 }, "nbformat": 4, - "nbformat_minor": 2 + "nbformat_minor": 5 } diff --git a/jupyter/trustyai/ubi9-python-3.11/test/test_notebook.ipynb b/jupyter/trustyai/ubi9-python-3.11/test/test_notebook.ipynb index 8d753cf3b..ec3a93338 100644 --- a/jupyter/trustyai/ubi9-python-3.11/test/test_notebook.ipynb +++ b/jupyter/trustyai/ubi9-python-3.11/test/test_notebook.ipynb @@ -7,6 +7,9 @@ "metadata": {}, "outputs": [], "source": [ + "from pathlib import Path\n", + "import json\n", + "import re\n", "import unittest\n", "from unittest import mock\n", "import pandas as pd\n", @@ -37,206 +40,30 @@ "def get_major_minor(s):\n", " return '.'.join(s.split('.')[:2])\n", "\n", - "class TestPythonVersion(unittest.TestCase):\n", - " def test_version(self):\n", - " expected_major_minor = '3.11'\n", - " actual_major_minor = get_major_minor(python_version())\n", - " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", - "\n", - "class TestDependenciesVersions(unittest.TestCase):\n", - " def test_jupyter_version(self):\n", - " expected_major_minor = '4.2'\n", - " 
actual_major_minor = get_major_minor(jp.__version__)\n", - " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", - "\n", - " def test_nbgitpuller_version(self):\n", - " expected_major_minor = '1.2'\n", - " actual_major_minor = get_major_minor(nbgitpuller.__version__)\n", - " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", - "\n", - " def test_nbdime_version(self):\n", - " expected_major_minor = '4.0'\n", - " actual_major_minor = get_major_minor(nbdime.__version__)\n", - " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", - "\n", - "class TestPandas(unittest.TestCase):\n", - " def test_version(self):\n", - " expected_major_minor = '1.5'\n", - " actual_major_minor = get_major_minor(pd.__version__)\n", - " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", - "\n", - " def test_dataframe_creation(self):\n", - " sample_df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})\n", - " self.assertIsInstance(sample_df, pd.core.frame.DataFrame)\n", - "\n", - " def test_equal_dataframes(self):\n", - " df1 = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})\n", - " df2 = pd.DataFrame({'a': [1, 2], 'b': [3.0, 4.0]})\n", - " self.assertIsNone(assert_frame_equal(df1, df2, check_dtype=False), \"Dataframes provided are unequal\")\n", - "\n", - " def test_unequal_dataframes(self):\n", - " df1 = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})\n", - " df2 = pd.DataFrame({'a': [1, 2], 'b': [3.0, 5.0]})\n", - " with self.assertRaises(AssertionError):\n", - " assert_frame_equal(df1, df2, check_dtype=False)\n", - "\n", - " def test_dataframe_shape(self):\n", - " random_data = {\n", - " 'apples': [3, 2, 0, 1],\n", - " 'oranges': [0, 3, 7, 2]\n", - " }\n", - " sample_df = pd.DataFrame(random_data)\n", - " self.assertEqual(sample_df.shape, (4,2))\n", - "\n", - " def test_index_out_of_bounds(self):\n", - " random_data = {\n", - " 'apples': [3, 2, 0, 1],\n", - " 'oranges': [0, 3, 7, 2]\n", - " }\n", - " sample_df = pd.DataFrame(random_data)\n", - " with self.assertRaises(IndexError):\n", - " print(sample_df.iat[0,3])\n", - "\n", - " def test_sampling(self):\n", - " random_data = {\n", - " 'apples': [3, 2, 0, 1],\n", - " 'oranges': [0, 3, 7, 2]\n", - " }\n", - " sample_df = pd.DataFrame(random_data)\n", - " half_sampled_df = sample_df.sample(frac = 0.5)\n", - " self.assertEqual(len(half_sampled_df), 2)\n", - "\n", - " def test_drop(self):\n", - " random_data = {\n", - " 'apples': [3, 2, 0, 1],\n", - " 'oranges': [0, 3, 7, 2]\n", - " }\n", - " sample_df = pd.DataFrame(random_data)\n", - " self.assertEqual(sample_df.drop(columns=['apples']).shape, (4, 1))\n", - "\n", - "class TestNumpy(unittest.TestCase):\n", - " def test_version(self):\n", - " expected_major_minor = '1.24'\n", - " actual_major_minor = get_major_minor(np.__version__)\n", - " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", - "\n", - " def test_array_creation(self):\n", - " arr = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])\n", - " self.assertIsInstance(arr, np.ndarray)\n", - "\n", - " def test_array_opeartions(self):\n", - " arr = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])\n", - "\n", - " self.assertEqual(arr.sum(), 45)\n", - " self.assertEqual(len(arr), 9)\n", - " self.assertEqual(arr.max(), 9)\n", - " self.assertEqual(arr.min(), 1)\n", - "\n", - " def test_array_statistical_functions(self):\n", - " arr = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])\n", - "\n", - " self.assertEqual(np.median(arr), 5)\n", - " 
self.assertEqual(np.mean(arr), 5)\n", - " self.assertEqual(np.std(arr), 2.581988897471611)\n", - "\n", - "class TestScipy(unittest.TestCase):\n", - " def test_version(self):\n", - " expected_major_minor = '1.14'\n", - " actual_major_minor = get_major_minor(scipy.__version__)\n", - " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", - "\n", - " def test_scipy_special(self):\n", - " self.assertEqual(special.exp10(3), 1000.0)\n", - " self.assertEqual(special.exp2(10), 1024.0)\n", - " self.assertEqual(special.sindg(90), 1)\n", - " self.assertEqual(special.cosdg(0), 1)\n", - "\n", - " def test_scipy_integrate(self):\n", - " a= lambda x:special.exp10(x)\n", - " b = integrate.quad(a, 0, 1)\n", - " self.assertEqual(b, (3.9086503371292665, 4.3394735994897923e-14))\n", - "\n", - "class TestSKLearn(unittest.TestCase):\n", - " def test_version(self):\n", - " expected_major_minor = '1.2'\n", - " actual_major_minor = get_major_minor(sklearn.__version__)\n", - " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", - "\n", - " def test_sklearn_dataset(self):\n", - " data_set = datasets.load_iris()\n", - " self.assertIsInstance(data_set, sklearn.utils._bunch.Bunch)\n", + "def load_expected_versions() -> dict:\n", + " lock_file = Path('./expected_versions.json')\n", + " data = {}\n", "\n", - " def test_sklearn_train_test_split(self):\n", - " my_iris = datasets.load_iris()\n", - " X = my_iris.data\n", - " Y = my_iris.target\n", + " with open(lock_file, 'r') as file:\n", + " data = json.load(file)\n", "\n", - " X_traindata, X_testdata, Y_traindata, Y_testdata = train_test_split(\n", - " X, Y, test_size = 0.3, random_state = 1)\n", + " return data \n", "\n", - " self.assertEqual(X_traindata.shape, (105, 4))\n", - " self.assertEqual(X_testdata.shape, (45, 4))\n", - " self.assertEqual(Y_traindata.shape, (105,))\n", - " self.assertEqual(Y_testdata.shape, (45,))\n", - "\n", - "class TestMatplotlib(unittest.TestCase):\n", - "\n", - " def test_version(self):\n", - " expected_major_minor = '3.6'\n", - " actual_major_minor = get_major_minor(matplotlib.__version__)\n", - " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", - "\n", - " def test_matplotlib_figure_creation(self):\n", - " self.assertIsInstance(plt.figure(figsize=(8,5)), matplotlib.figure.Figure)\n", - "\n", - "class TestKafkaPython(unittest.TestCase):\n", - "\n", - " def test_version(self):\n", - " expected_major_minor = '2.2'\n", - " actual_major_minor = get_major_minor(kafka.__version__)\n", - " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", - "\n", - " def test_buffer_pool(self):\n", - " pool = SimpleBufferPool(1000, 1000)\n", - "\n", - " buf1 = pool.allocate(1000, 1000)\n", - " message = ''.join(map(str, range(100)))\n", - " buf1.write(message.encode('utf-8'))\n", - " pool.deallocate(buf1)\n", - "\n", - " buf2 = pool.allocate(1000, 1000)\n", - " self.assertEqual(buf2.read(), b'')\n", - "\n", - " def test_session_timeout_larger_than_request_timeout_raises(self):\n", - " with self.assertRaises(KafkaConfigurationError):\n", - " KafkaConsumer(bootstrap_servers='localhost:9092', api_version=(0, 9), group_id='foo', session_timeout_ms=50000, request_timeout_ms=40000)\n", - "\n", - "class TestBoto3(unittest.TestCase):\n", + "def get_expected_version(dependency_name: str) -> str:\n", + " raw_value = expected_versions.get(dependency_name)\n", + " raw_version = re.sub(r'^\\D+', '', raw_value)\n", + " return 
get_major_minor(raw_version) \n", "\n", + "class TestPythonVersion(unittest.TestCase):\n", " def test_version(self):\n", - " expected_major_minor = '1.35'\n", - " actual_major_minor = get_major_minor(boto3.__version__)\n", + " expected_major_minor = get_expected_version('Python')\n", + " actual_major_minor = get_major_minor(python_version())\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", - " def setUp(self):\n", - " self.session_patch = mock.patch('boto3.Session', autospec=True)\n", - " self.Session = self.session_patch.start()\n", - "\n", - " def tearDown(self):\n", - " boto3.DEFAULT_SESSION = None\n", - " self.session_patch.stop()\n", - "\n", - " def test_create_default_session(self):\n", - " session = self.Session.return_value\n", - "\n", - " boto3.setup_default_session()\n", - "\n", - " self.assertEqual(boto3.DEFAULT_SESSION, session)\n", - "\n", "class TestTrustyaiNotebook(unittest.TestCase):\n", "\n", " def test_trustyai_version(self):\n", - " expected_major_minor = '0.6' # Set the expected version (x.y)\n", + " expected_major_minor = get_expected_version('TrustyAI')\n", " actual_major_minor = get_major_minor(trustyai.__version__)\n", " self.assertEqual(actual_major_minor, expected_major_minor, \"incorrect version\")\n", "\n", @@ -275,6 +102,7 @@ " self.assertTrue(score <= -0.15670061634672994)\n", " print(\"On the test_bias_metrics test case the statistical_parity_difference score for this dataset, as expected, is outside the threshold [-0.1,0.1], which classifies the model as unfair.\")\n", "\n", + "expected_versions = load_expected_versions()\n", "unittest.main(argv=[''], verbosity=2, exit=False)" ] } diff --git a/manifests/base/jupyter-datascience-notebook-imagestream.yaml b/manifests/base/jupyter-datascience-notebook-imagestream.yaml index c2d8c5aeb..7539f82e8 100644 --- a/manifests/base/jupyter-datascience-notebook-imagestream.yaml +++ b/manifests/base/jupyter-datascience-notebook-imagestream.yaml @@ -24,6 +24,7 @@ spec: # language=json opendatahub.io/notebook-python-dependencies: | [ + {"name": "JupyterLab","version": "4.2"}, {"name": "Boto3", "version": "1.35"}, {"name": "Kafka-Python-ng", "version": "2.2"}, {"name": "Kfp", "version": "2.9"}, diff --git a/manifests/base/jupyter-pytorch-notebook-imagestream.yaml b/manifests/base/jupyter-pytorch-notebook-imagestream.yaml index 93703bf83..33d12bb91 100644 --- a/manifests/base/jupyter-pytorch-notebook-imagestream.yaml +++ b/manifests/base/jupyter-pytorch-notebook-imagestream.yaml @@ -27,6 +27,7 @@ spec: # language=json opendatahub.io/notebook-python-dependencies: | [ + {"name": "JupyterLab","version": "4.2"}, {"name": "PyTorch", "version": "2.4"}, {"name": "Tensorboard", "version": "2.17"}, {"name": "Boto3", "version": "1.35"}, diff --git a/manifests/base/jupyter-rocm-pytorch-notebook-imagestream.yaml b/manifests/base/jupyter-rocm-pytorch-notebook-imagestream.yaml index d9b0e2a99..4c60e37c6 100644 --- a/manifests/base/jupyter-rocm-pytorch-notebook-imagestream.yaml +++ b/manifests/base/jupyter-rocm-pytorch-notebook-imagestream.yaml @@ -26,6 +26,7 @@ spec: # language=json opendatahub.io/notebook-python-dependencies: | [ + {"name": "JupyterLab","version": "4.2"}, {"name": "ROCm-PyTorch", "version": "2.4"}, {"name": "Tensorboard", "version": "2.16"}, {"name": "Kafka-Python-ng", "version": "2.2"}, diff --git a/manifests/base/jupyter-rocm-tensorflow-notebook-imagestream.yaml b/manifests/base/jupyter-rocm-tensorflow-notebook-imagestream.yaml index 5e2bcb0f1..5284295a9 100644 --- 
a/manifests/base/jupyter-rocm-tensorflow-notebook-imagestream.yaml +++ b/manifests/base/jupyter-rocm-tensorflow-notebook-imagestream.yaml @@ -26,6 +26,7 @@ spec: # language=json opendatahub.io/notebook-python-dependencies: | [ + {"name": "JupyterLab","version": "4.2"}, {"name": "ROCm-TensorFlow", "version": "2.14"}, {"name": "Tensorboard", "version": "2.14"}, {"name": "Kafka-Python-ng", "version": "2.2"}, diff --git a/manifests/base/jupyter-tensorflow-notebook-imagestream.yaml b/manifests/base/jupyter-tensorflow-notebook-imagestream.yaml index 843e839f5..5b6a9eb27 100644 --- a/manifests/base/jupyter-tensorflow-notebook-imagestream.yaml +++ b/manifests/base/jupyter-tensorflow-notebook-imagestream.yaml @@ -27,6 +27,7 @@ spec: # language=json opendatahub.io/notebook-python-dependencies: | [ + {"name": "JupyterLab","version": "4.2"}, {"name": "TensorFlow", "version": "2.17"}, {"name": "Tensorboard", "version": "2.17"}, {"name": "Nvidia-CUDA-CU12-Bundle", "version": "12.3"}, diff --git a/manifests/base/jupyter-trustyai-notebook-imagestream.yaml b/manifests/base/jupyter-trustyai-notebook-imagestream.yaml index a60012f50..2c7ad1ed7 100644 --- a/manifests/base/jupyter-trustyai-notebook-imagestream.yaml +++ b/manifests/base/jupyter-trustyai-notebook-imagestream.yaml @@ -24,6 +24,7 @@ spec: # language=json opendatahub.io/notebook-python-dependencies: | [ + {"name": "JupyterLab","version": "4.2"}, {"name": "TrustyAI", "version": "0.6"}, {"name": "Transformers", "version": "4.36"}, {"name": "Datasets", "version": "2.21"}, diff --git a/manifests/overlays/additional/jupyter-intel-ml-notebook-imagestream.yaml b/manifests/overlays/additional/jupyter-intel-ml-notebook-imagestream.yaml index 7818121dd..20218c80e 100644 --- a/manifests/overlays/additional/jupyter-intel-ml-notebook-imagestream.yaml +++ b/manifests/overlays/additional/jupyter-intel-ml-notebook-imagestream.yaml @@ -19,16 +19,19 @@ spec: # language=json opendatahub.io/notebook-software: | [ - {"name": "Python", "version": "v3.9"}, + {"name": "Python", "version": "v3.11"}, {"name": "Intel-ML", "version": "2.14"} ] # language=json opendatahub.io/notebook-python-dependencies: | [ + {"name": "JupyterLab","version": "4.2"}, {"name": "Intel-ML", "version": "2.14"}, {"name": "Kafka-Python", "version": "2.0"}, + {"name": "KFP-Tekton", "version": "1.5"}, {"name": "Matplotlib", "version": "3.6"}, {"name": "Numpy", "version": "1.24"}, + {"name": "Modin", "version": "0.24"}, {"name": "Pandas", "version": "2.1"}, {"name": "Scikit-learn", "version": "1.3"}, {"name": "Scipy", "version": "1.11"}, diff --git a/manifests/overlays/additional/jupyter-intel-pytorch-notebook-imagestream.yaml b/manifests/overlays/additional/jupyter-intel-pytorch-notebook-imagestream.yaml index e5918a5de..fac84ba1d 100644 --- a/manifests/overlays/additional/jupyter-intel-pytorch-notebook-imagestream.yaml +++ b/manifests/overlays/additional/jupyter-intel-pytorch-notebook-imagestream.yaml @@ -19,13 +19,16 @@ spec: # language=json opendatahub.io/notebook-software: | [ - {"name": "Python", "version": "v3.9"}, + {"name": "Python", "version": "v3.11"}, {"name": "Intel-PyTorch", "version": "2.1"} ] # language=json opendatahub.io/notebook-python-dependencies: | [ + {"name": "JupyterLab","version": "4.2"}, {"name": "Intel-PyTorch", "version": "2.1"}, + {"name": "Intel-PyTorch-Vision", "version": "0.16"}, + {"name": "Intel-PyTorch-Audio", "version": "2.1"}, {"name": "Tensorboard", "version": "2.14"}, {"name": "Kafka-Python", "version": "2.0"}, {"name": "Matplotlib", "version": "3.6"}, diff --git 
a/manifests/overlays/additional/jupyter-intel-tensorflow-notebook-imagestream.yaml b/manifests/overlays/additional/jupyter-intel-tensorflow-notebook-imagestream.yaml index a2711493c..6e7f89dad 100644 --- a/manifests/overlays/additional/jupyter-intel-tensorflow-notebook-imagestream.yaml +++ b/manifests/overlays/additional/jupyter-intel-tensorflow-notebook-imagestream.yaml @@ -19,12 +19,13 @@ spec: # language=json opendatahub.io/notebook-software: | [ - {"name": "Python", "version": "v3.9"}, + {"name": "Python", "version": "v3.11"}, {"name": "Intel-TensorFlow", "version": "2.14"} ] # language=json opendatahub.io/notebook-python-dependencies: | [ + {"name": "JupyterLab","version": "4.2"}, {"name": "Intel-TensorFlow", "version": "2.14"}, {"name": "Tensorboard", "version": "2.14"}, {"name": "Kafka-Python", "version": "2.0"}, diff --git a/scripts/test_jupyter_with_papermill.sh b/scripts/test_jupyter_with_papermill.sh new file mode 100755 index 000000000..1f0c4f99b --- /dev/null +++ b/scripts/test_jupyter_with_papermill.sh @@ -0,0 +1,430 @@ +#! /usr/bin/env bash + +## Description: +## +## This script is intended to be invoked via the Makefile test-% target of the notebooks repository and assumes the deploy9-% target +## has been previously executed. It replaces the legacy 'test_with_papermill' function previously defined in the Makefile. +## +## The script first checks that a notebook workload is running and has a k8s Service object exposed. Once verified: +## - the relevant imagestream manifest from https://github.com/opendatahub-io/notebooks/tree/main/manifests/base is used to generate +## an expected_versions.json file inside the running pod, which acts as the "source of truth" when asserting against installed versions of Python packages +## - a test_notebook.ipynb file is copied into the running pod if one is defined in jupyter/*/test/test_notebook.ipynb +## - for images inherited from the datascience notebook image, the minimal and datascience notebook test files are +## sequentially copied into the running pod +## - for each test_notebook.ipynb file that is copied into the running pod, a test suite is invoked via papermill +## - test execution is considered failed if the papermill output contains the string 'FAILED' +## +## Currently this script only supports jupyter notebooks running on ubi9. 
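##
## Illustrative invocation sketch (the concrete target name below is an assumed example; the authoritative
## target names live in the Makefile itself):
##
##   make deploy9-cuda-jupyter-tensorflow-ubi9-python-3.11   # deploy the notebook workload under test
##   make test-cuda-jupyter-tensorflow-ubi9-python-3.11      # the test-% target then effectively runs:
##   #   scripts/test_jupyter_with_papermill.sh cuda-jupyter-tensorflow-ubi9-python-3.11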
+## +## Dependencies: +## +## - git: https://www.man7.org/linux/man-pages/man1/git.1.html +## - kubectl: https://kubernetes.io/docs/reference/kubectl/ +## - a local copy of kubectl is downloaded via the Makefile bin/kubectl target, and stored in bin/kubectl within the notebooks repo +## - yq: https://mikefarah.gitbook.io/yq +## - a local copy of yq is downloaded via the Makefile bin/yq target, and stored in bin/yq within the notebooks repo +## - wget: https://www.man7.org/linux/man-pages/man1/wget.1.html +## - curl: https://www.man7.org/linux/man-pages/man1/curl.1.html +## - kill: https://www.man7.org/linux/man-pages/man1/kill.1.html +## +## Usage: +## +## test_jupyter_with_papermill.sh +## - Intended to be invoked from the test-% target of the Makefile +## - Arguments +## - +## - the resolved wildcard value from the Makefile test-% pattern-matching rule +## +## + + +set -uox pipefail + +# Description: +# Returns the underlying operating system of the notebook based on the notebook name +# - presently, all jupyter notebooks run on ubi9 +# +# Arguments: +# $1 : Name of the notebook workload running on the cluster +# +# Returns: +# Name of operating system for the notebook or empty string if not recognized +function _get_os_flavor() +{ + local full_notebook_name="${1:-}" + + local os_flavor= + case "${full_notebook_name}" in + *ubi9-*) + os_flavor='ubi9' + ;; + *) + ;; + esac + + printf '%s' "${os_flavor}" +} + +# Description: +# Returns the accelerator of the notebook based on the notebook name +# - Due to existing build logic, cuda- prefix missing on pytorch target name +# +# Note: intel notebooks being deprecated soon +# +# Arguments: +# $1 : Name of the notebook workload running on the cluster +# +# Returns: +# Name of accelerator required for the notebook or empty string if none required +function _get_accelerator_flavor() +{ + local full_notebook_name="${1:-}" + + local accelerator_flavor= + case "${full_notebook_name}" in + *intel-*) + accelerator_flavor='intel' + ;; + *cuda-* | jupyter-pytorch-*) + accelerator_flavor='cuda' + ;; + *rocm-*) + accelerator_flavor='rocm' + ;; + *) + ;; + esac + + printf '%s' "${accelerator_flavor}" +} + +# Description: +# Returns the absolute path of notebook resources in the notebooks/ repo based on the notebook name +# +# Arguments: +# $1 : Name of the notebook identifier +# $2 : [optional] Subdirectory to append to computed absolute path +# - path should NOT start with a leading / +# +# Returns: +# Absolute path to the jupyter notebook directory for the given notebook test target +function _get_jupyter_notebook_directory() +{ + local notebook_id="${1:-}" + local subpath="${2:-}" + + local jupyter_base="${root_repo_directory}/jupyter" + local directory="${jupyter_base}/${notebook_id}/${os_flavor}-${python_flavor}${subpath:+"/$subpath"}" + + printf '%s' "${directory}" +} + +# Description: +# Returns the notebook name as defined by the app label of the relevant kustomization.yaml +# Unfortunately a necessary preprocessing function due to numerous naming inconsistencies +# with the Makefile targets and notebooks repo +# +# Arguments: +# $1 : Value of the test-% wildcard from the notebooks repo Makefile +# +# Returns: +# Name of the notebook as defined by the workload app label +function _get_notebook_name() +{ + local test_target="${1:-}" + + local raw_notebook_name= + raw_notebook_name=$( tr '.' 
'-' <<< "${test_target#'cuda-'}" ) + + local jupyter_notebook_prefix='jupyter' + local rocm_target_prefix="rocm-${jupyter_notebook_prefix}" + + local notebook_name= + case "${raw_notebook_name}" in + *$jupyter_minimal_notebook_id*) + local jupyter_stem="${raw_notebook_name#*"$jupyter_notebook_prefix"}" + notebook_name="${jupyter_notebook_prefix}${jupyter_stem}" + ;; + $rocm_target_prefix*) + notebook_name=jupyter-rocm${raw_notebook_name#"$rocm_target_prefix"} + ;; + *) + notebook_name="${raw_notebook_name}" + ;; + esac + + printf '%s' "${notebook_name}" +} + +# Description: +# A blocking function that queries the cluster to until the notebook workload enters a Ready state +# Once the workload is Ready, the function will port-forward to the relevant Service resource and attempt +# to ping the Jupyterlab API endpoint. Upon success, the port-forward process is terminated. +# +# Arguments: +# $1 : Name of the notebook as defined by the workload app label +# +# Returns: +# Name of the notebook as defined by the workload app label +function _wait_for_workload() +{ + local notebook_name="${1:-}" + + "${kbin}" wait --for=condition=ready pod -l app="${notebook_name}" --timeout=600s + "${kbin}" port-forward "svc/${notebook_name}-notebook" 8888:8888 & + local pf_pid=$! + curl --retry 5 --retry-delay 5 --retry-connrefused http://localhost:8888/notebook/opendatahub/jovyan/api ; + kill ${pf_pid} +} + +# Description: +# Computes the absolute path of the imagestream manifest for the notebook under test +# Note: intel notebooks being deprecated soon +# +# +# Arguments: +# $1 : Name of the notebook identifier +# +# Returns: +# Absolute path to the iamgestream manifest file corresponding to the notebook under test +function _get_source_of_truth_filepath() +{ + local notebook_id="${1##*/}" + + local manifest_directory="${root_repo_directory}/manifests" + local imagestream_directory="${manifest_directory}/base" + if [ "${accelerator_flavor}" = 'intel' ]; then + imagestream_directory="${manifest_directory}/overlays/additional" + fi + + local file_suffix='notebook-imagestream.yaml' + local filename= + case "${notebook_id}" in + *$jupyter_minimal_notebook_id*) + filename="jupyter-${accelerator_flavor:+"$accelerator_flavor"-}${notebook_id}-${file_suffix}" + if [ "${accelerator_flavor}" = 'cuda' ]; then + filename="jupyter-${notebook_id}-gpu-${file_suffix}" + fi + ;; + *$jupyter_datascience_notebook_id* | *$jupyter_trustyai_notebook_id*) + filename="jupyter-${notebook_id}-${file_suffix}" + ;; + *$jupyter_ml_notebook_id*) + filename="jupyter-intel-ml-${file_suffix}" + ;; + *$jupyter_pytorch_notebook_id* | *$jupyter_tensorflow_notebook_id*) + filename="jupyter-${accelerator_flavor:+"$accelerator_flavor"-}${notebook_id}-${file_suffix}" + if [ "${accelerator_flavor}" = 'cuda' ]; then + filename="jupyter-${notebook_id}-${file_suffix}" + fi + ;; + esac + + local filepath="${imagestream_directory}/${filename}" + + if ! [ -e "${filepath}" ]; then + printf '%s\n' "Unable to determine imagestream manifest for '${test_target}'. Computed filepath '${filepath}' does not exist." + exit 1 + fi + + printf '%s' "${filepath}" +} + +# Description: +# Creates an 'expected_version.json' file based on the relevant imagestream manifest within the notebooks repo relevant to the notebook under test on the +# running pod to be used as the "source of truth" for test_notebook.ipynb tests that assert on package version. +# +# Each test suite that asserts against package versions must include necessary logic to honor this file. 
+# +# Note: intel notebooks being deprecated soon +# +# Arguments: +# $1 : Name of the notebook identifier +function _create_test_versions_source_of_truth() +{ + local notebook_id="${1:-}" + + local version_filename='expected_versions.json' + + local test_version_truth_filepath= + test_version_truth_filepath="$( _get_source_of_truth_filepath "${notebook_id}" )" + + local nbdime_version='4.0' + if [ "${accelerator_flavor}" = 'intel' ]; then + nbdime_version='3.2' + fi + local nbgitpuller_version='1.2' + + expected_versions=$("${yqbin}" '.spec.tags[0].annotations | .["opendatahub.io/notebook-software"] + .["opendatahub.io/notebook-python-dependencies"]' "${test_version_truth_filepath}" | + "${yqbin}" -N -p json -o yaml | + nbdime_version=${nbdime_version} nbgitpuller_version=${nbgitpuller_version} "${yqbin}" '. + [{"name": "nbdime", "version": strenv(nbdime_version)},{"name": "nbgitpuller", "version": strenv(nbgitpuller_version)}]' | + "${yqbin}" -N -o json '[ .[] | (.name | key) = "key" | (.version | key) = "value" ] | from_entries') + + # Following disabled shellcheck intentional as the intended behavior is for those ${1}, ${2} variables to only be expanded when running within kubernetes + # shellcheck disable=SC2016 + "${kbin}" exec "${notebook_workload_name}" -- /bin/sh -c 'touch "${1}"; printf "%s\n" "${2}" > "${1}"' -- "${version_filename}" "${expected_versions}" +} + +# Description: +# Main "test runner" function that copies the relevant test_notebook.ipynb file for the notebook under test into +# the running pod and then invokes papermill within the pod to actually execute test suite. +# +# Script will return non-zero exit code in the event all unit tests were not successfully executed. Diagnostic messages +# are printed in the event of a failure. +# +# Arguments: +# $1 : Name of the notebook identifier +function _run_test() +{ + local notebook_id="${1:-}" + + local test_notebook_file='test_notebook.ipynb' + local repo_test_directory= + repo_test_directory="$(_get_jupyter_notebook_directory "${notebook_id}" "test")" + local output_file_prefix= + output_file_prefix=$(tr '/' '-' <<< "${notebook_id}_${os_flavor}") + + "${kbin}" cp "${repo_test_directory}/${test_notebook_file}" "${notebook_workload_name}:./${test_notebook_file}" + + if ! "${kbin}" exec "${notebook_workload_name}" -- /bin/sh -c "python3 -m papermill ${test_notebook_file} ${output_file_prefix}_output.ipynb --kernel python3 --stderr-file ${output_file_prefix}_error.txt" ; then + echo "ERROR: The ${notebook_id} ${os_flavor} notebook encountered a failure. To investigate the issue, you can review the logs located in the ocp-ci cluster on 'artifacts/notebooks-e2e-tests/jupyter-${notebook_id}-${os_flavor}-${python_flavor}-test-e2e' directory or run 'cat ${output_file_prefix}_error.txt' within your container. The make process has been aborted." + exit 1 + fi + + local test_result= + test_result=$("${kbin}" exec "${notebook_workload_name}" -- /bin/sh -c "grep FAILED ${output_file_prefix}_error.txt" 2>&1) + case "$?" in + 0) + printf '\n\n%s\n' "ERROR: The ${notebook_id} ${os_flavor} notebook encountered a test failure. The make process has been aborted." + "${kbin}" exec "${notebook_workload_name}" -- /bin/sh -c "cat ${output_file_prefix}_error.txt" + exit 1 + ;; + 1) + printf '\n%s\n\n' "The ${notebook_id} ${os_flavor} notebook tests ran successfully" + ;; + 2) + printf '\n\n%s\n' "ERROR: The ${notebook_id} ${os_flavor} notebook encountered an unexpected failure. The make process has been aborted." 
+ printf '%s\n\n' "${test_result}" + exit 1 + ;; + *) + esac +} + +# Description: +# Checks if the notebook under test is derived from the datasciences notebook. This determination is subsequently used to know whether or not +# additional papermill tests should be invoked against the running notebook resource. +# +# The notebook_id argument provided to the function is simply checked against a hard-coded array of notebook ids known to inherit from the +# datascience notebook. +# +# Returns successful exit code if the notebook inherits from the datascience image. +# +# Arguments: +# $1 : Name of the notebook identifier +function _image_derived_from_datascience() +{ + local notebook_id="${1:-}" + + local datascience_derived_images=("${jupyter_datascience_notebook_id}" "${jupyter_trustyai_notebook_id}" "${jupyter_tensorflow_notebook_id}" "${jupyter_pytorch_notebook_id}") + + printf '%s\0' "${datascience_derived_images[@]}" | grep -Fz -- "${notebook_id}" +} + +# Description: +# Convenience function that will invoke the minimal and datascience papermill tests against the running notebook workload +function _test_datascience_notebook() +{ + _run_test "${jupyter_minimal_notebook_id}" + _run_test "${jupyter_datascience_notebook_id}" +} + +# Description: +# "Orchestration" function computes necessary parameters and prepares the running notebook workload for papermill tests to be invoked +# - notebook_id is calculated based on the workload name and computed accelerator value +# - Appropriate "source of truth" file to be used in asserting package version is copied into the running pod +# - papermill is installed on the running pod +# - All relevant tests based on the notebook_id are invoked +function _handle_test() +{ + local notebook_id= + + # Due to existing logic - cuda accelerator value needs to be treated as empty string + local accelerator_flavor="${accelerator_flavor}" + accelerator_flavor="${accelerator_flavor##'cuda'}" + + + case "${notebook_workload_name}" in + *${jupyter_minimal_notebook_id}-*) + notebook_id="${jupyter_minimal_notebook_id}" + ;; + *${jupyter_datascience_notebook_id}-*) + notebook_id="${jupyter_datascience_notebook_id}" + ;; + *-${jupyter_trustyai_notebook_id}-*) + notebook_id="${jupyter_trustyai_notebook_id}" + ;; + *-${jupyter_ml_notebook_id}-*) + notebook_id="${accelerator_flavor:+$accelerator_flavor/}${jupyter_ml_notebook_id}" + ;; + *${jupyter_tensorflow_notebook_id}-*) + notebook_id="${accelerator_flavor:+$accelerator_flavor/}${jupyter_tensorflow_notebook_id}" + ;; + *${jupyter_pytorch_notebook_id}-*) + notebook_id="${accelerator_flavor:+$accelerator_flavor/}${jupyter_pytorch_notebook_id}" + ;; + *) + printf '%s\n' "No matching condition found for ${notebook_workload_name}." + exit 1 + ;; + esac + + _create_test_versions_source_of_truth "${notebook_id}" + + "${kbin}" exec "${notebook_workload_name}" -- /bin/sh -c "python3 -m pip install papermill" + + if _image_derived_from_datascience "${notebook_id}" ; then + _test_datascience_notebook + fi + + if [ -n "${notebook_id}" ] && ! 
[ "${notebook_id}" = "${jupyter_datascience_notebook_id}" ]; then + _run_test "${notebook_id}" + fi +} + +test_target="${1:-}" + +# Hard-coded list of supported "notebook_id" values - based on notebooks/ repo Makefile +jupyter_minimal_notebook_id='minimal' +jupyter_datascience_notebook_id='datascience' +jupyter_trustyai_notebook_id='trustyai' +jupyter_ml_notebook_id='ml' # Note: intel notebooks being deprecated soon +jupyter_pytorch_notebook_id='pytorch' +jupyter_tensorflow_notebook_id='tensorflow' + +notebook_name=$( _get_notebook_name "${test_target}" ) +python_flavor="python-${test_target//*-python-/}" +os_flavor=$(_get_os_flavor "${test_target}") +accelerator_flavor=$(_get_accelerator_flavor "${test_target}") + +root_repo_directory=$(readlink -f "$(git rev-parse --show-toplevel)") + +kbin=$(readlink -f "${root_repo_directory}/bin/kubectl") +if ! [ -e "${kbin}" ]; then + printf "%s" "missing bin/kubectl" + exit 1 +fi + +yqbin=$(readlink -f "${root_repo_directory}/bin/yq") +if ! [ -e "${yqbin}" ]; then + printf "%s" "missing bin/yq" + exit 1 +fi + +printf '%s\n' "Waiting for ${notebook_name} workload to be ready. This could take a few minutes..." +_wait_for_workload "${notebook_name}" + +notebook_workload_name=$("${kbin}" get pods -l app="${notebook_name}" -o jsonpath='{.items[0].metadata.name}') + +_handle_test +