From 3c2c40a78adcf92bbe617375d38b915cec7bd9ea Mon Sep 17 00:00:00 2001
From: Julien Herzen <j.herzen@gmail.com>
Date: Wed, 22 Feb 2023 17:20:31 +0100
Subject: [PATCH 01/12] make prophet optional again, adapt install readme

---
 INSTALL.md                         | 62 ++++++++++++++++--------------
 requirements/core.txt              |  1 -
 requirements/dev-all-noprophet.txt |  4 ++
 requirements/dev-all.txt           |  1 +
 requirements/prophet.txt           |  1 +
 setup.py                           |  6 ++-
 6 files changed, 43 insertions(+), 32 deletions(-)
 create mode 100644 requirements/dev-all-noprophet.txt
 create mode 100644 requirements/prophet.txt

diff --git a/INSTALL.md b/INSTALL.md
index 8d57249bd3..1a2ede5bf0 100644
--- a/INSTALL.md
+++ b/INSTALL.md
@@ -6,12 +6,12 @@ Below, we detail how to install Darts using either `conda` or `pip`.
 Currently only the x86_64 architecture with Python 3.7-3.10
 is fully supported with conda; consider using PyPI if you are running into troubles.
 
-To create a conda environment for Python 3.9
+Create a conda environment, e.g., for Python 3.10
 (after installing [conda](https://docs.conda.io/en/latest/miniconda.html)):
 
-    conda create --name <env-name> python=3.9
+    conda create --name <env-name> python=3.10
 
-Don't forget to activate your virtual environment
+Activate your virtual environment
 
     conda activate <env-name>
 
@@ -19,10 +19,7 @@ As some models have relatively heavy dependencies, we provide two conda-forge pa
 
 * Install darts with all available models (recommended): `conda install -c conda-forge -c pytorch u8darts-all`.
 * Install core + neural networks (PyTorch): `conda install -c conda-forge -c pytorch u8darts-torch`
-* Install core only (without neural networks or AutoARIMA): `conda install -c conda-forge u8darts`
-
-For GPU support, please follow the instructions to install CUDA in the [PyTorch installation guide](https://pytorch.org/get-started/locally/).
-
+* Install core only (without neural networks): `conda install -c conda-forge u8darts`
 
 ## From PyPI
 Install darts with all available models: `pip install darts`.
@@ -37,31 +34,38 @@ we also maintain the `u8darts` package, which provides the following alternate l
 * Install core + neural networks (PyTorch): `pip install "u8darts[torch]"`
 * Install core + AutoARIMA: `pip install "u8darts[pmdarima]"`
 
-### Enabling Support for LightGBM
+## Other Information
 
-To enable support for LightGBM in Darts, please follow the
-[installation instructions](https://lightgbm.readthedocs.io/en/latest/Installation-Guide.html) for your OS.
+### Issues with LightGBM
+If you run into issues with LightGBM when installing Darts, please follow the
+[installation instructions](https://lightgbm.readthedocs.io/en/latest/Installation-Guide.html) for your OS,
+and then try re-installing Darts.
 
-#### MacOS Issues with LightGBM
-At the time of writing, there is an issue with ``libomp`` 12.0.1 that results in
-[segmentation fault on Mac OS Big Sur](https://github.com/microsoft/LightGBM/issues/4229).
-Here's the procedure to downgrade the ``libomp`` library (from the
-[original Github issue](https://github.com/microsoft/LightGBM/issues/4229#issue-867528353)):
-* [Install brew](https://brew.sh/) if you don't already have it.
-* Install `wget` if you don't already have it : `brew install wget`.
-* Run the commands below:
+For instance, on macOS you may have to run this (using [brew](https://brew.sh/)):
 ```
-wget https://raw.githubusercontent.com/Homebrew/homebrew-core/fb8323f2b170bd4ae97e1bac9bf3e2983af3fdb0/Formula/libomp.rb
-brew unlink libomp
-brew install libomp.rb
+brew install cmake
+brew install libomp
 ```
+and then run `pip install darts` again.
+
+### Enabling support for Prophet
+As of version 0.24.0, `darts` no longer includes Prophet as a dependency by default, because its build
+process was frequently causing issues, [notably on Apple silicon](https://github.com/facebook/prophet/issues/2002).
+
+If you want to use Darts' `Prophet` model, you will need to install the `prophet` package (version 1.1 or later).
+We refer to the [Prophet README install guide](https://github.com/facebook/prophet#installation-in-python---pypi-release) for instructions.
+
+### Enabling GPU support
+Darts relies on PyTorch for the neural network models.
+For GPU support, please follow the instructions to install CUDA in the [PyTorch installation guide](https://pytorch.org/get-started/locally/).
 
-#### Test environment Apple M1 processor
+### Using an emulated x64 environment on Apple silicon
+The installation of `darts` has been tested to work on Apple silicon (Python 3.10, OSX Ventura 13.2.1).
+It requires following the instructions to make LightGBM work 
+[here](https://github.com/unit8co/darts/blob/master/INSTALL.md#issues-with-lightgbm).
 
-We currently recommend to run Darts in an x_64 emulated environment on Mac computers with the Silicon M1 processor,
-instead of trying to install directly with native arm64 packages, many of the dependent packages still have compatibility 
-issues. The following is a proposed procedure, if you tested other procedures on similar hardware and they worked, 
-please let us know about them by opening an issue or by updating this file and opening a PR. 
+If you still run into issues on Apple silicon, you can consider using Rosetta
+to run an emulated x64 environment by following the steps below:
 
 Before you start make sure that you have rosetta2 installed by running: 
 ```
@@ -85,8 +89,8 @@ Below are the necessary instructions to create and configure the environment:
   conda activate env_name
   ```
 - Install darts: `pip install darts`
-  - With this method of installation, lightgbm might still have issues finding the libomp library.
-  The following procedure is to guarantee that the correct libomp (11.1.0) library is linked.
+  - If lightgbm still cannot find the libomp library after this,
+  the following procedure guarantees that the correct libomp (11.1.0) library is linked.
     - Unlink the existing libomp, from terminal : `brew unlink libomp`
     - Setup a homebrew installer that is compatible with x_64 packages (follow this [blog](https://medium.com/mkdir-awesome/how-to-install-x86-64-homebrew-packages-on-apple-m1-macbook-54ba295230f) 
     post):
@@ -106,7 +110,7 @@ Below are the necessary instructions to create and configure the environment:
     ```
     - Verify that your lightgbm works by importing lightgbm from your python env. It should not give library loading errors. 
 
-## Running the examples only, without installing:
+### Running the examples only, without installing:
 
 If the conda setup is causing too many problems, we also provide a Docker image with everything set up for you and ready-to-use Python notebooks with demo examples.
 To run the example notebooks without installing our libraries natively on your machine, you can use our Docker image:
diff --git a/requirements/core.txt b/requirements/core.txt
index a1b6758911..7cd507f6a1 100644
--- a/requirements/core.txt
+++ b/requirements/core.txt
@@ -7,7 +7,6 @@ nfoursid>=1.0.0
 numpy>=1.19.0
 pandas>=1.0.5
 pmdarima>=1.8.0
-prophet>=1.1.1
 pyod>=0.9.5
 requests>=2.22.0
 scikit-learn>=1.0.1
diff --git a/requirements/dev-all-noprophet.txt b/requirements/dev-all-noprophet.txt
new file mode 100644
index 0000000000..52bca02e7b
--- /dev/null
+++ b/requirements/dev-all-noprophet.txt
@@ -0,0 +1,4 @@
+-r core.txt
+-r dev.txt
+-r release.txt
+-r torch.txt
diff --git a/requirements/dev-all.txt b/requirements/dev-all.txt
index 52bca02e7b..2dbb7ab411 100644
--- a/requirements/dev-all.txt
+++ b/requirements/dev-all.txt
@@ -2,3 +2,4 @@
 -r dev.txt
 -r release.txt
 -r torch.txt
+-r prophet.txt
diff --git a/requirements/prophet.txt b/requirements/prophet.txt
new file mode 100644
index 0000000000..fd61edd6e5
--- /dev/null
+++ b/requirements/prophet.txt
@@ -0,0 +1 @@
+prophet>=1.1.1
diff --git a/setup.py b/setup.py
index fc16ca5c95..90e81ffa98 100644
--- a/setup.py
+++ b/setup.py
@@ -10,7 +10,9 @@ def read_requirements(path):
 base_reqs = read_requirements("requirements/core.txt")
 torch_reqs = read_requirements("requirements/torch.txt")
 
-all_reqs = base_reqs + torch_reqs
+# Note: Prophet is not included in darts package by default
+
+reqs = base_reqs + torch_reqs
 
 with open("README.md") as fh:
     LONG_DESCRIPTION = fh.read()
@@ -38,7 +40,7 @@ def read_requirements(path):
     maintainer_email="darts@unit8.co",
     license="Apache License 2.0",
     packages=find_packages(),
-    install_requires=all_reqs,
+    install_requires=reqs,
     package_data={
         "darts": ["py.typed"],
     },

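The split introduced by this patch can be sketched in a few lines. The `read_requirements` helper mirrors the one in `setup.py`; the temporary file is an illustrative stand-in for the new `requirements/prophet.txt`:

```python
import os
import tempfile

def read_requirements(path):
    """Return the list of requirement specifiers in a pip-style file."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

# Stand-in for the new requirements/prophet.txt created by this patch.
tmpdir = tempfile.mkdtemp()
prophet_txt = os.path.join(tmpdir, "prophet.txt")
with open(prophet_txt, "w") as f:
    f.write("prophet>=1.1.1\n")

prophet_reqs = read_requirements(prophet_txt)
print(prophet_reqs)  # ['prophet>=1.1.1']
```

Keeping the prophet pin in its own file lets `setup.py` build `install_requires` without it, while other entry points (the conda recipe, dev requirements) can still opt in.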
From cd717d77228baba2d1f0cddecb836b368495308b Mon Sep 17 00:00:00 2001
From: Julien Herzen <j.herzen@gmail.com>
Date: Thu, 23 Feb 2023 09:47:50 +0100
Subject: [PATCH 02/12] adapt u8darts setup as well

---
 setup_u8darts.py | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/setup_u8darts.py b/setup_u8darts.py
index 2034b05115..d10d5776e3 100644
--- a/setup_u8darts.py
+++ b/setup_u8darts.py
@@ -9,8 +9,9 @@ def read_requirements(path):
 
 base_reqs = read_requirements("requirements/core.txt")
 torch_reqs = read_requirements("requirements/torch.txt")
+prophet_reqs = read_requirements("requirements/prophet.txt")
 
-all_reqs = base_reqs + torch_reqs
+all_reqs = base_reqs + torch_reqs + prophet_reqs
 
 with open("README.md") as fh:
     LONG_DESCRIPTION = fh.read()
@@ -39,7 +40,7 @@ def read_requirements(path):
     license="Apache License 2.0",
     packages=find_packages(),
     install_requires=base_reqs,
-    extras_require={"all": all_reqs, "torch": torch_reqs},
+    extras_require={"all": all_reqs, "torch": torch_reqs, "prophet": prophet_reqs},
     package_data={
         "darts": ["py.typed"],
     },

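The resulting extras layout can be sketched as follows; the requirement lists are illustrative stand-ins for the files read by `setup_u8darts.py`:

```python
# Illustrative stand-ins for the lists read with read_requirements().
base_reqs = ["numpy>=1.19.0", "pandas>=1.0.5"]
torch_reqs = ["pytorch-lightning>=1.5.0"]
prophet_reqs = ["prophet>=1.1.1"]

all_reqs = base_reqs + torch_reqs + prophet_reqs

# Mirrors the extras_require mapping in setup_u8darts.py:
# `pip install "u8darts[prophet]"` installs base_reqs plus the "prophet" extra.
extras_require = {"all": all_reqs, "torch": torch_reqs, "prophet": prophet_reqs}
print(sorted(extras_require))  # ['all', 'prophet', 'torch']
```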
From 8f2c1b47fcd1c7dea072aa151a5311a58c9d71aa Mon Sep 17 00:00:00 2001
From: dennisbader <dennis.bader@gmx.ch>
Date: Wed, 2 Aug 2023 10:24:02 +0200
Subject: [PATCH 03/12] remove prophet, lightgbm, catboost for core
 requirements

---
 requirements/core.txt                                     | 2 --
 requirements/{dev-all-noprophet.txt => dev-all-torch.txt} | 0
 requirements/dev-all.txt                                  | 2 +-
 requirements/no-torch.txt                                 | 3 +++
 requirements/prophet.txt                                  | 1 -
 setup.py                                                  | 2 +-
 setup_u8darts.py                                          | 6 +++---
 7 files changed, 8 insertions(+), 8 deletions(-)
 rename requirements/{dev-all-noprophet.txt => dev-all-torch.txt} (100%)
 create mode 100644 requirements/no-torch.txt
 delete mode 100644 requirements/prophet.txt

diff --git a/requirements/core.txt b/requirements/core.txt
index 76ed32df1a..c88794026a 100644
--- a/requirements/core.txt
+++ b/requirements/core.txt
@@ -1,7 +1,5 @@
-catboost>=1.0.6,<1.2.0
 holidays>=0.11.1
 joblib>=0.16.0
-lightgbm>=3.2.0
 matplotlib>=3.3.0
 nfoursid>=1.0.0
 numpy>=1.19.0
diff --git a/requirements/dev-all-noprophet.txt b/requirements/dev-all-torch.txt
similarity index 100%
rename from requirements/dev-all-noprophet.txt
rename to requirements/dev-all-torch.txt
diff --git a/requirements/dev-all.txt b/requirements/dev-all.txt
index 2dbb7ab411..f2615b17fe 100644
--- a/requirements/dev-all.txt
+++ b/requirements/dev-all.txt
@@ -2,4 +2,4 @@
 -r dev.txt
 -r release.txt
 -r torch.txt
--r prophet.txt
+-r no-torch.txt
diff --git a/requirements/no-torch.txt b/requirements/no-torch.txt
new file mode 100644
index 0000000000..ebe1b7daf2
--- /dev/null
+++ b/requirements/no-torch.txt
@@ -0,0 +1,3 @@
+prophet>=1.1.1
+catboost>=1.0.6
+lightgbm>=3.2.0
diff --git a/requirements/prophet.txt b/requirements/prophet.txt
deleted file mode 100644
index fd61edd6e5..0000000000
--- a/requirements/prophet.txt
+++ /dev/null
@@ -1 +0,0 @@
-prophet>=1.1.1
diff --git a/setup.py b/setup.py
index 6a87d4ecd5..821fcbd930 100644
--- a/setup.py
+++ b/setup.py
@@ -10,7 +10,7 @@ def read_requirements(path):
 base_reqs = read_requirements("requirements/core.txt")
 torch_reqs = read_requirements("requirements/torch.txt")
 
-# Note: Prophet is not included in darts package by default
+# Note: Prophet, LightGBM and CatBoost are not included in the darts package by default
 
 reqs = base_reqs + torch_reqs
 
diff --git a/setup_u8darts.py b/setup_u8darts.py
index 4a21b2e328..445c2f74d0 100644
--- a/setup_u8darts.py
+++ b/setup_u8darts.py
@@ -9,9 +9,9 @@ def read_requirements(path):
 
 base_reqs = read_requirements("requirements/core.txt")
 torch_reqs = read_requirements("requirements/torch.txt")
-prophet_reqs = read_requirements("requirements/prophet.txt")
+no_torch_reqs = read_requirements("requirements/no-torch.txt")
 
-all_reqs = base_reqs + torch_reqs + prophet_reqs
+all_reqs = base_reqs + torch_reqs + no_torch_reqs
 
 with open("README.md") as fh:
     LONG_DESCRIPTION = fh.read()
@@ -40,7 +40,7 @@ def read_requirements(path):
     license="Apache License 2.0",
     packages=find_packages(),
     install_requires=base_reqs,
-    extras_require={"all": all_reqs, "torch": torch_reqs, "prophet": prophet_reqs},
+    extras_require={"all": all_reqs, "torch": torch_reqs, "no-torch": no_torch_reqs},
     package_data={
         "darts": ["py.typed"],
     },

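How the `-r` includes compose after this reshuffle can be sketched with a small resolver. The file contents below are illustrative, mirroring the `requirements/` layout (the real `dev-all.txt` also pulls in `dev.txt` and `release.txt`):

```python
# Illustrative contents mirroring the requirements/ layout after this patch.
files = {
    "core.txt": ["holidays>=0.11.1", "numpy>=1.19.0"],
    "torch.txt": ["pytorch-lightning>=1.5.0"],
    "no-torch.txt": ["prophet>=1.1.1", "catboost>=1.0.6", "lightgbm>=3.2.0"],
    "dev-all.txt": ["-r core.txt", "-r torch.txt", "-r no-torch.txt"],
}

def resolve(name):
    """Expand pip-style '-r' includes into a flat requirement list."""
    reqs = []
    for line in files[name]:
        if line.startswith("-r "):
            reqs.extend(resolve(line[len("-r "):]))
        else:
            reqs.append(line)
    return reqs

print(resolve("dev-all.txt"))
```

Grouping prophet, catboost and lightgbm into one `no-torch.txt` means a single extra covers all three heavy non-torch dependencies instead of one file per package.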
From 6b38ce9bbfe733b01c431353173958be0f74b312 Mon Sep 17 00:00:00 2001
From: dennisbader <dennis.bader@gmx.ch>
Date: Wed, 2 Aug 2023 10:57:14 +0200
Subject: [PATCH 04/12] simplify the module importing

---
 darts/models/__init__.py | 70 +++++++---------------------------------
 darts/models/utils.py    | 19 +++++++++++
 2 files changed, 30 insertions(+), 59 deletions(-)
 create mode 100644 darts/models/utils.py

diff --git a/darts/models/__init__.py b/darts/models/__init__.py
index c288320be4..08513bba7b 100644
--- a/darts/models/__init__.py
+++ b/darts/models/__init__.py
@@ -26,6 +26,7 @@
 from darts.models.forecasting.tbats_model import BATS, TBATS
 from darts.models.forecasting.theta import FourTheta, Theta
 from darts.models.forecasting.varima import VARIMA
+from darts.models.utils import NotImportedModule
 
 try:
     from darts.models.forecasting.block_rnn_model import BlockRNNModel
@@ -50,48 +51,17 @@
 try:
     from darts.models.forecasting.lgbm import LightGBMModel
 except ModuleNotFoundError:
-    logger.warning(
-        "Support for LightGBM not available. "
-        "To enable LightGBM support in Darts, follow the detailed "
-        "install instructions for LightGBM in the README: "
-        "https://github.com/unit8co/darts/blob/master/INSTALL.md"
-    )
-    # TODO: simpler option would be to write LightGBMModel=None
-    # Prevent a second ImportError that would interrupt the import
-    class NotImportedLightGBM:
-        usable = False
-
-    LightGBMModel = NotImportedLightGBM()
+    LightGBMModel = NotImportedModule(module_name="LightGBM")
 
 try:
     from darts.models.forecasting.prophet_model import Prophet
 except ImportError:
-    logger.warning(
-        "The prophet module could not be imported. "
-        "To enable support for Prophet model, follow "
-        "the instruction in the README: "
-        "https://github.com/unit8co/darts/blob/master/INSTALL.md"
-    )
-
-    class NotImportedProphet:
-        usable = False
-
-    Prophet = NotImportedProphet()
+    Prophet = NotImportedModule(module_name="Prophet")
 
 try:
     from darts.models.forecasting.catboost_model import CatBoostModel
 except ModuleNotFoundError:
-    logger.warning(
-        "The catboost module could not be imported. "
-        "To enable support for CatBoostModel model, "
-        "follow the instruction in the README: "
-        "https://github.com/unit8co/darts/blob/master/INSTALL.md"
-    )
-
-    class NotImportedCatBoostModel:
-        usable = False
-
-    CatBoostModel = NotImportedCatBoostModel()
+    CatBoostModel = NotImportedModule(module_name="CatBoost")
 
 try:
     from darts.models.forecasting.croston import Croston
@@ -102,39 +72,21 @@ class NotImportedCatBoostModel:
 
 except ImportError:
     logger.warning(
-        "The statsforecast module could not be imported. "
+        "The StatsForecast module could not be imported. "
         "To enable support for the StatsForecastAutoARIMA, "
         "StatsForecastAutoETS and Croston models, please consider "
         "installing it."
     )
-
-    class NotImportedStatsForecastAutoARIMA:
-        usable = False
-
-    StatsForecastAutoARIMA = NotImportedStatsForecastAutoARIMA()
-
-    class NotImportedStatsForecastAutoETS:
-        usable = False
-
-    StatsForecastAutoETS = NotImportedStatsForecastAutoETS()
-
-    class NotImportedCroston:
-        usable = False
-
-    Croston = NotImportedCroston()
+    Croston = NotImportedModule(module_name="StatsForecast", warn=False)
+    StatsForecastAutoARIMA = NotImportedModule(module_name="StatsForecast", warn=False)
+    StatsForecastAutoCES = NotImportedModule(module_name="StatsForecast", warn=False)
+    StatsForecastAutoETS = NotImportedModule(module_name="StatsForecast", warn=False)
+    StatsForecastAutoTheta = NotImportedModule(module_name="StatsForecast", warn=False)
 
 try:
     from darts.models.forecasting.xgboost import XGBModel
 except ImportError:
-    logger.warning(
-        "The xgboost module could not be imported. "
-        "To enable support for XGBoost model, install the xgboost package."
-    )
-
-    class NotImportedXGBModel:
-        usable = False
-
-    XGBModel = NotImportedXGBModel()
+    XGBModel = NotImportedModule(module_name="XGBoost")
 
 from darts.models.filtering.gaussian_process_filter import GaussianProcessFilter
 from darts.models.filtering.kalman_filter import KalmanFilter
diff --git a/darts/models/utils.py b/darts/models/utils.py
new file mode 100644
index 0000000000..31a28a8bac
--- /dev/null
+++ b/darts/models/utils.py
@@ -0,0 +1,19 @@
+from darts.logging import get_logger
+
+logger = get_logger(__name__)
+
+
+class NotImportedModule:
+    """Helper class for handling import errors of optional dependencies."""
+
+    usable = False
+
+    def __init__(self, module_name: str, warn: bool = True):
+        if warn:
+            txt = (
+                f"The {module_name} module could not be imported. "
+                "To enable LightGBM support in Darts, follow the detailed "
+                "install instructions for LightGBM in the README: "
+                "https://github.com/unit8co/darts/blob/master/INSTALL.md"
+            )
+            logger.warning(txt)

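The placeholder pattern introduced above can be exercised standalone. Here `some_optional_dep` is a hypothetical missing package standing in for prophet, lightgbm or catboost:

```python
import logging

logger = logging.getLogger(__name__)


class NotImportedModule:
    """Placeholder bound to a model name whose optional dependency is missing."""

    usable = False

    def __init__(self, module_name: str, warn: bool = True):
        if warn:
            logger.warning(
                f"The {module_name} module could not be imported. "
                "Follow the install instructions in INSTALL.md to enable it."
            )


# Mirrors darts/models/__init__.py: the name stays importable either way.
try:
    from some_optional_dep import SomeModel  # hypothetical optional dependency
except ImportError:
    SomeModel = NotImportedModule(module_name="SomeOptionalDep", warn=False)

# Callers check availability instead of catching ImportError themselves.
print(isinstance(SomeModel, NotImportedModule), SomeModel.usable)
```

Binding one small class instead of a bespoke `NotImportedX` class per model keeps the import guard to a single line, and downstream code (e.g., the test suites adapted in the next patch) can gate on `isinstance(Model, NotImportedModule)`.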
From 17904a9b9eac95c4ecf798d3ef0c5a7d2920ece0 Mon Sep 17 00:00:00 2001
From: dennisbader <dennis.bader@gmx.ch>
Date: Wed, 2 Aug 2023 12:56:13 +0200
Subject: [PATCH 05/12] update tests with new optional dependencies

---
 darts/models/forecasting/regression_model.py  |  12 +-
 .../explainability/test_shap_explainer.py     |  31 +-
 .../forecasting/test_historical_forecasts.py  |  11 +-
 .../test_local_forecasting_models.py          |  12 +-
 .../forecasting/test_probabilistic_models.py  |  13 +-
 .../tests/models/forecasting/test_prophet.py  |   4 +-
 .../forecasting/test_regression_models.py     | 377 ++++++++++--------
 7 files changed, 263 insertions(+), 197 deletions(-)

diff --git a/darts/models/forecasting/regression_model.py b/darts/models/forecasting/regression_model.py
index ce39af0d3e..1849dd8d1c 100644
--- a/darts/models/forecasting/regression_model.py
+++ b/darts/models/forecasting/regression_model.py
@@ -55,16 +55,6 @@
 
 logger = get_logger(__name__)
 
-try:
-    from catboost import CatBoostRegressor
-except ModuleNotFoundError:
-    logger.warning(
-        "The catboost module could not be imported. "
-        "To enable support for CatBoostRegressor, "
-        "follow the instruction in the README: "
-        "https://github.com/unit8co/darts/blob/master/INSTALL.md"
-    )
-
 
 class RegressionModel(GlobalForecastingModel):
     def __init__(
@@ -533,7 +523,7 @@ def fit(
                     self.model = MultiOutputRegressor(
                         self.model, n_jobs=n_jobs_multioutput_wrapper
                     )
-                elif isinstance(self.model, CatBoostRegressor):
+                elif self.model.__class__.__name__ == "CatBoostRegressor":
                     if (
                         self.model.get_params()["loss_function"]
                         == "RMSEWithUncertainty"
diff --git a/darts/tests/explainability/test_shap_explainer.py b/darts/tests/explainability/test_shap_explainer.py
index 754282faa1..9e99361a7e 100644
--- a/darts/tests/explainability/test_shap_explainer.py
+++ b/darts/tests/explainability/test_shap_explainer.py
@@ -16,11 +16,10 @@
 from darts.explainability.explainability_result import ShapExplainabilityResult
 from darts.explainability.shap_explainer import ShapExplainer
 from darts.models import (
-    CatBoostModel,
     ExponentialSmoothing,
-    LightGBMModel,
     LinearRegressionModel,
     RegressionModel,
+    XGBModel,
 )
 from darts.tests.base_test_class import DartsBaseTestClass
 
@@ -131,7 +130,7 @@ class ShapExplainerTestCase(DartsBaseTestClass):
     def test_creation(self):
 
         # Model should be fitted first
-        m = LightGBMModel(
+        m = XGBModel(
             lags=4,
             lags_past_covariates=[-1, -2, -3],
             lags_future_covariates=[0],
@@ -158,7 +157,7 @@ def test_creation(self):
                 self.target_ts,
             )
 
-        m = LightGBMModel(
+        m = XGBModel(
             lags=4,
             lags_past_covariates=[-1, -2, -3],
             lags_future_covariates=[0],
@@ -249,7 +248,7 @@ def test_creation(self):
         )
 
         # CatBoost
-        m = CatBoostModel(
+        m = XGBModel(
             lags=4,
             lags_past_covariates=[-1, -2, -6],
             lags_future_covariates=[0],
@@ -270,7 +269,7 @@ def test_creation(self):
             ShapExplainer(m, shap_method="bad_choice")
 
     def test_explain(self):
-        m = LightGBMModel(
+        m = XGBModel(
             lags=4,
             lags_past_covariates=[-1, -2, -3],
             lags_future_covariates=[0],
@@ -429,7 +428,7 @@ def test_explain(self):
         self.assertTrue(isinstance(shap_explain.explain(), ShapExplainabilityResult))
 
     def test_explain_with_lags_future_covariates_series_of_same_length_as_target(self):
-        model = LightGBMModel(
+        model = XGBModel(
             lags=4,
             lags_past_covariates=[-1, -2, -3],
             lags_future_covariates=[2],
@@ -465,7 +464,7 @@ def test_explain_with_lags_future_covariates_series_extending_into_future(self):
         fut_cov = np.random.normal(0, 1, len(days)).astype("float32")
         fut_cov_ts = TimeSeries.from_times_and_values(days, fut_cov.reshape(-1, 1))
 
-        model = LightGBMModel(
+        model = XGBModel(
             lags=4,
             lags_past_covariates=[-1, -2, -3],
             lags_future_covariates=[2],
@@ -500,7 +499,7 @@ def test_explain_with_lags_covariates_series_older_timestamps_than_target(self):
         past_cov = np.random.normal(0, 1, len(days)).astype("float32")
         past_cov_ts = TimeSeries.from_times_and_values(days, past_cov.reshape(-1, 1))
 
-        model = LightGBMModel(
+        model = XGBModel(
             lags=None,
             lags_past_covariates=[-1, -2],
             lags_future_covariates=[-1, -2],
@@ -526,7 +525,7 @@ def test_explain_with_lags_covariates_series_older_timestamps_than_target(self):
             self.assertEqual(explanation.start_time(), self.target_ts.start_time())
 
     def test_plot(self):
-        m_0 = LightGBMModel(
+        m_0 = XGBModel(
             lags=4,
             lags_past_covariates=[-1, -2, -3],
             lags_future_covariates=[0],
@@ -619,7 +618,7 @@ def test_plot(self):
         plt.close()
 
     def test_feature_values_align_with_input(self):
-        model = LightGBMModel(
+        model = XGBModel(
             lags=4,
             output_chunk_length=1,
         )
@@ -645,7 +644,7 @@ def test_feature_values_align_with_input(self):
         )
 
     def test_feature_values_align_with_raw_output_shap(self):
-        model = LightGBMModel(
+        model = XGBModel(
             lags=4,
             output_chunk_length=1,
         )
@@ -671,7 +670,7 @@ def test_feature_values_align_with_raw_output_shap(self):
         ), "The shape of the feature values should be the same as the shap values"
 
     def test_shap_explanation_object_validity(self):
-        model = LightGBMModel(
+        model = XGBModel(
             lags=4,
             lags_past_covariates=2,
             lags_future_covariates=[1],
@@ -693,7 +692,7 @@ def test_shap_explanation_object_validity(self):
         )
 
     def test_shap_selected_components(self):
-        model = LightGBMModel(
+        model = XGBModel(
             lags=4,
             lags_past_covariates=2,
             lags_future_covariates=[1],
@@ -729,7 +728,7 @@ def test_shap_selected_components(self):
 
     def test_shapley_with_static_cov(self):
         ts = self.target_ts_with_static_covs
-        model = LightGBMModel(
+        model = XGBModel(
             lags=4,
             output_chunk_length=1,
         )
@@ -773,7 +772,7 @@ def test_shapley_with_static_cov(self):
             ]
 
     def test_shapley_multiple_series_with_different_static_covs(self):
-        model = LightGBMModel(
+        model = XGBModel(
             lags=4,
             output_chunk_length=1,
         )
diff --git a/darts/tests/models/forecasting/test_historical_forecasts.py b/darts/tests/models/forecasting/test_historical_forecasts.py
index a84b9c4af5..669b229c2d 100644
--- a/darts/tests/models/forecasting/test_historical_forecasts.py
+++ b/darts/tests/models/forecasting/test_historical_forecasts.py
@@ -16,6 +16,7 @@
     LightGBMModel,
     LinearRegressionModel,
     NaiveSeasonal,
+    NotImportedModule,
 )
 from darts.tests.base_test_class import DartsBaseTestClass
 from darts.utils import timeseries_generation as tg
@@ -42,11 +43,11 @@
     )
     TORCH_AVAILABLE = False
 
-models_reg_no_cov_cls_kwargs = [
-    (LinearRegressionModel, {"lags": 8}, (8, 1)),
-    (CatBoostModel, {"lags": 6}, (6, 1)),
-    (LightGBMModel, {"lags": 4}, (4, 1)),
-]
+models_reg_no_cov_cls_kwargs = [(LinearRegressionModel, {"lags": 8}, (8, 1))]
+if not isinstance(CatBoostModel, NotImportedModule):
+    models_reg_no_cov_cls_kwargs.append((CatBoostModel, {"lags": 6}, (6, 1)))
+if not isinstance(LightGBMModel, NotImportedModule):
+    models_reg_no_cov_cls_kwargs.append((LightGBMModel, {"lags": 4}, (4, 1)))
 
 models_reg_cov_cls_kwargs = [
     # target + past covariates
diff --git a/darts/tests/models/forecasting/test_local_forecasting_models.py b/darts/tests/models/forecasting/test_local_forecasting_models.py
index 1e77141b49..1e9959b773 100644
--- a/darts/tests/models/forecasting/test_local_forecasting_models.py
+++ b/darts/tests/models/forecasting/test_local_forecasting_models.py
@@ -29,6 +29,7 @@
     NaiveMean,
     NaiveMovingAverage,
     NaiveSeasonal,
+    NotImportedModule,
     Prophet,
     RandomForest,
     RegressionModel,
@@ -72,7 +73,6 @@
     (KalmanForecaster(dim_x=3), 20),
     (LinearRegressionModel(lags=12), 13),
     (RandomForest(lags=12, n_estimators=5, max_depth=3), 14),
-    (Prophet(), 9.0),
     (AutoARIMA(), 12),
     (TBATS(use_trend=True, use_arma_errors=True, use_box_cox=True), 8.5),
     (BATS(use_trend=True, use_arma_errors=True, use_box_cox=True), 11),
@@ -93,7 +93,6 @@
     ARIMA(),
     StatsForecastAutoARIMA(season_length=12),
     StatsForecastAutoETS(season_length=12),
-    Prophet(),
     AutoARIMA(),
 ]
 
@@ -102,9 +101,12 @@
     VARIMA(1, 0, 0),
     ARIMA(),
     AutoARIMA(),
-    Prophet(),
     KalmanForecaster(dim_x=30),
 ]
+if not isinstance(Prophet, NotImportedModule):
+    models.append((Prophet(), 9.0))
+    dual_models.append(Prophet())
+    encoder_support_models.append(Prophet())
 
 
 class LocalForecastingModelsTestCase(DartsBaseTestClass):
@@ -612,10 +614,10 @@ def test_model_str_call(self):
             (ExponentialSmoothing(), "ExponentialSmoothing()"),  # no params changed
             (ARIMA(1, 1, 1), "ARIMA(p=1, q=1)"),  # default value for a param
             (
-                Prophet(
+                KalmanForecaster(
                     add_encoders={"cyclic": {"past": ["month"]}}
                 ),  # data structure param
-                "Prophet(add_encoders={'cyclic': {'past': ['month']}})",
+                "KalmanForecaster(add_encoders={'cyclic': {'past': ['month']}})",
             ),
             (
                 TBATS(
diff --git a/darts/tests/models/forecasting/test_probabilistic_models.py b/darts/tests/models/forecasting/test_probabilistic_models.py
index 9681afaa0a..0c149e7849 100644
--- a/darts/tests/models/forecasting/test_probabilistic_models.py
+++ b/darts/tests/models/forecasting/test_probabilistic_models.py
@@ -12,6 +12,8 @@
     ExponentialSmoothing,
     LightGBMModel,
     LinearRegressionModel,
+    NotImportedModule,
+    XGBModel,
 )
 from darts.models.forecasting.forecasting_model import GlobalForecastingModel
 from darts.tests.base_test_class import DartsBaseTestClass
@@ -57,6 +59,9 @@
     )
     TORCH_AVAILABLE = False
 
+lgbm_available = not isinstance(LightGBMModel, NotImportedModule)
+cb_available = not isinstance(CatBoostModel, NotImportedModule)
+
 models_cls_kwargs_errs = [
     (ExponentialSmoothing, {}, 0.3),
     (ARIMA, {"p": 1, "d": 0, "q": 1, "random_state": 42}, 0.03),
@@ -227,7 +232,11 @@ def test_predict_likelihood_parameters_regression_models(self):
         """
         seed = 142857
         n_times, n_samples = 100, 1
-        model_classes = [LinearRegressionModel, LightGBMModel, CatBoostModel]
+        model_classes = [LinearRegressionModel, XGBModel]
+        if lgbm_available:
+            model_classes.append(LightGBMModel)
+        if cb_available:
+            model_classes.append(CatBoostModel)
 
         for n_comp in [1, 3]:
             list_lkl = [
@@ -254,7 +263,7 @@ def test_predict_likelihood_parameters_regression_models(self):
 
             for model_cls in model_classes:
                 # Catboost is the only regression model supporting the GaussianLikelihood
-                if isinstance(model_cls, CatBoostModel):
+                if cb_available and issubclass(model_cls, CatBoostModel):
                     list_lkl.append(
                         {
                             "kwargs": {"likelihood": "gaussian"},
diff --git a/darts/tests/models/forecasting/test_prophet.py b/darts/tests/models/forecasting/test_prophet.py
index af8bffca41..ce4d8c34d8 100644
--- a/darts/tests/models/forecasting/test_prophet.py
+++ b/darts/tests/models/forecasting/test_prophet.py
@@ -1,3 +1,4 @@
+import unittest
 from unittest.mock import Mock
 
 import numpy as np
@@ -6,13 +7,14 @@
 
 from darts import TimeSeries
 from darts.logging import get_logger
-from darts.models import Prophet
+from darts.models import NotImportedModule, Prophet
 from darts.tests.base_test_class import DartsBaseTestClass
 from darts.utils import timeseries_generation as tg
 
 logger = get_logger(__name__)
 
 
+@unittest.skipIf(isinstance(Prophet, NotImportedModule), "requires prophet")
 class ProphetTestCase(DartsBaseTestClass):
     def test_add_seasonality_calls(self):
         # test if adding seasonality at model creation and with method model.add_seasonality() are equal
diff --git a/darts/tests/models/forecasting/test_regression_models.py b/darts/tests/models/forecasting/test_regression_models.py
index 3a61446a1b..5ba156a1c8 100644
--- a/darts/tests/models/forecasting/test_regression_models.py
+++ b/darts/tests/models/forecasting/test_regression_models.py
@@ -1,6 +1,7 @@
 import copy
 import functools
 import math
+import unittest
 from unittest.mock import patch
 
 import numpy as np
@@ -21,6 +22,7 @@
     CatBoostModel,
     LightGBMModel,
     LinearRegressionModel,
+    NotImportedModule,
     RandomForest,
     RegressionModel,
     XGBModel,
@@ -32,6 +34,10 @@
 
 logger = get_logger(__name__)
 
+# catboost and lgbm are optional; tests fall back to xgb when only core requirements are installed
+cb_available = not isinstance(CatBoostModel, NotImportedModule)
+lgbm_available = not isinstance(LightGBMModel, NotImportedModule)
+
 
 def train_test_split(series, split_ts):
     """
@@ -161,36 +167,9 @@ class RegressionModelsTestCase(DartsBaseTestClass):
         RandomForest,
         LinearRegressionModel,
         RegressionModel,
-        LightGBMModel,
-        CatBoostModel,
     ]
 
     # register likelihood regression models
-    QuantileCatBoostModel = partialclass(
-        CatBoostModel,
-        likelihood="quantile",
-        quantiles=[0.05, 0.5, 0.95],
-        random_state=42,
-    )
-    PoissonCatBoostModel = partialclass(
-        CatBoostModel,
-        likelihood="poisson",
-        random_state=42,
-    )
-    NormalCatBoostModel = partialclass(
-        CatBoostModel,
-        likelihood="gaussian",
-        random_state=42,
-    )
-    QuantileLightGBMModel = partialclass(
-        LightGBMModel,
-        likelihood="quantile",
-        quantiles=[0.05, 0.5, 0.95],
-        random_state=42,
-    )
-    PoissonLightGBMModel = partialclass(
-        LightGBMModel, likelihood="poisson", random_state=42
-    )
     QuantileLinearRegressionModel = partialclass(
         LinearRegressionModel,
         likelihood="quantile",
@@ -213,77 +192,124 @@ class RegressionModelsTestCase(DartsBaseTestClass):
     # targets for poisson regression must be positive, so we exclude them for some tests
     models.extend(
         [
-            QuantileLightGBMModel,
             QuantileLinearRegressionModel,
-            QuantileCatBoostModel,
-            PoissonLightGBMModel,
             PoissonLinearRegressionModel,
-            PoissonCatBoostModel,
-            NormalCatBoostModel,
             PoissonXGBModel,
             QuantileXGBModel,
         ]
     )
 
-    lgbm_w_categorical_covariates = LightGBMModel(
-        lags=1,
-        lags_past_covariates=1,
-        lags_future_covariates=[1],
-        output_chunk_length=1,
-        categorical_future_covariates=["fut_cov_promo_mechanism"],
-        categorical_past_covariates=["past_cov_cat_dummy"],
-        categorical_static_covariates=["product_id"],
-    )
-
     univariate_accuracies = [
         0.03,  # RandomForest
         1e-13,  # LinearRegressionModel
         1e-13,  # RegressionModel
-        0.3,  # LightGBMModel
-        0.75,  # CatBoostModel
-        0.5,  # QuantileLightGBMModel
         0.8,  # QuantileLinearRegressionModel
-        1e-03,  # QuantileCatBoostModel
-        0.4,  # PoissonLightGBMModel
         0.4,  # PoissonLinearRegressionModel
-        1e-01,  # PoissonCatBoostModel
-        1e-05,  # NormalCatBoostModel
         1e-01,  # PoissonXGBModel
         0.5,  # QuantileXGBModel
     ]
     multivariate_accuracies = [
-        0.3,
-        1e-13,
-        1e-13,
-        0.4,
-        0.75,  # CatBoostModel
-        0.4,  # QuantileLightGBMModel
+        0.3,  # RandomForest
+        1e-13,  # LinearRegressionModel
+        1e-13,  # RegressionModel
         0.8,  # QuantileLinearRegressionModel
-        1e-03,
-        0.4,
-        0.4,
-        0.15,
-        1e-05,
-        0.15,
-        0.4,
+        0.4,  # PoissonLinearRegressionModel
+        0.15,  # PoissonXGBModel
+        0.4,  # QuantileXGBModel
     ]
     multivariate_multiseries_accuracies = [
         0.05,  # RandomForest
         1e-13,  # LinearRegressionModel
         1e-13,  # RegressionModel
-        0.05,  # LightGBMModel
-        0.75,  # CatBoostModel
-        0.4,  # QuantileLightGBMModel
         0.8,  # QuantileLinearRegressionModel
-        1e-03,  # QuantileCatBoostModel
-        0.4,  # PoissonLightGBMModel
         0.4,  # PoissonLinearRegressionModel
-        1e-01,  # PoissonCatBoostModel
-        1e-03,  # NormalCatBoostModel
         1e-01,  # PoissonXGBModel
         0.4,  # QuantileXGBModel
     ]
 
+    lgbm_w_categorical_covariates = NotImportedModule
+    if lgbm_available:
+        QuantileLightGBMModel = partialclass(
+            LightGBMModel,
+            likelihood="quantile",
+            quantiles=[0.05, 0.5, 0.95],
+            random_state=42,
+        )
+        PoissonLightGBMModel = partialclass(
+            LightGBMModel, likelihood="poisson", random_state=42
+        )
+        models += [
+            LightGBMModel,
+            QuantileLightGBMModel,
+            PoissonLightGBMModel,
+        ]
+
+        lgbm_w_categorical_covariates = LightGBMModel(
+            lags=1,
+            lags_past_covariates=1,
+            lags_future_covariates=[1],
+            output_chunk_length=1,
+            categorical_future_covariates=["fut_cov_promo_mechanism"],
+            categorical_past_covariates=["past_cov_cat_dummy"],
+            categorical_static_covariates=["product_id"],
+        )
+        univariate_accuracies += [
+            0.3,  # LightGBMModel
+            0.5,  # QuantileLightGBMModel
+            0.4,  # PoissonLightGBMModel
+        ]
+        multivariate_accuracies += [
+            0.4,  # LightGBMModel
+            0.4,  # QuantileLightGBMModel
+            0.4,  # PoissonLightGBMModel
+        ]
+        multivariate_multiseries_accuracies += [
+            0.05,  # LightGBMModel
+            0.4,  # QuantileLightGBMModel
+            0.4,  # PoissonLightGBMModel
+        ]
+    if cb_available:
+        QuantileCatBoostModel = partialclass(
+            CatBoostModel,
+            likelihood="quantile",
+            quantiles=[0.05, 0.5, 0.95],
+            random_state=42,
+        )
+        PoissonCatBoostModel = partialclass(
+            CatBoostModel,
+            likelihood="poisson",
+            random_state=42,
+        )
+        NormalCatBoostModel = partialclass(
+            CatBoostModel,
+            likelihood="gaussian",
+            random_state=42,
+        )
+        models += [
+            CatBoostModel,
+            QuantileCatBoostModel,
+            PoissonCatBoostModel,
+            NormalCatBoostModel,
+        ]
+        univariate_accuracies += [
+            0.75,  # CatBoostModel
+            1e-03,  # QuantileCatBoostModel
+            1e-01,  # PoissonCatBoostModel
+            1e-05,  # NormalCatBoostModel
+        ]
+        multivariate_accuracies += [
+            0.75,  # CatBoostModel
+            1e-03,  # QuantileCatBoostModel
+            0.15,  # PoissonCatBoostModel
+            1e-05,  # NormalCatBoostModel
+        ]
+        multivariate_multiseries_accuracies += [
+            0.75,  # CatBoostModel
+            1e-03,  # QuantileCatBoostModel
+            1e-01,  # PoissonCatBoostModel
+            1e-03,  # NormalCatBoostModel
+        ]
+
     # dummy feature and target TimeSeries instances
     target_series, past_covariates, future_covariates = dummy_timeseries(
         length=100,
@@ -1077,22 +1103,24 @@ def test_models_accuracy_multiseries_multivariate(self):
 
     def test_min_train_series_length(self):
         mutli_models_modes = [True, False]
+        lgbm_cls = LightGBMModel if lgbm_available else XGBModel
+        cb_cls = CatBoostModel if cb_available else XGBModel
         for mode in mutli_models_modes:
-            model = LightGBMModel(lags=4, multi_models=mode)
+            model = lgbm_cls(lags=4, multi_models=mode)
             min_train_series_length_expected = (
                 -model.lags["target"][0] + model.output_chunk_length + 1
             )
             self.assertEqual(
                 min_train_series_length_expected, model.min_train_series_length
             )
-            model = CatBoostModel(lags=2, multi_models=mode)
+            model = cb_cls(lags=2, multi_models=mode)
             min_train_series_length_expected = (
                 -model.lags["target"][0] + model.output_chunk_length + 1
             )
             self.assertEqual(
                 min_train_series_length_expected, model.min_train_series_length
             )
-            model = LightGBMModel(lags=[-4, -3, -2], multi_models=mode)
+            model = lgbm_cls(lags=[-4, -3, -2], multi_models=mode)
             min_train_series_length_expected = (
                 -model.lags["target"][0] + model.output_chunk_length + 1
             )
@@ -1178,20 +1206,25 @@ def test_multioutput_validation(self):
         lags = 4
 
         models = [
-            LightGBMModel(lags=lags, output_chunk_length=1, multi_models=True),
-            LightGBMModel(lags=lags, output_chunk_length=1, multi_models=False),
-            LightGBMModel(lags=lags, output_chunk_length=2, multi_models=True),
-            LightGBMModel(lags=lags, output_chunk_length=2, multi_models=False),
-            CatBoostModel(lags=lags, output_chunk_length=1, multi_models=True),
-            CatBoostModel(lags=lags, output_chunk_length=1, multi_models=False),
-            CatBoostModel(lags=lags, output_chunk_length=2, multi_models=True),
-            CatBoostModel(lags=lags, output_chunk_length=2, multi_models=False),
             XGBModel(lags=lags, output_chunk_length=1, multi_models=True),
             XGBModel(lags=lags, output_chunk_length=1, multi_models=False),
             XGBModel(lags=lags, output_chunk_length=2, multi_models=True),
             XGBModel(lags=lags, output_chunk_length=2, multi_models=False),
         ]
-
+        if lgbm_available:
+            models += [
+                LightGBMModel(lags=lags, output_chunk_length=1, multi_models=True),
+                LightGBMModel(lags=lags, output_chunk_length=1, multi_models=False),
+                LightGBMModel(lags=lags, output_chunk_length=2, multi_models=True),
+                LightGBMModel(lags=lags, output_chunk_length=2, multi_models=False),
+            ]
+        if cb_available:
+            models += [
+                CatBoostModel(lags=lags, output_chunk_length=1, multi_models=True),
+                CatBoostModel(lags=lags, output_chunk_length=1, multi_models=False),
+                CatBoostModel(lags=lags, output_chunk_length=2, multi_models=True),
+                CatBoostModel(lags=lags, output_chunk_length=2, multi_models=False),
+            ]
         train, val = self.sine_univariate1.split_after(0.6)
 
         for model in models:
@@ -1464,7 +1497,14 @@ def test_not_enough_covariates(self):
                         future_covariates=future_covariates[: -26 + req_future_offset],
                     )
 
-    @patch.object(darts.models.forecasting.lgbm.lgb.LGBMRegressor, "fit")
+    # @patch.object(darts.models.forecasting.lgbm.lgb.LGBMRegressor, "fit")
+    @unittest.skipUnless(lgbm_available, "requires lightgbm")
+    @patch.object(
+        darts.models.forecasting.lgbm.lgb.LGBMRegressor
+        if lgbm_available
+        else darts.models.utils.NotImportedModule,
+        "fit",
+    )
     def test_gradient_boosted_model_with_eval_set(self, lgb_fit_patch):
         """Test whether these evaluation set parameters are passed to LGBRegressor"""
         model = LightGBMModel(lags=4, lags_past_covariates=2)
@@ -1574,12 +1614,14 @@ def test_encoders(self):
         }
 
         multi_models_mode = [True, False]
+        models_cls = [RegressionModel, LinearRegressionModel, XGBModel]
+        if lgbm_available:
+            models_cls.append(LightGBMModel)
         for mode in multi_models_mode:
             for ocl in [1, 2]:
                 for model_cls in [
                     RegressionModel,
                     LinearRegressionModel,
-                    LightGBMModel,
                     XGBModel,
                 ]:
                     model_pc_valid0 = model_cls(
@@ -1914,7 +1956,13 @@ def helper_test_encoders_settings(model, example: str):
             assert len(model.encoders.future_encoders) == 1
             assert isinstance(model.encoders.future_encoders[0], FutureCyclicEncoder)
 
-    @patch.object(darts.models.forecasting.catboost_model.CatBoostRegressor, "fit")
+    @unittest.skipUnless(cb_available, "requires catboost")
+    @patch.object(
+        darts.models.forecasting.catboost_model.CatBoostRegressor
+        if cb_available
+        else darts.models.utils.NotImportedModule,
+        "fit",
+    )
     def test_catboost_model_with_eval_set(self, lgb_fit_patch):
         """Test whether these evaluation set parameters are passed to CatBoostRegressor"""
         model = CatBoostModel(lags=4, lags_past_covariates=2)
@@ -1931,6 +1979,7 @@ def test_catboost_model_with_eval_set(self, lgb_fit_patch):
         assert lgb_fit_patch.call_args[1]["eval_set"] is not None
         assert lgb_fit_patch.call_args[1]["early_stopping_rounds"] == 2
 
+    @unittest.skipUnless(lgbm_available, "requires lightgbm")
     def test_quality_forecast_with_categorical_covariates(self):
         """Test case: two time series, a full sine wave series and a sine wave series
         with some irregularities every other period. Only models which use categorical
@@ -2006,6 +2055,7 @@ def get_model_params():
                 ]
             )
 
+    @unittest.skipUnless(lgbm_available, "requires lightgbm")
     def test_fit_with_categorical_features_raises_error(self):
         (
             series,
@@ -2047,6 +2097,7 @@ def test_fit_with_categorical_features_raises_error(self):
                     future_covariates=future_covariates,
                 )
 
+    @unittest.skipUnless(lgbm_available, "requires lightgbm")
     def test_get_categorical_features_helper(self):
         """Test helper function responsible for retrieving indices of categorical features"""
         (
@@ -2072,7 +2123,13 @@ def test_get_categorical_features_helper(self):
             ],
         )
 
-    @patch.object(darts.models.forecasting.lgbm.lgb.LGBMRegressor, "fit")
+    @unittest.skipUnless(lgbm_available, "requires lightgbm")
+    @patch.object(
+        darts.models.forecasting.lgbm.lgb.LGBMRegressor
+        if lgbm_available
+        else darts.models.utils.NotImportedModule,
+        "fit",
+    )
     def test_lgbm_categorical_features_passed_to_fit_correctly(self, lgb_fit_patch):
         """Test whether the categorical features are passed to LightGBMRegressor"""
         (
@@ -2119,78 +2176,6 @@ def helper_create_LinearModel(self, multi_models=True, extreme_lags=False):
 
 class ProbabilisticRegressionModelsTestCase(DartsBaseTestClass):
     models_cls_kwargs_errs = [
-        (
-            LightGBMModel,
-            {
-                "lags": 2,
-                "likelihood": "quantile",
-                "random_state": 42,
-                "multi_models": True,
-            },
-            0.4,
-        ),
-        (
-            LightGBMModel,
-            {
-                "lags": 2,
-                "likelihood": "quantile",
-                "quantiles": [0.1, 0.3, 0.5, 0.7, 0.9],
-                "random_state": 42,
-                "multi_models": True,
-            },
-            0.4,
-        ),
-        (
-            LightGBMModel,
-            {
-                "lags": 2,
-                "likelihood": "poisson",
-                "random_state": 42,
-                "multi_models": True,
-            },
-            0.6,
-        ),
-        (
-            CatBoostModel,
-            {
-                "lags": 2,
-                "likelihood": "quantile",
-                "random_state": 42,
-                "multi_models": True,
-            },
-            0.05,
-        ),
-        (
-            CatBoostModel,
-            {
-                "lags": 2,
-                "likelihood": "quantile",
-                "quantiles": [0.1, 0.3, 0.5, 0.7, 0.9],
-                "random_state": 42,
-                "multi_models": True,
-            },
-            0.05,
-        ),
-        (
-            CatBoostModel,
-            {
-                "lags": 2,
-                "likelihood": "poisson",
-                "random_state": 42,
-                "multi_models": True,
-            },
-            0.6,
-        ),
-        (
-            CatBoostModel,
-            {
-                "lags": 2,
-                "likelihood": "gaussian",
-                "random_state": 42,
-                "multi_models": True,
-            },
-            0.05,
-        ),
         (
             LinearRegressionModel,
             {
@@ -2233,6 +2218,84 @@ class ProbabilisticRegressionModelsTestCase(DartsBaseTestClass):
             0.4,
         ),
     ]
+    if lgbm_available:
+        models_cls_kwargs_errs += [
+            (
+                LightGBMModel,
+                {
+                    "lags": 2,
+                    "likelihood": "quantile",
+                    "random_state": 42,
+                    "multi_models": True,
+                },
+                0.4,
+            ),
+            (
+                LightGBMModel,
+                {
+                    "lags": 2,
+                    "likelihood": "quantile",
+                    "quantiles": [0.1, 0.3, 0.5, 0.7, 0.9],
+                    "random_state": 42,
+                    "multi_models": True,
+                },
+                0.4,
+            ),
+            (
+                LightGBMModel,
+                {
+                    "lags": 2,
+                    "likelihood": "poisson",
+                    "random_state": 42,
+                    "multi_models": True,
+                },
+                0.6,
+            ),
+        ]
+    if cb_available:
+        models_cls_kwargs_errs += [
+            (
+                CatBoostModel,
+                {
+                    "lags": 2,
+                    "likelihood": "quantile",
+                    "random_state": 42,
+                    "multi_models": True,
+                },
+                0.05,
+            ),
+            (
+                CatBoostModel,
+                {
+                    "lags": 2,
+                    "likelihood": "quantile",
+                    "quantiles": [0.1, 0.3, 0.5, 0.7, 0.9],
+                    "random_state": 42,
+                    "multi_models": True,
+                },
+                0.05,
+            ),
+            (
+                CatBoostModel,
+                {
+                    "lags": 2,
+                    "likelihood": "poisson",
+                    "random_state": 42,
+                    "multi_models": True,
+                },
+                0.6,
+            ),
+            (
+                CatBoostModel,
+                {
+                    "lags": 2,
+                    "likelihood": "gaussian",
+                    "random_state": 42,
+                    "multi_models": True,
+                },
+                0.05,
+            ),
+        ]
 
     constant_ts = tg.constant_timeseries(length=200, value=0.5)
     constant_noisy_ts = constant_ts + tg.gaussian_timeseries(length=200, std=0.1)
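
The conditional registration in this patch leans on the small `partialclass` helper imported at the top of the test module. A minimal self-contained sketch of that pattern (with a hypothetical `DummyModel` standing in for the Darts regression models, and a hard-coded availability flag in place of the `NotImportedModule` check):

```python
from functools import partialmethod


def partialclass(cls, **kwargs):
    """Return a subclass of `cls` whose constructor is pre-bound with `kwargs`
    (a sketch of the helper used by the Darts test suite)."""

    class PartialClass(cls):
        __init__ = partialmethod(cls.__init__, **kwargs)

    PartialClass.__name__ = f"Partial{cls.__name__}"
    return PartialClass


class DummyModel:
    """Hypothetical stand-in for an optional model such as LightGBMModel."""

    def __init__(self, likelihood=None, quantiles=None, random_state=None):
        self.likelihood = likelihood
        self.quantiles = quantiles
        self.random_state = random_state


# register likelihood variants only when the backing package is available,
# mirroring the `if lgbm_available:` / `if cb_available:` blocks above
dummy_available = True  # in the tests: not isinstance(Model, NotImportedModule)
models = []
if dummy_available:
    QuantileDummyModel = partialclass(
        DummyModel,
        likelihood="quantile",
        quantiles=[0.05, 0.5, 0.95],
        random_state=42,
    )
    models += [DummyModel, QuantileDummyModel]

m = QuantileDummyModel()  # constructor arguments are already bound
```

Keeping the base model lists and their expected-accuracy lists appended in the same conditional block is what keeps the two lists index-aligned regardless of which optional packages are installed.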

From 51efc4f964c17f07d82f464de07db07c0e012cce Mon Sep 17 00:00:00 2001
From: dennisbader <dennis.bader@gmx.ch>
Date: Wed, 2 Aug 2023 16:22:55 +0200
Subject: [PATCH 06/12] update gradle for including optional dependencies in
 unittest_all

---
 build.gradle                               |  6 +++---
 darts/models/utils.py                      | 19 +++++++++++--------
 requirements/dev-all.txt                   |  2 +-
 requirements/{no-torch.txt => notorch.txt} |  0
 setup_u8darts.py                           |  4 ++--
 5 files changed, 17 insertions(+), 14 deletions(-)
 rename requirements/{no-torch.txt => notorch.txt} (100%)

diff --git a/build.gradle b/build.gradle
index e95b2350f3..ce129e597c 100644
--- a/build.gradle
+++ b/build.gradle
@@ -64,7 +64,7 @@ task installPipLatest {
             commandLine "pip", "install", "pip-tools"
         }
         exec {
-            commandLine "pip-compile", "-o", "requirements-latest.txt"
+            commandLine "pip-compile", "requirements/core.txt", "requirements/notorch.txt", "requirements/torch.txt", "-o", "requirements-latest.txt"
         }
         exec {
             commandLine "pip", "install", "-r", "requirements-latest.txt"
@@ -80,7 +80,7 @@ void createPipInstallTask(String flavour) {
    }
 }
 
-String[] flavours = ["core", "dev", "torch", "release"];
+String[] flavours = ["core", "dev", "notorch", "torch", "release"];
 
 for(String flavour : flavours) {
     createPipInstallTask(flavour);
@@ -94,7 +94,7 @@ task pipInstall() {
     doFirst {
         setupPip
     }
-    dependsOn pip_core, pip_dev, pip_torch, pip_release
+    dependsOn pip_core, pip_dev, pip_notorch, pip_torch, pip_release
 }
 
 task lint_black(type: Exec) {
diff --git a/darts/models/utils.py b/darts/models/utils.py
index 31a28a8bac..87c57e311e 100644
--- a/darts/models/utils.py
+++ b/darts/models/utils.py
@@ -1,4 +1,4 @@
-from darts.logging import get_logger
+from darts.logging import get_logger, raise_log
 
 logger = get_logger(__name__)
 
@@ -9,11 +9,14 @@ class NotImportedModule:
     usable = False
 
     def __init__(self, module_name: str, warn: bool = True):
+        self.error_message = (
+            f"The `{module_name}` module could not be imported. "
+            f"To enable {module_name} support in Darts, follow the detailed "
+            f"install instructions for LightGBM in the README: "
+            f"https://github.com/unit8co/darts/blob/master/INSTALL.md"
+        )
         if warn:
-            txt = (
-                f"The {module_name} module could not be imported. "
-                "To enable LightGBM support in Darts, follow the detailed "
-                "install instructions for LightGBM in the README: "
-                "https://github.com/unit8co/darts/blob/master/INSTALL.md"
-            )
-            logger.warning(txt)
+            logger.warning(self.error_message)
+
+    def __call__(self, *args, **kwargs):
+        raise_log(ImportError(self.error_message), logger=logger)
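
The sentinel pattern extended in this hunk can be sketched standalone (simplified: using the stdlib `logging` module and a plain `ImportError` in place of Darts' `raise_log` helper; the import-time guard's exact shape in `darts/models/__init__.py` is assumed):

```python
import logging

logger = logging.getLogger(__name__)


class NotImportedModule:
    """Sentinel replacing an optional model class whose package is missing."""

    usable = False

    def __init__(self, module_name: str, warn: bool = True):
        self.error_message = (
            f"The `{module_name}` module could not be imported. "
            f"To enable {module_name} support, install the corresponding package."
        )
        if warn:
            logger.warning(self.error_message)

    def __call__(self, *args, **kwargs):
        # "instantiating" the missing model now fails loudly with a clear
        # message, instead of with an opaque NameError or AttributeError
        raise ImportError(self.error_message)


# import-time guard (assumed shape of the pattern in darts/models/__init__.py)
try:
    import lightgbm  # noqa: F401

    lgbm_available = True
except ImportError:
    LightGBMModel = NotImportedModule("lightgbm", warn=False)
    lgbm_available = False
```

Because the sentinel is callable, downstream code that does `LightGBMModel(lags=4)` gets an actionable `ImportError` at the call site, which is also what the `not isinstance(Model, NotImportedModule)` checks in the tests rely on.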
diff --git a/requirements/dev-all.txt b/requirements/dev-all.txt
index f2615b17fe..7835ae3de5 100644
--- a/requirements/dev-all.txt
+++ b/requirements/dev-all.txt
@@ -2,4 +2,4 @@
 -r dev.txt
 -r release.txt
 -r torch.txt
--r no-torch.txt
+-r notorch.txt
diff --git a/requirements/no-torch.txt b/requirements/notorch.txt
similarity index 100%
rename from requirements/no-torch.txt
rename to requirements/notorch.txt
diff --git a/setup_u8darts.py b/setup_u8darts.py
index 445c2f74d0..ff34254ae3 100644
--- a/setup_u8darts.py
+++ b/setup_u8darts.py
@@ -9,7 +9,7 @@ def read_requirements(path):
 
 base_reqs = read_requirements("requirements/core.txt")
 torch_reqs = read_requirements("requirements/torch.txt")
-no_torch_reqs = read_requirements("requirements/no-torch.txt")
+no_torch_reqs = read_requirements("requirements/notorch.txt")
 
 all_reqs = base_reqs + torch_reqs + no_torch_reqs
 
@@ -40,7 +40,7 @@ def read_requirements(path):
     license="Apache License 2.0",
     packages=find_packages(),
     install_requires=base_reqs,
-    extras_require={"all": all_reqs, "torch": torch_reqs, "no-torch": no_torch_reqs},
+    extras_require={"all": all_reqs, "torch": torch_reqs, "notorch": no_torch_reqs},
     package_data={
         "darts": ["py.typed"],
     },

From 7cc0b19002faeff17133bedd175df6aca92deb04 Mon Sep 17 00:00:00 2001
From: dennisbader <dennis.bader@gmx.ch>
Date: Thu, 3 Aug 2023 10:15:06 +0200
Subject: [PATCH 07/12] update checkExamples to show all installed dependencies
 before test

---
 build.gradle                                             | 8 ++++++++
 darts/tests/models/forecasting/test_regression_models.py | 7 +------
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/build.gradle b/build.gradle
index ce129e597c..5b8d120952 100644
--- a/build.gradle
+++ b/build.gradle
@@ -158,6 +158,14 @@ def exampleName=project.properties["exampleName"] ?: ""
 task checkExample(type: Exec) {
     dependsOn pipInstall, installLocally
     workingDir "./examples"
+    doFirst {
+        exec {
+            commandLine "echo", "Installed packages"
+        }
+        exec {
+            commandLine "pip", "list"
+        }
+    }
     // exampleName must be passed with -PexampleName=FFT-examples.ipynb
     commandLine "papermill", exampleName, exampleName
 }
diff --git a/darts/tests/models/forecasting/test_regression_models.py b/darts/tests/models/forecasting/test_regression_models.py
index 5ba156a1c8..3eadb8f704 100644
--- a/darts/tests/models/forecasting/test_regression_models.py
+++ b/darts/tests/models/forecasting/test_regression_models.py
@@ -1497,7 +1497,6 @@ def test_not_enough_covariates(self):
                         future_covariates=future_covariates[: -26 + req_future_offset],
                     )
 
-    # @patch.object(darts.models.forecasting.lgbm.lgb.LGBMRegressor, "fit")
     @unittest.skipUnless(lgbm_available, "requires lightgbm")
     @patch.object(
         darts.models.forecasting.lgbm.lgb.LGBMRegressor
@@ -1619,11 +1618,7 @@ def test_encoders(self):
             models_cls.append(LightGBMModel)
         for mode in multi_models_mode:
             for ocl in [1, 2]:
-                for model_cls in [
-                    RegressionModel,
-                    LinearRegressionModel,
-                    XGBModel,
-                ]:
+                for model_cls in models_cls:
                     model_pc_valid0 = model_cls(
                         lags=2,
                         add_encoders=encoder_examples["past"],

From be35339f848e964dff4bcfce008fe46ec51ad1f3 Mon Sep 17 00:00:00 2001
From: dennisbader <dennis.bader@gmx.ch>
Date: Thu, 3 Aug 2023 10:18:52 +0200
Subject: [PATCH 08/12] apply suggestions from PR review

---
 darts/models/utils.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/darts/models/utils.py b/darts/models/utils.py
index 87c57e311e..8d0d0d11ea 100644
--- a/darts/models/utils.py
+++ b/darts/models/utils.py
@@ -12,7 +12,7 @@ def __init__(self, module_name: str, warn: bool = True):
         self.error_message = (
             f"The `{module_name}` module could not be imported. "
             f"To enable {module_name} support in Darts, follow the detailed "
-            f"install instructions for LightGBM in the README: "
+            f"instructions in the installation guide: "
             f"https://github.com/unit8co/darts/blob/master/INSTALL.md"
         )
         if warn:

From 1cafa7965900a8076fb8cf37a09265848fa4a35f Mon Sep 17 00:00:00 2001
From: dennisbader <dennis.bader@gmx.ch>
Date: Thu, 3 Aug 2023 11:24:36 +0200
Subject: [PATCH 09/12] update install guide

---
 INSTALL.md | 102 +++++++++++++++++++++--------------------------------
 1 file changed, 40 insertions(+), 62 deletions(-)

diff --git a/INSTALL.md b/INSTALL.md
index 94b5ddeff5..cd8f14f061 100644
--- a/INSTALL.md
+++ b/INSTALL.md
@@ -2,6 +2,20 @@
 
 Below, we detail how to install Darts using either `conda` or `pip`.
 
+## From PyPI
+Install darts with all models except the ones from optional dependencies (Prophet, LightGBM, CatBoost, see more on that [here](#enabling-optional-dependencies)): `pip install darts`.
+
+If this fails on your platform, please follow the official installation 
+guide for [PyTorch](https://pytorch.org/get-started/locally/), then try installing Darts again.
+
+As some dependencies are relatively big or involve non-Python dependencies,
+we also maintain the `u8darts` package, which provides the following alternate lighter install options:
+
+* Install darts with all available models: `pip install u8darts[all]`
+* Install core only (without neural networks, Prophet, LightGBM and CatBoost): `pip install u8darts`
+* Install core + Prophet + LightGBM + CatBoost: `pip install "u8darts[notorch]"`
+* Install core + neural networks (PyTorch): `pip install "u8darts[torch]"` (equivalent to `pip install darts`)
+
 ## From conda-forge
 Currently only the x86_64 architecture with Python 3.8-3.10
 is fully supported with conda; consider using PyPI if you are running into troubles.
@@ -11,104 +25,68 @@ Create a conda environment (e.g., for Python 3.10):
 
     conda create --name <env-name> python=3.10
 
-Activate your virtual environment
+Activate the environment
 
     conda activate <env-name>
 
-As some models have relatively heavy dependencies, we provide two conda-forge packages:
+As some models have relatively heavy dependencies, we provide four conda-forge packages:
 
-* Install darts with all available models (recommended): `conda install -c conda-forge -c pytorch u8darts-all`.
+* Install darts with all available models: `conda install -c conda-forge -c pytorch u8darts-all`
+* Install core only (without neural networks, Prophet, LightGBM and CatBoost): `conda install -c conda-forge u8darts`
+* Install core + Prophet + LightGBM + CatBoost: `conda install -c conda-forge u8darts-notorch`
 * Install core + neural networks (PyTorch): `conda install -c conda-forge -c pytorch u8darts-torch`
-* Install core only (without neural networks): `conda install -c conda-forge u8darts`
 
-## From PyPI
-Install darts with all available models: `pip install darts`.
-
-If this fails on your platform, please follow the official installation 
-guide for [PyTorch](https://pytorch.org/get-started/locally/), then try installing Darts again.
-
-As some dependencies are relatively big or involve non-Python dependencies,
-we also maintain the `u8darts` package, which provides the following alternate lighter install options:
-
-* Install core only (without neural networks, Prophet or AutoARIMA): `pip install u8darts`
-* Install core + neural networks (PyTorch): `pip install "u8darts[torch]"`
-* Install core + AutoARIMA: `pip install "u8darts[pmdarima]"`
 
 ## Other Information
 
-### Issues with LightGBM
-If you run into issues with LightGBM when installing Darts, please follow the
-[installation instructions](https://lightgbm.readthedocs.io/en/latest/Installation-Guide.html) for your OS,
-and then try re-installing Darts.
+### Enabling Optional Dependencies
+As of version 0.25.0, `darts` no longer depends on Prophet, CatBoost, or LightGBM by default, because their
+build processes were too often causing issues. If you want to use Darts' `Prophet`, `CatBoostModel`, or
+`LightGBMModel`, you will need to manually install the corresponding packages.
 
-For instance, on MacOS you may have to run this (using [brew](https://brew.sh/)):
-```
-brew install cmake
-brew install libomp
-```
-and then again: `pip install darts`.
+#### Prophet
+Install the `prophet` package (version 1.1.1 or more recent) using the [Prophet install guide](https://facebook.github.io/prophet/docs/installation.html#python)
 
-### Enabling support for Prophet
-By default, as of 0.24.0, `darts` does not have Prophet as a dependency anymore, because its build
-process was too often causing issues, [notably on Apple silicon](https://github.com/facebook/prophet/issues/2002).
+#### CatBoostModel
+Install the `catboost` package (version 1.0.6 or more recent) using the [CatBoost install guide](https://catboost.ai/en/docs/concepts/python-installation)
 
-If you want to use Darts' `Prophet` model, you will need to install the `prophet` package (version 1.1 or more recent).
-We refer to the [Prophet README install guide](https://github.com/facebook/prophet#installation-in-python---pypi-release)
+#### LightGBMModel
+Install the `lightgbm` package (version 3.2.0 or more recent) using the [LightGBM install guide](https://lightgbm.readthedocs.io/en/latest/Installation-Guide.html)
 
 ### Enabling GPU support
 Darts relies on PyTorch for the neural network models.
 For GPU support, please follow the instructions to install CUDA in the [PyTorch installation guide](https://pytorch.org/get-started/locally/).
 
 ### Using an emulated x64 environment on Apple Silicon.
-The installation of `darts` has been tested to work on Apple silicon (Python 3.10, OSX Ventura 13.2.1).
-It requires following the instructions to make LightGBM work 
-[here](https://github.com/unit8co/darts/blob/master/INSTALL.md#issues-with-lightgbm).
+The installation of `darts` has been tested to work on Apple Silicon (M1, Python 3.10, macOS Ventura 13.2.1).
 
-If you still run into some issues with Apple silicon, you can consider using rosetta
-to use an emulated x64 environment by following the steps below:
+If you run into issues, you can use Rosetta to run in an emulated x64 (Intel) environment:
 
 Before you start, make sure that Rosetta 2 is installed by running:
-```
+
+```bash
 pgrep oahd
 ``` 
-If you see some process id you are ready to go, as internally rosetta is known as oah.
+
+If you see a process ID, you are ready to go, as internally Rosetta is known as `oahd`.
 
 If `pgrep` doesn't return any ID, then install Rosetta 2:
-```
+
+```bash
 softwareupdate --install-rosetta
 ```
 
 Below are the necessary instructions to create and configure the environment:
-- Start by installing conda (e.g., with miniforge : `brew install miniforge`).
+- Install conda if you haven't done so (e.g., with miniforge: `brew install miniforge`).
 - Create the x_64 environment : `CONDA_SUBDIR=osx-64 conda create -n env_name python=3.9 pip`
-- Activate the created environment: `conda activate env_name`
+- Activate the environment: `conda activate env_name`
 - Configure the environment: `conda env config vars set CONDA_SUBDIR=osx-64`
 - Deactivate and reactivate the environment:
-  ```
+  ```bash
   conda deactivate
   conda activate env_name
   ```
 - Install darts: `pip install darts`
-  - If after this you still run into issues with lightgbm having issues finding the libomp library,
-  the following procedure guarantees that the correct libomp (11.1.0) library is linked.
-    - Unlink the existing libomp, from terminal : `brew unlink libomp`
-    - Setup a homebrew installer that is compatible with x_64 packages (follow this [blog](https://medium.com/mkdir-awesome/how-to-install-x86-64-homebrew-packages-on-apple-m1-macbook-54ba295230f) 
-    post):
-    ```
-    cd ~/Downloads
-    mkdir homebrew
-    curl -L https://github.com/Homebrew/brew/tarball/master | tar xz --strip 1 -C homebrew
-    sudo mv homebrew /usr/local/homebrew
-    export PATH=$HOME/bin:/usr/local/bin:$PATH
-    ```
-    - At this point, we have a new brew command located at /usr/local/homebrew/bin/brew
-    - In the following code bits we download version 11.1.0 of libomp, install it as a x_64 compatible package and link to it so that lightgbm can find it:
-    ```
-    wget https://raw.githubusercontent.com/Homebrew/homebrew-core/fb8323f2b170bd4ae97e1bac9bf3e2983af3fdb0/Formula/libomp.rb
-    arch -x86_64 /usr/local/homebrew/bin/brew install libomp.rb
-    sudo ln -s /usr/local/homebrew/Cellar/libomp/11.1.0/lib /usr/local/opt/libomp/lib
-    ```
-    - Verify that your lightgbm works by importing lightgbm from your python env. It should not give library loading errors. 
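+
+To confirm that the environment is actually emulated (a quick sanity check, not part of the original steps), query the architecture Python reports:
+
+```bash
+python -c "import platform; print(platform.machine())"
+```
+
+Under Rosetta in the `osx-64` environment this prints `x86_64` rather than `arm64`.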
 
 ### Running the examples only, without installing:
 

From e0cd981e8d3eebc4c74bd0f74e7dc841c9ba3976 Mon Sep 17 00:00:00 2001
From: dennisbader <dennis.bader@gmx.ch>
Date: Thu, 3 Aug 2023 11:36:23 +0200
Subject: [PATCH 10/12] update shap unit tests

---
 .../explainability/test_shap_explainer.py     | 48 +++++++++++++------
 1 file changed, 33 insertions(+), 15 deletions(-)

diff --git a/darts/tests/explainability/test_shap_explainer.py b/darts/tests/explainability/test_shap_explainer.py
index 9e99361a7e..822ffa1024 100644
--- a/darts/tests/explainability/test_shap_explainer.py
+++ b/darts/tests/explainability/test_shap_explainer.py
@@ -16,13 +16,19 @@
 from darts.explainability.explainability_result import ShapExplainabilityResult
 from darts.explainability.shap_explainer import ShapExplainer
 from darts.models import (
+    CatBoostModel,
     ExponentialSmoothing,
+    LightGBMModel,
     LinearRegressionModel,
+    NotImportedModule,
     RegressionModel,
     XGBModel,
 )
 from darts.tests.base_test_class import DartsBaseTestClass
 
+lgbm_available = not isinstance(LightGBMModel, NotImportedModule)
+cb_available = not isinstance(CatBoostModel, NotImportedModule)
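+# Note: darts replaces models whose optional dependency is missing with a
+# `NotImportedModule` placeholder at import time, so the flags above can be
+# used to fall back to an always-available model in each test, e.g.:
+#
+#     model_cls = LightGBMModel if lgbm_available else XGBModel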
+
 
 class ShapExplainerTestCase(DartsBaseTestClass):
     np.random.seed(42)
@@ -128,9 +134,9 @@ class ShapExplainerTestCase(DartsBaseTestClass):
     )
 
     def test_creation(self):
-
+        model_cls = LightGBMModel if lgbm_available else XGBModel
         # Model should be fitted first
-        m = XGBModel(
+        m = model_cls(
             lags=4,
             lags_past_covariates=[-1, -2, -3],
             lags_future_covariates=[0],
@@ -157,7 +163,7 @@ def test_creation(self):
                 self.target_ts,
             )
 
-        m = XGBModel(
+        m = model_cls(
             lags=4,
             lags_past_covariates=[-1, -2, -3],
             lags_future_covariates=[0],
@@ -248,7 +254,8 @@ def test_creation(self):
         )
 
         # CatBoost
-        m = XGBModel(
+        model_cls = CatBoostModel if cb_available else XGBModel
+        m = model_cls(
             lags=4,
             lags_past_covariates=[-1, -2, -6],
             lags_future_covariates=[0],
@@ -269,7 +276,8 @@ def test_creation(self):
             ShapExplainer(m, shap_method="bad_choice")
 
     def test_explain(self):
-        m = XGBModel(
+        model_cls = LightGBMModel if lgbm_available else XGBModel
+        m = model_cls(
             lags=4,
             lags_past_covariates=[-1, -2, -3],
             lags_future_covariates=[0],
@@ -428,7 +436,8 @@ def test_explain(self):
         self.assertTrue(isinstance(shap_explain.explain(), ShapExplainabilityResult))
 
     def test_explain_with_lags_future_covariates_series_of_same_length_as_target(self):
-        model = XGBModel(
+        model_cls = LightGBMModel if lgbm_available else XGBModel
+        model = model_cls(
             lags=4,
             lags_past_covariates=[-1, -2, -3],
             lags_future_covariates=[2],
@@ -464,7 +473,8 @@ def test_explain_with_lags_future_covariates_series_extending_into_future(self):
         fut_cov = np.random.normal(0, 1, len(days)).astype("float32")
         fut_cov_ts = TimeSeries.from_times_and_values(days, fut_cov.reshape(-1, 1))
 
-        model = XGBModel(
+        model_cls = LightGBMModel if lgbm_available else XGBModel
+        model = model_cls(
             lags=4,
             lags_past_covariates=[-1, -2, -3],
             lags_future_covariates=[2],
@@ -499,7 +509,8 @@ def test_explain_with_lags_covariates_series_older_timestamps_than_target(self):
         past_cov = np.random.normal(0, 1, len(days)).astype("float32")
         past_cov_ts = TimeSeries.from_times_and_values(days, past_cov.reshape(-1, 1))
 
-        model = XGBModel(
+        model_cls = LightGBMModel if lgbm_available else XGBModel
+        model = model_cls(
             lags=None,
             lags_past_covariates=[-1, -2],
             lags_future_covariates=[-1, -2],
@@ -525,7 +536,8 @@ def test_explain_with_lags_covariates_series_older_timestamps_than_target(self):
             self.assertEqual(explanation.start_time(), self.target_ts.start_time())
 
     def test_plot(self):
-        m_0 = XGBModel(
+        model_cls = LightGBMModel if lgbm_available else XGBModel
+        m_0 = model_cls(
             lags=4,
             lags_past_covariates=[-1, -2, -3],
             lags_future_covariates=[0],
@@ -618,7 +630,8 @@ def test_plot(self):
         plt.close()
 
     def test_feature_values_align_with_input(self):
-        model = XGBModel(
+        model_cls = LightGBMModel if lgbm_available else XGBModel
+        model = model_cls(
             lags=4,
             output_chunk_length=1,
         )
@@ -644,7 +657,8 @@ def test_feature_values_align_with_input(self):
         )
 
     def test_feature_values_align_with_raw_output_shap(self):
-        model = XGBModel(
+        model_cls = LightGBMModel if lgbm_available else XGBModel
+        model = model_cls(
             lags=4,
             output_chunk_length=1,
         )
@@ -670,7 +684,8 @@ def test_feature_values_align_with_raw_output_shap(self):
         ), "The shape of the feature values should be the same as the shap values"
 
     def test_shap_explanation_object_validity(self):
-        model = XGBModel(
+        model_cls = LightGBMModel if lgbm_available else XGBModel
+        model = model_cls(
             lags=4,
             lags_past_covariates=2,
             lags_future_covariates=[1],
@@ -692,7 +707,8 @@ def test_shap_explanation_object_validity(self):
         )
 
     def test_shap_selected_components(self):
-        model = XGBModel(
+        model_cls = LightGBMModel if lgbm_available else XGBModel
+        model = model_cls(
             lags=4,
             lags_past_covariates=2,
             lags_future_covariates=[1],
@@ -728,7 +744,8 @@ def test_shap_selected_components(self):
 
     def test_shapley_with_static_cov(self):
         ts = self.target_ts_with_static_covs
-        model = XGBModel(
+        model_cls = LightGBMModel if lgbm_available else XGBModel
+        model = model_cls(
             lags=4,
             output_chunk_length=1,
         )
@@ -772,7 +789,8 @@ def test_shapley_with_static_cov(self):
             ]
 
     def test_shapley_multiple_series_with_different_static_covs(self):
-        model = XGBModel(
+        model_cls = LightGBMModel if lgbm_available else XGBModel
+        model = model_cls(
             lags=4,
             output_chunk_length=1,
         )

From eebd227e8d54925bfc0c556dd4956ac6df9d762f Mon Sep 17 00:00:00 2001
From: dennisbader <dennis.bader@gmx.ch>
Date: Thu, 3 Aug 2023 11:51:38 +0200
Subject: [PATCH 11/12] apply suggestions from PR review

---
 INSTALL.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/INSTALL.md b/INSTALL.md
index cd8f14f061..6cb2fc8e84 100644
--- a/INSTALL.md
+++ b/INSTALL.md
@@ -33,7 +33,7 @@ As some models have relatively heavy dependencies, we provide four conda-forge p
 
 * Install darts with all available models: `conda install -c conda-forge -c pytorch u8darts-all`
 * Install core only (without neural networks, Prophet, LightGBM and Catboost): `conda install -c conda-forge u8darts`
-* Install core + Prophet + LightGBM + CatBoost: `pip install "u8darts[notorch]"`
+* Install core + Prophet + LightGBM + CatBoost: `conda install -c conda-forge u8darts-notorch`
 * Install core + neural networks (PyTorch): `conda install -c conda-forge -c pytorch u8darts-torch`
 
 
@@ -78,7 +78,7 @@ softwareupdate --install-rosetta
 
 Below are the necessary instructions to create and configure the environment:
 - Install conda if you haven't done so (e.g., with miniforge: `brew install miniforge`).
-- Create the x_64 environment : `CONDA_SUBDIR=osx-64 conda create -n env_name python=3.9 pip`
+- Create the x64 environment: `CONDA_SUBDIR=osx-64 conda create -n env_name python=3.10 pip`
 - Activate the environment: `conda activate env_name`
 - Configure the environment: `conda env config vars set CONDA_SUBDIR=osx-64`
 - Deactivate and reactivate the environment:

From a1ce78d161cf6fba5582a001e01ffabb829f77e8 Mon Sep 17 00:00:00 2001
From: dennisbader <dennis.bader@gmx.ch>
Date: Thu, 3 Aug 2023 13:26:57 +0200
Subject: [PATCH 12/12] update changelog

---
 CHANGELOG.md | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 1b24dc21b5..c69a1158d4 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -10,6 +10,13 @@ but cannot always guarantee backwards compatibility. Changes that may **break co
 
 ### For users of the library:
 
+**Installation**
+- 🔴 Removed Prophet, LightGBM, and CatBoost dependencies from the PyPI packages (`darts`, `u8darts`, `u8darts[torch]`) and the conda-forge packages (`u8darts`, `u8darts-torch`) to avoid installation issues that some users were facing (e.g., on Apple M1/M2 devices). [#1589](https://github.com/unit8co/darts/pull/1589) by [Julien Herzen](https://github.com/hrzn) and [Dennis Bader](https://github.com/dennisbader).
+  - The models are still supported by installing the required packages as described in our [installation guide](https://github.com/unit8co/darts/blob/master/INSTALL.md#enabling-optional-dependencies).
+  - PyPI package `u8darts[all]` and conda-forge package `u8darts-all` are now equivalent to the old `darts` package (all dependencies).
+  - Added a new PyPI flavor `u8darts[notorch]` and a conda-forge flavor `u8darts-notorch`, which are equivalent to the old `u8darts` installation.
+- 🔴 Removed support for Python 3.7. [#1864](https://github.com/unit8co/darts/pull/1864) by [Dennis Bader](https://github.com/dennisbader).
+
 **Improved**
 - General model improvements:
   - Added support for `PathLike` to the `save()` and `load()` functions of all non-deep learning based models. [#1754](https://github.com/unit8co/darts/pull/1754) by [Simon Sudrich](https://github.com/sudrich).
@@ -42,9 +49,6 @@ but cannot always guarantee backwards compatibility. Changes that may **break co
 - Fixed an issue when calling `ARIMA.predict()` and `num_samples > 1` (probabilistic forecasting), where the start point of the simulation was not anchored to the end of the target series. [#1893](https://github.com/unit8co/darts/pull/1893) by [Dennis Bader](https://github.com/dennisbader).
 - Fixed an issue when using `TFTModel.predict()` with `full_attention=True` where the attention mask was not applied properly. [#1392](https://github.com/unit8co/darts/issues/1392) by [Dennis Bader](https://github.com/dennisbader).
 
-**Removed**
-- Removed support for Python 3.7 [#1864](https://github.com/unit8co/darts/pull/#1864) by [Dennis Bader](https://github.com/dennisbader).
-
 ### For developers of the library:
 
 **Improvements**