
HPO: Alibaba DSW+DLC support #4055

Merged
merged 5 commits on Aug 23, 2021
83 changes: 83 additions & 0 deletions docs/en_US/TrainingService/DLCMode.rst
@@ -0,0 +1,83 @@
**Run an Experiment on Aliyun PAI-DSW + PAI-DLC**
===================================================

NNI supports running an experiment on `PAI-DSW <https://help.aliyun.com/document_detail/194831.html>`__ and submitting trials to `PAI-DLC <https://help.aliyun.com/document_detail/165137.html>`__; this is called dlc mode.

The PAI-DSW server plays the role of submitting jobs, while PAI-DLC is where the training jobs actually run.

Setup environment
-----------------

Step 1. Install NNI by following the installation guide `here <../Tutorial/QuickStart.rst>`__.

Step 2. Create a PAI-DSW server following this `link <https://help.aliyun.com/document_detail/163684.html?section-2cw-lsi-es9#title-ji9-re9-88x>`__. Note that since the training jobs run on PAI-DLC, the PAI-DSW server itself does not need many resources; a CPU-only PAI-DSW server is usually enough.

Step 3. Open PAI-DLC `here <https://pai-dlc.console.aliyun.com/#/guide>`__ and select the same region as your PAI-DSW server. Go to ``dataset configuration`` and mount the same NAS disk as the PAI-DSW server. (Note: currently only the PAI-DLC public cluster is supported.)

Step 4. Open a command line on your PAI-DSW server, then download and install the PAI-DLC Python SDK, which is used to submit DLC tasks; refer to `this link <https://help.aliyun.com/document_detail/203290.html>`__. Skip this step if the SDK is already installed.


.. code-block:: bash

wget https://sdk-portal-cluster-prod.oss-cn-zhangjiakou.aliyuncs.com/downloads/u-3536038a-3de7-4f2e-9379-0cb309d29355-python-pai-dlc.zip
unzip u-3536038a-3de7-4f2e-9379-0cb309d29355-python-pai-dlc.zip
pip install ./pai-dlc-20201203 # pai-dlc-20201203 is the unzipped SDK directory name; replace it accordingly.


Run an experiment
-----------------

Use ``examples/trials/mnist-pytorch`` as an example. The content of the NNI config YAML file is as follows:

.. code-block:: yaml

# working directory on DSW, please provide the FULL path
experimentWorkingDirectory: /home/admin/workspace/{your_working_dir}
searchSpaceFile: search_space.json
# the command run on the trial runner (i.e., the DLC container); note the data_dir
trialCommand: python mnist.py --data_dir /root/data/{your_data_dir}
trialConcurrency: 1  # NOTE: please use a number <= 3 due to the DLC system limit.
maxTrialNumber: 10
tuner:
  name: TPE
  classArgs:
    optimize_mode: maximize
# ref: https://help.aliyun.com/document_detail/203290.html?spm=a2c4g.11186623.6.727.6f9b5db6bzJh4x
trainingService:
  platform: dlc
  type: Worker
  image: registry-vpc.cn-beijing.aliyuncs.com/pai-dlc/pytorch-training:1.6.0-gpu-py37-cu101-ubuntu18.04
  jobType: PyTorchJob  # choices: [TFJob, PyTorchJob]
  podCount: 1
  ecsSpec: ecs.c6.large
  region: cn-hangzhou
  accessKeyId: ${your_ak_id}
  accessKeySecret: ${your_ak_key}
  nasDataSourceId: ${your_nas_data_source_id}  # NAS datasource ID, e.g., datat56by9n1xt0a
  localStorageMountPoint: /home/admin/workspace/  # default NAS path on DSW
  containerStorageMountPoint: /root/data/  # default NAS path on the DLC container, change it according to your setting

Note: you should set ``platform: dlc`` in the NNI config YAML file if you want to start an experiment in dlc mode.

Compared with `LocalMode <LocalMode.rst>`__, the training service configuration in dlc mode has additional keys such as ``type/image/jobType/podCount/ecsSpec/region/nasDataSourceId/accessKeyId/accessKeySecret``; for a detailed explanation, refer to this `link <https://help.aliyun.com/document_detail/203111.html#h2-url-3>`__.

Also, as dlc mode requires DSW and DLC to mount the same NAS disk to share information, there are two extra keys related to this: ``localStorageMountPoint`` and ``containerStorageMountPoint``. For example, with the defaults above, a file stored at ``/home/admin/workspace/foo`` on the DSW server is visible at ``/root/data/foo`` inside the DLC container.
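
For reference, the same experiment can also be launched from Python instead of a YAML file. The snippet below is a minimal, untested sketch that assumes the ``nni.experiment`` API shipped with NNI v2.3 (including the ``Experiment(config)`` constructor and ``run(port)``); the snake_case fields mirror the ``DlcConfig`` dataclass added in this PR, and every path, ID and key is a placeholder to replace with your own values. If your NNI version differs, the YAML route above is the reliable path.

.. code-block:: python

# a sketch of launching the dlc-mode experiment via the Python API (assumed NNI v2.3);
# all paths, IDs and keys below are placeholders.
from nni.experiment import Experiment
from nni.experiment.config import AlgorithmConfig, DlcConfig, ExperimentConfig

config = ExperimentConfig(
    experiment_working_directory='/home/admin/workspace/nni_dlc_example',  # full path on DSW (placeholder)
    search_space_file='search_space.json',
    trial_command='python mnist.py --data_dir /root/data/mnist',  # data dir inside the DLC container (placeholder)
    trial_code_directory='.',
    trial_concurrency=1,
    max_trial_number=10,
    tuner=AlgorithmConfig(name='TPE', class_args={'optimize_mode': 'maximize'}),
    training_service=DlcConfig(
        type='Worker',
        image='registry-vpc.cn-beijing.aliyuncs.com/pai-dlc/pytorch-training:1.6.0-gpu-py37-cu101-ubuntu18.04',
        job_type='PyTorchJob',
        pod_count=1,
        ecs_spec='ecs.c6.large',
        region='cn-hangzhou',
        nas_data_source_id='<your_nas_data_source_id>',
        access_key_id='<your_ak_id>',
        access_key_secret='<your_ak_key>',
        local_storage_mount_point='/home/admin/workspace/',
        container_storage_mount_point='/root/data/',
    ),
)

Experiment(config).run(8080)  # starts the experiment and serves the web UI on port 8080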

Run the following commands to start the example experiment:

.. code-block:: bash

git clone -b ${NNI_VERSION} https://github.com/microsoft/nni
cd nni/examples/trials/mnist-pytorch

# modify config_dlc.yml ...

nnictl create --config config_dlc.yml

Replace ``${NNI_VERSION}`` with a released version name or branch name, e.g., ``v2.3``.

Monitor your job
----------------

To monitor your job, visit the `DLC console <https://pai-dlc.console.aliyun.com/#/jobs>`__ to check its status.
6 changes: 4 additions & 2 deletions docs/en_US/TrainingService/Overview.rst
@@ -6,7 +6,7 @@ What is Training Service?

NNI training service is designed to allow users to focus on AutoML itself, agnostic to the underlying computing infrastructure where the trials are actually run. When migrating from one cluster to another (e.g., local machine to Kubeflow), users only need to tweak several configurations, and the experiment can be easily scaled.

Users can use the training services provided by NNI to run trial jobs on a `local machine <./LocalMode.rst>`__\ , on `remote machines <./RemoteMachineMode.rst>`__\ , and on clusters like `PAI <./PaiMode.rst>`__\ , `Kubeflow <./KubeflowMode.rst>`__\ , `AdaptDL <./AdaptDLMode.rst>`__\ , `FrameworkController <./FrameworkControllerMode.rst>`__\ , `DLTS <./DLTSMode.rst>`__ and `AML <./AMLMode.rst>`__. These are called *built-in training services*.
Users can use the training services provided by NNI to run trial jobs on a `local machine <./LocalMode.rst>`__\ , on `remote machines <./RemoteMachineMode.rst>`__\ , and on clusters like `PAI <./PaiMode.rst>`__\ , `Kubeflow <./KubeflowMode.rst>`__\ , `AdaptDL <./AdaptDLMode.rst>`__\ , `FrameworkController <./FrameworkControllerMode.rst>`__\ , `DLTS <./DLTSMode.rst>`__, `AML <./AMLMode.rst>`__ and `DLC <./DLCMode.rst>`__. These are called *built-in training services*.

If the computing resource you want to use is not listed above, NNI provides an interface that allows users to build their own training service easily. Please refer to `how to implement training service <./HowToImplementTrainingService.rst>`__ for details.

@@ -44,6 +44,8 @@ Built-in Training Services
- NNI supports running an experiment using `DLTS <https://github.com/microsoft/DLWorkspace.git>`__\ , which is an open source toolkit, developed by Microsoft, that allows AI scientists to spin up an AI cluster in turn-key fashion.
* - `AML <./AMLMode.rst>`__
- NNI supports running an experiment on `AML <https://azure.microsoft.com/en-us/services/machine-learning/>`__ , called aml mode.
* - `DLC <./DLCMode.rst>`__
- NNI supports running an experiment on `PAI-DLC <https://help.aliyun.com/document_detail/165137.html>`__ , called dlc mode.


What does Training Service do?
@@ -77,4 +79,4 @@ When reuse mode is enabled, a cluster, such as a remote machine or a computer in

In reuse mode, the user needs to make sure each trial can run independently in the same job (e.g., avoid loading checkpoints from previous trials).

.. note:: Currently, only the `Local <./LocalMode.rst>`__, `Remote <./RemoteMachineMode.rst>`__, `OpenPAI <./PaiMode.rst>`__ and `AML <./AMLMode.rst>`__ training services support reuse mode. For the Remote and OpenPAI training platforms, you can enable reuse mode manually as described `here <../reference/experiment_config.rst>`__. AML is implemented under reuse mode, so reuse mode is the default and does not need to be enabled manually.
.. note:: Currently, only the `Local <./LocalMode.rst>`__, `Remote <./RemoteMachineMode.rst>`__, `OpenPAI <./PaiMode.rst>`__, `AML <./AMLMode.rst>`__ and `DLC <./DLCMode.rst>`__ training services support reuse mode. For the Remote and OpenPAI training platforms, you can enable reuse mode manually as described `here <../reference/experiment_config.rst>`__. AML is implemented under reuse mode, so reuse mode is the default and does not need to be enabled manually.
106 changes: 106 additions & 0 deletions docs/en_US/reference/experiment_config.rst
@@ -409,6 +409,7 @@ One of the following:
- `RemoteConfig`_
- :ref:`OpenpaiConfig <openpai-class>`
- `AmlConfig`_
- `DlcConfig`_
- `HybridConfig`_

For `Kubeflow <../TrainingService/KubeflowMode.rst>`_, `FrameworkController <../TrainingService/FrameworkControllerMode.rst>`_, and `AdaptDL <../TrainingService/AdaptDLMode.rst>`_ training platforms, it is suggested to use `v1 config schema <../Tutorial/ExperimentConfig.rst>`_ for now.
@@ -797,6 +798,111 @@ AML compute cluster name.
type: ``str``


DlcConfig
---------

Detailed usage can be found `here <../TrainingService/DLCMode.rst>`__.


platform
""""""""

Constant string ``"dlc"``.


type
""""

Job spec type, e.g., ``"Worker"``.

type: ``str``

default: ``"Worker"``


image
"""""

Name and tag of the Docker image used to run the trials.

type: ``str``


jobType
"""""""

PAI-DLC training job type, ``"TFJob"`` or ``"PyTorchJob"``.

type: ``str``


podCount
""""""""

Number of pods used to run a single training job.

type: ``int``


ecsSpec
"""""""

ECS instance spec of the training server, e.g., ``ecs.c6.large``.

type: ``str``


region
""""""

The region where the PAI-DLC public cluster is located.

type: ``str``


nasDataSourceId
"""""""""""""""

The ID of the NAS data source configured on the PAI-DLC side.

type: ``str``



accessKeyId
"""""""""""

The accessKeyId of your cloud account.

type: ``str``



accessKeySecret
"""""""""""""""

The accessKeySecret of your cloud account.

type: ``str``



localStorageMountPoint
""""""""""""""""""""""

The mount point of the NAS on the PAI-DSW server; the default is ``/home/admin/workspace/``.

type: ``str``


containerStorageMountPoint
""""""""""""""""""""""""""

The mount point of the NAS on the PAI-DLC side; the default is ``/root/data/``.

type: ``str``
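
Because PAI-DSW and the PAI-DLC container mount the same NAS disk, a file stored under ``localStorageMountPoint`` on the DSW server appears under ``containerStorageMountPoint`` inside the DLC container at the same relative path. The helper below only illustrates that mapping; the function name and default arguments are hypothetical, not part of NNI.

.. code-block:: python

from pathlib import PurePosixPath

def dsw_to_dlc_path(dsw_path: str,
                    local_mount: str = '/home/admin/workspace/',
                    container_mount: str = '/root/data/') -> str:
    """Translate a path on the DSW NAS mount into its path inside the DLC container."""
    relative = PurePosixPath(dsw_path).relative_to(local_mount)
    return str(PurePosixPath(container_mount) / relative)

# data prepared at /home/admin/workspace/mnist on DSW is read as /root/data/mnist by the trial command
print(dsw_to_dlc_path('/home/admin/workspace/mnist'))  # -> /root/data/mnist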


HybridConfig
------------

1 change: 1 addition & 0 deletions docs/en_US/training_services.rst
@@ -11,4 +11,5 @@ Introduction to NNI Training Services
FrameworkController<./TrainingService/FrameworkControllerMode>
DLTS<./TrainingService/DLTSMode>
AML<./TrainingService/AMLMode>
PAI-DLC<./TrainingService/DLCMode>
Hybrid<./TrainingService/HybridMode>
25 changes: 25 additions & 0 deletions examples/trials/mnist-pytorch/config_dlc.yml
@@ -0,0 +1,25 @@
# working directory on DSW, please provide the FULL path
searchSpaceFile: search_space.json
# the command run on the trial runner (i.e., the DLC container); note the data_dir
trialCommand: python mnist.py --data_dir /root/data/{your_data_dir}
trialConcurrency: 1  # NOTE: please use a number <= 3 due to the DLC system limit.
maxTrialNumber: 10
tuner:
  name: TPE
  classArgs:
    optimize_mode: maximize
# ref: https://help.aliyun.com/document_detail/203290.html?spm=a2c4g.11186623.6.727.6f9b5db6bzJh4x
trainingService:
  platform: dlc
  type: Worker
  image: registry-vpc.cn-beijing.aliyuncs.com/pai-dlc/pytorch-training:1.6.0-gpu-py37-cu101-ubuntu18.04
  jobType: PyTorchJob  # choices: [TFJob, PyTorchJob]
  podCount: 1
  ecsSpec: ecs.c6.large
  region: cn-hangzhou
  accessKeyId: ${your_ak_id}
  accessKeySecret: ${your_ak_key}
Contributor


Providing accessKeySecret directly in the YAML file is not recommended.

  nasDataSourceId: ${your_nas_data_source_id}  # NAS datasource ID, e.g., datat56by9n1xt0a
  localStorageMountPoint: /home/admin/workspace/  # default NAS path on DSW, MUST provide the full path.
  containerStorageMountPoint: /root/data/  # default NAS path on the DLC container, change it according to your setting
1 change: 1 addition & 0 deletions nni/experiment/config/__init__.py
@@ -9,4 +9,5 @@
from .kubeflow import *
from .frameworkcontroller import *
from .adl import *
from .dlc import *
from .shared_storage import *
27 changes: 27 additions & 0 deletions nni/experiment/config/dlc.py
@@ -0,0 +1,27 @@
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.

from dataclasses import dataclass

from .common import TrainingServiceConfig

__all__ = ['DlcConfig']

@dataclass(init=False)
class DlcConfig(TrainingServiceConfig):
    platform: str = 'dlc'
    type: str = 'Worker'
    image: str  # e.g., 'registry-vpc.{region}.aliyuncs.com/pai-dlc/tensorflow-training:1.15.0-cpu-py36-ubuntu18.04'
    job_type: str = 'TFJob'
    pod_count: int
    ecs_spec: str  # e.g., 'ecs.c6.large'
    region: str
    nas_data_source_id: str
    access_key_id: str
    access_key_secret: str
    local_storage_mount_point: str
    container_storage_mount_point: str

    _validation_rules = {
        'platform': lambda value: (value == 'dlc', 'cannot be modified')
    }
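
The field names above are the snake_case forms of the camelCase keys used in the experiment YAML (job_type for jobType, nas_data_source_id for nasDataSourceId, and so on). A tiny standalone illustration of that naming convention (the helper is ours, not part of NNI):

# illustration only: maps a DlcConfig field name to its YAML key
def snake_to_camel(name: str) -> str:
    head, *rest = name.split('_')
    return head + ''.join(part.capitalize() for part in rest)

assert snake_to_camel('job_type') == 'jobType'
assert snake_to_camel('nas_data_source_id') == 'nasDataSourceId'
assert snake_to_camel('container_storage_mount_point') == 'containerStorageMountPoint'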
6 changes: 6 additions & 0 deletions nni/tools/nnictl/config_utils.py
@@ -68,6 +68,12 @@ def _inverse_cluster_metadata(platform: str, metadata_config: list) -> dict:
                inverse_config['amlConfig'] = kv['value']
            elif kv['key'] == 'trial_config':
                inverse_config['trial'] = kv['value']
    elif platform == 'dlc':
        for kv in metadata_config:
            if kv['key'] == 'dlc_config':
                inverse_config['dlcConfig'] = kv['value']
            elif kv['key'] == 'trial_config':
                inverse_config['trial'] = kv['value']
    elif platform == 'adl':
        for kv in metadata_config:
            if kv['key'] == 'adl_config':
1 change: 1 addition & 0 deletions nni/tools/nnictl/ts_management.py
@@ -9,6 +9,7 @@
    'remote',
    'openpai', 'pai',
    'aml',
    'dlc',
    'kubeflow',
    'frameworkcontroller',
    'adl',
19 changes: 19 additions & 0 deletions ts/nni_manager/common/experimentConfig.ts
@@ -73,6 +73,25 @@ export interface AmlConfig extends TrainingServiceConfig {
maxTrialNumberPerGpu: number;
}


/* Alibaba PAI DLC */
export interface DlcConfig extends TrainingServiceConfig {
    platform: 'dlc';
    type: string;
    image: string;
    jobType: string;
    podCount: number;
    ecsSpec: string;
    region: string;
    nasDataSourceId: string;
    accessKeyId: string;
    accessKeySecret: string;
    localStorageMountPoint: string;
    containerStorageMountPoint: string;
}
/* Kubeflow */

// FIXME: merge with shared storage config
export interface KubeflowStorageConfig {
storageType: string;
maxTrialNumberPerGpu?: number;