From 50758b9fab0f91f0ab6d149c333ae25393ad21b2 Mon Sep 17 00:00:00 2001 From: Yan Ni Date: Mon, 7 Jan 2019 18:03:47 +0800 Subject: [PATCH 1/4] Dev weight sharing (#568) * add pycharm project files to .gitignore list * update pylintrc to conform vscode settings * fix RemoteMachineMode for wrong trainingServicePlatform * simple weight sharing * update gitignore file * change tuner codedir to relative path * add python cache files to gitignore list * move extract scalar reward logic from dispatcher to tuner * update tuner code corresponding to last commit * update doc for receive_trial_result api change * add numpy to package whitelist of pylint * distinguish param value from return reward for tuner.extract_scalar_reward * update pylintrc * add comments to dispatcher.handle_report_metric_data * update install for mac support * fix root mode bug on Makefile * Quick fix bug: nnictl port value error (#245) * fix port bug * Dev exp stop more (#221) * Exp stop refactor (#161) * Update RemoteMachineMode.md (#63) * Remove unused classes for SQuAD QA example. * Remove more unused functions for SQuAD QA example. * Fix default dataset config. * Add Makefile README (#64) * update document (#92) * Edit readme.md * updated a word * Update GetStarted.md * Update GetStarted.md * refact readme, getstarted and write your trial md. * Update README.md * Update WriteYourTrial.md * Update WriteYourTrial.md * Update WriteYourTrial.md * Update WriteYourTrial.md * Fix nnictl bugs and add new feature (#75) * fix nnictl bug * fix nnictl create bug * add experiment status logic * add more information for nnictl * fix Evolution Tuner bug * refactor code * fix code in updater.py * fix nnictl --help * fix classArgs bug * update check response.status_code logic * remove Buffer warning (#100) * update readme in ga_squad * update readme * fix typo * Update README.md * Update README.md * Update README.md * Add support for debugging mode * fix setup.py (#115) * Add DAG model configuration format for SQuAD example. * Explain config format for SQuAD QA model. * Add more detailed introduction about the evolution algorithm. 
* Fix install.sh add add trial log path (#109) * fix nnictl bug * fix nnictl create bug * add experiment status logic * add more information for nnictl * fix Evolution Tuner bug * refactor code * fix code in updater.py * fix nnictl --help * fix classArgs bug * update check response.status_code logic * show trial log path * update document * fix install.sh * set default vallue for maxTrialNum and maxExecDuration * fix nnictl * Dev smac (#116) * support package install (#91) * fix nnictl bug * support package install * update * update package install logic * Fix package install issue (#95) * fix nnictl bug * fix pakcage install * support SMAC as a tuner on nni (#81) * update doc * update doc * update doc * update hyperopt installation * update doc * update doc * update description in setup.py * update setup.py * modify encoding * encoding * add encoding * remove pymc3 * update doc * update builtin tuner spec * support smac in sdk, fix logging issue * support smac tuner * add optimize_mode * update config in nnictl * add __init__.py * update smac * update import path * update setup.py: remove entry_point * update rest server validation * fix bug in nnictl launcher * support classArgs: optimize_mode * quick fix bug * test travis * add dependency * add dependency * add dependency * add dependency * create smac python package * fix trivial points * optimize import of tuners, modify nnictl accordingly * fix bug: incorrect algorithm_name * trivial refactor * for debug * support virtual * update doc of SMAC * update smac requirements * update requirements * change debug mode * update doc * update doc * refactor based on comments * fix comments * modify example config path to relative path and increase maxTrialNum (#94) * modify example config path to relative path and increase maxTrialNum * add document * support conda (#90) (#110) * support install from venv and travis CI * support install from venv and travis CI * support install from venv and travis CI * support conda * support conda * modify example config path to relative path and increase maxTrialNum * undo messy commit * undo messy commit * Support pip install as root (#77) * Typo on #58 (#122) * PAI Training Service implementation (#128) * PAI Training service implementation **1. Implement PAITrainingService **2. Add trial-keeper python module, and modify setup.py to install the module **3. Add PAItrainingService rest server to collect metrics from PAI container. 
* fix datastore for multiple final result (#129) * Update NNI v0.2 release notes (#132) Update NNI v0.2 release notes * Update setup.py Makefile and documents (#130) * update makefile and setup.py * update makefile and setup.py * update document * update document * Update Makefile no travis * update doc * update doc * fix convert from ss to pcs (#133) * Fix bugs about webui (#131) * Fix webui bugs * Fix tslint * webui logpath and document (#135) * Add webui document and logpath as a href * fix tslint * fix comments by Chengmin * Pai training service bug fix and enhancement (#136) * Add NNI installation scripts * Update pai script, update NNI_out_dir * Update NNI dir in nni sdk local.py * Create .nni folder in nni sdk local.py * Add check before creating .nni folder * Fix typo for PAI_INSTALL_NNI_SHELL_FORMAT * Improve annotation (#138) * Improve annotation * Minor bugfix * Selectively install through pip (#139) Selectively install through pip * update setup.py * fix paiTrainingService bugs (#137) * fix nnictl bug * add hdfs host validation * fix bugs * fix dockerfile * fix install.sh * update install.sh * fix dockerfile * Set timeout for HDFSUtility exists function * remove unused TODO * fix sdk * add optional for outputDir and dataDir * refactor dockerfile.base * Remove unused import in hdfsclientUtility * Add documentation for NNI PAI mode experiment (#141) * Add documentation for NNI PAI mode * Fix typo based on PR comments * Exit with subprocess return code of trial keeper * Remove additional exit code * Fix typo based on PR comments * update doc for smac tuner (#140) * Revert "Selectively install through pip (#139)" due to potential pip install issue (#142) * Revert "Selectively install through pip (#139)" This reverts commit 1d174836d3146a0363e9c9c88094bf9cff865faa. * Add exit code of subprocess for trial_keeper * Update README, add link to PAImode doc * Merge branch V0.2 to Master (#143) * webui logpath and document (#135) * Add webui document and logpath as a href * fix tslint * fix comments by Chengmin * Pai training service bug fix and enhancement (#136) * Add NNI installation scripts * Update pai script, update NNI_out_dir * Update NNI dir in nni sdk local.py * Create .nni folder in nni sdk local.py * Add check before creating .nni folder * Fix typo for PAI_INSTALL_NNI_SHELL_FORMAT * Improve annotation (#138) * Improve annotation * Minor bugfix * Selectively install through pip (#139) Selectively install through pip * update setup.py * fix paiTrainingService bugs (#137) * fix nnictl bug * add hdfs host validation * fix bugs * fix dockerfile * fix install.sh * update install.sh * fix dockerfile * Set timeout for HDFSUtility exists function * remove unused TODO * fix sdk * add optional for outputDir and dataDir * refactor dockerfile.base * Remove unused import in hdfsclientUtility * Add documentation for NNI PAI mode experiment (#141) * Add documentation for NNI PAI mode * Fix typo based on PR comments * Exit with subprocess return code of trial keeper * Remove additional exit code * Fix typo based on PR comments * update doc for smac tuner (#140) * Revert "Selectively install through pip (#139)" due to potential pip install issue (#142) * Revert "Selectively install through pip (#139)" This reverts commit 1d174836d3146a0363e9c9c88094bf9cff865faa. 
* Add exit code of subprocess for trial_keeper * Update README, add link to PAImode doc * fix bug (#147) * Refactor nnictl and add config_pai.yml (#144) * fix nnictl bug * add hdfs host validation * fix bugs * fix dockerfile * fix install.sh * update install.sh * fix dockerfile * Set timeout for HDFSUtility exists function * remove unused TODO * fix sdk * add optional for outputDir and dataDir * refactor dockerfile.base * Remove unused import in hdfsclientUtility * add config_pai.yml * refactor nnictl create logic and add colorful print * fix nnictl stop logic * add annotation for config_pai.yml * add document for start experiment * fix config.yml * fix document * Fix trial keeper wrongly exit issue (#152) * Fix trial keeper bug, use actual exitcode to exit rather than 1 * Fix bug of table sort (#145) * Update doc for PAIMode and v0.2 release notes (#153) * Update v0.2 documentation regards to release note and PAI training service * Update document to describe NNI docker image * fix antd (#159) * refactor experiment stopping logic * support change concurrency * remove trialJobs.ts * trivial changes * fix bugs * fix bug * support updating maxTrialNum * Modify IT scripts for supporting multiple experiments * Update ci (#175) * Update RemoteMachineMode.md (#63) * Remove unused classes for SQuAD QA example. * Remove more unused functions for SQuAD QA example. * Fix default dataset config. * Add Makefile README (#64) * update document (#92) * Edit readme.md * updated a word * Update GetStarted.md * Update GetStarted.md * refact readme, getstarted and write your trial md. * Update README.md * Update WriteYourTrial.md * Update WriteYourTrial.md * Update WriteYourTrial.md * Update WriteYourTrial.md * Fix nnictl bugs and add new feature (#75) * fix nnictl bug * fix nnictl create bug * add experiment status logic * add more information for nnictl * fix Evolution Tuner bug * refactor code * fix code in updater.py * fix nnictl --help * fix classArgs bug * update check response.status_code logic * remove Buffer warning (#100) * update readme in ga_squad * update readme * fix typo * Update README.md * Update README.md * Update README.md * Add support for debugging mode * modify CI cuz of refracting exp stop * update CI for expstop * update CI for expstop * update CI for expstop * update CI for expstop * update CI for expstop * update CI for expstop * update CI for expstop * update CI for expstop * update CI for expstop * file saving * fix issues from code merge * remove $(INSTALL_PREFIX)/nni/nni_manager before install * fix indent * fix merge issue * socket close * update port * fix merge error * modify ci logic in nnimanager * fix ci * fix bug * change suspended to done * update ci (#229) * update ci * update ci * update ci (#232) * update ci * update ci * update azure-pipelines * update azure-pipelines * update ci (#233) * update ci * update ci * update azure-pipelines * update azure-pipelines * update azure-pipelines * run.py (#238) * Nnupdate ci (#239) * run.py * test ci * Nnupdate ci (#240) * run.py * test ci * test ci * Udci (#241) * run.py * test ci * test ci * test ci * update ci (#242) * run.py * test ci * test ci * test ci * update ci * revert install.sh (#244) * run.py * test ci * test ci * test ci * update ci * revert install.sh * add comments * remove assert * trivial change * trivial change * update Makefile (#246) * update Makefile * update Makefile * quick fix for ci (#248) * add update trialNum and fix bugs (#261) * Add builtin tuner to CI (#247) * update Makefile * update Makefile * 
add builtin-tuner test * add builtin-tuner test * refractor ci * update azure.yml * add built-in tuner test * fix bugs * Doc refactor (#258) * doc refactor * image name refactor * Refactor nnictl to support listing stopped experiments. (#256) Refactor nnictl to support listing stopped experiments. * Show experiment parameters more beautifully (#262) * fix error on example of RemoteMachineMode (#269) * add pycharm project files to .gitignore list * update pylintrc to conform vscode settings * fix RemoteMachineMode for wrong trainingServicePlatform * Update docker file to use latest nni release (#263) * fix bug about execDuration and endTime (#270) * fix bug about execDuration and endTime * modify time interval to 30 seconds * refactor based on Gems's suggestion * for triggering ci * Refactor dockerfile (#264) * refactor Dockerfile * Support nnictl tensorboard (#268) support tensorboard * Sdk update (#272) * Rename get_parameters to get_next_parameter * annotations add get_next_parameter * updates * updates * updates * updates * updates * add experiment log path to experiment profile (#276) * refactor extract reward from dict by tuner * update Makefile for mac support, wait for aka.ms support * refix Makefile for colorful echo * unversion config.yml with machine information * sync graph.py between tuners & trial of ga_squad * sync graph.py between tuners & trial of ga_squad * copy weight shared ga_squad under weight_sharing folder * mv ga_squad code back to master * simple tuner & trial ready * Fix nnictl multiThread option * weight sharing with async dispatcher simple example ready * update for ga_squad * fix bug * modify multihead attention name * add min_layer_num to Graph * fix bug * update share id calc * fix bug * add save logging * fix ga_squad tuner bug * sync bug fix for ga_squad tuner * fix same hash_id bug * add lock to simple tuner in weight sharing * Add readme to simple weight sharing * update * update * add paper link * update * reformat with autopep8 * add documentation for weight sharing * test for weight sharing * delete irrelevant files * move details of weight sharing in to code comments --- docs/AdvancedNAS.md | 71 +++ examples/trials/ga_squad/trial.py | 6 +- .../weight_sharing/ga_squad/attention.py | 171 +++++++ .../weight_sharing/ga_squad/config_remote.yml | 31 ++ .../trials/weight_sharing/ga_squad/data.py | 269 ++++++++++ .../weight_sharing/ga_squad/download.sh | 6 + .../weight_sharing/ga_squad/evaluate.py | 169 +++++++ .../trials/weight_sharing/ga_squad/graph.py | 336 +++++++++++++ .../weight_sharing/ga_squad/graph_to_tf.py | 342 +++++++++++++ .../trials/weight_sharing/ga_squad/rnn.py | 118 +++++ .../weight_sharing/ga_squad/train_model.py | 263 ++++++++++ .../trials/weight_sharing/ga_squad/trial.py | 461 ++++++++++++++++++ .../trials/weight_sharing/ga_squad/util.py | 76 +++ .../ga_customer_tuner/customer_tuner.py | 2 +- .../ga_customer_tuner/README.md | 15 + .../ga_customer_tuner/__init__.py | 0 .../ga_customer_tuner/customer_tuner.py | 224 +++++++++ .../weight_sharing/ga_customer_tuner/graph.py | 336 +++++++++++++ src/sdk/pynni/nni/common.py | 3 +- src/sdk/pynni/nni/msg_dispatcher.py | 1 + src/sdk/pynni/nni/msg_dispatcher_base.py | 20 +- src/sdk/pynni/nni/tuner.py | 1 + test/async_sharing_test/config.yml | 25 + test/async_sharing_test/main.py | 56 +++ test/async_sharing_test/simple_tuner.py | 65 +++ tools/nni_cmd/launcher.py | 2 + 26 files changed, 3060 insertions(+), 9 deletions(-) create mode 100644 docs/AdvancedNAS.md create mode 100644 
examples/trials/weight_sharing/ga_squad/attention.py
 create mode 100644 examples/trials/weight_sharing/ga_squad/config_remote.yml
 create mode 100644 examples/trials/weight_sharing/ga_squad/data.py
 create mode 100644 examples/trials/weight_sharing/ga_squad/download.sh
 create mode 100644 examples/trials/weight_sharing/ga_squad/evaluate.py
 create mode 100644 examples/trials/weight_sharing/ga_squad/graph.py
 create mode 100644 examples/trials/weight_sharing/ga_squad/graph_to_tf.py
 create mode 100644 examples/trials/weight_sharing/ga_squad/rnn.py
 create mode 100644 examples/trials/weight_sharing/ga_squad/train_model.py
 create mode 100644 examples/trials/weight_sharing/ga_squad/trial.py
 create mode 100644 examples/trials/weight_sharing/ga_squad/util.py
 create mode 100644 examples/tuners/weight_sharing/ga_customer_tuner/README.md
 create mode 100644 examples/tuners/weight_sharing/ga_customer_tuner/__init__.py
 create mode 100644 examples/tuners/weight_sharing/ga_customer_tuner/customer_tuner.py
 create mode 100644 examples/tuners/weight_sharing/ga_customer_tuner/graph.py
 create mode 100644 test/async_sharing_test/config.yml
 create mode 100644 test/async_sharing_test/main.py
 create mode 100644 test/async_sharing_test/simple_tuner.py

diff --git a/docs/AdvancedNAS.md b/docs/AdvancedNAS.md
new file mode 100644
index 0000000000..3d2dd986bb
--- /dev/null
+++ b/docs/AdvancedNAS.md
@@ -0,0 +1,71 @@
+# Tutorial for Advanced Neural Architecture Search
+Currently, many NAS algorithms leverage **weight sharing** among trials to accelerate the training process. For example, [ENAS][1] delivers a 1000x efficiency gain through '_parameter sharing between child models_', compared with the earlier [NASNet][2] algorithm. Other NAS algorithms such as [DARTS][3], [Network Morphism][4], and [Evolution][5] also leverage, or have the potential to leverage, weight sharing.
+
+This is a tutorial on how to enable weight sharing in NNI.
+
+## Weight Sharing among trials
+Currently we recommend sharing weights through NFS (Network File System), which supports sharing files across machines and is lightweight and (relatively) efficient. We also welcome contributions from the community on more efficient techniques.
+
+### NFS Setup
+In NFS, files are physically stored on a server machine, and trials on the client machines can read/write those files in the same way they access local files.
+
+#### Install NFS on server machine
+First, install the NFS server:
+```bash
+sudo apt-get install nfs-kernel-server
+```
+Suppose `/tmp/nni/shared` is used as the physical storage, then run:
+```bash
+sudo mkdir -p /tmp/nni/shared
+echo "/tmp/nni/shared *(rw,sync,no_subtree_check,no_root_squash)" | sudo tee -a /etc/exports
+sudo service nfs-kernel-server restart
+```
+Note that `sudo echo ... >> /etc/exports` would not work, because the redirection is performed by the unprivileged shell; `sudo tee -a` is used instead. You can check whether the directory is successfully exported by NFS with `sudo showmount -e localhost`.
+
+#### Install NFS on client machine
+First, install the NFS client:
+```bash
+sudo apt-get install nfs-common
+```
+Then create and mount the directory for the shared files:
+```bash
+sudo mkdir -p /mnt/nfs/nni/
+sudo mount -t nfs 10.10.10.10:/tmp/nni/shared /mnt/nfs/nni
+```
+where `10.10.10.10` should be replaced by the real IP of the NFS server machine.
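+
+Before launching an experiment, it may be worth sanity-checking that the mount is shared correctly. A minimal check (assuming the client mount point above; the probe file name is arbitrary):
+```python
+import os
+
+shared_dir = '/mnt/nfs/nni'  # client-side mount point from the setup above
+probe = os.path.join(shared_dir, 'nfs_probe.txt')
+
+with open(probe, 'w') as f:
+    f.write('hello nfs')
+# read it back here, or from another client machine, to confirm sharing works
+with open(probe) as f:
+    assert f.read() == 'hello nfs'
+os.remove(probe)
+```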
+### Weight Sharing through NFS file
+With NFS set up, trial code can share model weights by saving and loading checkpoint files. For example, in TensorFlow:
+```python
+# save models
+saver = tf.train.Saver()
+saver.save(sess, os.path.join(params['save_path'], 'model.ckpt'))
+# load models: initialize matching variables from the parent checkpoint
+tf.train.init_from_checkpoint(params['restore_path'], assignment_map={'/': '/'})
+```
+where `'save_path'` and `'restore_path'` in the hyper-parameters can be managed by the tuner.
+
+## Asynchronous Dispatcher Mode for trial dependency control
+Weight sharing lets trials running on different machines depend on one another, and most of the time **read-after-write** consistency must be assured: the child model should not load the parent model before the parent trial finishes training. To deal with this, users can enable **asynchronous dispatcher mode** by setting `multiThread: true` in NNI's `config.yml`. The dispatcher then assigns a tuner thread each time a `NEW_TRIAL` request comes in, and each tuner thread can decide when to submit a new trial by blocking and unblocking itself. For example:
+```python
+    def generate_parameters(self, parameter_id):
+        self.thread_lock.acquire()
+        indiv = ...  # generate the configuration for a new trial here
+        self.events[parameter_id] = threading.Event()
+        self.thread_lock.release()
+        if indiv.parent_id is not None:
+            # block until the parent trial has reported its result
+            self.events[indiv.parent_id].wait()
+        return indiv  # the parameter configuration sent to the new trial
+
+    def receive_trial_result(self, parameter_id, parameters, reward):
+        self.thread_lock.acquire()
+        # code for processing trial results
+        self.thread_lock.release()
+        # unblock child trials waiting for this trial's weights
+        self.events[parameter_id].set()
+```
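+The snippet above assumes the tuner holds a lock plus one `threading.Event` per trial. A minimal sketch of that state (a hypothetical tuner subclass for illustration, not one of the built-in tuners):
+```python
+import threading
+
+from nni.tuner import Tuner
+
+class WeightSharingTuner(Tuner):  # hypothetical example class
+    def __init__(self):
+        super().__init__()
+        self.thread_lock = threading.Lock()  # guards shared tuner state
+        self.events = {}  # parameter_id -> threading.Event
+```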
+
+
+[1]: https://arxiv.org/abs/1802.03268
+[2]: https://arxiv.org/abs/1707.07012
+[3]: https://arxiv.org/abs/1806.09055
+[4]: https://arxiv.org/abs/1806.10282
+[5]: https://arxiv.org/abs/1703.01041
\ No newline at end of file
diff --git a/examples/trials/ga_squad/trial.py b/examples/trials/ga_squad/trial.py
index cb6640ac7a..815e88af4e 100644
--- a/examples/trials/ga_squad/trial.py
+++ b/examples/trials/ga_squad/trial.py
@@ -338,7 +338,7 @@ def train_with_graph(graph, qp_pairs, dev_qp_pairs):
         answers = generate_predict_json(
             position1, position2, ids, contexts)
         if save_path is not None:
-            with open(save_path + 'epoch%d.prediction' % epoch, 'w') as file:
+            with open(os.path.join(save_path, 'epoch%d.prediction' % epoch), 'w') as file:
                 json.dump(answers, file)
         else:
             answers = json.dumps(answers)
@@ -359,8 +359,8 @@ def train_with_graph(graph, qp_pairs, dev_qp_pairs):
                 bestacc = acc
 
                 if save_path is not None:
-                    saver.save(sess, save_path + 'epoch%d.model' % epoch)
-                    with open(save_path + 'epoch%d.score' % epoch, 'wb') as file:
+                    saver.save(sess, os.path.join(save_path, 'epoch%d.model' % epoch))
+                    with open(os.path.join(save_path, 'epoch%d.score' % epoch), 'wb') as file:
                         pickle.dump(
                             (position1, position2, ids, contexts), file)
                 logger.debug('epoch %d acc %g bestacc %g' %
diff --git a/examples/trials/weight_sharing/ga_squad/attention.py b/examples/trials/weight_sharing/ga_squad/attention.py
new file mode 100644
index 0000000000..812db53221
--- /dev/null
+++ b/examples/trials/weight_sharing/ga_squad/attention.py
@@ -0,0 +1,171 @@
+# Copyright (c) Microsoft Corporation
+# All rights reserved.
+#
+# MIT License
+#
+# Permission is hereby granted, free of charge,
+# to any person obtaining a copy of this software and associated
+# documentation files (the "Software"),
+# to deal in the Software without restriction, including without limitation
+# the rights to use, copy, modify, merge, publish, distribute, sublicense,
+# and/or sell copies of the Software, and
+# to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
+# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
+# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+import math
+
+import tensorflow as tf
+from tensorflow.python.ops.rnn_cell_impl import RNNCell
+
+
+def _get_variable(variable_dict, name, shape, initializer=None, dtype=tf.float32):
+    if name not in variable_dict:
+        variable_dict[name] = tf.get_variable(
+            name=name, shape=shape, initializer=initializer, dtype=dtype)
+    return variable_dict[name]
+
+
+class DotAttention:
+    '''
+    DotAttention
+    '''
+
+    def __init__(self, name,
+                 hidden_dim,
+                 is_vanilla=True,
+                 is_identity_transform=False,
+                 need_padding=False):
+        self._name = '/'.join([name, 'dot_att'])
+        self._hidden_dim = hidden_dim
+        self._is_identity_transform = is_identity_transform
+        self._need_padding = need_padding
+        self._is_vanilla = is_vanilla
+        self._var = {}
+
+    @property
+    def is_identity_transform(self):
+        return self._is_identity_transform
+
+    @property
+    def is_vanilla(self):
+        return self._is_vanilla
+
+    @property
+    def need_padding(self):
+        return self._need_padding
+
+    @property
+    def hidden_dim(self):
+        return self._hidden_dim
+
+    @property
+    def name(self):
+        return self._name
+
+    @property
+    def var(self):
+        return self._var
+
+    def _get_var(self, name, shape, initializer=None):
+        with tf.variable_scope(self.name):
+            return _get_variable(self.var, name, shape, initializer)
+
+    def _define_params(self, src_dim, tgt_dim):
+        hidden_dim = self.hidden_dim
+        self._get_var('W', [src_dim, hidden_dim])
+        if not self.is_vanilla:
+            self._get_var('V', [src_dim, hidden_dim])
+            if self.need_padding:
+                self._get_var('V_s', [src_dim, src_dim])
+                self._get_var('V_t', [tgt_dim, tgt_dim])
+            if not self.is_identity_transform:
+                self._get_var('T', [tgt_dim, src_dim])
+        self._get_var('U', [tgt_dim, hidden_dim])
+        self._get_var('b', [1, hidden_dim])
+        self._get_var('v', [hidden_dim, 1])
+
+    def get_pre_compute(self, s):
+        '''
+        :param s: [src_sequence, batch_size, src_dim]
+        :return: [src_sequence, batch_size, hidden_dim]
+        '''
+        hidden_dim = self.hidden_dim
+        src_dim = s.get_shape().as_list()[-1]
+        assert src_dim is not None, 'src dim must be defined'
+        W = self._get_var('W', shape=[src_dim, hidden_dim])
+        b = self._get_var('b', shape=[1, hidden_dim])
+        return tf.tensordot(s, W, [[2], [0]]) + b
+
+    def get_prob(self, src, tgt, mask, pre_compute, return_logits=False):
+        '''
+        :param src: [src_sequence_length, batch_size, src_dim]
+        :param tgt: [batch_size, tgt_dim] or [tgt_sequence_length, batch_size, tgt_dim]
+        :param mask: [src_sequence_length, batch_size]\
+            or [tgt_sequence_length, src_sequence_length, batch_size]
+        :param pre_compute: [src_sequence_length, batch_size, hidden_dim]
+        :return: [src_sequence_length, batch_size]\
+            or [tgt_sequence_length, src_sequence_length, batch_size]
+        '''
+        s_shape = src.get_shape().as_list()
+        h_shape = tgt.get_shape().as_list()
+        src_dim = s_shape[-1]
+        tgt_dim = h_shape[-1]
+        assert src_dim is not None, 'src dimension must be defined'
+        assert tgt_dim is not None, 'tgt dimension must be defined'
+
+        self._define_params(src_dim, tgt_dim)
+
+        if len(h_shape) == 2:
+            tgt = tf.expand_dims(tgt, 0)
+        if pre_compute is None:
+            pre_compute = self.get_pre_compute(src)
+
+        buf0 = pre_compute
+        buf1 = tf.tensordot(tgt, self.var['U'], axes=[[2], [0]])
+        buf2 = tf.tanh(tf.expand_dims(buf0, 0) + tf.expand_dims(buf1, 1))
+
+        if not self.is_vanilla:
+            xh1 = tgt
+            xh2 = tgt
+            s1 = src
+            if self.need_padding:
+                xh1 = tf.tensordot(xh1, self.var['V_t'], 1)
+                xh2 = tf.tensordot(xh2, self.var['V_t'], 1)
+                s1 = tf.tensordot(s1, self.var['V_s'], 1)
+            if not self.is_identity_transform:
+                xh1 = tf.tensordot(xh1, self.var['T'], 1)
+                xh2 = tf.tensordot(xh2, self.var['T'], 1)
+            buf3 = tf.expand_dims(s1, 0) * tf.expand_dims(xh1, 1)
+            buf3 = tf.tanh(tf.tensordot(buf3, self.var['V'], axes=[[3], [0]]))
+            buf = tf.reshape(tf.tanh(buf2 + buf3), shape=tf.shape(buf3))
+        else:
+            buf = buf2
+        v = self.var['v']
+        e = tf.tensordot(buf, v, [[3], [0]])
+        e = tf.squeeze(e, axis=[3])
+        tmp = tf.reshape(e + (mask - 1) * 10000.0, shape=tf.shape(e))
+        prob = tf.nn.softmax(tmp, 1)
+        if len(h_shape) == 2:
+            prob = tf.squeeze(prob, axis=[0])
+            tmp = tf.squeeze(tmp, axis=[0])
+        if return_logits:
+            return prob, tmp
+        return prob
+
+    def get_att(self, s, prob):
+        '''
+        :param s: [src_sequence_length, batch_size, src_dim]
+        :param prob: [src_sequence_length, batch_size]\
+            or [tgt_sequence_length, src_sequence_length, batch_size]
+        :return: [batch_size, src_dim] or [tgt_sequence_length, batch_size, src_dim]
+        '''
+        buf = s * tf.expand_dims(prob, axis=-1)
+        att = tf.reduce_sum(buf, axis=-3)
+        return att
diff --git a/examples/trials/weight_sharing/ga_squad/config_remote.yml b/examples/trials/weight_sharing/ga_squad/config_remote.yml
new file mode 100644
index 0000000000..a07ab055cb
--- /dev/null
+++ b/examples/trials/weight_sharing/ga_squad/config_remote.yml
@@ -0,0 +1,31 @@
+authorName: default
+experimentName: ga_squad_weight_sharing
+trialConcurrency: 2
+maxExecDuration: 1h
+maxTrialNum: 200
+#choice: local, remote, pai
+trainingServicePlatform: remote
+#choice: true, false
+useAnnotation: false
+multiThread: true
+tuner:
+  codeDir: ../../../tuners/weight_sharing/ga_customer_tuner
+  classFileName: customer_tuner.py
+  className: CustomerTuner
+  classArgs:
+    optimize_mode: maximize
+    population_size: 32
+    save_dir_root: /mnt/nfs/nni/ga_squad
+trial:
+  command: python3 trial.py --input_file /mnt/nfs/nni/train-v1.1.json --dev_file /mnt/nfs/nni/dev-v1.1.json --max_epoch 1 --embedding_file
/mnt/nfs/nni/glove.6B.300d.txt + codeDir: . + gpuNum: 1 +machineList: + - ip: remote-ip-0 + port: 8022 + username: root + passwd: screencast + - ip: remote-ip-1 + port: 8022 + username: root + passwd: screencast diff --git a/examples/trials/weight_sharing/ga_squad/data.py b/examples/trials/weight_sharing/ga_squad/data.py new file mode 100644 index 0000000000..074b5a5b28 --- /dev/null +++ b/examples/trials/weight_sharing/ga_squad/data.py @@ -0,0 +1,269 @@ +# Copyright (c) Microsoft Corporation +# All rights reserved. +# +# MIT License +# +# Permission is hereby granted, free of charge, +# to any person obtaining a copy of this software and associated +# documentation files (the "Software"), +# to deal in the Software without restriction, including without limitation +# the rights to use, copy, modify, merge, publish, distribute, sublicense, +# and/or sell copies of the Software, and +# to permit persons to whom the Software is furnished to do so, subject to the following conditions: +# The above copyright notice and this permission notice shall be included +# in all copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING +# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + +''' +Data processing script for the QA model. +''' + +import csv +import json +from random import shuffle + +import numpy as np + + +class WhitespaceTokenizer: + ''' + Tokenizer for whitespace + ''' + + def tokenize(self, text): + ''' + tokenize function in Tokenizer. 
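+        Splits the text on spaces and tabs; each returned token is a dict
+        with 'word', 'original_text', 'char_begin' and 'char_end' keys.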
+ ''' + start = -1 + tokens = [] + for i, character in enumerate(text): + if character == ' ' or character == '\t': + if start >= 0: + word = text[start:i] + tokens.append({ + 'word': word, + 'original_text': word, + 'char_begin': start, + 'char_end': i}) + start = -1 + else: + if start < 0: + start = i + if start >= 0: + tokens.append({ + 'word': text[start:len(text)], + 'original_text': text[start:len(text)], + 'char_begin': start, + 'char_end': len(text) + }) + return tokens + + +def load_from_file(path, fmt=None, is_training=True): + ''' + load data from file + ''' + if fmt is None: + fmt = 'squad' + assert fmt in ['squad', 'csv'], 'input format must be squad or csv' + qp_pairs = [] + if fmt == 'squad': + with open(path) as data_file: + data = json.load(data_file)['data'] + for doc in data: + for paragraph in doc['paragraphs']: + passage = paragraph['context'] + for qa_pair in paragraph['qas']: + question = qa_pair['question'] + qa_id = qa_pair['id'] + if not is_training: + qp_pairs.append( + {'passage': passage, 'question': question, 'id': qa_id}) + else: + for answer in qa_pair['answers']: + answer_begin = int(answer['answer_start']) + answer_end = answer_begin + len(answer['text']) + qp_pairs.append({'passage': passage, + 'question': question, + 'id': qa_id, + 'answer_begin': answer_begin, + 'answer_end': answer_end}) + else: + with open(path, newline='') as csvfile: + reader = csv.reader(csvfile, delimiter='\t') + line_num = 0 + for row in reader: + qp_pairs.append( + {'passage': row[1], 'question': row[0], 'id': line_num}) + line_num += 1 + return qp_pairs + + +def tokenize(qp_pair, tokenizer=None, is_training=False): + ''' + tokenize function. + ''' + question_tokens = tokenizer.tokenize(qp_pair['question']) + passage_tokens = tokenizer.tokenize(qp_pair['passage']) + if is_training: + question_tokens = question_tokens[:300] + passage_tokens = passage_tokens[:300] + passage_tokens.insert( + 0, {'word': '', 'original_text': '', 'char_begin': 0, 'char_end': 0}) + passage_tokens.append( + {'word': '', 'original_text': '', 'char_begin': 0, 'char_end': 0}) + qp_pair['question_tokens'] = question_tokens + qp_pair['passage_tokens'] = passage_tokens + + +def collect_vocab(qp_pairs): + ''' + Build the vocab from corpus. + ''' + vocab = set() + for qp_pair in qp_pairs: + for word in qp_pair['question_tokens']: + vocab.add(word['word']) + for word in qp_pair['passage_tokens']: + vocab.add(word['word']) + return vocab + + +def shuffle_step(entries, step): + ''' + Shuffle the step + ''' + answer = [] + for i in range(0, len(entries), step): + sub = entries[i:i+step] + shuffle(sub) + answer += sub + return answer + + +def get_batches(qp_pairs, batch_size, need_sort=True): + ''' + Get batches data and shuffle. + ''' + if need_sort: + qp_pairs = sorted(qp_pairs, key=lambda qp: ( + len(qp['passage_tokens']), qp['id']), reverse=True) + batches = [{'qp_pairs': qp_pairs[i:(i + batch_size)]} + for i in range(0, len(qp_pairs), batch_size)] + shuffle(batches) + return batches + + +def get_char_input(data, char_dict, max_char_length): + ''' + Get char input. 
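+    Returns (char_id, char_lengths): char_id has shape
+    [max_char_length, sequence_length, batch_size] holding per-character ids,
+    and char_lengths has shape [sequence_length, batch_size] holding word
+    lengths clipped to max_char_length.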
+    '''
+    batch_size = len(data)
+    sequence_length = max(len(d) for d in data)
+    char_id = np.zeros((max_char_length, sequence_length,
+                        batch_size), dtype=np.int32)
+    char_lengths = np.zeros((sequence_length, batch_size), dtype=np.float32)
+    for batch_idx in range(0, min(len(data), batch_size)):
+        batch_data = data[batch_idx]
+        for sample_idx in range(0, min(len(batch_data), sequence_length)):
+            word = batch_data[sample_idx]['word']
+            char_lengths[sample_idx, batch_idx] = min(
+                len(word), max_char_length)
+            for i in range(0, min(len(word), max_char_length)):
+                char_id[i, sample_idx, batch_idx] = get_id(char_dict, word[i])
+    return char_id, char_lengths
+
+
+def get_word_input(data, word_dict, embed, embed_dim):
+    '''
+    Get word input.
+    '''
+    batch_size = len(data)
+    max_sequence_length = max(len(d) for d in data)
+    sequence_length = max_sequence_length
+    word_input = np.zeros((max_sequence_length, batch_size,
+                           embed_dim), dtype=np.float32)
+    ids = np.zeros((sequence_length, batch_size), dtype=np.int32)
+    masks = np.zeros((sequence_length, batch_size), dtype=np.float32)
+    lengths = np.zeros([batch_size], dtype=np.int32)
+
+    for batch_idx in range(0, min(len(data), batch_size)):
+        batch_data = data[batch_idx]
+
+        lengths[batch_idx] = len(batch_data)
+
+        for sample_idx in range(0, min(len(batch_data), sequence_length)):
+            word = batch_data[sample_idx]['word'].lower()
+            if word in word_dict.keys():
+                word_input[sample_idx, batch_idx] = embed[word_dict[word]]
+                ids[sample_idx, batch_idx] = word_dict[word]
+            masks[sample_idx, batch_idx] = 1
+
+    word_input = np.reshape(word_input, (-1, embed_dim))
+    return word_input, ids, masks, lengths
+
+
+def get_word_index(tokens, char_index):
+    '''
+    Given a character index, return the index of the word that contains it.
+    '''
+    for (i, token) in enumerate(tokens):
+        if token['char_end'] == 0:
+            continue
+        if token['char_begin'] <= char_index and char_index <= token['char_end']:
+            return i
+    return 0
+
+
+def get_answer_begin_end(data):
+    '''
+    Get the begin and end word indices of each answer.
+    '''
+    begin = []
+    end = []
+    for qa_pair in data:
+        tokens = qa_pair['passage_tokens']
+        char_begin = qa_pair['answer_begin']
+        char_end = qa_pair['answer_end']
+        word_begin = get_word_index(tokens, char_begin)
+        word_end = get_word_index(tokens, char_end)
+        begin.append(word_begin)
+        end.append(word_end)
+    return np.asarray(begin), np.asarray(end)
+
+
+def get_id(word_dict, word):
+    '''
+    Given a word, return its word id.
+    '''
+    if word in word_dict.keys():
+        return word_dict[word]
+    return word_dict['']
+
+
+def get_buckets(min_length, max_length, bucket_count):
+    '''
+    Get bucket boundaries by length.
+    '''
+    if bucket_count <= 0:
+        return [max_length]
+    unit_length = int((max_length - min_length) // (bucket_count))
+    buckets = [min_length + unit_length *
+               (i + 1) for i in range(0, bucket_count)]
+    buckets[-1] = max_length
+    return buckets
+
+
+def find_bucket(length, buckets):
+    '''
+    Find bucket.
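+    Return the first (smallest) bucket that the length fits into,
+    or the last bucket if the length exceeds them all.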
+    '''
+    for bucket in buckets:
+        if length <= bucket:
+            return bucket
+    return buckets[-1]
diff --git a/examples/trials/weight_sharing/ga_squad/download.sh b/examples/trials/weight_sharing/ga_squad/download.sh
new file mode 100644
index 0000000000..308fbaedbf
--- /dev/null
+++ b/examples/trials/weight_sharing/ga_squad/download.sh
@@ -0,0 +1,6 @@
+#!/bin/bash
+
+wget https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json
+wget https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json
+wget http://nlp.stanford.edu/data/glove.840B.300d.zip
+unzip glove.840B.300d.zip
\ No newline at end of file
diff --git a/examples/trials/weight_sharing/ga_squad/evaluate.py b/examples/trials/weight_sharing/ga_squad/evaluate.py
new file mode 100644
index 0000000000..d2bc208cf4
--- /dev/null
+++ b/examples/trials/weight_sharing/ga_squad/evaluate.py
@@ -0,0 +1,169 @@
+# Copyright (c) Microsoft Corporation
+# All rights reserved.
+#
+# MIT License
+#
+# Permission is hereby granted, free of charge,
+# to any person obtaining a copy of this software and associated
+# documentation files (the "Software"),
+# to deal in the Software without restriction, including without limitation
+# the rights to use, copy, modify, merge, publish, distribute, sublicense,
+# and/or sell copies of the Software, and
+# to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
+# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
+# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+'''
+Evaluation scripts for the QA model.
+'''
+
+from __future__ import print_function
+from collections import Counter
+import string
+import re
+import argparse
+import json
+import sys
+
+
+def normalize_answer(str_input):
+    """Lower text and remove punctuation, articles and extra whitespace."""
+    def remove_articles(text):
+        '''
+        Remove "a|an|the".
+        '''
+        return re.sub(r'\b(a|an|the)\b', ' ', text)
+
+    def white_space_fix(text):
+        '''
+        Remove unnecessary whitespace.
+        '''
+        return ' '.join(text.split())
+
+    def remove_punc(text):
+        '''
+        Remove punctuation.
+        '''
+        exclude = set(string.punctuation)
+        return ''.join(ch for ch in text if ch not in exclude)
+
+    def lower(text):
+        '''
+        Lowercase the string.
+        '''
+        return text.lower()
+
+    return white_space_fix(remove_articles(remove_punc(lower(str_input))))
+
+
+def f1_score(prediction, ground_truth):
+    '''
+    Calculate the F1 score between prediction and ground truth.
+    '''
+    prediction_tokens = normalize_answer(prediction).split()
+    ground_truth_tokens = normalize_answer(ground_truth).split()
+    common = Counter(prediction_tokens) & Counter(ground_truth_tokens)
+    num_same = sum(common.values())
+    if num_same == 0:
+        return 0
+    precision = 1.0 * num_same / len(prediction_tokens)
+    recall = 1.0 * num_same / len(ground_truth_tokens)
+    f1_result = (2 * precision * recall) / (precision + recall)
+    return f1_result
+
+
+def exact_match_score(prediction, ground_truth):
+    '''
+    Check whether the prediction exactly matches the ground truth.
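+    Both strings are normalized (lowercased, punctuation and articles
+    removed) before the comparison.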
+    '''
+    return normalize_answer(prediction) == normalize_answer(ground_truth)
+
+
+def metric_max_over_ground_truths(metric_fn, prediction, ground_truths):
+    '''
+    Take the maximum of the metric over all ground truths.
+    '''
+    scores_for_ground_truths = []
+    for ground_truth in ground_truths:
+        score = metric_fn(prediction, ground_truth)
+        scores_for_ground_truths.append(score)
+    return max(scores_for_ground_truths)
+
+
+def _evaluate(dataset, predictions):
+    '''
+    Evaluate function.
+    '''
+    f1_result = exact_match = total = 0
+    count = 0
+    for article in dataset:
+        for paragraph in article['paragraphs']:
+            for qa_pair in paragraph['qas']:
+                total += 1
+                if qa_pair['id'] not in predictions:
+                    count += 1
+                    continue
+                ground_truths = list(
+                    map(lambda x: x['text'], qa_pair['answers']))
+                prediction = predictions[qa_pair['id']]
+                exact_match += metric_max_over_ground_truths(
+                    exact_match_score, prediction, ground_truths)
+                f1_result += metric_max_over_ground_truths(
+                    f1_score, prediction, ground_truths)
+    print('total', total, 'exact_match',
+          exact_match, 'unanswered_questions', count)
+    exact_match = 100.0 * exact_match / total
+    f1_result = 100.0 * f1_result / total
+    return {'exact_match': exact_match, 'f1': f1_result}
+
+
+def evaluate(data_file, pred_file):
+    '''
+    Evaluate predictions stored in a file.
+    '''
+    expected_version = '1.1'
+    with open(data_file) as dataset_file:
+        dataset_json = json.load(dataset_file)
+        if dataset_json['version'] != expected_version:
+            print('Evaluation expects v-' + expected_version +
+                  ', but got dataset with v-' + dataset_json['version'],
+                  file=sys.stderr)
+        dataset = dataset_json['data']
+    with open(pred_file) as prediction_file:
+        predictions = json.load(prediction_file)
+    result = _evaluate(dataset, predictions)
+    return result['exact_match']
+
+
+def evaluate_with_predictions(data_file, predictions):
+    '''
+    Evaluate with in-memory predictions.
+    '''
+    expected_version = '1.1'
+    with open(data_file) as dataset_file:
+        dataset_json = json.load(dataset_file)
+        if dataset_json['version'] != expected_version:
+            print('Evaluation expects v-' + expected_version +
+                  ', but got dataset with v-' + dataset_json['version'],
+                  file=sys.stderr)
+        dataset = dataset_json['data']
+    result = _evaluate(dataset, predictions)
+    return result['exact_match']
+
+
+if __name__ == '__main__':
+    EXPECT_VERSION = '1.1'
+    parser = argparse.ArgumentParser(
+        description='Evaluation for SQuAD ' + EXPECT_VERSION)
+    parser.add_argument('dataset_file', help='Dataset file')
+    parser.add_argument('prediction_file', help='Prediction File')
+    args = parser.parse_args()
+    print(evaluate(args.dataset_file, args.prediction_file))
diff --git a/examples/trials/weight_sharing/ga_squad/graph.py b/examples/trials/weight_sharing/ga_squad/graph.py
new file mode 100644
index 0000000000..8e675a06ff
--- /dev/null
+++ b/examples/trials/weight_sharing/ga_squad/graph.py
@@ -0,0 +1,336 @@
+# Copyright (c) Microsoft Corporation
+# All rights reserved.
+#
+# MIT License
+#
+# Permission is hereby granted, free of charge,
+# to any person obtaining a copy of this software and associated
+# documentation files (the "Software"),
+# to deal in the Software without restriction, including without limitation
+# the rights to use, copy, modify, merge, publish, distribute, sublicense,
+# and/or sell copies of the Software, and
+# to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
+# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
+# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+'''
+Graph is a custom-defined class; this module contains classes and functions related to the graph.
+'''
+
+
+import copy
+import hashlib
+import logging
+import json
+import random
+from collections import deque
+from enum import Enum, unique
+from typing import Iterable
+
+import numpy as np
+
+_logger = logging.getLogger('ga_squad_graph')
+
+@unique
+class LayerType(Enum):
+    '''
+    Layer type.
+    '''
+    attention = 0
+    self_attention = 1
+    rnn = 2
+    input = 3
+    output = 4
+
+class Layer(object):
+    '''
+    Layer class, which contains the information of a graph layer.
+    '''
+    def __init__(self, graph_type, inputs=None, output=None, size=None, hash_id=None):
+        self.input = inputs if inputs is not None else []
+        self.output = output if output is not None else []
+        self.graph_type = graph_type
+        self.is_delete = False
+        self.size = size
+        self.hash_id = hash_id
+        if graph_type == LayerType.attention.value:
+            self.input_size = 2
+            self.output_size = 1
+        elif graph_type == LayerType.rnn.value:
+            self.input_size = 1
+            self.output_size = 1
+        elif graph_type == LayerType.self_attention.value:
+            self.input_size = 1
+            self.output_size = 1
+        elif graph_type == LayerType.input.value:
+            self.input_size = 0
+            self.output_size = 1
+            if self.hash_id is None:
+                hasher = hashlib.md5()
+                hasher.update(np.random.bytes(100))
+                self.hash_id = hasher.hexdigest()
+        elif graph_type == LayerType.output.value:
+            self.input_size = 1
+            self.output_size = 0
+        else:
+            raise ValueError('Unsupported LayerType: {}'.format(graph_type))
+
+    def update_hash(self, layers: Iterable):
+        """
+        Compute the `hash_id` of this layer, which is determined by its own
+        properties and the `hash_id`s of its input layers.
+        """
+        if self.graph_type == LayerType.input.value:
+            return
+        hasher = hashlib.md5()
+        hasher.update(LayerType(self.graph_type).name.encode('ascii'))
+        hasher.update(str(self.size).encode('ascii'))
+        for i in self.input:
+            if layers[i].hash_id is None:
+                raise ValueError('Hash id of layer {}: {} not generated!'.format(i, layers[i]))
+            hasher.update(layers[i].hash_id.encode('ascii'))
+        self.hash_id = hasher.hexdigest()
+
+    def set_size(self, graph_id, size):
+        '''
+        Set size.
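+        Attention layers take their size from the first input; rnn and
+        self_attention layers adopt the given size; output layers only
+        check consistency and report a mismatch by returning False.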
+        '''
+        if self.graph_type == LayerType.attention.value:
+            if self.input[0] == graph_id:
+                self.size = size
+        if self.graph_type == LayerType.rnn.value:
+            self.size = size
+        if self.graph_type == LayerType.self_attention.value:
+            self.size = size
+        if self.graph_type == LayerType.output.value:
+            if self.size != size:
+                return False
+        return True
+
+    def clear_size(self):
+        '''
+        Clear size.
+        '''
+        if self.graph_type in (LayerType.attention.value,
+                               LayerType.rnn.value,
+                               LayerType.self_attention.value):
+            self.size = None
+
+    def __str__(self):
+        return 'input:' + str(self.input) + ' output:' + str(self.output) + ' type:' + str(self.graph_type) + ' is_delete:' + str(self.is_delete) + ' size:' + str(self.size)
+
+def graph_dumps(graph):
+    '''
+    Dump the graph to a JSON string.
+    '''
+    return json.dumps(graph, default=lambda obj: obj.__dict__)
+
+def graph_loads(graph_json):
+    '''
+    Load a graph from JSON.
+    '''
+    layers = []
+    for layer in graph_json['layers']:
+        layer_info = Layer(layer['graph_type'], layer['input'], layer['output'], layer['size'], layer['hash_id'])
+        layer_info.is_delete = layer['is_delete']
+        _logger.debug('append layer {}'.format(layer_info))
+        layers.append(layer_info)
+    graph = Graph(graph_json['max_layer_num'], graph_json['min_layer_num'], [], [], [])
+    graph.layers = layers
+    _logger.debug('graph {} loaded'.format(graph))
+    return graph
+
+class Graph(object):
+    '''
+    Custom Graph class.
+    '''
+    def __init__(self, max_layer_num, min_layer_num, inputs, output, hide):
+        self.layers = []
+        self.max_layer_num = max_layer_num
+        self.min_layer_num = min_layer_num
+        assert min_layer_num < max_layer_num
+
+        for layer in inputs:
+            self.layers.append(layer)
+        for layer in output:
+            self.layers.append(layer)
+        if hide is not None:
+            for layer in hide:
+                self.layers.append(layer)
+        assert self.is_legal()
+
+    def is_topology(self, layers=None):
+        '''
+        Validate the topology and return the topological order.
+        '''
+        if layers is None:
+            layers = self.layers
+        layers_nodle = []
+        result = []
+        for i, layer in enumerate(layers):
+            if layer.is_delete is False:
+                layers_nodle.append(i)
+        while True:
+            flag_break = True
+            layers_toremove = []
+            for layer1 in layers_nodle:
+                flag_arrive = True
+                for layer2 in layers[layer1].input:
+                    if layer2 in layers_nodle:
+                        flag_arrive = False
+                if flag_arrive is True:
+                    for layer2 in layers[layer1].output:
+                        # size mismatch
+                        if layers[layer2].set_size(layer1, layers[layer1].size) is False:
+                            return False
+                    layers_toremove.append(layer1)
+                    result.append(layer1)
+                    flag_break = False
+            for layer in layers_toremove:
+                layers_nodle.remove(layer)
+            result.append('|')
+            if flag_break:
+                break
+        # the graph contains a loop, or some layers cannot be reached
+        if layers_nodle:
+            return False
+        return result
+
+    def layer_num(self, layers=None):
+        '''
+        Return the number of layers.
+        '''
+        if layers is None:
+            layers = self.layers
+        layer_num = 0
+        for layer in layers:
+            if layer.is_delete is False and layer.graph_type != LayerType.input.value\
+                and layer.graph_type != LayerType.output.value:
+                layer_num += 1
+        return layer_num
+
+    def is_legal(self, layers=None):
+        '''
+        Judge whether the layers form a legal graph.
+        '''
+        if layers is None:
+            layers = self.layers
+
+        for layer in layers:
+            if layer.is_delete is False:
+                if len(layer.input) != layer.input_size:
+                    return False
+                if len(layer.output) < layer.output_size:
+                    return False
+
+        # layer_num <= max_layer_num
+        if self.layer_num(layers) > self.max_layer_num:
+            return False
+
+        # the graph contains a loop, or some layers cannot be reached
+        if self.is_topology(layers) is False:
+            return False
+
+        return True
+
+    def update_hash(self):
+        """
+        Update the hash id of each layer, in topological order.
+        The hash ids are later used as variable scope names for weight sharing.
+        """
+        _logger.debug('update hash')
+        layer_in_cnt = [len(layer.input) for layer in self.layers]
+        topo_queue = deque([i for i, layer in enumerate(self.layers) if not layer.is_delete and layer.graph_type == LayerType.input.value])
+        while topo_queue:
+            layer_i = topo_queue.pop()
+            self.layers[layer_i].update_hash(self.layers)
+            for layer_j in self.layers[layer_i].output:
+                layer_in_cnt[layer_j] -= 1
+                if layer_in_cnt[layer_j] == 0:
+                    topo_queue.appendleft(layer_j)
+
+    def mutation(self, only_add=False):
+        '''
+        Mutate the graph.
+        '''
+        types = []
+        if self.layer_num() < self.max_layer_num:
+            types.append(0)
+            types.append(1)
+        if self.layer_num() > self.min_layer_num and only_add is False:
+            types.append(2)
+            types.append(3)
+        # 0 : add a layer, delete an edge
+        # 1 : add a layer, change an edge
+        # 2 : delete a layer, delete an edge
+        # 3 : delete a layer, change an edge
+        graph_type = random.choice(types)
+        layer_type = random.choice([LayerType.attention.value,\
+                                    LayerType.self_attention.value, LayerType.rnn.value])
+        layers = copy.deepcopy(self.layers)
+        cnt_try = 0
+        while True:
+            layers_in = []
+            layers_out = []
+            layers_del = []
+            for i, layer in enumerate(layers):
+                if layer.is_delete is False:
+                    if layer.graph_type != LayerType.output.value:
+                        layers_in.append(i)
+                    if layer.graph_type != LayerType.input.value:
+                        layers_out.append(i)
+                    if layer.graph_type != LayerType.output.value\
+                        and layer.graph_type != LayerType.input.value:
+                        layers_del.append(i)
+            if graph_type <= 1:
+                new_id = len(layers)
+                out = random.choice(layers_out)
+                inputs = []
+                output = [out]
+                pos = random.randint(0, len(layers[out].input) - 1)
+                last_in = layers[out].input[pos]
+                layers[out].input[pos] = new_id
+                if graph_type == 0:
+                    layers[last_in].output.remove(out)
+                if graph_type == 1:
+                    layers[last_in].output.remove(out)
+                    layers[last_in].output.append(new_id)
+                    inputs = [last_in]
+                lay = Layer(graph_type=layer_type, inputs=inputs, output=output)
+                while len(inputs) < lay.input_size:
+                    layer1 = random.choice(layers_in)
+                    inputs.append(layer1)
+                    layers[layer1].output.append(new_id)
+                lay.input = inputs
+                layers.append(lay)
+            else:
+                layer1 = random.choice(layers_del)
+                for layer2 in layers[layer1].output:
+                    layers[layer2].input.remove(layer1)
+                    if graph_type == 2:
+                        random_in = random.choice(layers_in)
+                    else:
+                        random_in = random.choice(layers[layer1].input)
+                    layers[layer2].input.append(random_in)
+                    layers[random_in].output.append(layer2)
+                for layer2 in layers[layer1].input:
+                    layers[layer2].output.remove(layer1)
+                layers[layer1].is_delete = True
+
+            if self.is_legal(layers):
+                self.layers = layers
+                break
+            else:
+                layers = copy.deepcopy(self.layers)
+                cnt_try += 1
+        self.update_hash()
+
+    def __str__(self):
+        info = ""
+        for l_id, layer in enumerate(self.layers):
+            if layer.is_delete is False:
+                info += 'id:%d ' % l_id + str(layer) + '\n'
+        return info
diff --git a/examples/trials/weight_sharing/ga_squad/graph_to_tf.py b/examples/trials/weight_sharing/ga_squad/graph_to_tf.py
new file mode 100644
index 0000000000..2712d531ca
--- /dev/null
+++ b/examples/trials/weight_sharing/ga_squad/graph_to_tf.py
@@ -0,0 +1,342 @@
+# Copyright (c) Microsoft Corporation
+# All rights reserved.
+#
+# MIT License
+#
+# Permission is hereby granted, free of charge,
+# to any person obtaining a copy of this software and associated
+# documentation files (the "Software"),
+# to deal in the Software without restriction, including without limitation
+# the rights to use, copy, modify, merge, publish, distribute, sublicense,
+# and/or sell copies of the Software, and
+# to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
+# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
+# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+import tensorflow as tf
+from rnn import XGRUCell
+from util import dropout
+from graph import LayerType
+
+
+def normalize(inputs,
+              epsilon=1e-8,
+              scope="ln"):
+    '''Applies layer normalization.
+
+    Args:
+      inputs: A tensor with 2 or more dimensions, where the first dimension has
+        `batch_size`.
+      epsilon: A small floating point number for preventing division-by-zero errors.
+      scope: Optional scope for `variable_scope`.
+      reuse: Boolean, whether to reuse the weights of a previous layer
+        by the same name.
+
+    Returns:
+      A tensor with the same shape and data dtype as `inputs`.
+    '''
+    with tf.variable_scope(scope):
+        inputs_shape = inputs.get_shape()
+        params_shape = inputs_shape[-1:]
+
+        mean, variance = tf.nn.moments(inputs, [-1], keep_dims=True)
+        beta = tf.Variable(tf.zeros(params_shape))
+        gamma = tf.Variable(tf.ones(params_shape))
+        normalized = (inputs - mean) / ((variance + epsilon) ** (.5))
+        outputs = gamma * normalized + beta
+
+        return outputs
+
+
+def multihead_attention(queries,
+                        keys,
+                        scope="multihead_attention",
+                        num_units=None,
+                        num_heads=4,
+                        dropout_rate=0,
+                        is_training=True,
+                        causality=False):
+    '''Applies multihead attention.
+
+    Args:
+      queries: A 3d tensor with shape of [N, T_q, C_q].
+      keys: A 3d tensor with shape of [N, T_k, C_k].
+      num_units: A scalar. Attention size.
+      dropout_rate: A floating point number.
+      is_training: Boolean. Controller of mechanism for dropout.
+      causality: Boolean. If true, units that reference the future are masked.
+      num_heads: An int. Number of heads.
+      scope: Optional scope for `variable_scope`.
+      reuse: Boolean, whether to reuse the weights of a previous layer
+        by the same name.
+
+    Returns:
+      A 3d tensor with shape of (N, T_q, C).
+    '''
+    global look5
+    with tf.variable_scope(scope):
+        # Set the fall back option for num_units
+        if num_units is None:
+            num_units = queries.get_shape().as_list()[-1]
+
+        Q_ = []
+        K_ = []
+        V_ = []
+        for head_i in range(num_heads):
+            Q = tf.layers.dense(queries, num_units / num_heads,
+                                activation=tf.nn.relu, name='Query' + str(head_i))  # (N, T_q, C)
+            K = tf.layers.dense(keys, num_units / num_heads,
+                                activation=tf.nn.relu, name='Key' + str(head_i))  # (N, T_k, C)
+            V = tf.layers.dense(keys, num_units / num_heads,
+                                activation=tf.nn.relu, name='Value' + str(head_i))  # (N, T_k, C)
+            Q_.append(Q)
+            K_.append(K)
+            V_.append(V)
+
+        # Split and concat
+        Q_ = tf.concat(Q_, axis=0)  # (h*N, T_q, C/h)
+        K_ = tf.concat(K_, axis=0)  # (h*N, T_k, C/h)
+        V_ = tf.concat(V_, axis=0)  # (h*N, T_k, C/h)
+
+        # Multiplication
+        outputs = tf.matmul(Q_, tf.transpose(K_, [0, 2, 1]))  # (h*N, T_q, T_k)
+
+        # Scale
+        outputs = outputs / (K_.get_shape().as_list()[-1] ** 0.5)
+
+        # Key Masking
+        key_masks = tf.sign(tf.abs(tf.reduce_sum(keys, axis=-1)))  # (N, T_k)
+        key_masks = tf.tile(key_masks, [num_heads, 1])  # (h*N, T_k)
+        key_masks = tf.tile(tf.expand_dims(key_masks, 1),
+                            [1, tf.shape(queries)[1], 1])  # (h*N, T_q, T_k)
+
+        paddings = tf.ones_like(outputs) * (-2 ** 32 + 1)
+        outputs = tf.where(tf.equal(key_masks, 0), paddings,
+                           outputs)  # (h*N, T_q, T_k)
+
+        # Causality = Future blinding
+        if causality:
+            diag_vals = tf.ones_like(outputs[0, :, :])  # (T_q, T_k)
+            tril = tf.contrib.linalg.LinearOperatorTriL(
+                diag_vals).to_dense()  # (T_q, T_k)
+            masks = tf.tile(tf.expand_dims(tril, 0),
+                            [tf.shape(outputs)[0], 1, 1])  # (h*N, T_q, T_k)
+
+            paddings = tf.ones_like(masks) * (-2 ** 32 + 1)
+            outputs = tf.where(tf.equal(masks, 0), paddings,
+                               outputs)  # (h*N, T_q, T_k)
+
+        # Activation
+        look5 = outputs
+        outputs = tf.nn.softmax(outputs)  # (h*N, T_q, T_k)
+
+        # Query Masking
+        query_masks = tf.sign(
+            tf.abs(tf.reduce_sum(queries, axis=-1)))  # (N, T_q)
+        query_masks = tf.tile(query_masks, [num_heads, 1])  # (h*N, T_q)
+        query_masks = tf.tile(tf.expand_dims(
+            query_masks, -1), [1, 1, tf.shape(keys)[1]])  # (h*N, T_q, T_k)
+        outputs *= query_masks  # broadcasting. (N, T_q, C)
+
+        # Dropouts
+        outputs = dropout(outputs, dropout_rate, is_training)
+
+        # Weighted sum
+        outputs = tf.matmul(outputs, V_)  # (h*N, T_q, C/h)
+
+        # Restore shape
+        outputs = tf.concat(tf.split(outputs, num_heads,
+                                     axis=0), axis=2)  # (N, T_q, C)
+
+        # Residual connection
+        if queries.get_shape().as_list()[-1] == num_units:
+            outputs += queries
+
+        # Normalize
+        outputs = normalize(outputs, scope=scope)  # (N, T_q, C)
+
+    return outputs
+
+
+def positional_encoding(inputs,
+                        num_units=None,
+                        zero_pad=True,
+                        scale=True,
+                        scope="positional_encoding",
+                        reuse=None):
+    '''
+    Return positional embedding.
+    '''
+    Shape = tf.shape(inputs)
+    N = Shape[0]
+    T = Shape[1]
+    num_units = Shape[2]
+    with tf.variable_scope(scope, reuse=reuse):
+        position_ind = tf.tile(tf.expand_dims(tf.range(T), 0), [N, 1])
+
+        # First part of the PE function: sin and cos argument.
+        # Second part: apply the sine to even columns and the cosine to odd ones.
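+        # Concretely, position_enc[pos, i] = sin(pos * 10000^(-2i/num_units))
+        # for even i, and cos(pos * 10000^(-2i/num_units)) for odd i; the
+        # h1/h2 masks below select the even and odd columns respectively.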
+        X = tf.expand_dims(tf.cast(tf.range(T), tf.float32), axis=1)
+        Y = tf.expand_dims(
+            tf.cast(10000 ** -(2 * tf.range(num_units) / num_units), tf.float32), axis=0)
+        h1 = tf.cast((tf.range(num_units) + 1) % 2, tf.float32)
+        h2 = tf.cast((tf.range(num_units) % 2), tf.float32)
+        position_enc = tf.multiply(X, Y)
+        position_enc = tf.sin(position_enc) * tf.multiply(tf.ones_like(X), h1) + \
+            tf.cos(position_enc) * tf.multiply(tf.ones_like(X), h2)
+
+        # Convert to a tensor
+        lookup_table = position_enc
+
+        if zero_pad:
+            lookup_table = tf.concat((tf.zeros(shape=[1, num_units]),
+                                      lookup_table[1:, :]), 0)
+        outputs = tf.nn.embedding_lookup(lookup_table, position_ind)
+
+        if scale:
+            outputs = outputs * tf.sqrt(tf.cast(num_units, tf.float32))
+
+    return outputs
+
+
+def feedforward(inputs,
+                num_units,
+                scope="feedforward"):
+    '''Point-wise feed forward net.
+
+    Args:
+      inputs: A 3d tensor with shape of [N, T, C].
+      num_units: A list of two integers.
+      scope: Optional scope for `variable_scope`.
+
+    Returns:
+      A 3d tensor with the same shape and dtype as inputs
+    '''
+    with tf.variable_scope(scope):
+        # Inner layer
+        params = {"inputs": inputs, "filters": num_units[0], "kernel_size": 1,
+                  "activation": tf.nn.relu, "use_bias": True}
+        outputs = tf.layers.conv1d(**params)
+
+        # Readout layer
+        params = {"inputs": outputs, "filters": num_units[1], "kernel_size": 1,
+                  "activation": None, "use_bias": True}
+        outputs = tf.layers.conv1d(**params)
+
+        # Residual connection
+        outputs += inputs
+
+        # Normalize
+        outputs = normalize(outputs)
+
+    return outputs
+
+
+def rnn(input_states, sequence_lengths, dropout_rate, is_training, num_units):
+    layer_cnt = 1
+    states = []
+    xs = tf.transpose(input_states, perm=[1, 0, 2])
+    for i in range(0, layer_cnt):
+        xs = dropout(xs, dropout_rate, is_training)
+        with tf.variable_scope('layer_' + str(i)):
+            cell_fw = XGRUCell(num_units)
+            cell_bw = XGRUCell(num_units)
+            outputs, _ = tf.nn.bidirectional_dynamic_rnn(
+                cell_fw=cell_fw,
+                cell_bw=cell_bw,
+                dtype=tf.float32,
+                sequence_length=sequence_lengths,
+                inputs=xs,
+                time_major=True)
+
+            y_lr, y_rl = outputs
+            xs = tf.concat([y_lr, y_rl], 2)
+            states.append(xs)
+
+    return tf.transpose(dropout(tf.concat(states, axis=2),
+                                dropout_rate,
+                                is_training), perm=[1, 0, 2])
+
+
+def graph_to_network(input1,
+                     input2,
+                     input1_lengths,
+                     input2_lengths,
+                     p_graph,
+                     dropout_rate,
+                     is_training,
+                     num_heads=1,
+                     rnn_units=256):
+    topology = p_graph.is_topology()
+    layers = dict()
+    layers_sequence_lengths = dict()
+    num_units = input1.get_shape().as_list()[-1]
+    layers[0] = input1*tf.sqrt(tf.cast(num_units, tf.float32)) + \
+        positional_encoding(input1, scale=False, zero_pad=False)
+    layers[1] = input2*tf.sqrt(tf.cast(num_units, tf.float32))
+    layers[0] = dropout(layers[0], dropout_rate, is_training)
+    layers[1] = dropout(layers[1], dropout_rate, is_training)
+    layers_sequence_lengths[0] = input1_lengths
+    layers_sequence_lengths[1] = input2_lengths
+    for _, topo_i in enumerate(topology):
+        if topo_i == '|':
+            continue
+
+        # Note: here we use the `hash_id` of the layer as the scope name,
+        # so that we can automatically load sharable weights from previously trained models
+        with tf.variable_scope(p_graph.layers[topo_i].hash_id, reuse=tf.AUTO_REUSE):
+            if p_graph.layers[topo_i].graph_type == LayerType.input.value:
+                continue
+            elif p_graph.layers[topo_i].graph_type == LayerType.attention.value:
+                with tf.variable_scope('attention'):
+                    layer = 
multihead_attention(layers[p_graph.layers[topo_i].input[0]], + layers[p_graph.layers[topo_i].input[1]], + scope="multihead_attention", + dropout_rate=dropout_rate, + is_training=is_training, + num_heads=num_heads, + num_units=rnn_units * 2) + layer = feedforward(layer, scope="feedforward", + num_units=[rnn_units * 2 * 4, rnn_units * 2]) + layers[topo_i] = layer + layers_sequence_lengths[topo_i] = layers_sequence_lengths[ + p_graph.layers[topo_i].input[0]] + elif p_graph.layers[topo_i].graph_type == LayerType.self_attention.value: + with tf.variable_scope('self-attention'): + layer = multihead_attention(layers[p_graph.layers[topo_i].input[0]], + layers[p_graph.layers[topo_i].input[0]], + scope="multihead_attention", + dropout_rate=dropout_rate, + is_training=is_training, + num_heads=num_heads, + num_units=rnn_units * 2) + layer = feedforward(layer, scope="feedforward", + num_units=[rnn_units * 2 * 4, rnn_units * 2]) + layers[topo_i] = layer + layers_sequence_lengths[topo_i] = layers_sequence_lengths[ + p_graph.layers[topo_i].input[0]] + elif p_graph.layers[topo_i].graph_type == LayerType.rnn.value: + with tf.variable_scope('rnn'): + layer = rnn(layers[p_graph.layers[topo_i].input[0]], + layers_sequence_lengths[p_graph.layers[topo_i].input[0]], + dropout_rate, + is_training, + rnn_units) + layers[topo_i] = layer + layers_sequence_lengths[topo_i] = layers_sequence_lengths[ + p_graph.layers[topo_i].input[0]] + elif p_graph.layers[topo_i].graph_type == LayerType.output.value: + layers[topo_i] = layers[p_graph.layers[topo_i].input[0]] + if layers[topo_i].get_shape().as_list()[-1] != rnn_units * 1 * 2: + with tf.variable_scope('add_dense'): + layers[topo_i] = tf.layers.dense( + layers[topo_i], units=rnn_units*2) + return layers[2], layers[3] diff --git a/examples/trials/weight_sharing/ga_squad/rnn.py b/examples/trials/weight_sharing/ga_squad/rnn.py new file mode 100644 index 0000000000..82f7d070bf --- /dev/null +++ b/examples/trials/weight_sharing/ga_squad/rnn.py @@ -0,0 +1,118 @@ +# Copyright (c) Microsoft Corporation +# All rights reserved. +# +# MIT License +# +# Permission is hereby granted, free of charge, +# to any person obtaining a copy of this software and associated +# documentation files (the "Software"), +# to deal in the Software without restriction, including without limitation +# the rights to use, copy, modify, merge, publish, distribute, sublicense, +# and/or sell copies of the Software, and +# to permit persons to whom the Software is furnished to do so, subject to the following conditions: +# The above copyright notice and this permission notice shall be included +# in all copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING +# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + +import tensorflow as tf +from tensorflow.python.ops.rnn_cell_impl import RNNCell + + +class GRU: + ''' + GRU class. + ''' + def __init__(self, name, input_dim, hidden_dim): + self.name = '/'.join([name, 'gru']) + self.input_dim = input_dim + self.hidden_dim = hidden_dim + self.w_matrix = None + self.U = None + self.bias = None + + def define_params(self): + ''' + Define parameters. 
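+        `w_matrix`, `U` and `bias` each pack the reset, update and candidate
+        gates side by side (hence the 3 * hidden_dim columns); `build` splits
+        them back into the three gates.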
+        '''
+        input_dim = self.input_dim
+        hidden_dim = self.hidden_dim
+        prefix = self.name
+        self.w_matrix = tf.Variable(tf.random_normal([input_dim, 3 * hidden_dim], stddev=0.1),
+                                    name='/'.join([prefix, 'W']))
+        self.U = tf.Variable(tf.random_normal([hidden_dim, 3 * hidden_dim], stddev=0.1),
+                             name='/'.join([prefix, 'U']))
+        self.bias = tf.Variable(tf.random_normal([1, 3 * hidden_dim], stddev=0.1),
+                                name='/'.join([prefix, 'b']))
+        return self
+
+    def build(self, x, h, mask=None):
+        '''
+        Build the GRU cell.
+        '''
+        xw = tf.split(tf.matmul(x, self.w_matrix) + self.bias, 3, 1)
+        hu = tf.split(tf.matmul(h, self.U), 3, 1)
+        r = tf.sigmoid(xw[0] + hu[0])
+        z = tf.sigmoid(xw[1] + hu[1])
+        h1 = tf.tanh(xw[2] + r * hu[2])
+        next_h = h1 * (1 - z) + h * z
+        if mask is not None:
+            next_h = next_h * mask + h * (1 - mask)
+        return next_h
+
+    def build_sequence(self, xs, masks, init, is_left_to_right):
+        '''
+        Build GRU sequence.
+        '''
+        states = []
+        last = init
+        if is_left_to_right:
+            for i, xs_i in enumerate(xs):
+                h = self.build(xs_i, last, masks[i])
+                states.append(h)
+                last = h
+        else:
+            for i in range(len(xs) - 1, -1, -1):
+                h = self.build(xs[i], last, masks[i])
+                states.insert(0, h)
+                last = h
+        return states
+
+
+class XGRUCell(RNNCell):
+    '''
+    GRU cell implemented as a tf RNNCell; its gate weights are created with
+    tf.get_variable inside `call`, so they can be reused under the same
+    variable scope.
+    '''
+
+    def __init__(self, hidden_dim, reuse=None):
+        super(XGRUCell, self).__init__(_reuse=reuse)
+        self._num_units = hidden_dim
+        self._activation = tf.tanh
+
+    @property
+    def state_size(self):
+        return self._num_units
+
+    @property
+    def output_size(self):
+        return self._num_units
+
+    def call(self, inputs, state):
+
+        input_dim = inputs.get_shape()[-1]
+        assert input_dim is not None, "input dimension must be defined"
+        W = tf.get_variable(
+            name="W", shape=[input_dim, 3 * self._num_units], dtype=tf.float32)
+        U = tf.get_variable(
+            name='U', shape=[self._num_units, 3 * self._num_units], dtype=tf.float32)
+        b = tf.get_variable(
+            name='b', shape=[1, 3 * self._num_units], dtype=tf.float32)
+
+        xw = tf.split(tf.matmul(inputs, W) + b, 3, 1)
+        hu = tf.split(tf.matmul(state, U), 3, 1)
+        r = tf.sigmoid(xw[0] + hu[0])
+        z = tf.sigmoid(xw[1] + hu[1])
+        h1 = self._activation(xw[2] + r * hu[2])
+        next_h = h1 * (1 - z) + state * z
+        return next_h, next_h
diff --git a/examples/trials/weight_sharing/ga_squad/train_model.py b/examples/trials/weight_sharing/ga_squad/train_model.py
new file mode 100644
index 0000000000..b8240bc960
--- /dev/null
+++ b/examples/trials/weight_sharing/ga_squad/train_model.py
@@ -0,0 +1,263 @@
+# Copyright (c) Microsoft Corporation
+# All rights reserved.
+#
+# MIT License
+#
+# Permission is hereby granted, free of charge,
+# to any person obtaining a copy of this software and associated
+# documentation files (the "Software"),
+# to deal in the Software without restriction, including without limitation
+# the rights to use, copy, modify, merge, publish, distribute, sublicense,
+# and/or sell copies of the Software, and
+# to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
+# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + +''' +Train the network combined by RNN and attention. +''' + +import tensorflow as tf + +from attention import DotAttention +from rnn import XGRUCell +from util import dropout +from graph_to_tf import graph_to_network + + +class GAGConfig: + """The class for model hyper-parameter configuration.""" + def __init__(self): + self.batch_size = 128 + + self.dropout = 0.1 + + self.char_vcb_size = 1500 + self.max_char_length = 20 + self.char_embed_dim = 100 + + self.max_query_length = 40 + self.max_passage_length = 800 + + self.att_is_vanilla = True + self.att_need_padding = False + self.att_is_id = False + + self.ptr_dim = 70 + self.learning_rate = 0.1 + self.labelsmoothing = 0.1 + self.num_heads = 1 + self.rnn_units = 256 + + +class GAG: + """The class for the computation graph based QA model.""" + def __init__(self, cfg, embed, p_graph): + self.cfg = cfg + self.embed = embed + self.graph = p_graph + + self.query_word = None + self.query_mask = None + self.query_lengths = None + self.passage_word = None + self.passage_mask = None + self.passage_lengths = None + self.answer_begin = None + self.answer_end = None + self.query_char_ids = None + self.query_char_lengths = None + self.passage_char_ids = None + self.passage_char_lengths = None + self.passage_states = None + self.query_states = None + self.query_init = None + self.begin_prob = None + self.end_prob = None + self.loss = None + self.train_op = None + + + def build_net(self, is_training): + """Build the whole neural network for the QA model.""" + cfg = self.cfg + word_embed = tf.get_variable( + name='word_embed', initializer=self.embed, dtype=tf.float32, trainable=False) + char_embed = tf.get_variable(name='char_embed', + shape=[cfg.char_vcb_size, + cfg.char_embed_dim], + dtype=tf.float32) + + # [query_length, batch_size] + self.query_word = tf.placeholder(dtype=tf.int32, + shape=[None, None], + name='query_word') + self.query_mask = tf.placeholder(dtype=tf.float32, + shape=[None, None], + name='query_mask') + # [batch_size] + self.query_lengths = tf.placeholder( + dtype=tf.int32, shape=[None], name='query_lengths') + + # [passage_length, batch_size] + self.passage_word = tf.placeholder( + dtype=tf.int32, shape=[None, None], name='passage_word') + self.passage_mask = tf.placeholder( + dtype=tf.float32, shape=[None, None], name='passage_mask') + # [batch_size] + self.passage_lengths = tf.placeholder( + dtype=tf.int32, shape=[None], name='passage_lengths') + + if is_training: + self.answer_begin = tf.placeholder( + dtype=tf.int32, shape=[None], name='answer_begin') + self.answer_end = tf.placeholder( + dtype=tf.int32, shape=[None], name='answer_end') + + self.query_char_ids = tf.placeholder(dtype=tf.int32, + shape=[ + self.cfg.max_char_length, None, None], + name='query_char_ids') + # sequence_length, batch_size + self.query_char_lengths = tf.placeholder( + dtype=tf.int32, shape=[None, None], name='query_char_lengths') + + self.passage_char_ids = tf.placeholder(dtype=tf.int32, + shape=[ + self.cfg.max_char_length, None, None], + name='passage_char_ids') + # sequence_length, batch_size + self.passage_char_lengths = tf.placeholder(dtype=tf.int32, + shape=[None, None], + name='passage_char_lengths') + + query_char_states = self.build_char_states(char_embed=char_embed, + 
is_training=is_training, + reuse=False, + char_ids=self.query_char_ids, + char_lengths=self.query_char_lengths) + + passage_char_states = self.build_char_states(char_embed=char_embed, + is_training=is_training, + reuse=True, + char_ids=self.passage_char_ids, + char_lengths=self.passage_char_lengths) + + with tf.variable_scope("encoding") as scope: + query_states = tf.concat([tf.nn.embedding_lookup( + word_embed, self.query_word), query_char_states], axis=2) + scope.reuse_variables() + passage_states = tf.concat([tf.nn.embedding_lookup( + word_embed, self.passage_word), passage_char_states], axis=2) + passage_states = tf.transpose(passage_states, perm=[1, 0, 2]) + query_states = tf.transpose(query_states, perm=[1, 0, 2]) + self.passage_states = passage_states + self.query_states = query_states + + output, output2 = graph_to_network(passage_states, query_states, + self.passage_lengths, self.query_lengths, + self.graph, self.cfg.dropout, + is_training, num_heads=cfg.num_heads, + rnn_units=cfg.rnn_units) + + passage_att_mask = self.passage_mask + batch_size_x = tf.shape(self.query_lengths) + answer_h = tf.zeros( + tf.concat([batch_size_x, tf.constant([cfg.ptr_dim], dtype=tf.int32)], axis=0)) + + answer_context = tf.reduce_mean(output2, axis=1) + + query_init_w = tf.get_variable( + 'query_init_w', shape=[output2.get_shape().as_list()[-1], cfg.ptr_dim]) + self.query_init = query_init_w + answer_context = tf.matmul(answer_context, query_init_w) + + output = tf.transpose(output, perm=[1, 0, 2]) + + with tf.variable_scope('answer_ptr_layer'): + ptr_att = DotAttention('ptr', + hidden_dim=cfg.ptr_dim, + is_vanilla=self.cfg.att_is_vanilla, + is_identity_transform=self.cfg.att_is_id, + need_padding=self.cfg.att_need_padding) + answer_pre_compute = ptr_att.get_pre_compute(output) + ptr_gru = XGRUCell(hidden_dim=cfg.ptr_dim) + begin_prob, begin_logits = ptr_att.get_prob(output, answer_context, passage_att_mask, + answer_pre_compute, True) + att_state = ptr_att.get_att(output, begin_prob) + (_, answer_h) = ptr_gru.call(inputs=att_state, state=answer_h) + answer_context = answer_h + end_prob, end_logits = ptr_att.get_prob(output, answer_context, + passage_att_mask, answer_pre_compute, + True) + + self.begin_prob = tf.transpose(begin_prob, perm=[1, 0]) + self.end_prob = tf.transpose(end_prob, perm=[1, 0]) + begin_logits = tf.transpose(begin_logits, perm=[1, 0]) + end_logits = tf.transpose(end_logits, perm=[1, 0]) + + if is_training: + def label_smoothing(inputs, masks, epsilon=0.1): + """Modify target for label smoothing.""" + epsilon = cfg.labelsmoothing + num_of_channel = tf.shape(inputs)[-1] # number of channels + inputs = tf.cast(inputs, tf.float32) + return (((1 - epsilon) * inputs) + (epsilon / + tf.cast(num_of_channel, tf.float32))) * masks + cost1 = tf.reduce_mean( + tf.losses.softmax_cross_entropy(label_smoothing( + tf.one_hot(self.answer_begin, + depth=tf.shape(self.passage_word)[0]), + tf.transpose(self.passage_mask, perm=[1, 0])), begin_logits)) + cost2 = tf.reduce_mean( + tf.losses.softmax_cross_entropy( + label_smoothing(tf.one_hot(self.answer_end, + depth=tf.shape(self.passage_word)[0]), + tf.transpose(self.passage_mask, perm=[1, 0])), end_logits)) + + reg_ws = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES) + l2_loss = tf.reduce_sum(reg_ws) + loss = cost1 + cost2 + l2_loss + self.loss = loss + + optimizer = tf.train.AdamOptimizer(learning_rate=cfg.learning_rate) + self.train_op = optimizer.minimize(self.loss) + + return tf.stack([self.begin_prob, self.end_prob]) + + def 
build_char_states(self, char_embed, is_training, reuse, char_ids, char_lengths):
+        """Build char embedding network for the QA model."""
+        max_char_length = self.cfg.max_char_length
+
+        inputs = dropout(tf.nn.embedding_lookup(char_embed, char_ids),
+                         self.cfg.dropout, is_training)
+        inputs = tf.reshape(
+            inputs, shape=[max_char_length, -1, self.cfg.char_embed_dim])
+        char_lengths = tf.reshape(char_lengths, shape=[-1])
+        with tf.variable_scope('char_encoding', reuse=reuse):
+            cell_fw = XGRUCell(hidden_dim=self.cfg.char_embed_dim)
+            cell_bw = XGRUCell(hidden_dim=self.cfg.char_embed_dim)
+            _, (left_right, right_left) = tf.nn.bidirectional_dynamic_rnn(
+                cell_fw=cell_fw,
+                cell_bw=cell_bw,
+                sequence_length=char_lengths,
+                inputs=inputs,
+                time_major=True,
+                dtype=tf.float32
+            )
+
+        left_right = tf.reshape(left_right, shape=[-1, self.cfg.char_embed_dim])
+
+        right_left = tf.reshape(right_left, shape=[-1, self.cfg.char_embed_dim])
+
+        states = tf.concat([left_right, right_left], axis=1)
+        out_shape = tf.shape(char_ids)[1:3]
+        out_shape = tf.concat([out_shape, tf.constant(
+            value=[self.cfg.char_embed_dim * 2], dtype=tf.int32)], axis=0)
+        return tf.reshape(states, shape=out_shape)
diff --git a/examples/trials/weight_sharing/ga_squad/trial.py b/examples/trials/weight_sharing/ga_squad/trial.py
new file mode 100644
index 0000000000..bafe1e707a
--- /dev/null
+++ b/examples/trials/weight_sharing/ga_squad/trial.py
@@ -0,0 +1,461 @@
+# Copyright (c) Microsoft Corporation
+# All rights reserved.
+#
+# MIT License
+#
+# Permission is hereby granted, free of charge,
+# to any person obtaining a copy of this software and associated
+# documentation files (the "Software"), to deal in the Software without restriction,
+# including without limitation the rights to use, copy, modify, merge, publish,
+# distribute, sublicense, and/or sell copies of the Software, and
+# to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
+# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
+# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+import argparse
+import heapq
+import json
+import logging
+import os
+import pickle
+
+import numpy as np
+from tensorflow.train import init_from_checkpoint
+
+import nni
+
+import data
+import evaluate
+import graph
+from train_model import *
+from util import Timer
+
+logger = logging.getLogger('ga_squad')
+
+os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+
+
+def get_config():
+    '''
+    Get config from argument parser.
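+
+    An illustrative invocation (the paths are just the defaults below):
+        python3 trial.py --embedding_file ./glove.840B.300d.txt --max_epoch 30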
+    '''
+    parser = argparse.ArgumentParser(
+        description='This program uses a genetic algorithm to search for architectures on SQuAD.')
+    parser.add_argument('--input_file', type=str,
+                        default='./train-v1.1.json', help='input file')
+    parser.add_argument('--dev_file', type=str,
+                        default='./dev-v1.1.json', help='dev file')
+    parser.add_argument('--embedding_file', type=str,
+                        default='./glove.840B.300d.txt', help='embedding file')
+    parser.add_argument('--root_path', default='./data/',
+                        type=str, help='Root path of models')
+    parser.add_argument('--batch_size', type=int, default=64, help='batch size')
+    parser.add_argument('--save_path', type=str,
+                        default='./save', help='save path dir')
+    parser.add_argument('--learning_rate', type=float, default=0.0001,
+                        help='learning rate; use half of the original value when reloading data to resume training.')
+    parser.add_argument('--max_epoch', type=int, default=30)
+    parser.add_argument('--dropout_rate', type=float,
+                        default=0.1, help='dropout_rate')
+    parser.add_argument('--labelsmoothing', type=float,
+                        default=0.1, help='labelsmoothing')
+    parser.add_argument('--num_heads', type=int, default=1, help='num_heads')
+    parser.add_argument('--rnn_units', type=int, default=256, help='rnn_units')
+
+    args = parser.parse_args()
+    return args
+
+
+def get_id(word_dict, word):
+    '''
+    Return word id.
+    '''
+    if word in word_dict.keys():
+        return word_dict[word]
+    return word_dict['<unk>']
+
+
+def load_embedding(path):
+    '''
+    Return the embedding dict for a specific file given its path.
+    '''
+    EMBEDDING_DIM = 300
+    embedding_dict = {}
+    with open(path, 'r', encoding='utf-8') as file:
+        pairs = [line.strip('\r\n').split() for line in file.readlines()]
+        for pair in pairs:
+            if len(pair) == EMBEDDING_DIM + 1:
+                embedding_dict[pair[0]] = [float(x) for x in pair[1:]]
+    logger.debug('embedding_dict size: %d', len(embedding_dict))
+    return embedding_dict
+
+
+class MaxQueue:
+    '''
+    Fixed-capacity queue that keeps the largest values pushed into it.
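+
+    Example with illustrative values:
+        queue = MaxQueue(2)
+        for prob in (0.1, 0.5, 0.3):
+            queue.push(prob)
+        # queue.entries now holds the two largest items: [0.3, 0.5]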
+ ''' + + def __init__(self, capacity): + assert capacity > 0, 'queue size must be larger than 0' + self._capacity = capacity + self._entries = [] + + @property + def entries(self): + return self._entries + + @property + def capacity(self): + return self._capacity + + @property + def size(self): + return len(self._entries) + + def clear(self): + self._entries = [] + + def push(self, item): + if self.size < self.capacity: + heapq.heappush(self.entries, item) + else: + heapq.heappushpop(self.entries, item) + + +def find_best_answer_span(left_prob, right_prob, passage_length, max_answer_length): + left = 0 + right = 0 + max_prob = left_prob[0] * right_prob[0] + for i in range(0, passage_length): + left_p = left_prob[i] + for j in range(i, min(i + max_answer_length, passage_length)): + total_prob = left_p * right_prob[j] + if max_prob < total_prob: + left, right, max_prob = i, j, total_prob + return [(max_prob, left, right)] + + +def write_prediction(path, position1_result, position2_result): + import codecs + + with codecs.open(path, 'w', encoding='utf8') as file: + batch_num = len(position1_result) + for i in range(batch_num): + position1_batch = position1_result[i] + position2_batch = position2_result[i] + + for j in range(position1_batch.shape[0]): + file.write(str(position1_batch[j]) + + '\t' + str(position2_batch[j]) + '\n') + + +def find_kbest_answer_span(k, left_prob, right_prob, passage_length, max_answer_length): + if k == 1: + return find_best_answer_span(left_prob, right_prob, passage_length, max_answer_length) + + queue = MaxQueue(k) + for i in range(0, passage_length): + left_p = left_prob[i] + for j in range(i, min(i + max_answer_length, passage_length)): + total_prob = left_p * right_prob[j] + queue.push((total_prob, i, j)) + return list(sorted(queue.entries, key=lambda x: -x[0])) + + +def run_epoch(batches, answer_net, is_training): + if not is_training: + position1_result = [] + position2_result = [] + contexts = [] + ids = [] + + loss_sum = 0 + timer = Timer() + count = 0 + for batch in batches: + used = timer.get_elapsed(False) + count += 1 + qps = batch['qp_pairs'] + question_tokens = [qp['question_tokens'] for qp in qps] + passage_tokens = [qp['passage_tokens'] for qp in qps] + context = [(qp['passage'], qp['passage_tokens']) for qp in qps] + sample_id = [qp['id'] for qp in qps] + + _, query, query_mask, query_lengths = data.get_word_input( + data=question_tokens, word_dict=word_vcb, embed=embed, embed_dim=cfg.word_embed_dim) + _, passage, passage_mask, passage_lengths = data.get_word_input( + data=passage_tokens, word_dict=word_vcb, embed=embed, embed_dim=cfg.word_embed_dim) + + query_char, query_char_lengths = data.get_char_input( + data=question_tokens, char_dict=char_vcb, max_char_length=cfg.max_char_length) + + passage_char, passage_char_lengths = data.get_char_input( + data=passage_tokens, char_dict=char_vcb, max_char_length=cfg.max_char_length) + + if is_training: + answer_begin, answer_end = data.get_answer_begin_end(qps) + + if is_training: + feed_dict = {answer_net.query_word: query, + answer_net.query_mask: query_mask, + answer_net.query_lengths: query_lengths, + answer_net.passage_word: passage, + answer_net.passage_mask: passage_mask, + answer_net.passage_lengths: passage_lengths, + answer_net.query_char_ids: query_char, + answer_net.query_char_lengths: query_char_lengths, + answer_net.passage_char_ids: passage_char, + answer_net.passage_char_lengths: passage_char_lengths, + answer_net.answer_begin: answer_begin, + answer_net.answer_end: answer_end} + loss, 
_, = sess.run(
+                [answer_net.loss, answer_net.train_op], feed_dict=feed_dict)
+            if count % 100 == 0:
+                logger.debug('%d %g expected:%g, loss:%g' %
+                             (count, used, used / count * len(batches), loss))
+            loss_sum += loss
+        else:
+            feed_dict = {answer_net.query_word: query,
+                         answer_net.query_mask: query_mask,
+                         answer_net.query_lengths: query_lengths,
+                         answer_net.passage_word: passage,
+                         answer_net.passage_mask: passage_mask,
+                         answer_net.passage_lengths: passage_lengths,
+                         answer_net.query_char_ids: query_char,
+                         answer_net.query_char_lengths: query_char_lengths,
+                         answer_net.passage_char_ids: passage_char,
+                         answer_net.passage_char_lengths: passage_char_lengths}
+            position1, position2 = sess.run(
+                [answer_net.begin_prob, answer_net.end_prob], feed_dict=feed_dict)
+            position1_result += position1.tolist()
+            position2_result += position2.tolist()
+            contexts += context
+            ids = np.concatenate((ids, sample_id))
+            if count % 100 == 0:
+                logger.debug('%d %g expected:%g' %
+                             (count, used, used / count * len(batches)))
+    loss = loss_sum / len(batches)
+    if is_training:
+        return loss
+    return loss, position1_result, position2_result, ids, contexts
+
+
+def generate_predict_json(position1_result, position2_result, ids, passage_tokens):
+    '''
+    Generate a json answer dict from predictions.
+    '''
+    predict_len = len(position1_result)
+    logger.debug('total prediction num is %s', str(predict_len))
+
+    answers = {}
+    for i in range(predict_len):
+        sample_id = ids[i]
+        passage, tokens = passage_tokens[i]
+        kbest = find_best_answer_span(
+            position1_result[i], position2_result[i], len(tokens), 23)
+        _, start, end = kbest[0]
+        answer = passage[tokens[start]['char_begin']:tokens[end]['char_end']]
+        answers[sample_id] = answer
+    logger.debug('generate predict done.')
+    return answers
+
+
+def generate_data(path, tokenizer, char_vcb, word_vcb, is_training=False):
+    '''
+    Generate data
+    '''
+    global root_path
+    qp_pairs = data.load_from_file(path=path, is_training=is_training)
+
+    tokenized_sent = 0
+    # qp_pairs = qp_pairs[:1000]
+    for qp_pair in qp_pairs:
+        tokenized_sent += 1
+        data.tokenize(qp_pair, tokenizer, is_training)
+        for word in qp_pair['question_tokens']:
+            word_vcb.add(word['word'])
+            for char in word['word']:
+                char_vcb.add(char)
+        for word in qp_pair['passage_tokens']:
+            word_vcb.add(word['word'])
+            for char in word['word']:
+                char_vcb.add(char)
+
+    max_query_length = max(len(x['question_tokens']) for x in qp_pairs)
+    max_passage_length = max(len(x['passage_tokens']) for x in qp_pairs)
+    #min_passage_length = min(len(x['passage_tokens']) for x in qp_pairs)
+    cfg.max_query_length = max_query_length
+    cfg.max_passage_length = max_passage_length
+
+    return qp_pairs
+
+
+def train_with_graph(p_graph, qp_pairs, dev_qp_pairs):
+    '''
+    Train a network from a specific graph.
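+    Variables under the scopes listed in `restore_shared` (layer hash_ids plus
+    the embedding scopes) are initialized from the checkpoint at `restore_path`
+    via `init_from_checkpoint`; this is how weights are shared across trials.
+    All other variables are trained from scratch.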
+    '''
+    global sess
+    with tf.Graph().as_default():
+        train_model = GAG(cfg, embed, p_graph)
+        train_model.build_net(is_training=True)
+        tf.get_variable_scope().reuse_variables()
+        dev_model = GAG(cfg, embed, p_graph)
+        dev_model.build_net(is_training=False)
+        with tf.Session() as sess:
+            if restore_path is not None:
+                restore_mapping = dict(zip(restore_shared, restore_shared))
+                logger.debug('init shared variables from {}, restore_scopes: {}'.format(restore_path, restore_shared))
+                init_from_checkpoint(restore_path, restore_mapping)
+                logger.debug('init variables')
+                logger.debug(sess.run(tf.report_uninitialized_variables()))
+            init = tf.global_variables_initializer()
+            sess.run(init)
+            # writer = tf.summary.FileWriter('%s/graph/'%execution_path, sess.graph)
+            logger.debug('assign to graph')
+
+            saver = tf.train.Saver()
+            train_loss = None
+            bestacc = 0
+            patience = 5
+            patience_increase = 2
+            improvement_threshold = 0.995
+
+            for epoch in range(max_epoch):
+                logger.debug('begin to train')
+                train_batches = data.get_batches(qp_pairs, cfg.batch_size)
+                train_loss = run_epoch(train_batches, train_model, True)
+                logger.debug('epoch ' + str(epoch) +
+                             ' loss: ' + str(train_loss))
+                dev_batches = list(data.get_batches(
+                    dev_qp_pairs, cfg.batch_size))
+                _, position1, position2, ids, contexts = run_epoch(
+                    dev_batches, dev_model, False)
+
+                answers = generate_predict_json(
+                    position1, position2, ids, contexts)
+                if save_path is not None:
+                    logger.info('save prediction file to {}'.format(save_path))
+                    with open(os.path.join(save_path, 'epoch%d.prediction' % epoch), 'w') as file:
+                        json.dump(answers, file)
+                else:
+                    answers = json.dumps(answers)
+                    answers = json.loads(answers)
+                iteration = epoch + 1
+
+                acc = evaluate.evaluate_with_predictions(
+                    args.dev_file, answers)
+
+                logger.debug('Send intermediate acc: %s', str(acc))
+                nni.report_intermediate_result(acc)
+
+                logger.debug('Send intermediate result done.')
+
+                if acc > bestacc:
+                    if acc * improvement_threshold > bestacc:
+                        patience = max(patience, iteration * patience_increase)
+                    bestacc = acc
+
+                    if save_path is not None:
+                        logger.info('save model & prediction to {}'.format(save_path))
+                        saver.save(sess, os.path.join(save_path, 'epoch%d.model' % epoch))
+                        with open(os.path.join(save_path, 'epoch%d.score' % epoch), 'wb') as file:
+                            pickle.dump(
+                                (position1, position2, ids, contexts), file)
+                logger.debug('epoch %d acc %g bestacc %g' %
+                             (epoch, acc, bestacc))
+                if patience <= iteration:
+                    break
+            logger.debug('save done.')
+    return train_loss, bestacc
+
+
+embed = None
+char_vcb = None
+tokenizer = None
+word_vcb = None
+
+
+def load_data():
+    global embed, char_vcb, tokenizer, word_vcb
+    logger.debug('tokenize data')
+    tokenizer = data.WhitespaceTokenizer()
+
+    char_set = set()
+    word_set = set()
+    logger.debug('generate train data')
+    qp_pairs = generate_data(input_file, tokenizer,
+                             char_set, word_set, is_training=True)
+    logger.debug('generate dev data')
+    dev_qp_pairs = generate_data(
+        dev_file, tokenizer, char_set, word_set, is_training=False)
+    logger.debug('generate data done.')
+
+    char_vcb = {char: sample_id for sample_id, char in enumerate(char_set)}
+    word_vcb = {word: sample_id for sample_id, word in enumerate(word_set)}
+
+    timer.start()
+    logger.debug('read embedding table')
+
+    cfg.word_embed_dim = 300
+    embed = np.zeros((len(word_vcb), cfg.word_embed_dim), dtype=np.float32)
+
+    embedding = load_embedding(args.embedding_file)
+    for word, sample_id in word_vcb.items():
+        if word in embedding:
+            embed[sample_id] = embedding[word]
+
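+    # Words without a pretrained vector keep the all-zero rows created above.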
+    # add UNK into dict
+    unk = np.zeros((1, cfg.word_embed_dim), dtype=np.float32)
+    embed = np.concatenate((unk, embed), axis=0)
+    word_vcb = {key: value + 1 for key, value in word_vcb.items()}
+
+    return qp_pairs, dev_qp_pairs
+
+
+if __name__ == '__main__':
+    try:
+        args = get_config()
+
+        root_path = os.path.expanduser(args.root_path)
+        input_file = os.path.expanduser(args.input_file)
+        dev_file = os.path.expanduser(args.dev_file)
+        max_epoch = args.max_epoch
+
+        cfg = GAGConfig()
+        cfg.batch_size = args.batch_size
+        cfg.learning_rate = float(args.learning_rate)
+        cfg.dropout = args.dropout_rate
+        cfg.rnn_units = args.rnn_units
+        cfg.labelsmoothing = args.labelsmoothing
+        cfg.num_heads = args.num_heads
+        timer = Timer()
+
+        qp_pairs, dev_qp_pairs = load_data()
+        logger.debug('Init finish.')
+
+        original_params = nni.get_next_parameter()
+        '''
+        with open('data.json') as f:
+            original_params = json.load(f)
+        '''
+        p_graph = graph.graph_loads(original_params['graph'])
+        save_path = original_params['save_dir']
+        os.makedirs(save_path)
+        restore_path = original_params['restore_dir']
+        restore_shared = ([hash_id + '/' for hash_id in original_params['shared_id']]
+                          if original_params['shared_id'] is not None else []) \
+            + ['word_embed', 'char_embed', 'char_encoding/']
+        train_loss, best_acc = train_with_graph(p_graph, qp_pairs, dev_qp_pairs)
+
+        logger.debug('Send best acc: %s', str(best_acc))
+        nni.report_final_result(best_acc)
+        logger.debug('Send final result done')
+    except:
+        logger.exception('Catch exception in trial.py.')
+        raise
diff --git a/examples/trials/weight_sharing/ga_squad/util.py b/examples/trials/weight_sharing/ga_squad/util.py
new file mode 100644
index 0000000000..ac9f363003
--- /dev/null
+++ b/examples/trials/weight_sharing/ga_squad/util.py
@@ -0,0 +1,76 @@
+# Copyright (c) Microsoft Corporation
+# All rights reserved.
+#
+# MIT License
+#
+# Permission is hereby granted, free of charge, to any person obtaining
+# a copy of this software and associated documentation files (the "Software"),
+# to deal in the Software without restriction, including without limitation
+# the rights to use, copy, modify, merge, publish, distribute, sublicense,
+# and/or sell copies of the Software, and
+# to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+# The above copyright notice and this permission notice shall be
+# included in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
+# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
+# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+'''
+Util Module
+'''
+
+import time
+
+import tensorflow as tf
+
+
+def shape(tensor):
+    '''
+    Get the static shape of a tensor as a tuple.
+    '''
+    temp_s = tensor.get_shape()
+    return tuple([temp_s[i].value for i in range(0, len(temp_s))])
+
+
+def get_variable(name, temp_s):
+    '''
+    Create a zero-initialized variable with the given name and shape.
+    '''
+    return tf.Variable(tf.zeros(temp_s), name=name)
+
+
+def dropout(tensor, drop_prob, is_training):
+    '''
+    Apply dropout, except at test time.
+    '''
+    if not is_training:
+        return tensor
+    return tf.nn.dropout(tensor, 1.0 - drop_prob)
+
+
+class Timer:
+    '''
+    Timer class for measuring elapsed wall-clock time.
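+
+    Usage:
+        timer = Timer()
+        # ... do some work ...
+        span = timer.get_elapsed()  # seconds since start; restarts the timer by default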
+    '''
+    def __init__(self):
+        self.__start = time.time()
+
+    def start(self):
+        '''
+        Start (or restart) the timer.
+        '''
+        self.__start = time.time()
+
+    def get_elapsed(self, restart=True):
+        '''
+        Return the time span since start; restart the timer by default.
+        '''
+        end = time.time()
+        span = end - self.__start
+        if restart:
+            self.__start = end
+        return span
diff --git a/examples/tuners/ga_customer_tuner/customer_tuner.py b/examples/tuners/ga_customer_tuner/customer_tuner.py
index 2cfae001e5..699df5eb0e 100644
--- a/examples/tuners/ga_customer_tuner/customer_tuner.py
+++ b/examples/tuners/ga_customer_tuner/customer_tuner.py
@@ -96,7 +96,7 @@ def generate_parameters(self, parameter_id):
             temp = json.loads(graph_dumps(indiv.config))
         else:
             random.shuffle(self.population)
-            if self.population[0].result > self.population[1].result:
+            if self.population[0].result < self.population[1].result:
                 self.population[0] = self.population[1]
             indiv = copy.deepcopy(self.population[0])
             self.population.pop(1)
diff --git a/examples/tuners/weight_sharing/ga_customer_tuner/README.md b/examples/tuners/weight_sharing/ga_customer_tuner/README.md
new file mode 100644
index 0000000000..bc7a6f1f84
--- /dev/null
+++ b/examples/tuners/weight_sharing/ga_customer_tuner/README.md
@@ -0,0 +1,15 @@
+# How to use ga_customer_tuner?
+This customized tuner is only suitable for trials whose code path is "~/nni/examples/trials/weight_sharing/ga_squad";
+type `cd ~/nni/examples/trials/weight_sharing/ga_squad` and check readme.md to get more information about the ga_squad trial.
+
+# config
+If you want to use ga_customer_tuner in your experiment, you can set the config file in the following format:
+
+```
+tuner:
+  codeDir: ~/nni/examples/tuners/weight_sharing/ga_customer_tuner
+  classFileName: customer_tuner.py
+  className: CustomerTuner
+  classArgs:
+    optimize_mode: maximize
+```
\ No newline at end of file
diff --git a/examples/tuners/weight_sharing/ga_customer_tuner/__init__.py b/examples/tuners/weight_sharing/ga_customer_tuner/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/examples/tuners/weight_sharing/ga_customer_tuner/customer_tuner.py b/examples/tuners/weight_sharing/ga_customer_tuner/customer_tuner.py
new file mode 100644
index 0000000000..86520b5220
--- /dev/null
+++ b/examples/tuners/weight_sharing/ga_customer_tuner/customer_tuner.py
@@ -0,0 +1,224 @@
+# Copyright (c) Microsoft Corporation
+# All rights reserved.
+#
+# MIT License
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
+# documentation files (the "Software"), to deal in the Software without restriction, including without limitation
+# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and
+# to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
+# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
+# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
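+
+'''
+Customized GA tuner for the weight-sharing ga_squad example: it evolves graph
+configurations and records, per individual, which layer hash_ids may reuse
+weights from the parent trial.
+'''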
+ + +import copy +import json +import logging +import random +import os + +from threading import Event, Lock, current_thread + +from nni.tuner import Tuner + +from graph import Graph, Layer, LayerType, Enum, graph_dumps, graph_loads, unique + +logger = logging.getLogger('ga_customer_tuner') + + +@unique +class OptimizeMode(Enum): + Minimize = 'minimize' + Maximize = 'maximize' + + + + +class Individual(object): + """ + Basic Unit for evolution algorithm + """ + def __init__(self, graph_cfg: Graph = None, info=None, result=None, indiv_id=None): + self.config = graph_cfg + self.result = result + self.info = info + self.indiv_id = indiv_id + self.parent_id = None + self.shared_ids = {layer.hash_id for layer in self.config.layers if layer.is_delete is False} + + def __str__(self): + return "info: " + str(self.info) + ", config :" + str(self.config) + ", result: " + str(self.result) + + def mutation(self, indiv_id: int, graph_cfg: Graph = None, info=None): + self.result = None + if graph_cfg is not None: + self.config = graph_cfg + self.config.mutation() + self.info = info + self.parent_id = self.indiv_id + self.indiv_id = indiv_id + self.shared_ids.intersection_update({layer.hash_id for layer in self.config.layers if layer.is_delete is False}) + + +class CustomerTuner(Tuner): + """ + NAS Tuner using Evolution Algorithm, with weight sharing enabled + """ + def __init__(self, optimize_mode, save_dir_root, population_size=32, graph_max_layer=6, graph_min_layer=3): + self.optimize_mode = OptimizeMode(optimize_mode) + self.indiv_counter = 0 + self.events = [] + self.thread_lock = Lock() + self.save_dir_root = save_dir_root + self.population = self.init_population(population_size, graph_max_layer, graph_min_layer) + assert len(self.population) == population_size + logger.debug('init population done.') + return + + def generate_new_id(self): + """ + generate new id and event hook for new Individual + """ + self.events.append(Event()) + indiv_id = self.indiv_counter + self.indiv_counter += 1 + return indiv_id + + def save_dir(self, indiv_id): + if indiv_id is None: + return None + else: + return os.path.join(self.save_dir_root, str(indiv_id)) + + def init_population(self, population_size, graph_max_layer, graph_min_layer): + """ + initialize populations for evolution tuner + """ + population = [] + graph = Graph(max_layer_num=graph_max_layer, min_layer_num=graph_min_layer, + inputs=[Layer(LayerType.input.value, output=[4, 5], size='x'), Layer(LayerType.input.value, output=[4, 5], size='y')], + output=[Layer(LayerType.output.value, inputs=[4], size='x'), Layer(LayerType.output.value, inputs=[5], size='y')], + hide=[Layer(LayerType.attention.value, inputs=[0, 1], output=[2]), + Layer(LayerType.attention.value, inputs=[1, 0], output=[3])]) + for _ in range(population_size): + graph_tmp = copy.deepcopy(graph) + graph_tmp.mutation() + population.append(Individual(indiv_id=self.generate_new_id(), graph_cfg=graph_tmp, result=None)) + return population + + def generate_parameters(self, parameter_id): + """Returns a set of trial graph config, as a serializable object. + An example configuration: + ```json + { + "shared_id": [ + "4a11b2ef9cb7211590dfe81039b27670", + "370af04de24985e5ea5b3d72b12644c9", + "11f646e9f650f5f3fedc12b6349ec60f", + "0604e5350b9c734dd2d770ee877cfb26", + "6dbeb8b022083396acb721267335f228", + "ba55380d6c84f5caeb87155d1c5fa654" + ], + "graph": { + "layers": [ + ... 
+                {
+                    "hash_id": "ba55380d6c84f5caeb87155d1c5fa654",
+                    "is_delete": false,
+                    "size": "x",
+                    "graph_type": 0,
+                    "output": [
+                        6
+                    ],
+                    "output_size": 1,
+                    "input": [
+                        7,
+                        1
+                    ],
+                    "input_size": 2
+                },
+                ...
+            ]
+        },
+        "restore_dir": "/mnt/nfs/nni/ga_squad/87",
+        "save_dir": "/mnt/nfs/nni/ga_squad/95"
+    }
+    ```
+        `restore_dir` is the path from which to load the previously trained model weights; if null, the model is initialized from scratch.
+        `save_dir` is the path where the model trained by the current trial will be saved.
+        `graph` is the configuration of the model network.
+        Note: each layer configuration has a `hash_id` property,
+        which tells tuner & trial code whether to share trained weights or not.
+        `shared_id` lists the hash_ids of layers whose weights should be shared with a previously trained model.
+        """
+        logger.debug('acquiring lock for param {}'.format(parameter_id))
+        self.thread_lock.acquire()
+        logger.debug('lock for current thread acquired')
+        if not self.population:
+            logger.debug('the population is empty.')
+            raise Exception('The population is empty')
+        pos = -1
+        for i in range(len(self.population)):
+            if self.population[i].result is None:
+                pos = i
+                break
+        if pos != -1:
+            indiv = copy.deepcopy(self.population[pos])
+            self.population.pop(pos)
+            graph_param = json.loads(graph_dumps(indiv.config))
+        else:
+            random.shuffle(self.population)
+            if self.population[0].result < self.population[1].result:
+                self.population[0] = self.population[1]
+            indiv = copy.deepcopy(self.population[0])
+            self.population.pop(1)
+            indiv.mutation(indiv_id=self.generate_new_id())
+            graph_param = json.loads(graph_dumps(indiv.config))
+        param_json = {
+            'graph': graph_param,
+            'restore_dir': self.save_dir(indiv.parent_id),
+            'save_dir': self.save_dir(indiv.indiv_id),
+            'shared_id': list(indiv.shared_ids) if indiv.parent_id is not None else None,
+        }
+        logger.debug('generate_parameter return value is:')
+        logger.debug(param_json)
+        logger.debug('releasing lock')
+        self.thread_lock.release()
+        if indiv.parent_id is not None:
+            logger.debug("new trial {} pending on parent experiment {}".format(indiv.indiv_id, indiv.parent_id))
+            self.events[indiv.parent_id].wait()
+            logger.debug("trial {} ready".format(indiv.indiv_id))
+        return param_json
+
+    def receive_trial_result(self, parameter_id, parameters, value):
+        '''
+        Record an observation of the objective function.
+        parameter_id : int
+        parameters : dict of parameters
+        value: final metrics of the trial, including reward
+        '''
+        logger.debug('acquiring lock for param {}'.format(parameter_id))
+        self.thread_lock.acquire()
+        logger.debug('lock for current thread acquired')
+        reward = self.extract_scalar_reward(value)
+        if self.optimize_mode is OptimizeMode.Minimize:
+            reward = -reward
+
+        logger.debug('receive trial result is:\n')
+        logger.debug(str(parameters))
+        logger.debug(str(reward))
+
+        indiv = Individual(indiv_id=int(os.path.split(parameters['save_dir'])[1]),
+                           graph_cfg=graph_loads(parameters['graph']), result=reward)
+        self.population.append(indiv)
+        logger.debug('releasing lock')
+        self.thread_lock.release()
+        self.events[indiv.indiv_id].set()
+
+    def update_search_space(self, data):
+        pass
diff --git a/examples/tuners/weight_sharing/ga_customer_tuner/graph.py b/examples/tuners/weight_sharing/ga_customer_tuner/graph.py
new file mode 100644
index 0000000000..8e675a06ff
--- /dev/null
+++ b/examples/tuners/weight_sharing/ga_customer_tuner/graph.py
@@ -0,0 +1,336 @@
+# Copyright (c) Microsoft Corporation
+# All rights reserved.
+#
+# MIT License
+#
+# Permission is hereby granted, free of charge,
+# to any person obtaining a copy of this software and associated
+# documentation files (the "Software"),
+# to deal in the Software without restriction, including without limitation
+# the rights to use, copy, modify, merge, publish, distribute, sublicense,
+# and/or sell copies of the Software, and
+# to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
+# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
+# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+'''
+Graph is a custom-defined class; this module contains related classes and functions for graphs.
+'''
+
+
+import copy
+import hashlib
+import logging
+import json
+import random
+from collections import deque
+from enum import Enum, unique
+from typing import Iterable
+
+import numpy as np
+
+_logger = logging.getLogger('ga_squad_graph')
+
+@unique
+class LayerType(Enum):
+    '''
+    Layer type
+    '''
+    attention = 0
+    self_attention = 1
+    rnn = 2
+    input = 3
+    output = 4
+
+class Layer(object):
+    '''
+    Layer class, which contains the information of graph.
+    '''
+    def __init__(self, graph_type, inputs=None, output=None, size=None, hash_id=None):
+        self.input = inputs if inputs is not None else []
+        self.output = output if output is not None else []
+        self.graph_type = graph_type
+        self.is_delete = False
+        self.size = size
+        self.hash_id = hash_id
+        if graph_type == LayerType.attention.value:
+            self.input_size = 2
+            self.output_size = 1
+        elif graph_type == LayerType.rnn.value:
+            self.input_size = 1
+            self.output_size = 1
+        elif graph_type == LayerType.self_attention.value:
+            self.input_size = 1
+            self.output_size = 1
+        elif graph_type == LayerType.input.value:
+            self.input_size = 0
+            self.output_size = 1
+            if self.hash_id is None:
+                hasher = hashlib.md5()
+                hasher.update(np.random.bytes(100))
+                self.hash_id = hasher.hexdigest()
+        elif graph_type == LayerType.output.value:
+            self.input_size = 1
+            self.output_size = 0
+        else:
+            raise ValueError('Unsupported LayerType: {}'.format(graph_type))
+
+    def update_hash(self, layers: Iterable):
+        """
+        Calculate the `hash_id` of this layer, determined by its own properties
+        and the `hash_id`s of its input layers.
+        """
+        if self.graph_type == LayerType.input.value:
+            return
+        hasher = hashlib.md5()
+        hasher.update(LayerType(self.graph_type).name.encode('ascii'))
+        hasher.update(str(self.size).encode('ascii'))
+        for i in self.input:
+            if layers[i].hash_id is None:
+                raise ValueError('Hash id of layer {}: {} not generated!'.format(i, layers[i]))
+            hasher.update(layers[i].hash_id.encode('ascii'))
+        self.hash_id = hasher.hexdigest()
+
+    def set_size(self, graph_id, size):
+        '''
+        Set size.
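+        Propagate `size` from input layer `graph_id` to this layer; for an
+        output layer, check consistency instead and return False on mismatch.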
+        '''
+        if self.graph_type == LayerType.attention.value:
+            if self.input[0] == graph_id:
+                self.size = size
+        if self.graph_type == LayerType.rnn.value:
+            self.size = size
+        if self.graph_type == LayerType.self_attention.value:
+            self.size = size
+        if self.graph_type == LayerType.output.value:
+            if self.size != size:
+                return False
+        return True
+
+    def clear_size(self):
+        '''
+        Clear size.
+        '''
+        if self.graph_type in (LayerType.attention.value,
+                               LayerType.rnn.value, LayerType.self_attention.value):
+            self.size = None
+
+    def __str__(self):
+        return 'input:' + str(self.input) + ' output:' + str(self.output) + ' type:' + str(self.graph_type) + ' is_delete:' + str(self.is_delete) + ' size:' + str(self.size)
+
+def graph_dumps(graph):
+    '''
+    Dump the graph.
+    '''
+    return json.dumps(graph, default=lambda obj: obj.__dict__)
+
+def graph_loads(graph_json):
+    '''
+    Load graph.
+    '''
+    layers = []
+    for layer in graph_json['layers']:
+        layer_info = Layer(layer['graph_type'], layer['input'], layer['output'], layer['size'], layer['hash_id'])
+        layer_info.is_delete = layer['is_delete']
+        _logger.debug('append layer {}'.format(layer_info))
+        layers.append(layer_info)
+    graph = Graph(graph_json['max_layer_num'], graph_json['min_layer_num'], [], [], [])
+    graph.layers = layers
+    _logger.debug('graph {} loaded'.format(graph))
+    return graph
+
+class Graph(object):
+    '''
+    Custom Graph class.
+    '''
+    def __init__(self, max_layer_num, min_layer_num, inputs, output, hide):
+        self.layers = []
+        self.max_layer_num = max_layer_num
+        self.min_layer_num = min_layer_num
+        assert min_layer_num < max_layer_num
+
+        for layer in inputs:
+            self.layers.append(layer)
+        for layer in output:
+            self.layers.append(layer)
+        if hide is not None:
+            for layer in hide:
+                self.layers.append(layer)
+        assert self.is_legal()
+
+    def is_topology(self, layers=None):
+        '''
+        Validate the topology and return the layers in topological order.
+        '''
+        if layers is None:
+            layers = self.layers
+        layers_nodle = []
+        result = []
+        for i, layer in enumerate(layers):
+            if layer.is_delete is False:
+                layers_nodle.append(i)
+        while True:
+            flag_break = True
+            layers_toremove = []
+            for layer1 in layers_nodle:
+                flag_arrive = True
+                for layer2 in layers[layer1].input:
+                    if layer2 in layers_nodle:
+                        flag_arrive = False
+                if flag_arrive is True:
+                    for layer2 in layers[layer1].output:
+                        # Size mismatch
+                        if layers[layer2].set_size(layer1, layers[layer1].size) is False:
+                            return False
+                    layers_toremove.append(layer1)
+                    result.append(layer1)
+                    flag_break = False
+            for layer in layers_toremove:
+                layers_nodle.remove(layer)
+            result.append('|')
+            if flag_break:
+                break
+        # There is a loop in the graph, or some layers cannot be reached
+        if layers_nodle:
+            return False
+        return result
+
+    def layer_num(self, layers=None):
+        '''
+        Return the number of hidden layers (input and output layers excluded).
+        '''
+        if layers is None:
+            layers = self.layers
+        layer_num = 0
+        for layer in layers:
+            if layer.is_delete is False and layer.graph_type != LayerType.input.value\
+                and layer.graph_type != LayerType.output.value:
+                layer_num += 1
+        return layer_num
+
+    def is_legal(self, layers=None):
+        '''
+        Judge whether the layers form a legal graph.
+        '''
+        if layers is None:
+            layers = self.layers
+
+        for layer in layers:
+            if layer.is_delete is False:
+                if len(layer.input) != layer.input_size:
+                    return False
+                if len(layer.output) < layer.output_size:
+                    return False
+
+        # layer_num <= max_layer_num
+        if self.layer_num(layers) > self.max_layer_num:
+            return False
+
+        # There is a loop in the graph, or some layers cannot be reached
+        if self.is_topology(layers) is False:
+            return False
+
+        return True
+
+    def update_hash(self):
+        """
+        Update the hash id of each layer in topological order.
+        The hash ids are used for weight sharing.
+        """
+        _logger.debug('update hash')
+        layer_in_cnt = [len(layer.input) for layer in self.layers]
+        topo_queue = deque([i for i, layer in enumerate(self.layers) if not layer.is_delete and layer.graph_type == LayerType.input.value])
+        while topo_queue:
+            layer_i = topo_queue.pop()
+            self.layers[layer_i].update_hash(self.layers)
+            for layer_j in self.layers[layer_i].output:
+                layer_in_cnt[layer_j] -= 1
+                if layer_in_cnt[layer_j] == 0:
+                    topo_queue.appendleft(layer_j)
+
+    def mutation(self, only_add=False):
+        '''
+        Mutation for a graph.
+        '''
+        types = []
+        if self.layer_num() < self.max_layer_num:
+            types.append(0)
+            types.append(1)
+        if self.layer_num() > self.min_layer_num and only_add is False:
+            types.append(2)
+            types.append(3)
+        # 0: add a layer, delete an edge
+        # 1: add a layer, change an edge
+        # 2: delete a layer, delete an edge
+        # 3: delete a layer, change an edge
+        graph_type = random.choice(types)
+        layer_type = random.choice([LayerType.attention.value,\
+            LayerType.self_attention.value, LayerType.rnn.value])
+        layers = copy.deepcopy(self.layers)
+        cnt_try = 0
+        while True:
+            layers_in = []
+            layers_out = []
+            layers_del = []
+            for i, layer in enumerate(layers):
+                if layer.is_delete is False:
+                    if layer.graph_type != LayerType.output.value:
+                        layers_in.append(i)
+                    if layer.graph_type != LayerType.input.value:
+                        layers_out.append(i)
+                    if layer.graph_type != LayerType.output.value\
+                        and layer.graph_type != LayerType.input.value:
+                        layers_del.append(i)
+            if graph_type <= 1:
+                new_id = len(layers)
+                out = random.choice(layers_out)
+                inputs = []
+                output = [out]
+                pos = random.randint(0, len(layers[out].input) - 1)
+                last_in = layers[out].input[pos]
+                layers[out].input[pos] = new_id
+                if graph_type == 0:
+                    layers[last_in].output.remove(out)
+                if graph_type == 1:
+                    layers[last_in].output.remove(out)
+                    layers[last_in].output.append(new_id)
+                    inputs = [last_in]
+                lay = Layer(graph_type=layer_type, inputs=inputs, output=output)
+                while len(inputs) < lay.input_size:
+                    layer1 = random.choice(layers_in)
+                    inputs.append(layer1)
+                    layers[layer1].output.append(new_id)
+                lay.input = inputs
+                layers.append(lay)
+            else:
+                layer1 = random.choice(layers_del)
+                for layer2 in layers[layer1].output:
+                    layers[layer2].input.remove(layer1)
+                    if graph_type == 2:
+                        random_in = random.choice(layers_in)
+                    else:
+                        random_in = random.choice(layers[layer1].input)
+                    layers[layer2].input.append(random_in)
+                    layers[random_in].output.append(layer2)
+                for layer2 in layers[layer1].input:
+                    layers[layer2].output.remove(layer1)
+                layers[layer1].is_delete = True
+
+            if self.is_legal(layers):
+                self.layers = 
layers + break + else: + layers = copy.deepcopy(self.layers) + cnt_try += 1 + self.update_hash() + + def __str__(self): + info = "" + for l_id, layer in enumerate(self.layers): + if layer.is_delete is False: + info += 'id:%d ' % l_id + str(layer) + '\n' + return info diff --git a/src/sdk/pynni/nni/common.py b/src/sdk/pynni/nni/common.py index cb21efda64..03fd870c31 100644 --- a/src/sdk/pynni/nni/common.py +++ b/src/sdk/pynni/nni/common.py @@ -63,8 +63,7 @@ def init_logger(logger_file_path): elif env_args.log_dir is not None: logger_file_path = os.path.join(env_args.log_dir, logger_file_path) logger_file = open(logger_file_path, 'w') - - fmt = '[%(asctime)s] %(levelname)s (%(name)s) %(message)s' + fmt = '[%(asctime)s] %(levelname)s (%(name)s/%(threadName)s) %(message)s' formatter = logging.Formatter(fmt, _time_format) handler = logging.StreamHandler(logger_file) diff --git a/src/sdk/pynni/nni/msg_dispatcher.py b/src/sdk/pynni/nni/msg_dispatcher.py index 4275e58e7e..325befc7d1 100644 --- a/src/sdk/pynni/nni/msg_dispatcher.py +++ b/src/sdk/pynni/nni/msg_dispatcher.py @@ -97,6 +97,7 @@ def handle_initialize(self, data): def handle_request_trial_jobs(self, data): # data: number or trial jobs ids = [_create_parameter_id() for _ in range(data)] + _logger.debug("requesting for generating params of {}".format(ids)) params_list = self.tuner.generate_multiple_parameters(ids) for i, _ in enumerate(params_list): diff --git a/src/sdk/pynni/nni/msg_dispatcher_base.py b/src/sdk/pynni/nni/msg_dispatcher_base.py index bcb8cc1a3a..d0b8c8beb0 100644 --- a/src/sdk/pynni/nni/msg_dispatcher_base.py +++ b/src/sdk/pynni/nni/msg_dispatcher_base.py @@ -19,10 +19,14 @@ # ================================================================================================== #import json_tricks -import os import logging -import json_tricks +import os +from queue import Queue +import sys + from multiprocessing.dummy import Pool as ThreadPool + +import json_tricks from .common import init_logger, multi_thread_enabled from .recoverable import Recoverable from .protocol import CommandType, receive @@ -49,7 +53,7 @@ def run(self): if command is None: break if multi_thread_enabled(): - self.pool.map_async(self.handle_request, [(command, data)]) + self.pool.map_async(self.handle_request_thread, [(command, data)]) else: self.handle_request((command, data)) @@ -59,6 +63,16 @@ def run(self): _logger.info('Terminated by NNI manager') + def handle_request_thread(self, request): + if multi_thread_enabled(): + try: + self.handle_request(request) + except Exception as e: + _logger.exception(str(e)) + sys.exit(-1) + else: + pass + def handle_request(self, request): command, data = request diff --git a/src/sdk/pynni/nni/tuner.py b/src/sdk/pynni/nni/tuner.py index 7d65395425..4dcf705bcf 100644 --- a/src/sdk/pynni/nni/tuner.py +++ b/src/sdk/pynni/nni/tuner.py @@ -48,6 +48,7 @@ def generate_multiple_parameters(self, parameter_id_list): result = [] for parameter_id in parameter_id_list: try: + _logger.debug("generating param for {}".format(parameter_id)) res = self.generate_parameters(parameter_id) except nni.NoMoreTrialError: return result diff --git a/test/async_sharing_test/config.yml b/test/async_sharing_test/config.yml new file mode 100644 index 0000000000..8cefad3c1a --- /dev/null +++ b/test/async_sharing_test/config.yml @@ -0,0 +1,25 @@ +authorName: default +experimentName: example_weight_sharing +trialConcurrency: 3 +maxExecDuration: 1h +maxTrialNum: 10 +#choice: local, remote, pai +trainingServicePlatform: remote +#choice: true, false 
+useAnnotation: false
+multiThread: true
+tuner:
+  codeDir: .
+  classFileName: simple_tuner.py
+  className: SimpleTuner
+trial:
+  command: python3 main.py
+  codeDir: .
+  gpuNum: 0
+machineList:
+  - ip: 10.10.10.10
+    username: bob
+    passwd: bob123
+  - ip: 10.10.10.11
+    username: bob
+    passwd: bob123
diff --git a/test/async_sharing_test/main.py b/test/async_sharing_test/main.py
new file mode 100644
index 0000000000..4c32ea51ca
--- /dev/null
+++ b/test/async_sharing_test/main.py
@@ -0,0 +1,56 @@
+"""
+Test code for weight sharing.
+Requires an NFS share set up and mounted at `/mnt/nfs/nni`.
+"""
+
+import hashlib
+import os
+import random
+import time
+
+import nni
+
+
+def generate_rand_file(fl_name):
+    """
+    Generate a file of random bytes and write it to `fl_name`.
+    """
+    fl_size = random.randint(1024, 102400)
+    fl_dir = os.path.split(fl_name)[0]
+    if not os.path.exists(fl_dir):
+        os.makedirs(fl_dir)
+    with open(fl_name, 'wb') as fout:
+        fout.write(os.urandom(fl_size))
+
+
+def check_sum(fl_name, tid=None):
+    """
+    Compute the MD5 checksum of `fl_name`, optionally suffixed
+    with the trial id `tid`.
+    """
+    hasher = hashlib.md5()
+    with open(fl_name, 'rb') as fin:
+        for chunk in iter(lambda: fin.read(4096), b""):
+            hasher.update(chunk)
+    ret = hasher.hexdigest()
+    if tid is not None:
+        ret = ret + str(tid)
+    return ret
+
+
+if __name__ == '__main__':
+    nfs_path = '/mnt/nfs/nni'
+    params = nni.get_next_parameter()
+    print(params)
+    if params['prev_id'] == 0:
+        # father trial: write the weight file to NFS and report its
+        # checksum and path; the sleep stands in for training time
+        model_file = os.path.join(nfs_path, str(params['id']), 'model.dat')
+        time.sleep(10)
+        generate_rand_file(model_file)
+        nni.report_final_result({
+            'checksum': check_sum(model_file),
+            'path': model_file
+        })
+    else:
+        # child trial: read the father's file back and report the
+        # checksum the tuner expects
+        model_file = params['prev_path']
+        nni.report_final_result({
+            'checksum': check_sum(model_file, params['prev_id'])
+        })
diff --git a/test/async_sharing_test/simple_tuner.py b/test/async_sharing_test/simple_tuner.py
new file mode 100644
index 0000000000..57c39cbe3b
--- /dev/null
+++ b/test/async_sharing_test/simple_tuner.py
@@ -0,0 +1,65 @@
+"""
+SimpleTuner for weight sharing
+"""
+
+import logging
+
+from threading import Event, Lock
+from nni.tuner import Tuner
+
+_logger = logging.getLogger('WeightSharingTuner')
+
+
+class SimpleTuner(Tuner):
+    """
+    Simple tuner for testing weight sharing: the first trial becomes the
+    father, every later trial waits for the father's result and then
+    reuses its weight file.
+    """
+
+    def __init__(self):
+        super(SimpleTuner, self).__init__()
+        self.trial_meta = {}
+        self.f_id = None  # parameter id of the father trial
+        self.sig_event = Event()
+        self.thread_lock = Lock()
+
+    def generate_parameters(self, parameter_id):
+        # check and set f_id while holding the lock, so that two
+        # dispatcher threads cannot both elect themselves father
+        with self.thread_lock:
+            if self.f_id is None:
+                self.f_id = parameter_id
+                self.trial_meta[parameter_id] = {
+                    'prev_id': 0,
+                    'id': parameter_id,
+                    'checksum': None,
+                    'path': '',
+                }
+                _logger.info('generate parameter for father trial %s',
+                             parameter_id)
+                return {
+                    'prev_id': 0,
+                    'id': parameter_id,
+                }
+        # child trials block their dispatcher thread here until the
+        # father trial has reported its result
+        self.sig_event.wait()
+        with self.thread_lock:
+            self.trial_meta[parameter_id] = {
+                'id': parameter_id,
+                'prev_id': self.f_id,
+                'prev_path': self.trial_meta[self.f_id]['path']
+            }
+            return self.trial_meta[parameter_id]
+
+    def receive_trial_result(self, parameter_id, parameters, reward):
+        # `with` releases the lock even when the ValueError below fires;
+        # a bare acquire/release pair would leave it held forever
+        with self.thread_lock:
+            if parameter_id == self.f_id:
+                self.trial_meta[parameter_id]['checksum'] = reward['checksum']
+                self.trial_meta[parameter_id]['path'] = reward['path']
+                self.sig_event.set()
+            elif reward['checksum'] != self.trial_meta[self.f_id]['checksum'] + str(self.f_id):
+                raise ValueError("Inconsistency in weight sharing!!!")
+
+    def update_search_space(self, search_space):
+        # this test tuner has no search space to update
+        pass
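The blocking trick in `SimpleTuner` works only because, with `multiThread: true`, the dispatcher hands every incoming command to its own pool thread (see the `msg_dispatcher_base.py` hunk above), so a waiting `generate_parameters` call stalls one thread rather than the whole dispatcher. Below is a minimal, self-contained sketch of that dispatch pattern; the two function names mirror the patch, while the harness around them is illustrative only:

```python
import sys
from multiprocessing.dummy import Pool as ThreadPool  # thread-backed Pool

def handle_request(request):
    # stand-in for MsgDispatcherBase.handle_request: unpack and act on a command
    command, data = request
    print('handling %s with payload %r' % (command, data))

def handle_request_thread(request):
    # like handle_request_thread in the patch: surface the error and bail
    # out instead of silently dropping the trial request
    try:
        handle_request(request)
    except Exception as exc:
        print('dispatcher thread failed: %s' % exc, file=sys.stderr)
        sys.exit(-1)

pool = ThreadPool()
pool.map_async(handle_request_thread, [('NEW_TRIAL', '{}')])
pool.close()
pool.join()  # in NNI the pool stays open for the experiment's lifetime
```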
diff --git a/tools/nni_cmd/launcher.py b/tools/nni_cmd/launcher.py
index 7fa5f8b974..485329d3d2 100644
--- a/tools/nni_cmd/launcher.py
+++ b/tools/nni_cmd/launcher.py
@@ -203,6 +203,8 @@ def set_experiment(experiment_config, mode, port, config_file_name):
         request_data['description'] = experiment_config['description']
     if experiment_config.get('multiPhase'):
         request_data['multiPhase'] = experiment_config.get('multiPhase')
+    if experiment_config.get('multiThread'):
+        request_data['multiThread'] = experiment_config.get('multiThread')
     if experiment_config.get('advisor'):
         request_data['advisor'] = experiment_config['advisor']
     else:
From 02ee0ac86cae272f8a8edbc474ba00d458085b37 Mon Sep 17 00:00:00 2001
From: Yan Ni
Date: Mon, 7 Jan 2019 19:02:15 +0800
Subject: [PATCH 2/4] Dev weight sharing update doc (#577)

* add pycharm project files to .gitignore list
* update pylintrc to conform vscode settings
* fix RemoteMachineMode for wrong trainingServicePlatform
* simple weight sharing
* update gitignore file
* change tuner codedir to relative path
* add python cache files to gitignore list
* move extract scalar reward logic from dispatcher to tuner
* update tuner code corresponding to last commit
* update doc for receive_trial_result api change
* add numpy to package whitelist of pylint
* distinguish param value from return reward for tuner.extract_scalar_reward
* update pylintrc
* add comments to dispatcher.handle_report_metric_data
* update install for mac support
* fix root mode bug on Makefile
* Quick fix bug: nnictl port value error (#245)
* fix port bug
* Dev exp stop more (#221)
* Exp stop refactor (#161)
* Update RemoteMachineMode.md (#63)
* Remove unused classes for SQuAD QA example.
* Remove more unused functions for SQuAD QA example.
* Fix default dataset config.
* Add Makefile README (#64)
* update document (#92)
* Edit readme.md
* updated a word
* Update GetStarted.md
* Update GetStarted.md
* refact readme, getstarted and write your trial md.
* Update README.md
* Update WriteYourTrial.md
* Update WriteYourTrial.md
* Update WriteYourTrial.md
* Update WriteYourTrial.md
* Fix nnictl bugs and add new feature (#75)
* fix nnictl bug
* fix nnictl create bug
* add experiment status logic
* add more information for nnictl
* fix Evolution Tuner bug
* refactor code
* fix code in updater.py
* fix nnictl --help
* fix classArgs bug
* update check response.status_code logic
* remove Buffer warning (#100)
* update readme in ga_squad
* update readme
* fix typo
* Update README.md
* Update README.md
* Update README.md
* Add support for debugging mode
* fix setup.py (#115)
* Add DAG model configuration format for SQuAD example.
* Explain config format for SQuAD QA model.
* Add more detailed introduction about the evolution algorithm.
* Fix install.sh add add trial log path (#109) * fix nnictl bug * fix nnictl create bug * add experiment status logic * add more information for nnictl * fix Evolution Tuner bug * refactor code * fix code in updater.py * fix nnictl --help * fix classArgs bug * update check response.status_code logic * show trial log path * update document * fix install.sh * set default vallue for maxTrialNum and maxExecDuration * fix nnictl * Dev smac (#116) * support package install (#91) * fix nnictl bug * support package install * update * update package install logic * Fix package install issue (#95) * fix nnictl bug * fix pakcage install * support SMAC as a tuner on nni (#81) * update doc * update doc * update doc * update hyperopt installation * update doc * update doc * update description in setup.py * update setup.py * modify encoding * encoding * add encoding * remove pymc3 * update doc * update builtin tuner spec * support smac in sdk, fix logging issue * support smac tuner * add optimize_mode * update config in nnictl * add __init__.py * update smac * update import path * update setup.py: remove entry_point * update rest server validation * fix bug in nnictl launcher * support classArgs: optimize_mode * quick fix bug * test travis * add dependency * add dependency * add dependency * add dependency * create smac python package * fix trivial points * optimize import of tuners, modify nnictl accordingly * fix bug: incorrect algorithm_name * trivial refactor * for debug * support virtual * update doc of SMAC * update smac requirements * update requirements * change debug mode * update doc * update doc * refactor based on comments * fix comments * modify example config path to relative path and increase maxTrialNum (#94) * modify example config path to relative path and increase maxTrialNum * add document * support conda (#90) (#110) * support install from venv and travis CI * support install from venv and travis CI * support install from venv and travis CI * support conda * support conda * modify example config path to relative path and increase maxTrialNum * undo messy commit * undo messy commit * Support pip install as root (#77) * Typo on #58 (#122) * PAI Training Service implementation (#128) * PAI Training service implementation **1. Implement PAITrainingService **2. Add trial-keeper python module, and modify setup.py to install the module **3. Add PAItrainingService rest server to collect metrics from PAI container. 
* fix datastore for multiple final result (#129) * Update NNI v0.2 release notes (#132) Update NNI v0.2 release notes * Update setup.py Makefile and documents (#130) * update makefile and setup.py * update makefile and setup.py * update document * update document * Update Makefile no travis * update doc * update doc * fix convert from ss to pcs (#133) * Fix bugs about webui (#131) * Fix webui bugs * Fix tslint * webui logpath and document (#135) * Add webui document and logpath as a href * fix tslint * fix comments by Chengmin * Pai training service bug fix and enhancement (#136) * Add NNI installation scripts * Update pai script, update NNI_out_dir * Update NNI dir in nni sdk local.py * Create .nni folder in nni sdk local.py * Add check before creating .nni folder * Fix typo for PAI_INSTALL_NNI_SHELL_FORMAT * Improve annotation (#138) * Improve annotation * Minor bugfix * Selectively install through pip (#139) Selectively install through pip * update setup.py * fix paiTrainingService bugs (#137) * fix nnictl bug * add hdfs host validation * fix bugs * fix dockerfile * fix install.sh * update install.sh * fix dockerfile * Set timeout for HDFSUtility exists function * remove unused TODO * fix sdk * add optional for outputDir and dataDir * refactor dockerfile.base * Remove unused import in hdfsclientUtility * Add documentation for NNI PAI mode experiment (#141) * Add documentation for NNI PAI mode * Fix typo based on PR comments * Exit with subprocess return code of trial keeper * Remove additional exit code * Fix typo based on PR comments * update doc for smac tuner (#140) * Revert "Selectively install through pip (#139)" due to potential pip install issue (#142) * Revert "Selectively install through pip (#139)" This reverts commit 1d174836d3146a0363e9c9c88094bf9cff865faa. * Add exit code of subprocess for trial_keeper * Update README, add link to PAImode doc * Merge branch V0.2 to Master (#143) * webui logpath and document (#135) * Add webui document and logpath as a href * fix tslint * fix comments by Chengmin * Pai training service bug fix and enhancement (#136) * Add NNI installation scripts * Update pai script, update NNI_out_dir * Update NNI dir in nni sdk local.py * Create .nni folder in nni sdk local.py * Add check before creating .nni folder * Fix typo for PAI_INSTALL_NNI_SHELL_FORMAT * Improve annotation (#138) * Improve annotation * Minor bugfix * Selectively install through pip (#139) Selectively install through pip * update setup.py * fix paiTrainingService bugs (#137) * fix nnictl bug * add hdfs host validation * fix bugs * fix dockerfile * fix install.sh * update install.sh * fix dockerfile * Set timeout for HDFSUtility exists function * remove unused TODO * fix sdk * add optional for outputDir and dataDir * refactor dockerfile.base * Remove unused import in hdfsclientUtility * Add documentation for NNI PAI mode experiment (#141) * Add documentation for NNI PAI mode * Fix typo based on PR comments * Exit with subprocess return code of trial keeper * Remove additional exit code * Fix typo based on PR comments * update doc for smac tuner (#140) * Revert "Selectively install through pip (#139)" due to potential pip install issue (#142) * Revert "Selectively install through pip (#139)" This reverts commit 1d174836d3146a0363e9c9c88094bf9cff865faa. 
* Add exit code of subprocess for trial_keeper * Update README, add link to PAImode doc * fix bug (#147) * Refactor nnictl and add config_pai.yml (#144) * fix nnictl bug * add hdfs host validation * fix bugs * fix dockerfile * fix install.sh * update install.sh * fix dockerfile * Set timeout for HDFSUtility exists function * remove unused TODO * fix sdk * add optional for outputDir and dataDir * refactor dockerfile.base * Remove unused import in hdfsclientUtility * add config_pai.yml * refactor nnictl create logic and add colorful print * fix nnictl stop logic * add annotation for config_pai.yml * add document for start experiment * fix config.yml * fix document * Fix trial keeper wrongly exit issue (#152) * Fix trial keeper bug, use actual exitcode to exit rather than 1 * Fix bug of table sort (#145) * Update doc for PAIMode and v0.2 release notes (#153) * Update v0.2 documentation regards to release note and PAI training service * Update document to describe NNI docker image * fix antd (#159) * refactor experiment stopping logic * support change concurrency * remove trialJobs.ts * trivial changes * fix bugs * fix bug * support updating maxTrialNum * Modify IT scripts for supporting multiple experiments * Update ci (#175) * Update RemoteMachineMode.md (#63) * Remove unused classes for SQuAD QA example. * Remove more unused functions for SQuAD QA example. * Fix default dataset config. * Add Makefile README (#64) * update document (#92) * Edit readme.md * updated a word * Update GetStarted.md * Update GetStarted.md * refact readme, getstarted and write your trial md. * Update README.md * Update WriteYourTrial.md * Update WriteYourTrial.md * Update WriteYourTrial.md * Update WriteYourTrial.md * Fix nnictl bugs and add new feature (#75) * fix nnictl bug * fix nnictl create bug * add experiment status logic * add more information for nnictl * fix Evolution Tuner bug * refactor code * fix code in updater.py * fix nnictl --help * fix classArgs bug * update check response.status_code logic * remove Buffer warning (#100) * update readme in ga_squad * update readme * fix typo * Update README.md * Update README.md * Update README.md * Add support for debugging mode * modify CI cuz of refracting exp stop * update CI for expstop * update CI for expstop * update CI for expstop * update CI for expstop * update CI for expstop * update CI for expstop * update CI for expstop * update CI for expstop * update CI for expstop * file saving * fix issues from code merge * remove $(INSTALL_PREFIX)/nni/nni_manager before install * fix indent * fix merge issue * socket close * update port * fix merge error * modify ci logic in nnimanager * fix ci * fix bug * change suspended to done * update ci (#229) * update ci * update ci * update ci (#232) * update ci * update ci * update azure-pipelines * update azure-pipelines * update ci (#233) * update ci * update ci * update azure-pipelines * update azure-pipelines * update azure-pipelines * run.py (#238) * Nnupdate ci (#239) * run.py * test ci * Nnupdate ci (#240) * run.py * test ci * test ci * Udci (#241) * run.py * test ci * test ci * test ci * update ci (#242) * run.py * test ci * test ci * test ci * update ci * revert install.sh (#244) * run.py * test ci * test ci * test ci * update ci * revert install.sh * add comments * remove assert * trivial change * trivial change * update Makefile (#246) * update Makefile * update Makefile * quick fix for ci (#248) * add update trialNum and fix bugs (#261) * Add builtin tuner to CI (#247) * update Makefile * update Makefile * 
add builtin-tuner test
* add builtin-tuner test
* refractor ci
* update azure.yml
* add built-in tuner test
* fix bugs
* Doc refactor (#258)
* doc refactor
* image name refactor
* Refactor nnictl to support listing stopped experiments. (#256)
Refactor nnictl to support listing stopped experiments.
* Show experiment parameters more beautifully (#262)
* fix error on example of RemoteMachineMode (#269)
* add pycharm project files to .gitignore list
* update pylintrc to conform vscode settings
* fix RemoteMachineMode for wrong trainingServicePlatform
* Update docker file to use latest nni release (#263)
* fix bug about execDuration and endTime (#270)
* fix bug about execDuration and endTime
* modify time interval to 30 seconds
* refactor based on Gems's suggestion
* for triggering ci
* Refactor dockerfile (#264)
* refactor Dockerfile
* Support nnictl tensorboard (#268)
support tensorboard
* Sdk update (#272)
* Rename get_parameters to get_next_parameter
* annotations add get_next_parameter
* updates
* updates
* updates
* updates
* updates
* add experiment log path to experiment profile (#276)
* refactor extract reward from dict by tuner
* update Makefile for mac support, wait for aka.ms support
* refix Makefile for colorful echo
* unversion config.yml with machine information
* sync graph.py between tuners & trial of ga_squad
* sync graph.py between tuners & trial of ga_squad
* copy weight shared ga_squad under weight_sharing folder
* mv ga_squad code back to master
* simple tuner & trial ready
* Fix nnictl multiThread option
* weight sharing with async dispatcher simple example ready
* update for ga_squad
* fix bug
* modify multihead attention name
* add min_layer_num to Graph
* fix bug
* update share id calc
* fix bug
* add save logging
* fix ga_squad tuner bug
* sync bug fix for ga_squad tuner
* fix same hash_id bug
* add lock to simple tuner in weight sharing
* Add readme to simple weight sharing
* update
* update
* add paper link
* update
* reformat with autopep8
* add documentation for weight sharing
* test for weight sharing
* delete irrelevant files
* move details of weight sharing in to code comments
* add example section
---
 docs/AdvancedNAS.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/docs/AdvancedNAS.md b/docs/AdvancedNAS.md
index 3d2dd986bb..5306a36b7f 100644
--- a/docs/AdvancedNAS.md
+++ b/docs/AdvancedNAS.md
@@ -1,7 +1,7 @@
 # Tutorial for Advanced Neural Architecture Search
 Currently many NAS algorithms leverage the technique of **weight sharing** among trials to accelerate the training process. For example, [ENAS][1] delivers 1000x efficiency with '_parameter sharing between child models_', compared with the previous [NASNet][2] algorithm. Other NAS algorithms such as [DARTS][3], [Network Morphism][4], and [Evolution][5] are also leveraging, or have the potential to leverage, weight sharing.
-This is a tutorial on how to enable weight sharing in NNI. 
+This is a tutorial on how to enable weight sharing in NNI.
 
 ## Weight Sharing among trials
 Currently we recommend sharing weights through NFS (Network File System), which supports sharing files across machines and is lightweight and (relatively) efficient. We also welcome contributions from the community on more efficient techniques.
@@ -63,6 +63,8 @@ The feature of weight sharing enables trials from different machines, in which m
 self.events[parameter_id].set()
 ```
 
+## Examples
+For details, please refer to this [simple weight sharing example](../test/async_sharing_test); a condensed, runnable sketch of the check it performs follows.
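To make that check concrete, here is a self-contained sketch of the consistency test the simple example performs. A temporary directory stands in for the `/mnt/nfs/nni` mount, `check_sum` mirrors the helper in `test/async_sharing_test/main.py`, and the father/child roles are collapsed into one process purely for illustration:

```python
import hashlib
import os
import tempfile

def check_sum(fl_name):
    # same MD5-over-chunks digest as test/async_sharing_test/main.py
    hasher = hashlib.md5()
    with open(fl_name, 'rb') as fin:
        for chunk in iter(lambda: fin.read(4096), b""):
            hasher.update(chunk)
    return hasher.hexdigest()

shared_root = tempfile.mkdtemp()            # stands in for /mnt/nfs/nni
model_file = os.path.join(shared_root, '0', 'model.dat')
os.makedirs(os.path.dirname(model_file))
with open(model_file, 'wb') as fout:        # "father" trial publishes its weights
    fout.write(os.urandom(2048))
father_sum = check_sum(model_file)          # father reports this checksum
child_sum = check_sum(model_file)           # "child" trial re-reads the shared file
assert child_sum == father_sum, 'Inconsistency in weight sharing!'
print('shared file verified:', father_sum)
```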
We also provided a [practice example](../examples/trials/weight_sharing/ga_squad) for reading comprehension, based on previous [ga_squad](../examples/trials/ga_squad) example. [1]: https://arxiv.org/abs/1802.03268 [2]: https://arxiv.org/abs/1707.07012 From 6a03a73efba2987866443f3e2d6884d09c228490 Mon Sep 17 00:00:00 2001 From: Yan Ni Date: Tue, 8 Jan 2019 13:38:23 +0800 Subject: [PATCH 3/4] Dev weight sharing update (#579) * add pycharm project files to .gitignore list * update pylintrc to conform vscode settings * fix RemoteMachineMode for wrong trainingServicePlatform * simple weight sharing * update gitignore file * change tuner codedir to relative path * add python cache files to gitignore list * move extract scalar reward logic from dispatcher to tuner * update tuner code corresponding to last commit * update doc for receive_trial_result api change * add numpy to package whitelist of pylint * distinguish param value from return reward for tuner.extract_scalar_reward * update pylintrc * add comments to dispatcher.handle_report_metric_data * update install for mac support * fix root mode bug on Makefile * Quick fix bug: nnictl port value error (#245) * fix port bug * Dev exp stop more (#221) * Exp stop refactor (#161) * Update RemoteMachineMode.md (#63) * Remove unused classes for SQuAD QA example. * Remove more unused functions for SQuAD QA example. * Fix default dataset config. * Add Makefile README (#64) * update document (#92) * Edit readme.md * updated a word * Update GetStarted.md * Update GetStarted.md * refact readme, getstarted and write your trial md. * Update README.md * Update WriteYourTrial.md * Update WriteYourTrial.md * Update WriteYourTrial.md * Update WriteYourTrial.md * Fix nnictl bugs and add new feature (#75) * fix nnictl bug * fix nnictl create bug * add experiment status logic * add more information for nnictl * fix Evolution Tuner bug * refactor code * fix code in updater.py * fix nnictl --help * fix classArgs bug * update check response.status_code logic * remove Buffer warning (#100) * update readme in ga_squad * update readme * fix typo * Update README.md * Update README.md * Update README.md * Add support for debugging mode * fix setup.py (#115) * Add DAG model configuration format for SQuAD example. * Explain config format for SQuAD QA model. * Add more detailed introduction about the evolution algorithm. 
* Fix install.sh add add trial log path (#109) * fix nnictl bug * fix nnictl create bug * add experiment status logic * add more information for nnictl * fix Evolution Tuner bug * refactor code * fix code in updater.py * fix nnictl --help * fix classArgs bug * update check response.status_code logic * show trial log path * update document * fix install.sh * set default vallue for maxTrialNum and maxExecDuration * fix nnictl * Dev smac (#116) * support package install (#91) * fix nnictl bug * support package install * update * update package install logic * Fix package install issue (#95) * fix nnictl bug * fix pakcage install * support SMAC as a tuner on nni (#81) * update doc * update doc * update doc * update hyperopt installation * update doc * update doc * update description in setup.py * update setup.py * modify encoding * encoding * add encoding * remove pymc3 * update doc * update builtin tuner spec * support smac in sdk, fix logging issue * support smac tuner * add optimize_mode * update config in nnictl * add __init__.py * update smac * update import path * update setup.py: remove entry_point * update rest server validation * fix bug in nnictl launcher * support classArgs: optimize_mode * quick fix bug * test travis * add dependency * add dependency * add dependency * add dependency * create smac python package * fix trivial points * optimize import of tuners, modify nnictl accordingly * fix bug: incorrect algorithm_name * trivial refactor * for debug * support virtual * update doc of SMAC * update smac requirements * update requirements * change debug mode * update doc * update doc * refactor based on comments * fix comments * modify example config path to relative path and increase maxTrialNum (#94) * modify example config path to relative path and increase maxTrialNum * add document * support conda (#90) (#110) * support install from venv and travis CI * support install from venv and travis CI * support install from venv and travis CI * support conda * support conda * modify example config path to relative path and increase maxTrialNum * undo messy commit * undo messy commit * Support pip install as root (#77) * Typo on #58 (#122) * PAI Training Service implementation (#128) * PAI Training service implementation **1. Implement PAITrainingService **2. Add trial-keeper python module, and modify setup.py to install the module **3. Add PAItrainingService rest server to collect metrics from PAI container. 
* fix datastore for multiple final result (#129) * Update NNI v0.2 release notes (#132) Update NNI v0.2 release notes * Update setup.py Makefile and documents (#130) * update makefile and setup.py * update makefile and setup.py * update document * update document * Update Makefile no travis * update doc * update doc * fix convert from ss to pcs (#133) * Fix bugs about webui (#131) * Fix webui bugs * Fix tslint * webui logpath and document (#135) * Add webui document and logpath as a href * fix tslint * fix comments by Chengmin * Pai training service bug fix and enhancement (#136) * Add NNI installation scripts * Update pai script, update NNI_out_dir * Update NNI dir in nni sdk local.py * Create .nni folder in nni sdk local.py * Add check before creating .nni folder * Fix typo for PAI_INSTALL_NNI_SHELL_FORMAT * Improve annotation (#138) * Improve annotation * Minor bugfix * Selectively install through pip (#139) Selectively install through pip * update setup.py * fix paiTrainingService bugs (#137) * fix nnictl bug * add hdfs host validation * fix bugs * fix dockerfile * fix install.sh * update install.sh * fix dockerfile * Set timeout for HDFSUtility exists function * remove unused TODO * fix sdk * add optional for outputDir and dataDir * refactor dockerfile.base * Remove unused import in hdfsclientUtility * Add documentation for NNI PAI mode experiment (#141) * Add documentation for NNI PAI mode * Fix typo based on PR comments * Exit with subprocess return code of trial keeper * Remove additional exit code * Fix typo based on PR comments * update doc for smac tuner (#140) * Revert "Selectively install through pip (#139)" due to potential pip install issue (#142) * Revert "Selectively install through pip (#139)" This reverts commit 1d174836d3146a0363e9c9c88094bf9cff865faa. * Add exit code of subprocess for trial_keeper * Update README, add link to PAImode doc * Merge branch V0.2 to Master (#143) * webui logpath and document (#135) * Add webui document and logpath as a href * fix tslint * fix comments by Chengmin * Pai training service bug fix and enhancement (#136) * Add NNI installation scripts * Update pai script, update NNI_out_dir * Update NNI dir in nni sdk local.py * Create .nni folder in nni sdk local.py * Add check before creating .nni folder * Fix typo for PAI_INSTALL_NNI_SHELL_FORMAT * Improve annotation (#138) * Improve annotation * Minor bugfix * Selectively install through pip (#139) Selectively install through pip * update setup.py * fix paiTrainingService bugs (#137) * fix nnictl bug * add hdfs host validation * fix bugs * fix dockerfile * fix install.sh * update install.sh * fix dockerfile * Set timeout for HDFSUtility exists function * remove unused TODO * fix sdk * add optional for outputDir and dataDir * refactor dockerfile.base * Remove unused import in hdfsclientUtility * Add documentation for NNI PAI mode experiment (#141) * Add documentation for NNI PAI mode * Fix typo based on PR comments * Exit with subprocess return code of trial keeper * Remove additional exit code * Fix typo based on PR comments * update doc for smac tuner (#140) * Revert "Selectively install through pip (#139)" due to potential pip install issue (#142) * Revert "Selectively install through pip (#139)" This reverts commit 1d174836d3146a0363e9c9c88094bf9cff865faa. 
* Add exit code of subprocess for trial_keeper * Update README, add link to PAImode doc * fix bug (#147) * Refactor nnictl and add config_pai.yml (#144) * fix nnictl bug * add hdfs host validation * fix bugs * fix dockerfile * fix install.sh * update install.sh * fix dockerfile * Set timeout for HDFSUtility exists function * remove unused TODO * fix sdk * add optional for outputDir and dataDir * refactor dockerfile.base * Remove unused import in hdfsclientUtility * add config_pai.yml * refactor nnictl create logic and add colorful print * fix nnictl stop logic * add annotation for config_pai.yml * add document for start experiment * fix config.yml * fix document * Fix trial keeper wrongly exit issue (#152) * Fix trial keeper bug, use actual exitcode to exit rather than 1 * Fix bug of table sort (#145) * Update doc for PAIMode and v0.2 release notes (#153) * Update v0.2 documentation regards to release note and PAI training service * Update document to describe NNI docker image * fix antd (#159) * refactor experiment stopping logic * support change concurrency * remove trialJobs.ts * trivial changes * fix bugs * fix bug * support updating maxTrialNum * Modify IT scripts for supporting multiple experiments * Update ci (#175) * Update RemoteMachineMode.md (#63) * Remove unused classes for SQuAD QA example. * Remove more unused functions for SQuAD QA example. * Fix default dataset config. * Add Makefile README (#64) * update document (#92) * Edit readme.md * updated a word * Update GetStarted.md * Update GetStarted.md * refact readme, getstarted and write your trial md. * Update README.md * Update WriteYourTrial.md * Update WriteYourTrial.md * Update WriteYourTrial.md * Update WriteYourTrial.md * Fix nnictl bugs and add new feature (#75) * fix nnictl bug * fix nnictl create bug * add experiment status logic * add more information for nnictl * fix Evolution Tuner bug * refactor code * fix code in updater.py * fix nnictl --help * fix classArgs bug * update check response.status_code logic * remove Buffer warning (#100) * update readme in ga_squad * update readme * fix typo * Update README.md * Update README.md * Update README.md * Add support for debugging mode * modify CI cuz of refracting exp stop * update CI for expstop * update CI for expstop * update CI for expstop * update CI for expstop * update CI for expstop * update CI for expstop * update CI for expstop * update CI for expstop * update CI for expstop * file saving * fix issues from code merge * remove $(INSTALL_PREFIX)/nni/nni_manager before install * fix indent * fix merge issue * socket close * update port * fix merge error * modify ci logic in nnimanager * fix ci * fix bug * change suspended to done * update ci (#229) * update ci * update ci * update ci (#232) * update ci * update ci * update azure-pipelines * update azure-pipelines * update ci (#233) * update ci * update ci * update azure-pipelines * update azure-pipelines * update azure-pipelines * run.py (#238) * Nnupdate ci (#239) * run.py * test ci * Nnupdate ci (#240) * run.py * test ci * test ci * Udci (#241) * run.py * test ci * test ci * test ci * update ci (#242) * run.py * test ci * test ci * test ci * update ci * revert install.sh (#244) * run.py * test ci * test ci * test ci * update ci * revert install.sh * add comments * remove assert * trivial change * trivial change * update Makefile (#246) * update Makefile * update Makefile * quick fix for ci (#248) * add update trialNum and fix bugs (#261) * Add builtin tuner to CI (#247) * update Makefile * update Makefile * 
add builtin-tuner test * add builtin-tuner test * refractor ci * update azure.yml * add built-in tuner test * fix bugs * Doc refactor (#258) * doc refactor * image name refactor * Refactor nnictl to support listing stopped experiments. (#256) Refactor nnictl to support listing stopped experiments. * Show experiment parameters more beautifully (#262) * fix error on example of RemoteMachineMode (#269) * add pycharm project files to .gitignore list * update pylintrc to conform vscode settings * fix RemoteMachineMode for wrong trainingServicePlatform * Update docker file to use latest nni release (#263) * fix bug about execDuration and endTime (#270) * fix bug about execDuration and endTime * modify time interval to 30 seconds * refactor based on Gems's suggestion * for triggering ci * Refactor dockerfile (#264) * refactor Dockerfile * Support nnictl tensorboard (#268) support tensorboard * Sdk update (#272) * Rename get_parameters to get_next_parameter * annotations add get_next_parameter * updates * updates * updates * updates * updates * add experiment log path to experiment profile (#276) * refactor extract reward from dict by tuner * update Makefile for mac support, wait for aka.ms support * refix Makefile for colorful echo * unversion config.yml with machine information * sync graph.py between tuners & trial of ga_squad * sync graph.py between tuners & trial of ga_squad * copy weight shared ga_squad under weight_sharing folder * mv ga_squad code back to master * simple tuner & trial ready * Fix nnictl multiThread option * weight sharing with async dispatcher simple example ready * update for ga_squad * fix bug * modify multihead attention name * add min_layer_num to Graph * fix bug * update share id calc * fix bug * add save logging * fix ga_squad tuner bug * sync bug fix for ga_squad tuner * fix same hash_id bug * add lock to simple tuner in weight sharing * Add readme to simple weight sharing * update * update * add paper link * update * reformat with autopep8 * add documentation for weight sharing * test for weight sharing * delete irrelevant files * move details of weight sharing in to code comments * add example section * update weight sharing tutorial --- docs/AdvancedNAS.md | 36 +++++++++++++++++++++++++----------- docs/img/weight_sharing.png | Bin 0 -> 71354 bytes 2 files changed, 25 insertions(+), 11 deletions(-) create mode 100644 docs/img/weight_sharing.png diff --git a/docs/AdvancedNAS.md b/docs/AdvancedNAS.md index 5306a36b7f..65ecd34100 100644 --- a/docs/AdvancedNAS.md +++ b/docs/AdvancedNAS.md @@ -6,6 +6,31 @@ This is a tutorial on how to enable weight sharing in NNI. ## Weight Sharing among trials Currently we recommend sharing weights through NFS (Network File System), which supports sharing files across machines, and is light-weighted, (relatively) efficient. We also welcome contributions from the community on more efficient techniques. +### Weight Sharing through NFS file +With the NFS setup (see below), trial code can share model weight through loading & saving files. Here we recommend that user feed the tuner with the storage path: +```yaml +tuner: + codeDir: path/to/customer_tuner + classFileName: customer_tuner.py + className: CustomerTuner + classArgs: + ... 
+  save_dir_root: /nfs/storage/path/
+```
+And let the tuner decide where to save and load weights, feeding the paths to trials through `nni.get_next_parameter()`:
+
+![weight_sharing_design](./img/weight_sharing.png)
+
+For example, in TensorFlow:
+```python
+# save models
+saver = tf.train.Saver()
+saver.save(sess, os.path.join(params['save_path'], 'model.ckpt'))
+# load models
+saver.restore(sess, os.path.join(params['restore_path'], 'model.ckpt'))
+```
+where `'save_path'` and `'restore_path'` are hyper-parameters managed by the tuner.
+
 ### NFS Setup
 In NFS, files are physically stored on a server machine, and trials on the client machine can read/write those files in the same way that they access local files.
 
@@ -34,17 +59,6 @@ sudo mount -t nfs 10.10.10.10:/tmp/nni/shared /mnt/nfs/nni
 ```
 where `10.10.10.10` should be replaced by the real IP of the NFS server machine in practice.
 
-### Weight Sharing through NFS file
-With the NFS setup, trial code can share model weight through loading & saving files. For example, in tensorflow:
-```python
-# save models
-saver = tf.train.Saver()
-saver.save(sess, os.path.join(params['save_path'], 'model.ckpt'))
-# load models
-tf.init_from_checkpoint(params['restore_path'])
-```
-where `'save_path'` and `'restore_path'` in hyper-parameter can be managed by the tuner.
-
 ## Asynchronous Dispatcher Mode for trial dependency control
 The feature of weight sharing lets trials on different machines depend on each other's files, so **read after write** consistency must be assured most of the time: the child model should not load the parent model before the parent trial finishes training. To deal with this, users can enable **asynchronous dispatcher mode** with `multiThread: true` in NNI's `config.yml`; the dispatcher then assigns a tuner thread each time a `NEW_TRIAL` request comes in, and that tuner thread can decide when to submit a new trial by blocking and unblocking itself.
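Pulling the tutorial together, below is a hedged sketch of a tuner that both hands out `save_path`/`restore_path` rooted at the `save_dir_root` classArg from the YAML above and blocks child trials until the parent has reported, in the spirit of the dispatcher mode just described. This `CustomerTuner` is hypothetical, not an NNI built-in; only the `Tuner` interface comes from the SDK:

```python
import os
from threading import Event, Lock

from nni.tuner import Tuner

class CustomerTuner(Tuner):
    """Hypothetical tuner: assigns save/restore paths on shared storage and
    blocks child trials until the first (parent) trial has reported."""

    def __init__(self, save_dir_root):
        super(CustomerTuner, self).__init__()
        self.save_dir_root = save_dir_root
        self.parent_id = None
        self.parent_done = Event()
        self.lock = Lock()

    def _trial_dir(self, parameter_id):
        return os.path.join(self.save_dir_root, str(parameter_id))

    def generate_parameters(self, parameter_id):
        with self.lock:
            if self.parent_id is None:
                # first request becomes the parent and trains from scratch
                self.parent_id = parameter_id
                return {'save_path': self._trial_dir(parameter_id),
                        'restore_path': None}
        # children block their dispatcher thread until the parent reports
        self.parent_done.wait()
        return {'save_path': self._trial_dir(parameter_id),
                'restore_path': self._trial_dir(self.parent_id)}

    def receive_trial_result(self, parameter_id, parameters, reward):
        if parameter_id == self.parent_id:
            self.parent_done.set()  # unblock all waiting child threads

    def update_search_space(self, search_space):
        pass
```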
 For example:
 ```python
diff --git a/docs/img/weight_sharing.png b/docs/img/weight_sharing.png
new file mode 100644
index 0000000000000000000000000000000000000000..dbf087d020d1f27858ddcddbf46dece6087f289a
GIT binary patch
literal 71354
[71354 bytes of base85-encoded PNG data for docs/img/weight_sharing.png omitted]
zB%XJauILq*u1pE2X&`VgP&$~BGOspEnetp*cViPUCDyYhC&o%w&I@R?-#|bv5Zyk; zX?L~~sVip+!o>Nuy>&L1peAXHrF?OJ9>T+{Sd<<3rIu1=7wAkl4)Bz&%z#J_zhq2B z1frmI06&vX+GteuAe-K9ykQ?3P+jpite_m)UVj^6{Cs!K_w9?FLs9`#8gXSZ=+=d{ zl{@2)gLk`MG*oBqpl~n_XC7iqGt!3lYYsVuyaSY0t(&v}K?Qp`HQ4xEbqW@LgJ;RLfTb^0$#7 zluEAEi&?l>$=us@LOcQeEzyWFSbUQ z&e`XU`ZuT3j2vKQQapM?!y`Z-{%v+U?R2iz`t(c{UAXIo7_1lEpphGyqHX2x>R8k) zBWHNO6pPc7AH~cF{Fk%L#N-tRQUMc?Xpu1ryDdLt{EW!U+U{jaZ%tQ9Ynb3Bw9eKo z{@P$i{F>`=(|f9@m8`7R68q#7iShm2wK*PbTeZWab-l76`%}HlJNUeD06(`%_znh$ zHW<~Js>^X|ESaZgMCyWu2o_dh0|$p~jfj_{4Ms&j+H(p_!Xf)!I;Io|A$eOQ3s?_; z^RQYoEV%{oAGBUoDVRP-{6)~iT;nVitcXQhUH-OnvV>Bx@egzE5f-|_5;cS5v>2>6 zwJzXP_J{X=^;}KA!y!dC(xwP}i(E%{c<^G*A19=8>mu@FKagK*r|uxtbV8B1xrTNZ zW&l8rKVu93enDL@f%cFS(zU25D!4Du0@!)C4IZ8PJ5nv|%3uQ7QaK3LVZ*1XLKCp{ zC>EM}R`Nm1a#0*lIfTG>fq4JNtZgf(Jf5Fc$$miGpZh?;vKA$Q@e8N~Ztiu5+oTeF z(Dh9OMcK{XV#71P?AWgehmj1h^loP&1vbNm8%@9OP`OyA+n{3>rEFsC5wA?>h!m{+ zy;8Q>dtO$xpBfQaRxjNNy=WyUf9~GGe+M4wm}RMP&>LfF-j8xBxm0j6d7#x1P!QR; z5q4Jcmq*>%Ee8~r?v{fj)@287SJIr6vs1c^vpk^At!sM`f4iV@Py*0nr-_INGkJD9 z&Ps&I0c7C9yw237{)8cT4Xjl>P*Kkb?0+505o1jNFH;$eqyQ9=qK<1G^lofZX@U}5 zB^E9-cG8Sg%S<>i8CL(r`x{kd*yP`Di1j|JK8{#y)D}$w)fdOWZnXstV)4A5PP7;X za|t4U(DjJK0+n-CHX;M15M3`NKjvr9Te{uLVan5Dx)v*R+0Gqyzy7AV82v;7AR;x_ z`dpRd6c6Ja=qxqX0ro;L%Xm}-ev(DX!tDZ8<gMsS3&?0AspLM0U(U*j z_iT(y%H_{5)_OK@`nEVV)wlSWH?`dQ^_T~@7Gor1d$ho2Z3Q5nI;{n!`&g$$lt3K4 zPw7H|l1KTgFIoYaS6~KHJfC>26^fG*H-sLq(}~dw1>>_^0595&ySbB&F8-I?`~cpl5}7%NTc_x&!?XSu8S*C5c?vYS$W_5nJ2p245qAwj!NU* zRahc%TQq_AqI|Ay?81$hd0Q$V2_@v<1y`_iW=iny6V+&{~F%XCOj;%**ZBpp(9@eWtuivgG* zm{pH21sj@Qi9tCcR0YPf46l6|{(8yQY6!jib}Q|eMsaSBTe*2>wTP($0MZUguDgp# z&q|tixEBii|NZ`C8Cw2S;KQIK9%3R`*z#KdcLq$M4|7^MQix`c@jt&aA8QWgD#Nl- z{^^G=x}vaqcim(XaqUx*$bbSvYc&0>xBsS3jcK57NNF{zdG}Y`BBYoCKz6A3Pi)bG zA9WK5UAb!|{{8=1YMIi46WQIXU@3_z+2m6w&c0_v;JQ(CwvGd%z{X*!VgjgEUH>hGTsRJoZ&h9d$Az{~E=$f69a z1%-W-=$;{-CT%Elyi8V25cjSafNp_bZkGuew@wGkZ4k2##Kk>+;TU|*17`4_nPR5Z zq#*v3%WHph7wgW5d)~&V0O)PRj~{uRuq%;x5Kq2BP6yCT<=e2*uFUhJpYLx zbgE6%e$sN6P5s;&e@pEO=mXR7BE<;?DqONswp@u$3v?FgIx>=4MUp%jNW!jVZ(11P z*SZC?uF^ZY^-G5JevN1VwG`(}p@d4uw#V1e?@Hu6))8Q%8UqAs#x1v$O?bqfHi2RkJQ{=adIPypuyJ6o_a1YD&l9l)F zAXXY5V3|LHOWFSlv@)VpRabgP;pyj=u*G3nx+UPzyyZ4Y%eK7BVzgW%(1k#l3V>K) zyP2x#4cCOZStM3kEM(2jcb znpdb)XMQ-y*SGy^2@tXdlHWqKNY3a1SPSB!Fx_?DS0A!l(WW|u9R>JmifHF|cx5jX zK&NfGQ{A7pp8`w)ht2Bv%S|*>ltceBQ9+6pEW2ydD?~?Mt-HnreGHj&(zN=&E-o=? zqBsAH#TykJPt0EiaiVX}M+cMN0Z-QO%Siu=US380WEmndi2>e^nLL-#u~>OPr?Z@1x?_8g^CyHEt82ADSm(`(rv957&Eb*Pc-H=+7OOOBz+5U=m>;DX&m@&|O0C8McrR5_889;yZcrzS$ zZD_NdU8Ngwri;d726~5Hg?d|VsM0OBPpz@m4OvYmj*-#te_+BvhoUKv5rJC9$w2YpZ{{oCLFC1t(^}(v$YLABs(GFc(LhFy0D+ zYwLwFfb-shJnp{7)q$I`LPmjDkN?L+)YzLT<(>pnRcgi?Z*?H(5`7N8(=&Rlcof}~ zITA`_v%fJ4hz_U<$lRkxp{ElDM2Nf;LDgZz9Nk;SDpGAq7X@wnS z>Rd{M?cE737^RRJblMmADwFyizL`wY0vHf zr(gfH@niH`saG4$u0?e8DhoA$AY9Xo7yyK8b^yl~D<_Y16?cRHom<4drrMZKahMMbz;gyD0lN%H}MkOz@`Kt>p6UA}WL_Hjgl+<62x6((!S zHLYDmOQ?z(qe2GU-x2ev1E7wY^eJdB1h7?1QX`|VaViD>09P!)xC$N4|6Z|YYhK#^ zKPxfDk&@HbflKcd;2ULBZ2pjWLQ|s>NTIs*EMP??1;Lp*$suXSbDJOJ;a(x~vAUC+ z6|-H>udZW8iUYI*3Y~JetcNFc0*JJ2SKW7#ge08PNIeo?_uid}ls%t%Wbb<|!crnq zC$i{*0o>_4U2;Wc_ZOzlO{0tc1)48;w?`E#oyNC2tVB z=&cl7&=D?-B9}n=|4IMBQJ3R0y_X?rO9npoS=`$AvPlK*4aWcskrpd~G~x}{dm*3e zNrwrL6`utrE4~Qa_QApkK_9l{H$JYMXAiNr%#U;3x9Ev_S~hkik|urca>Z|>N%SZF zuCbmRyZuYpPg~1P8~`E!m46(PK=J0SS*$L2z4O`S^MKhk`att_2Jo`UoppG zMr>aZQR>d_-UP+t9DP69bgw6In8}$;f;i%*;Sux7);#QH8LNLVdJ;S`O!VI zhuym`jS7dbxW^wyG_U{J`p zK=$=CxI=JE_`_oN-mI66aGZXidlpQZK+Nm6d<qNj0F)>#37{2B}Gj}5&HZJZ4Xr? 
z;hyLIw)6J8cCuma>9Rr}CbCug#Q5qs!_dw||PV63z)1#uvi+g{+z5C~P)_@}cOa`>BHz z#-|t$0MVgStFuG47Y5DqC+6KswHQAg@ezK=J#vC-Pt$ssQfLru_yup!tF+$cCb4X{nL|HZP9%Kf#_f2QM$Tv9?z#u~#hYGMGRsb4Os5D?hhMUSD{k{BI@^|J>)L z4lEz&9Gx|VignJ|=NSuKve0|$NYraCc3vj=5Sc16E^Q!BXAGKDc{*f!IL7Ey5bUc^ zz$l;mpdRmwi5oMe0yQC5c_eK{I`zCatAl!KeZdLIia+H^j1yi6mV+9Bb~bYJozSRn zk2AjMsH#QUxFo}Mrip)tWr`ZjgAmL#iNe=*)Vxl=t)5;Q328y7JM$MKDL!1BAx|kX zpmB|LOk2O1AN}-ai?wX!9q*Ge%*Zmbj@AG5z9fZZTE`xH^u*lFO^`8_aa}6DL)E{+a_suY=H8N zOqs-aq&I|Ml@Y*=GU&$i#8RBMm7lgdg>T|PSTgia=#j#_H!1w9pTfut{jJCQ`X2Q> zI4wRd&rfzj{hlHxjj13vC4emSrN)VFF05=wPq{Wjp0~_Zmaix8zlQr18T65@bX9|7 zZIWQj5t{_ZGZe>nrSnxfl^Nul^P3uZI;0!(IXcy3#pk@L%WDpBqjr~+Ve+)G4V^Y$ z;sq-dAs6{g^md05wP7OF4f{IzG%hnc%3Zt9a{OUcI2lSkqG~t`g=D&;q-eTtAe4Kd z(zM5K#gu49)5V6l0s+0~R$)8_@3#s&-mp02VQSvw7icj98=f39JToN9cewz0$0W1L zJ_1fuw&`OcL{K6xhf*Cvz~XeK+M<}L((l}2FjE^v{-`IKcE%;;GM5-db9fYnDeA5` zK_$nIgh4TAdHkGV{7>H_Bpgq2_5RKoZf84kUZ>@cqETc*SMqRK*$FBJqS?c^FxcZ^ z@=d`qzA$^ez*HyZ758a!q3CaF&8f$?s(U#hqu*0D?{*IY6)9EtKu6B`MUeA!SKGCC z%M6SLM=EUlUFDD#E_HIp)aX#5+K26o_dm_%To;^*u)UtGoI@Qk`ZiUBvO5KM9b7*w zfBZD|NRjZ0Hg$lWboMaTmOeGFSV}d~348lCd3oLw`FU=-eqWmU?95#hi6!O_lm7m! zD6DLfwcyS0lBG}BN%b-bTRKf`mexBr$U>a;gY|E{yXcRtFV7S%QeyP@Bcgph&vbu} zQNLgDxEuSeP;Bl`69EU;7AKYjj#-(wzPB6;pOIVh_8#YBqzNETISQ1iq(4No2stnx zGm0J1!fr3->yP9A72LV(1;nk+MZSi6m(3?e2rf;6_pK-}*f-5*p4nd&sXmguu_tSL zYB(&1YqBD|Kn?x==3FYL+`ZkS<+ys%%G3*#$DJC0)B|-aJmzL z5nvgcch)4M0nw>Q~!2GX&tjR-|_brY&y8QEhaZA6Gg}zyKG`t zH}{hqVhJFV480VlUG`()u;qooyz#Qlu zAggXMP*sZ#^!{@7#d~2LH};(LTuhjycjr_4N;d?#RIL4=)W`D4mR~oTQ07 zA5jbnna856i#NG;*ebaZ z`8y*ae;6_Y2|%uRcb?t+QVqvugmvld(t!=wD6qiIImhEF3KjEx)AHtq17-{$LYVzb zg(D)=wUVM=M1!Z>cb|P&wI-j|vwyL2wY|ahZ`A;%1y$OzGZWsbHIj3qn{VgY7$?a! z8i)=*#Fj$|>p5;hShQbsRCMON$6%iu`z-}|5wnbR{mNxn^~`I{7Qh*~&YDrZvJM?y zaQ~aX^uF%4qj>*fspB)*Mf$tSO}J6V;#3zY0;|vWDDfvesE#2#o1T#_i>w|dWygQg zAKv4~`WB+&1ZUcdGi?sIFh82TPRbEhQCB_v{TA$|sDW0#IM$=jpqo-|Kj~69PD8+{ z6Rr8L9vi?*DjmLG#2$DoCH&g=b&LHbUVbx8Pf`zk7nT)M~s$QT$91Arjw7Qb-F!UYW>Q8vwZcsiO@mz`wksH&4AYVxJ%k_l3Migxc8ghesCK~ELGJ39scwghAakMZZ^l+x1otFH-XyBDP- z9%N$O+(hb={~#I%n#EODyb$du?omD{WQ+!f)9;hKwh)q%Ya}S$^z22_EBajxwk^Cx z6VaXNafH%&Q*+ICx(Sr778|a~_s=R9zTUl-Blb>XU)AE9>pOqVIuc2Qb}okb0`|E3 zK*-~-e-O0OCFgsFP!!-X$U^W!L~9S>2{H3P!>VOGfl*a>t~_kT6Ug-Y478WcC{_S} zdpiP((LTHy+qpDRZRDN-8LW+ok8hA!mHYM|8!-VM1RTIfQB9yyte2C1vK?F|Y9E@z zO!snl4j|3~{$*RPY{V)mA_?+5@Q*;ci2}8SblI;;|Edn>CkgKlo-Z)u+y70E!;fI65Ai!|9p}Qs?c`V*0Ns>OfJ#%c>o0={e8La-9`#C{Rh--iOx0CAKMa}4E~@Gwh#}-Vm-(; zv>ZFm%&S4Cwn-d)F8$zakZ-oRLeQ;63KlwY=!7pLC1rlzq+fm@H=4|MIcSlwBnTTU z^E-A+L44(B55Kt`i9@DmttE`#mag)I*rT+cZF4Bz$n)uGD^!eJykUI%QdXEzDF&II%CzF(`-6H<{Gd9g zGK=@`qMJ1s5c_isJ=Edi_rfLfa%yK8q%2hom8AZFxzXP@_5Ntf{Loy&`VP*JNopm!AKDLthGH?6X;#q-3WUo##8 zgkmwkcBT1(DWAPP8@G}c7~;Q>Wkb$(5K?ev#L_xN&* zq9fglvNDR2LTsm;xwf{60t>Op-fH9*JIta+OW}s0kV_!8YS#(D{Obu`V$A4~>gbUL ziFe81G3djLlP1;*LdBJM3}Ro=DzA(w9zDDnW(VKV0Ms%wHuh zI~pam%>UU(5+nKSb{-w-3mU|g8sJK_>aK6{N$aK|Wb2Yp3GRG}(f1gH4^Nv>^s}Xp z;Wro@hzliUfQH%gy6t`Q!{$wg4Wc)c#vacy)CrJ8Ec7` z=PdWU-a0z!LswaNVz4uP!r@J9UMoR~XWIeK5sip#`zju!B+v(MK1Ohs<(7!WAP05h zhgj#lRfas0i;&c0>y?C0ylC9~bX`afk-rJ=aDUyRU zx>Rv)A?_pIiv_MIqfKSo>GL$F3~Lo-w|A>-$75W&eP(20{Vd1S?P>#RBvm7V_xb}o zA2ifHEVGwo2*)5^<`MtbiNVu% z$~zhR<-PhmrkTMv()B@<%0$VotFuw+FZVA}i}K2-m{J;sORRS;s&1^a=2KkMAOc25qV49$?v%Zba= zrB-1iAF>Obt0@$dPM|_oWoplnB}3ZWL8qOd4xnC@vgYmmnA?F(-RfCBzoME&Y)0(! 
z*2WFxc9{1@IU`Q1-_`BqvAtnGn_tlv#(RAKCz@wP!VY`p*G~qr!@J9@uAU4ehIj9q zUuTw+&VM6HFT&yZni5y{b#}~-!^M2T)GX7hHQDLrC5PJ#j@gaG(yI{t(=;vnb<~Ys z)@vV8li_pB)c!wqug36S@?<9aWSp-pkyx3f*N^d)GkLOlDtX#@MtPQdLOc&H(2ljX zI2?R5R;Y#Ny300$MWA5#Is9z_#_o}+?hRdPtQqg=Lh zU?DuIr~GCwJa0cJA!0%Q@Hqzj!lP^E*I!kiQ8QJqi@m&?G_UL6{+x~9pEKBj^jOSu zg5?D-tTCInpf~2%p|{CPNoTt!q(57pODv_gbac9H`{iHQ29y)^riOQ?VisCx*NyAb z@|+Eeu2YF3XlU*hO7vJ_9v&E{OA;To!F(3ku&5PHx_&#M9-kU65 zl!qQrCvf)Xhx@hYQbhKF!cyjKEH`=SG`vvB1#da%I20!O2(mo{;_N=~A))od$( zemeM)veEDUSv;I<2|w?+Y2{osi1Hc5A5@{RBa@4#R{5ITuB^q@I}O!h`LFqiA95%z z*qUWJWhQ&I8JpdFhn%5@Ct-Xrh!%s$g&im^Wwprpn!L5WcY%A)s;fqPD($8KIec!i^~L8MMCn}YxHD$VU=Cm=7x>1|9e zTU4wq+57SHGddB|TEX*v(XkF9^?Q!;|5S)7kxf#hQ&T&e{s3?l|4qJ_iC`%d}5F&%X=R)ND33Rakjs+DAh|6)tpB z_4jmZ-8enjW7fv(v>uZd1z8IBw`Df~(W@z8zt?0riSd z!*I5L(P#k?jn>KDQ>W;;eRaAhRcbN1*^0}Q6^M$}p>`a~F-h5+FV-|a;q_!mpG@G~Plj*39uS}%|q7!LYR98%yD+euS zU`cHT4&pIm_jZ6W#q{Tukq5Ry`@5jn`P|%Tx1oHyRh#nzC=^=a15t@oEK1sCk}T#o zj!ExsuGU|@qmH5bPfS3gQ;q4}IO7Ob5>eGyINcQ;b5f^PnOwYW+crNzjdUA+146+e zjWV_W`-xR9bUcLx-J8f`NQb}#e~@&*3l0pdKEO$5IpdVped@$gtZf$1MerC%^Sv;8 zv%_8&UWBe6!Oeb=;=F}fy35@f7ie`7qvg=vueq}1Lt{+%=> zx%{IlkS}A zt#OEWu6*A3CFf^57&Ut(dF?7jD{@nHy1(St-*j;mL^WVIgCEA2(x7ze=H5!Mct68s zQWyBB)|Gin+T7`EQ>_7KccLlwa#N9a=+RUYJ$6?nzoGVP)n$C9pFu>R+6ZN0K#p0B zN;GDHXqR;i)F*ne+_n{cWHSIO66izV`ky|65c()II>>ikJhfiszJ!NIG-9SK(WZLD zR&R%4z8ny-YC8;@tMXV_p^UNeY&q(LL6YHmX9}%tWQWi9=aBpSu1u8%z9n{5F<>j` zkNe18^I9#Phml1)=tEq~tj!Jm2PUDb&jBk`1r2Di&zb*AY8 zksIwxoBfOVrwm;d9#f8XQ};!@e&|_TBWvL0Dn}x1g`eKGA4U9dcWA4g40a7 z?6WvGBi42cBOwd#N=fHIkRG+?)wx~rHLGvCU4b2qNk44(GdYibJ&diC;Cud2;Qmsn z#m=meo1V+g^e&9-$j!RGM3ADxqt7C}K<}uH;X{~9x4xVI1ZeX7JV~lKGJPF>UGf1k zZMmU8SXdA^ZQ=9VldVeVa4IRh3mS>4K|!4bazr)ih7P-5&+f7xow4dLq3c1j^X`S= z^LXnGnIzUAMG@7}=6(B^UM?Z>!;Esa4WuL@SR6~Byw8>C;U?q^YE`&fpvF~tsbi`- zMdga*vawy~RPP~H^<@&Tl!FSrDX&cGhvMU6O|6v0X%}A!)jD&LxZv-Y0wMxpy=@~G z*s~epKZG9_<9mORQ9GjtwZu^j(2a|+Z&TZX0Uh zDo3cwC9rmsUCjDs@JguGO0 zLe92wE>7Q42vwuLstb{{N@S-m-8`lUl>aaYBR_h+S%|WzL&Wk}x{BFEYDbB+tMH_`RI9=+V6kCpbtzFB)yo?yN1KlaSro|w+ zf(^rEkdv=0uE#vTPG_7!zfVF8)LAV5PX#NKBP@ZqiYU#}YL!1P1?LD&t+8SmqCk@W zCwop878fP~&D9n8QTC+RMs#eO#j}}<&W0P2du(1(iW_>xE?M}DdnQk3#J5>mAf5`$ zZ12EV6qzC}1qefg2Z)+sjZ#^{7#=XIRU!bx*u z?_Gm`zdwn$VK7FI0W)z@kz0LAH&*1XF^F( zE5=5l5rlDLsqiYt6#SOA)48SdH7-*U2stwUJ^RXn zd-8!4Lu1o_*-lAu(yI8#>C6u!!d!4-8LgG_UTW+ASp0B3;Ym_5N)y2L zukMr)1byMOJ0q)8aI(Mg=FhkDcI6zI@;2ext-}-TU7r)x@_~~y z&4r_bFb@~<5g3%M1GaT=(6k!SjT0fj&s*m5OzB9_ck}HkQ;-wmR9)4 zlKgz0<;tbGY*CLkFTA|UlX)ora?54sV9QOD#y8WWH#1UjG4V7Gqd~vzdvv(?*z+m9 zlJvHV>I0X3=EOJS^e8-4q?*sM@S+g8K2T)8ri(`rBE zWu8{X<^*ehC01%^&D&-h|0rwr+#{UWE_^^hd5yCWpH9o3|n-Zbwi> zeA}~_d2wjPq-p6`+l~-T9*--_6!fdl;=|!7t(}nz&X~P0T{q6C1&!$wxpvreb|l#t ziin0I=g_F}j)SwGLbXKttL7(O`a|Y*R<76R1Ww0|wy3T6NzQWu&ui&a$xB;1=HklM z+lq9tQMT`q&i`{rE0oG^%4^@fSf4Yv{>%%($R*Xo&XQcY_d_yl6mk7IGkrfS`uXh6Dv#CGuo7`a*`kUBEBusp9w~H^ z^u>hO>6?p6No0#D&rCPV3HZYj6E-p;!&ylmAWHM zE5U{hRDW0e<0DHNH!4T|XnRI{nBMlVc3>iDHe=H5;F$kd9hKj0dUT22syO$J8n_0- z(g6?)8vQ4{WY5Z1_?jcn353?Y)4!+n&1Otz$4;`)jCp3H{e^AcY=^fPCWo`mzSO$V zh5lr*4cZ)AGMqIF%n#`}&3%=9!cz%?;-WSd-r$H$Dq2MClMvqyzz_8$=MKQ;<|Zx*J9rsB|gaNSAbu z2}+kRU;@G<#~3hjqqaTw@bi7H=emBs{i9s>xlg?N{d%8s?%|}&N%-v)?9LAuad9*R zp+u~KN#+d0n=Bp*K|{2&Lx?lNDn)jCOu`UuQw}{XxuW%4lyZ{d`m<#WsHgs3^WOnv zmULucru#dApr3!fxLd2nP{U;w-WT$HYfdz~GthYMewCHB!~*U?6->!_V!L)6g52Iy zFN5E7op>FX1%!-Jw?)>iNV4)4{xbGZFCa!){*u^HF^Lm)PMlVhilG&8t_q zgVs}QOoz0LCSHN{7+!1_(r;pI25|bG5&zeq0NXL&MHi~>*V#RnKt79&e3ovB)6B+# z+3+!+0mVA1y+@7H)h=9i00d^YK40w(ZA3?P*7qL*{MF@@oNUutem3S;#8t;d-+$gt z*w%1KekbFpx#tD3NhAZIy>~Ayen@g-+->IR9PS;7Z=>>m 
zlGYa8BrNUj+X^*tFX(?Cr}fe2G;<#@(71vN8>-)Y7BZR^IsK~CLZzySJ5Y0G#33Wy zENC1N7*`qy{sLBO;=To7I8s2086RZRtXf~hvpX)E@;>s;R)BNY-nt)LrvF9oFrp~< zR0C%ILINj9y@&4ThizELi1)|Y<*3Ui71J7Ctx)x@^S#KNyydyYlk7Ml@lA-7MR7jz8iG6Hym5=& z$oNN>g00n;2ia8jtf{h0<*cbRr?55^vfxDF|QH zEv0jxPDeL1*-5^C6!Fl|fMe$P5A*w*LZyrXRC8uD0jTV#Czq&j0A~XZ+MKeNqXo{# z1h?`PL@5;%PxlUB4_*(fu_u7D^FKu@h38xCq&se2exB7lcKgNCl(HLa{o3&FSH$0Q z=v(vrdH4*@p!Q|?L%I{IvP$2N+JLHU@Q6XlmC=l{nq;$IKpOZSsg9F6kZ}~@JC^

K$XQ&KDy^ZG@9s$9ih%u;JM_2~CqM##@eeLdiRkb%6)to}&aFs{JN zO}qA4s#9Q#?ee-U;hzTsJZ6C^PzF#T>D0Ud4Uk29YG}u1`5jF5N)>bWW)r@w?;J>= zirQ-3w}{V)mCV%NX zc}l@7y52>hAgYsyq_BHX9N7ZovuXmXC7p$==K_Zpdi08rY685d0JFXO3)woT*wv5B z?I@-%)+mqo+fnSg$3rr~Dmr)8O|n0hzz8OV*79%*p#GKGf8-0e}y3w4_>F zrZk!^CU z2?PFXQaK@ItA$7RqAtM)_hlrYt7Yx;al(<0C0r(wmCx*sp4ncE02PbO+?O%s{CCBp zGRxXju!B82#A&1#YsLvYQ*8It65+Y^7^XrW@l>0yjK(BEm6xea+_CuERb?CEUEc3_#JacU77?~uM)H+z@Qe5{wt zxwerzvS`-D{1Bf6)vhQuv80}*4#iVEUrFo&qH@gsW_3s1(KI|OMhZBHVfZ0RNt{Nr zD?Dep-Fvk5-6}M6`2I-158XGafMv%vF8As_w+z;pK~)9ma7gt?njgXEQsQW$ggEDx zq6Mg0koO~LtN{9?Pw{JttOLnDof*MzB&$y^is^Ag}0hT`t@eY89PrY zQntO&e(nrtB-_(lf|Bk zjq19NNlAG|zJSdXb~>8rtZP{kb)hv!A1szK_tZoG)S0}8$6~#X0UN10n zpog~q)=C%dpw4>y4RIOYXmJv)-MNDdPe*B$RUmtDeOOZ(&G7k0#zs28lw>;0_d z;KBHD@xjC2$?abcgpCc=nE*%7QRmlj+E(lmx&krH4*21WW%dNDC?ilRvDoDJ>aKTP znZ4Z814q%R^)=0J0iG$H(H-w@{n)HnZw<4Jx~#XMlJx^Z|5H@^ZtpmXuL`w7RSEW$ zt*~EjePk>frKh~KDqm*55%>c0y63*>&?{yCOZC9q-Xpa0=v@zv;=<97)E=22>gPFu z0CtX8b@0eobnrYduaB@Mi03pdj?j#l{03wVW`M7ncnqX&sgX*_P~mA?=F+;6*4~!A z7uUxQ8P_l6l4&(3a7zhQ^<*=7E_f(KN-pR>c>r;Pw};0&e^DQ0uXOT-z@!)|VmW_g ze<=Ewf9mwzZ`zDdf0QF-q&0T@8s6Y3QTgDD)hv$J&d!zC$q$z~q^?7y0k=$XAOXM9 zKz)`T85;_qs>r zP^S5+KUS-x%(@QP%>!iCCOIFPH~}E)EVEWlC~400IVs;vnLEk3P({WHL{c0W$o4Df ztBx*h=DU~TX;<;|gx)uazjaIPH(HBoYfj8J!vJU1IRpv&z;bsfVv42mNkmU3lr;SF zx>{jK4f+(oaM#>_G|q}V2>yKpoO!wtJ4N5NWgy}MHpzKwcufu{UCG&P`8OO#lg!8Z z4pEG$7$_NfCag7;M+Y2e|UBf-CJdrU@+ zWElhKo{4yU?C*ouhdiOU7oF?x6}lIFdXI7j`LJGPvew0Ep&mOiuax?1l+o{|q7q!5 zS>xh(=_;iKjKkvr&m9SOn6c{#rPyceiKs&eky{)-fxekn?E8$w z(}y^1r+(dH`Zs`Pyu-)qLxn&r98JCNJNX#UT#+f#QU~$q}z;vGXIZ zkYm)8w;R`EfgOFt6OT+IB0Di$w(#}?4-}b=2>*OI;t=WNd&Zk9) zZ9pp?ts)3p=KVzyRRA^x{qk1tMF1g3NmG%iAE&??p}X^ienjM(l{Ug) zy(~8U#lg?ejEtPXxeD;HPhF?mO)AAteCsdvL!ppgxBV9=qmPOidrk-YL#zpkP~Fg& z9Q0*Il{0i4`T#%nXFpmO_jQ*<<4e)uFBSM`S*UYPFnMc3{5NF)N?$x z45$+^_Ai_zFr7bZ87-bu4^~01lSc_({aMf16l4>)G!uo#ATeDCqsa*7;flH~v5x>; zhQw*~**qQ0N|xvUxUix(Sek6a-_*$-ecM`t{qs56$CDj8YJ~L?kd7V?)$8Xi&FB(9 zqg)6hUJu&unS+mAY)NIdAA-SCEf=)E*xBO@QW$a40s(5JF=Nv6KZ=uwUbGJ0*18a9^C|fjKVN^b z9}i*oVDP8Vs-Tm}>f3HgPFGSTWWmj7WEYA91I}um6=GH)EUZ2I$~NaDS8_kUI^$kd zP-lx-!tSSzP=0a~R@>|%tKBulck}T$z>` z@#B}Mscaxu4Nie35uW>hG%uC%cvmVLT@ndKfya2v=>=DJ8b!1BrYiAVposwG8tZcKYUHG|9U%ZxPHDPr zNI~;!oR-|<-fU$rKUI0e>KH4Gvh%B6pQ+A`)t|eVBeXVZH?(3*pihq^- ziCn=>=>CgVIcsKO+ZshjQp}gAR>5C>g_sA2$6@(8e#gggTJm{L*~za$Us#w=|86MH zFGas5+CiOKhKvd7==d#ujAK) z(Rj2M!`4cY%&N8?Qycl)|8L1w@QnB%jY0=?cD5=h!m5MR+;_JyvlVeAu_Jll23*2~ zTl95F<_EEB-X-fvVw_d!h3ci{FZKLZvuvlPajC~%Ta-+12n0xZ zc$-o1#8d;wA_6GW68NFI2YwOvkk_h$?&?O5l^Ur8-TxGVbA#3Ylp(aYYA<#Wtk-I& z;!yI9*-u6bkrQ6rOm+mPF{g1SnA3#Qq|=m>U)J~+UO>pe2aK0;zQ_p&`&$08D;*U% z^gs7L2!P_C76FR5;&OVQ(JJGKx#eo{ltc8m8WHL8MbljS2>ol2Fe+}2Hrpve5C098 zT|qWRc1PgA5%d4;4R{-OSv89N)&L0D8^iN_qvXzz+j5XWu@cHr+%O6*t^uIr8p=yE z`n=3-DMu6qf<_`f)w^C%h&%1G(9J8*g#EV&_hT)R#GcL)C!u&ld3mfGco@1#L6mbl z+WUFBe*(qqVfN-N1)W3O4*iH?ntRBx_8+(%OnI-WOfqX7S~~ECYzh%K7w8J@Y*edO zt}e~@n7eOCaM~wk7wA9QpE+QC+jJqf9$&>Ks-(l@{I1qmc9XjtZu#DtY8oddzC$0+ z2+V$Zobaa@Z*N-DH!2xYwGZ30D`v~eu5=l23Rw)0B8$1|F|EK6Sx|j3xPt9MP;7P#u zQEmL1FRk{53?nCP#J(~1QSv1y+6M*ESUrIS@BhKduGJhgX{H8K1YVc5J@mQ^mBZh- z3=m;`#k%Q2i`Jv8TJxZ*8}@W+`eo-6D8d2i^sV;Mgy%25hW@Z=1m)*zNb#d_E0{G@eWub)HC$}-S%gdXumcsizSiKt3hAv;J0Sej8Z6>Y0JfBj z2w=f#hJA_wfUQ>$sfwSYTV{`z|4eDLWx`;TuogMDcl7NMqY&*UXC9h!s&)^|Q|{By za-ZNi)qUEcuCkp#Aq#y?TQt!>=8s3tV)#Ld6yMiKX{-rdhVDcBW1#4(8;;+Vtt z=y9lt$Jw(EU(?|@pZo6af=j28HPOwg_DK4^Aa|q^WpFlygeU*(zZIfNbZE5vPZDG_ zAA8gwUJK17yDW6lMI<8OrADphvaj~r0wMMm%m*Q3`^$~-|)&KJzzu8m`s0;>;Sbgd#(&>z(&g04o)-+_hr}V>f)0IXYWm;`F0}@yZS?mWPF*1XcYbuF0BVf}fS|t?!iIqK_MVY=-DqC6YFe{g 
zR7AUcp6dI41z@Kgr4&!G{s33OLz^FmPmX+7stVX6vS^8R?$i&{y7*-N8)vJn9ys&% z$s;DBAU{HOj_>F*ujw6+m@fs7K5i%srE{C#;+*tCA9)iiw8WG6d|4X$9&sgPtHdqw z{#?UP!&7BI2TF(>exX99W>X%jSAe@YSE!Dz*TUDL4{gGCJ{ApZx9spz@!MP{Uw-1> zO&%8U60B!Zdn7PfprTA+y07CM`Q<7Himd#(hsvJ41z4U|+j;Egjn)m`d}Cg+X{(5Q z_{|Ut(~W1dOV5haj;2|ph7t3ErzU2jNmUBMW4PkkAGpcAj|g&Q_Gnswp!I9tdz1v7 zsyDmXSt?I%y^vOGln<$5XfUYz~kyWUhww>Goc<^zgU-|Hy> z+_}-xtO4P_uK8X|aIo>)`=Aypc$XKIN)zjgzN64ojRlIUBnDB_#PG36I@&G1{8Lr1 z;eTD>eu3&mnkqB1BDpyS8y?Pxnh0P1=#N#+=axWKzY3X5BMOqaK;s)= zrfH>XPo(kgQ@WCR^!Bt7iLHtkTBzoKfxat!lYQK&Zl;{9f2Lar0n4B_62 z0}rCRFKKAxnH8}zhzUI%s|q*y1_FWp#xs09PAKpdQV<2beA}!cdv%tUG8m|k7kuLL z0-ke>5ytuqhgdC23OYa_8$P3@YaVHV%a)5&M{Tj=#y5c`ueGfuZs1zw)*}37k z5K31v5Qyi>!rPnOcR>nN0{WEAW-aKFKJspM-&_z#!GpQMZLd#vqx}Nt0q`)|@E7OF zYcj8kAW*l>SBakvuR&q2T9_2nuAAblP)20+_DcfP+~e-orm6a7>muljaFlKwad z^xXzV_;B?JR*|_?iZA>wh{vf#Dkgd-Jj{$ij&rjlE26-UfNABg*?T&683bAm!}8VV zyaHc4lJ~32IR|<@O!vX8JDbF2g^r6CEXX0zZy#pg0)bSYK;Ay3fByl;VsJB`=OT#B zWL2d*Mv$ApD-W=ihxUiG-($f7mDpqK+WADHxNICip@12B27!y5RWjIGHVQ7LeKjqb za0{4-G~q7j@0~`SeecEUF|m_p0+Bqe5MzM7xqs|$zi}5V_<3WogniuI`PTh|{RI8F z4{>rrStt!VDOlRbeph;SstWnX57!?1GJ#%d@Zw7jcl70455Ze!H6OIGQcXqR6VRruFX6 z`Rw8QudxOVCcy6Vj@Cm1?J7EOtvBj}psaeGHZW9f99x z3ffN_4zJcVzb>j4?YtK;-Ur8jzxMq{d;1c2SQ9?zi3z zckZPgcW_`TRvp9WzZskWxVxO-%de3LFK~1zV9+HC| z$Mx5427l%|Tgsi$jgh?A8;*8VAP^0`bf0(GJ6YfMJNRm#gykA2C+If%dL0s~fT_G8pn3qqO{%gOSa!w(cDp%( z-anjd1|*v{c{&sz^XJX}G|gLgum>#5!Sf5zMJjQoa@1#F?%a69;zxdNo_?Ru4*x% z@O(n#(^6IvpOOiPnQVn=3qy@doWt^l-p6>QUiw85A%Jq9xmu||3-HpdID($W=>BCY z{_gtI^4{5v1Be_!%O7P^lQi>G$-Q@duBI(}2Hiaqw#XUd9&RS}UD3;pEG3=~ ziW8;>k+xtZY7j^xQ#*=;h-}Ebx(0|NKsdJ%(Fm2ZL{>Cv!RG^I!$;rN#b0*>n+0DMK-ln^+%^SSF$XFIFqs#0Be2c~BbalWN zC$}`5xYN(y0_8GHE9#W7)?q`l6aQFGW&0*?5p!ilP4_e!&0+&f0x5wQVEV$n=}dvmbxpqG z9QrN8uU7G^6l|(#L$I0#V&_?>SHrX@nbDjUy}~*+9fE0syz9SNtdynOc&|=FebP>C ze4W{q!|%4~SLCF{HqLYPg1 zrT~vMYt=BU=~Ff`0TS9+uSO9lMCMzF-+|hB(8~(vxaV*phqPw#hwxGz8zNhwv~k`$ z=5%dh-9|819j=`JI?5+T98l6nX9}L%x7L^KL@sDPVHi~Ace*dt<0&$U<NP*w_n+NwJsT|u5Y?_aGj5>he5*0zwj^|! 
zqyA*@pDhDBrqR1W1L<4BtbsS$>a^Zm0SzSak8ltLe-YD`yW%3Hju8a`E<1hL%j4F~ zfj1mlS&hg=4gA#(h?=sA`x4aMXB>!6hy}$S*yl&8>(?aN_tJXC;BEr9EKM!Ov%=@$ z^En<-pxo0n&>KQ;Nmv1qe0qO=?6ivYN&d+!ksozKii4;PLtxcxmt_CG(!Mv?eW-Y5 z;&X?hi;cgQ37bjq4bqh8ItYJwzg1dnevUjPk65d_zWPsAX88>PdgkYJ6v))|*Oi^%wwPXP{D#X&B%5!F~!rFhMdK-k( z1f-574C{Zmfobm_2S0$kudie4fW{K^bk2t@0>@t>MQ5tY*9wTVt(ru(jmH;_v<;-W z5(t7e`Pqutntgrcl78MBE4v~>J@i1}_B}NQIo^CrzWZnP+ai`=b?HN_>O*Uqa|#|~ zVrG|^IrTexvaO01wNh`;tc5Db1zs!!;`;?+opns4n$#f>x&RKg@5%v{=ca#ARICBr zh1qv~lb63)YyB#}3@S=)9Gl^?({FC-COS9GVQK2NJI@bdsnkTVzij>r9RTn0T%Gao z*^Dky2ewywUwu=~U=zo|EzTnXEjkA(vhcrYk9P$hbI|)akEzUVJ5O81M@4E?o!1j^ zFuU`QDET7URtd3@S_ii@A+gK)S08|vC1GP$v}qETC_uaFvleYx&qaLAqASzo0H2jSIR?IdMmdljOh|D zQ)$hsI4yKzjTqGiin272=NzcEV6!^#ayZe5r)n|S|0T-XniBQMMb}a{E^iw#L(sPM zICB=DT;~`-5be1Gc}rg9vg>+|=5ODpaMUTem5tAuOi53I`0g{6^gU7nHW-i&4P2U@;BKycBh~;S_)0#vyzDbl(D08*jU0cDa2Z&OVjG?<*9roeNkxmn@7n^|sYT9!4gGoxDM|r7Jhvn{z_cSwJE5*RTK)7{ z^WA)e%eg+o+f*$5Q{J*MN_{`@EmIp;LKpvNTp} z_TB>1O=eI$n0LfUy5cR~W^m;{Z3Tj5^8<5f7)r>nn_aq5xmBNjBb6zQNq|>k+9R$< znm`8nEak#BX|3Ar=OS(xWaOb*oT<6$VDm3BO;udFPv>y^lj7{r^B>C3=mf6bcRb{X zD?A61NNxNw^MN-h;w(-CU3#kG);m*ZY5jyokv`;rb^Mm=fyB76(n6=9=+lI9p06;t zi0qMUY3?!;8c2hfTVzOy04e%>_B9Jv!#Xw@OEB0hDF@j^$u{zQF6%3p-3%))CF_`S zpb5N9o6bXbPQmqem(F2muW*ELmvHsz!W(0sHdk1w9Vl!A2>5}iW-Ht|D+{Jf_J6XVh~Qhff}UZ$I!C2iIF0w^`mH|Q?BQEK*|*8ml) zmFyVD$)QCqlJQ>(JN>7!pj}1foEkB?iQf!^rB{4Gh9CuB1BL;>B^*cq-GLr*Nwvyk z-DB3v>`teT$!}9aq zXqv|)diPH-Kcip*@ffxI7rTK#sY29{L@iH)pz~p#xvX;QsH7-Z-+!;Yy|uIC*dEQj z0E2;YKQCJa-JMJMCpbL*Qs!NgC5%oCIhPem`?~*AQBZD43HdK2?Tl|@OCW_By;uKq z9*FIjuH7>N&g+qy*N~e3i#sOW9v#DA}FCB*iW<8&KUmM2v&ga!2hPOnw)A49{g zjH8?!%{awd8HQ-T+x!zRkdBgs;ROYO6@d1<5B`Y}Ak*xLz|v{X%pNe7j?F(R0Ks3{ ze>?)%swx{~@T;G+-e4ewCpA+7WGjAa@XU7ue@9;bH!$D?^#7Y)m_dF5Ds)kq_eO+@ zqhovW`gBcl+0hv77nSaGr`RZQBk@g#Qn~!5mUS;oHF2Yw-0!}ltYsDf>?Q~YS7sLA zW~oanX6xDmZQEP&rKZdbk8Hh$_wBzr&Qw=b9ajDo;IU|th|&0OZDEVaJn!o#E8`j_ zA-9}zL>bypsf6rdI}ExF%_O|tM`4SB%?{1G%_RkQ)O26p>X^R%2jO^(Y3};jmO%z- zoWWZl+lgqOq3MdZdfO8L#c$)_a_GQn$im059@T8`cEUL1vKJFrAj?q}-hw}EuV{vdR6FIrx z+)J#SpI@i#-!W#*(H{M1_;Ya@mr_THdBx}L$pg?BpOK>~xTISsafEN^m#}o!sII*( z<=5;g8ZV-Bk1gWN`w3s1;cHy*yz zc6Aq9^-9RAMmUnOTD%Ty%67BL`iNDYT?e-tev~bR@0u*@6Q%tFs8@Z&%x=1*?PwuJ z20xlr7Z~Tm=742jRE(OVxW`5U?7%K{WPyzWbI8`vM^6V;PJ6;H5KaOoG<09p^sW3q z+OJT`_-O(-dE)K|t`sF-L~HMpZ8?cO(Z;xo{VEoEf@q%uM9O)QS$h`KFwu@_812x% zaF5*WUO_gkXy`%%oT!dDPQp>j250O^Mv6We{o@cQp7qLVvyO<~q%~BBU_V%5ou#~p z^LAVABbK~lyMG%-rEa>Ey58upZT?SPzjqoK??@rHc1>*CVRb9*!Qh;k4e@3S@4$6g zua79`9xt`2WD~-v68zYR?%Mn+j|DOhftd~Lw18WGR{Y>w{m24&yTBuyEXNp=Sv&!& z+o{Lb#7-lwcX%#u2^~b8o@uvQcXo1_2_XC2I|s%4lyVJ|8^Kd<#wjm9baYJbyGqUZ zyBt6x1*$D>yOdipTkeJ5g|`>pJUEQBgnr*g?9n3kJvD;nBFIIaKL56sl4O?owCF!C zz#|H6uUe3 zhGjeY{o@0(Pi_)Ep@>SmjQL7+O$7FJ5wQ{YcQb{P?B&*+|9Z%QZ3F_)#frLD?%IRV zevss}6ItTU>1m%+PH-F<>lAIr+v1$~wa9^jeW?>R`#t6%zvS0e{m4^j=q zd&1=KW_cK&Wv~ zP0Ls*#i+yobB5oQC>~TFE&U#Ng4tsip5L7FNMH;)Zv0s|HYj<4Qb{9dlU}3UDS^>{ zpNrIm{&9nrw|M#+Zt+w?Th$|3w6>Uq=^VirnF$wl)g6s2yZ26Wz(D6spW%K#*FTS= z5CA)rvG|IGg2l3D#1DUx*%icJx7&6CjL`>ujnke{?nsrYfXnxfWE#5tHxITP9+R=6 zg=8~E#VphtcBGiLA}x$W`F}4jzY8QbM=Jc6?t&J_t0q7oy6k64j}2CaO&{%PTeBP> zP%loxv)}ZJr{=kBalDVO5I&p`LvG&R+jl8)L(=x3GsTx1rXasydWOu-v0N~xki;m( z%%^A0lgg#ab=Smk9@Ta-xb@*S@EMuA7uwnY0_>2tf7v5ea_ixzUs8)Q`kv_iFOmpr zpA$Q|S{;o2{&!1sgp3*3F=j~SJNME`{I(>+X6lK7jNUU!Hj?h9USF4_aY+k!`FG(< zi1mgqaPP$0zUvD|Kp+U)y(z(1_`6{SQp1<|fZ;Yeacy(9Uvq%&09QOHf1V$God3V92mx&NZJ}{7+_)FY2G0{S7=`ya00tZ z7QiRtfh$q;hQ9yci-FH?n|xy=g_I!Gib}#40=d-iFa3}u;7Yjx4pIaV8e)idyO^O! 
zY7qw4+Z0c2q*hdY(d|?|M>P4m>-SC*fO-Lpw5bVjwu=oTGJhQvfZ6r0I+;=#NnR5p z*{QiN<)(0Bk9*LL7nB&NZ%5)-SXA!A91G-Za`+<$(58@S7I6J_- zcR#D-D~Oe}h7_H)XU~BHCQiiNrh-8}G}c?pj0TO3SN0qR74d)ouwjGJECj z*HmG5%kl^xk<`4oIorQ)w5VOm!N|?kb^W)RyrdpzQtWM>*TJQ{EHT)qk7hBX+tkUq zL-st1+y=+rxf)^mfn}eoZrOEvJN)46g+@S%^mjzLuiucX>AxQ0at*i~<^fe3;W7*s zobP2KvE#$N8P_EKN3^~B67WLU`DMV`{)7kR*=;|A-hBe>{In+Xt$eh@OrJilX(w4+ ztVYDU*lMGP=1GjDa(+w7b6&0=RFoWDGf%WIV)7ET9eBhy>L5f=N$QK(vz0>jbB$JY z*e-kQgNp)ss31?_OOV>&!fE$@)M{V4L%}L>F7DKJQA`tz?`Rk~p;D|>A6@m*sPM{0 z0r$|2pr&nJ;-sjZD(9-B&K;t8NUPG|udZBpdq{DO%$y?CQ>$6^Ysq(^?5+XhC?(#% zYZ&KzE1g4+t!C_LLhl$Fg8gbeRFI&qvI@u0-paH)q@wZg;NvD+(C%|+o?_RF04?kMPx;y6aZd#p@0U3@FN+{gOLsTc z%T>0@5xvsdMei-G#o~76DEOYPvZ28PJ{xwj^EfTl8oR#5ayzTp{8E8YqfabwQ$n5ihpP$% z%u*J&KC25pmpcJCh$Q@C4C&ta=wo{NXo%yoD9^EJP72!V>S&4p@U3duD&S&;w5~Ci z(rjtTg;DvSjGp}3@W=S;h~Cu$Fg%rlKF9Z$)34Lk%~+=tDNG7)pha+U&e2b{*gzTC z;KCAFt&az<5WN-UYg8|D@s61)z@oeDd;-`)DrLqqX zf7(J{DVi~wb1=#Bn;HCe-zg>61kOxh+^vIx_H2?=Oa)m3Zoh6;9bd>4rysMPMUErC zA}O`u;IYkgIWq$=FlJmi~!uzrr?AV$}Q`6xU-2HnTD&3E14flTo7X|#rbRMK1m`@`ODf5?6RqvzZBb``flZ#L-Ke~$yY6iiL zdr8j*teKD?5*fcS`p>e(=0`2#(*BsL52{O~ ztK?K?C7&|1YQlPsv%9J#=jwNU9!NbSEH;Q=@45FS2MfJMnpYns5E*~|J#5L~Evnb| zJ5oukaV+bvmxC@_en` z`kIg%fo#46B}(32nT6kqI<4#pS6%vHF z^`PZ`gO^5n)LQIhl-axIwU4%1S0ZLN5T#ddTJ!61s6YZX@V$BpSLm694zOveUDj*O zMF5TG+cs-HGl=(wj#dSjje+Y%7kJmB5B9W;_Md4Tws{O!!>xy$8&V5=s_)qbh&~A}5`b==g$Z&@T_Ee~#yr$!rf& z+u-t5=WDNalMD}5bWN_c+KE-@C<1ZX^v=R-(q;xc!7oHZAEvJ8Khe>U|HWU+GFXALkU9 zSBz;K)I9Aez~6QNaFZTM?H0}%lP0LR-cBeLvkj>So{JVvlT_7}u5QFEOJ6Y?;1e)MqjNZS>#G2ACo3nd7Cmx%bEGE;F1n#iXBAUa~jFB?1I+QbV zUr&AG_#UoXdNs3n^Gri&WU5=%9jgmx_U9MHX8^zcu1y>`^N6$9E3l@}?c_Dr)cp3|%et#1`gHTrrR~_& zG`^5ud*Qd+kvE6NKaI;#fu|8>t3_nZW1d1{7sXj$2a0F99=o=6U6*Ti^k-GPoThM$ zY3Hg5)*b5;h5X&y*5qn*B z9xu~@izH&dLVofmJo9CstCUzmb8X?$08>rL>|ZaE+HqUQuzmLnx2|`+bxA zx7d}2q2}`9ljY!b(shseS)C^{8Wlbt7@OOsB_R*TU97~}{Jx{@4^6(^|M$WH$u7#w znBhmh&+*-+00OnNnV-*DTmey;l5{B>4~2VDqcf$OiyIoHe}5+V6$KA;)X*Vcb>Zsw zzfZ)am=%F7Q(#5--z7OD2+f!?gRyIUJ$;MWY`gFWJH ztx1paS{~WDQqEA#>d6 z1eO$u;=;kd7s@pdjZq}+`Pkk!lxKXB`Ee;>;Ky4{cG7^KtF!xtMuD6Z|D<;&=D}Mb{X3EUxxp#mje)Jwpj?Q_U)DpC`B5IU1ABHCa5~5kr&?x`g`^>y?y@Sgkr#oWo((am5mU<1d-y?@@knXMKBv$X>B-_1TO`$3-XQdf0ZO#N)U z(t04Xeg-7bbvx*|-_qVlqbv$Z-~i=o@5yds^iXG03jEaL+kFxO^UXH zN>L2fRbu%19Pj(!hTYEx$qGm-w zrzK^27EFyFpD`*7Q12xmSeBQ?&M*W}^wi1Z3?i;^s$;4NC1bagNJ^3cjB0c}e8izJ zu~WLkInebms2ctV33gL13g$3gIpevi8z7C9^l3HPF?L4plH$kvDY)oLSy+Lu>p=Wr zq3ENYs0o%&J|p0IZlO)<&UFgXO27YBHg#mex66M6ra2aH;VvhAmY$^hY)U39k{>TE zAP#YgMX<$8^kD0UTX~FQ&PM9XnqMs&4M><_!&S+5PfKuqWf1vD6mo zlcE-2yr5N<7rDrXlpeI_T(QnLbW;)LwKPWFeg^md0Z*O?8k_&?kFvVKw*W2&c7NHs zVvD%|PoiP`6VMRRFv(eU6`8J8B!)LlW~_FZ!oBN0zQeVfN4#aO>tj>j;93cPPK1%L z`JKt$+iM8#SjOhffoFJ&k6Y-67o@8m1~ZR}0FPL=XV$SNgL;3WCgk^gom|vMBh#Um zEx68jt&cHEh?LhW5DmfK!awmdLpyIIqBVn7f`q@}sgNQ4o5J2os$Qa z)(DzB4{Ji1A-#oKfce)7PK|BHb+Vr+EU2g}J9+(He!`1UaHTK~=bL3u=0;4cyK7m7 zLik2RRLgBC)VHi=*`u#lH2Q$| zHM7UgGkeIa0a+>^>3VkG4wE%#NFxL= z?xZGGl5$_nC|Kao|LiT$0PMsG9r^>ESvT%f4wm-CBN2?hYlk-4stdjf)$cLE>Q)c& ztxTElv2RNxwYfSk_wPl$2o+w+D3x`wpEI`jai@$1U?V5Husd3)4p&82hnbJ<8j^Lf zbw?`~`MDJA%-FJR&)7j30gxPAQD|$i-T1GU_-m_C!UUs=mQ@Kr-Ln8ZdDtF36kKEY zu@?*%OM9b Date: Tue, 8 Jan 2019 15:14:30 +0800 Subject: [PATCH 4/4] Dev weight sharing (#581) * add pycharm project files to .gitignore list * update pylintrc to conform vscode settings * fix RemoteMachineMode for wrong trainingServicePlatform * simple weight sharing * update gitignore file * change tuner codedir to relative path * add python cache files to gitignore list * move extract scalar reward logic from dispatcher to tuner * 
* fix datastore for multiple final result (#129)
* Update NNI v0.2 release notes (#132)
* Update setup.py Makefile and documents (#130)
* update makefile and setup.py
* update document
* Update Makefile no travis
* update doc
* fix convert from ss to pcs (#133)
* Fix bugs about webui (#131)
* Fix webui bugs
* Fix tslint
* webui logpath and document (#135)
* Add webui document and logpath as a href
* fix tslint
* fix comments by Chengmin
* Pai training service bug fix and enhancement (#136)
* Add NNI installation scripts
* Update pai script, update NNI_out_dir
* Update NNI dir in nni sdk local.py
* Create .nni folder in nni sdk local.py
* Add check before creating .nni folder
* Fix typo for PAI_INSTALL_NNI_SHELL_FORMAT
* Improve annotation (#138)
* Minor bugfix
* Selectively install through pip (#139)
* update setup.py
* fix paiTrainingService bugs (#137)
* fix nnictl bug
* add hdfs host validation
* fix bugs
* fix dockerfile
* fix install.sh
* update install.sh
* Set timeout for HDFSUtility exists function
* remove unused TODO
* fix sdk
* add optional for outputDir and dataDir
* refactor dockerfile.base
* Remove unused import in hdfsclientUtility
* Add documentation for NNI PAI mode experiment (#141)
* Add documentation for NNI PAI mode
* Fix typo based on PR comments
* Exit with subprocess return code of trial keeper
* Remove additional exit code
* update doc for smac tuner (#140)
* Revert "Selectively install through pip (#139)" due to potential pip install issue (#142); this reverts commit 1d174836d3146a0363e9c9c88094bf9cff865faa.
* Add exit code of subprocess for trial_keeper
* Update README, add link to PAImode doc
* Merge branch V0.2 to Master (#143)
* fix bug (#147)
* Refactor nnictl and add config_pai.yml (#144)
* add config_pai.yml
* refactor nnictl create logic and add colorful print
* fix nnictl stop logic
* add annotation for config_pai.yml
* add document for start experiment
* fix config.yml
* fix document
* Fix trial keeper wrongly exit issue (#152): use actual exit code to exit rather than 1
* Fix bug of table sort (#145)
* Update doc for PAIMode and v0.2 release notes (#153)
* Update v0.2 documentation regarding release notes and PAI training service
* Update document to describe NNI docker image
* fix antd (#159)
* refactor experiment stopping logic
* support change concurrency
* remove trialJobs.ts
* trivial changes
* fix bugs
* fix bug
* support updating maxTrialNum
* Modify IT scripts for supporting multiple experiments
* Update ci (#175)
* modify CI because of refactoring exp stop
* update CI for expstop
* file saving
* fix issues from code merge
* remove $(INSTALL_PREFIX)/nni/nni_manager before install
* fix indent
* fix merge issue
* socket close
* update port
* fix merge error
* modify ci logic in nnimanager
* fix ci
* fix bug
* change suspended to done
* update ci (#229)
* update ci (#232)
* update azure-pipelines
* update ci (#233)
* run.py (#238)
* Nnupdate ci (#239)
* test ci
* Nnupdate ci (#240)
* Udci (#241)
* update ci (#242)
* revert install.sh (#244)
* add comments
* remove assert
* trivial change
* update Makefile (#246)
* quick fix for ci (#248)
* add update trialNum and fix bugs (#261)
* Add builtin tuner to CI (#247)
* add builtin-tuner test
* refactor ci
* update azure.yml
* add built-in tuner test
* fix bugs
* Doc refactor (#258)
* image name refactor
* Refactor nnictl to support listing stopped experiments. (#256)
* Show experiment parameters more beautifully (#262)
* fix error on example of RemoteMachineMode (#269)
* Update docker file to use latest nni release (#263)
* fix bug about execDuration and endTime (#270)
* modify time interval to 30 seconds
* refactor based on Gems's suggestion
* for triggering ci
* Refactor dockerfile (#264)
* Support nnictl tensorboard (#268)
* Sdk update (#272)
* Rename get_parameters to get_next_parameter
* annotations add get_next_parameter
* updates
* add experiment log path to experiment profile (#276)
* refactor extract reward from dict by tuner
* update Makefile for mac support, wait for aka.ms support
* refix Makefile for colorful echo
* unversion config.yml with machine information
* sync graph.py between tuners & trial of ga_squad
* copy weight shared ga_squad under weight_sharing folder
* mv ga_squad code back to master
* simple tuner & trial ready
* Fix nnictl multiThread option
* weight sharing with async dispatcher simple example ready
* update for ga_squad
* modify multihead attention name
* add min_layer_num to Graph
* update share id calc
* fix bug
* add save logging
* fix ga_squad tuner bug
* sync bug fix for ga_squad tuner
* fix same hash_id bug
* add lock to simple tuner in weight sharing
* Add readme to simple weight sharing
* add paper link
* update
* reformat with autopep8
* add documentation for weight sharing
* test for weight sharing
* delete irrelevant files
* move details of weight sharing into code comments
* add example section
* update weight sharing tutorial
* fix divide by zero risk
* update tuner thread exception handling
* fix bug for async test
---
 examples/trials/weight_sharing/ga_squad/evaluate.py | 7 ++++++-
 src/sdk/pynni/nni/msg_dispatcher_base.py | 9 +++++++--
 test/async_sharing_test/main.py | 11 ++++++-----
 test/async_sharing_test/simple_tuner.py | 5 +++--
 4 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/examples/trials/weight_sharing/ga_squad/evaluate.py b/examples/trials/weight_sharing/ga_squad/evaluate.py
index d2bc208cf4..6db1abbc99 100644
--- a/examples/trials/weight_sharing/ga_squad/evaluate.py
+++ b/examples/trials/weight_sharing/ga_squad/evaluate.py
@@ -72,9 +72,14 @@ def f1_score(prediction, ground_truth):
     num_same = sum(common.values())
     if num_same == 0:
         return 0
+    if not prediction_tokens:
+        raise ValueError("empty prediction tokens")
     precision = 1.0 * num_same / len(prediction_tokens)
+
+    if not ground_truth_tokens:
+        raise ValueError("empty groundtruth tokens")
     recall = 1.0 * num_same / len(ground_truth_tokens)
-    f1_result = (2 * precision * recall) / (precision + recall)
+    f1_result = (2 * precision * recall) / (precision + recall + 1e-10)
     return f1_result
 
 
diff --git a/src/sdk/pynni/nni/msg_dispatcher_base.py b/src/sdk/pynni/nni/msg_dispatcher_base.py
index d0b8c8beb0..f49f647dd3 100644
--- a/src/sdk/pynni/nni/msg_dispatcher_base.py
+++ b/src/sdk/pynni/nni/msg_dispatcher_base.py
@@ -38,6 +38,7 @@ class MsgDispatcherBase(Recoverable):
     def __init__(self):
         if multi_thread_enabled():
             self.pool = ThreadPool()
+            self.thread_results = []
 
     def run(self):
         """Run the tuner.
@@ -53,7 +54,11 @@ def run(self):
             if command is None:
                 break
             if multi_thread_enabled():
-                self.pool.map_async(self.handle_request_thread, [(command, data)])
+                result = self.pool.map_async(self.handle_request_thread, [(command, data)])
+                self.thread_results.append(result)
+                if any([thread_result.ready() and not thread_result.successful() for thread_result in self.thread_results]):
+                    _logger.debug('Caught thread exception')
+                    break
             else:
                 self.handle_request((command, data))
 
@@ -69,7 +74,7 @@ def handle_request_thread(self, request):
                 self.handle_request(request)
             except Exception as e:
                 _logger.exception(str(e))
-                sys.exit(-1)
+                raise
         else:
             pass
 
diff --git a/test/async_sharing_test/main.py b/test/async_sharing_test/main.py
index 4c32ea51ca..d5a6315812 100644
--- a/test/async_sharing_test/main.py
+++ b/test/async_sharing_test/main.py
@@ -38,19 +38,20 @@ def check_sum(fl_name, tid=None):
 
 
 if __name__ == '__main__':
-    nfs_path = '/mnt/nfs/nni'
+    nfs_path = '/mnt/nfs/nni/test'
     params = nni.get_next_parameter()
     print(params)
-    if params['prev_id'] == 0:
+    if params['id'] == 0:
         model_file = os.path.join(nfs_path, str(params['id']), 'model.dat')
-        time.sleep(10)
         generate_rand_file(model_file)
+        time.sleep(10)
         nni.report_final_result({
-            'checksum': check_sum(model_file),
+            'checksum': check_sum(model_file, tid=params['id']),
             'path': model_file
         })
     else:
         model_file = params['prev_path']
+        time.sleep(10)
         nni.report_final_result({
-            'checksum': check_sum(model_file, params['prev_id'])
+            'checksum': check_sum(model_file, tid=params['prev_id'])
         })

diff --git a/test/async_sharing_test/simple_tuner.py b/test/async_sharing_test/simple_tuner.py
index 57c39cbe3b..de40ea9117 100644
--- a/test/async_sharing_test/simple_tuner.py
+++ b/test/async_sharing_test/simple_tuner.py
@@ -57,8 +57,9 @@ def receive_trial_result(self, parameter_id, parameters, reward):
             self.trial_meta[parameter_id]['path'] = reward['path']
             self.sig_event.set()
         else:
-            if reward['checksum'] != self.trial_meta[self.f_id]['checksum'] + str(self.f_id):
-                raise ValueError("Inconsistency in weight sharing!!!")
+            if reward['checksum'] != self.trial_meta[self.f_id]['checksum']:
+                raise ValueError("Inconsistency in weight sharing: {} != {}".format(
+                    reward['checksum'], self.trial_meta[self.f_id]['checksum']))
         self.thread_lock.release()
 
     def update_search_space(self, search_space):
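A note on the evaluate.py hunk: once the num_same == 0 early return fires, empty token lists can no longer reach the divisions, so the new raise guards and the 1e-10 smoothing term are defensive rather than load-bearing. A minimal standalone sketch of the patched scoring logic (the real f1_score takes raw prediction/ground_truth strings and derives the token lists above this hunk; the sketch takes token lists directly, and the toy inputs are invented):

    from collections import Counter

    def f1_score(prediction_tokens, ground_truth_tokens):
        # Token-level F1, mirroring the patched evaluate.py logic.
        common = Counter(prediction_tokens) & Counter(ground_truth_tokens)
        num_same = sum(common.values())
        if num_same == 0:  # also covers empty inputs
            return 0
        precision = 1.0 * num_same / len(prediction_tokens)
        recall = 1.0 * num_same / len(ground_truth_tokens)
        # num_same > 0 implies precision + recall > 0, so the 1e-10 term
        # only perturbs the score by a negligible amount.
        return (2 * precision * recall) / (precision + recall + 1e-10)

    print(f1_score("a b c".split(), "a b d".split()))  # ~0.6667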
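On the msg_dispatcher_base.py hunks: sys.exit(-1) raises SystemExit inside the worker thread only, so a crashed handler never stopped the dispatcher process; re-raising lets the ThreadPool record the failure, and run() now polls the accumulated AsyncResult objects on every loop iteration. A self-contained sketch of the same pattern outside NNI (the handler function and the 'bad' task value are invented for illustration):

    import time
    from multiprocessing.pool import ThreadPool

    def handler(task):
        # Stand-in for handle_request_thread: re-raising (instead of calling
        # sys.exit(-1) in the worker) lets the pool record the failure.
        if task == 'bad':
            raise RuntimeError('handler failed')

    pool = ThreadPool()
    results = []
    for task in ['ok', 'bad', 'ok']:
        results.append(pool.map_async(handler, [task]))

    time.sleep(0.1)  # the real dispatcher polls each loop iteration instead
    # ready() guards successful(), which raises ValueError on pending results.
    if any(r.ready() and not r.successful() for r in results):
        print('Caught thread exception')  # dispatcher breaks out of run() here
    pool.close()
    pool.join()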
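On the async sharing test: the first trial (params['id'] == 0) writes model.dat under the NFS root and reports its checksum plus path; the successor trial recomputes the checksum of params['prev_path'], and the tuner's receive_trial_result raises ValueError if the two digests disagree. The patch does not show check_sum's body; the sketch below is one plausible reading under the assumption that it hashes file contents (the MD5 choice and chunked read are assumptions, not taken from the repo):

    import hashlib

    def check_sum(fl_name, tid=None):
        # Assumed implementation: digest of the file contents. `tid` mirrors
        # the call sites in main.py; it is accepted but does not change the
        # digest here.
        md5 = hashlib.md5()
        with open(fl_name, 'rb') as handle:
            for chunk in iter(lambda: handle.read(8192), b''):
                md5.update(chunk)
        return md5.hexdigest()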